Dataset schema (column name, type, and observed range):
  dataset:  string, 9 distinct values (the source task, e.g. science.qasper_abstractive_qa)
  input:    string, 351 to 18.6k characters (the task prompt)
  output:   string, 8 to 7.48k characters (the reference answer)
  id:       string, 20 to 47 characters (unique example identifier)
  messages: list of exactly 2 items (the same example repackaged as a user/assistant chat pair)
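
The column summary above follows the standard Hugging Face `datasets` layout, so records can be loaded and inspected with the `datasets` library. The sketch below is illustrative only: the repository id and split name are placeholders, since this dump does not state where the dataset is hosted.

```python
from datasets import load_dataset  # pip install datasets

# Placeholder repository id and split: substitute the actual hub location of this dataset.
ds = load_dataset("org/science-instruct-mix", split="train")

example = ds[0]
print(example["dataset"])        # task identifier, e.g. "science.qasper_abstractive_qa"
print(example["id"])             # unique example id
print(example["input"][:200])    # beginning of the task prompt
print(example["output"][:200])   # beginning of the reference answer

# "messages" repackages the same example as a two-turn chat:
# a user message holding the prompt and an assistant message holding the answer.
for message in example["messages"]:
    print(message["role"], len(message["content"]))
```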
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Cross-lingual Pre-training Based Transfer for Zero-shot Neural Machine Translation Transfer learning between different language pairs has shown its effectiveness for Neural Machine Translation (NMT) in low-resource scenario. However, existing transfer methods involving a common target language are far from success in the extreme scenario of zero-shot translation, due to the language space mismatch problem between transferor (the parent model) and transferee (the child model) on the source side. To address this challenge, we propose an effective transfer learning approach based on cross-lingual pre-training. Our key idea is to make all source languages share the same feature space and thus enable a smooth transition for zero-shot translation. To this end, we introduce one monolingual pre-training method and two bilingual pre-training methods to obtain a universal encoder for different languages. Once the universal encoder is constructed, the parent model built on such encoder is trained with large-scale annotated data and then directly applied in zero-shot translation scenario. Experiments on two public datasets show that our approach significantly outperforms strong pivot-based baseline and various multilingual NMT approaches. Introduction Although Neural Machine Translation (NMT) has dominated recent research on translation tasks BIBREF0, BIBREF1, BIBREF2, NMT heavily relies on large-scale parallel data, resulting in poor performance on low-resource or zero-resource language pairs BIBREF3. Translation between these low-resource languages (e.g., Arabic$\rightarrow $Spanish) is usually accomplished with pivoting through a rich-resource language (such as English), i.e., Arabic (source) sentence is translated to English (pivot) first which is later translated to Spanish (target) BIBREF4, BIBREF5. However, the pivot-based method requires doubled decoding time and suffers from the propagation of translation errors. One common alternative to avoid pivoting in NMT is transfer learning BIBREF6, BIBREF7, BIBREF8, BIBREF9 which leverages a high-resource pivot$\rightarrow $target model (parent) to initialize a low-resource source$\rightarrow $target model (child) that is further optimized with a small amount of available parallel data. Although this approach has achieved success in some low-resource language pairs, it still performs very poorly in extremely low-resource or zero-resource translation scenario. Specifically, BIBREF8 reports that without any child model training data, the performance of the parent model on the child test set is miserable. In this work, we argue that the language space mismatch problem, also named domain shift problem BIBREF10, brings about the zero-shot translation failure in transfer learning. It is because transfer learning has no explicit training process to guarantee that the source and pivot languages share the same feature distributions, causing that the child model inherited from the parent model fails in such a situation. For instance, as illustrated in the left of Figure FIGREF1, the points of the sentence pair with the same semantics are not overlapping in source space, resulting in that the shared decoder will generate different translations denoted by different points in target space. Actually, transfer learning for NMT can be viewed as a multi-domain problem where each source language forms a new domain. 
Minimizing the discrepancy between the feature distributions of different source languages, i.e., different domains, will ensure the smooth transition between the parent and child models, as shown in the right of Figure FIGREF1. One way to achieve this goal is the fine-tuning technique, which forces the model to forget the specific knowledge from parent data and learn new features from child data. However, the domain shift problem still exists, and the demand of parallel child data for fine-tuning heavily hinders transfer learning for NMT towards the zero-resource setting. In this paper, we explore the transfer learning in a common zero-shot scenario where there are a lot of source$\leftrightarrow $pivot and pivot$\leftrightarrow $target parallel data but no source$\leftrightarrow $target parallel data. In this scenario, we propose a simple but effective transfer approach, the key idea of which is to relieve the burden of the domain shift problem by means of cross-lingual pre-training. To this end, we firstly investigate the performance of two existing cross-lingual pre-training methods proposed by BIBREF11 in zero-shot translation scenario. Besides, a novel pre-training method called BRidge Language Modeling (BRLM) is designed to make full use of the source$\leftrightarrow $pivot bilingual data to obtain a universal encoder for different languages. Once the universal encoder is constructed, we only need to train the pivot$\rightarrow $target model and then test this model in source$\rightarrow $target direction directly. The main contributions of this paper are as follows: We propose a new transfer learning approach for NMT which uses the cross-lingual language model pre-training to enable a high performance on zero-shot translation. We propose a novel pre-training method called BRLM, which can effectively alleviates the distance between different source language spaces. Our proposed approach significantly improves zero-shot translation performance, consistently surpassing pivoting and multilingual approaches. Meanwhile, the performance on supervised translation direction remains the same level or even better when using our method. Approach ::: Transfer Protocol We consider the typical zero-shot translation scenario in which a high resource pivot language has parallel data with both source and target languages, while source and target languages has no parallel data between themselves. Our proposed cross-lingual pretraining based transfer approach for source$\rightarrow $target zero-shot translation is mainly divided into two phrases: the pretraining phase and the transfer phase. In the pretraining phase, we first pretrain MLM on monolingual corpora of both source and pivot languages, and continue to pretrain TLM or the proposed BRLM on the available parallel data between source and pivot languages, in order to build a cross-lingual encoder shared by the source and pivot languages. In the transfer phase, we train pivot$\rightarrow $target NMT model initialized by the cross-lingually pre-trained encoder, and finally transfer the trained NMT model to source$\rightarrow $target translation thanks to the shared encoder. Note that during training pivot$\rightarrow $target NMT model, we freeze several layers of the cross-lingually pre-trained encoder to avoid the degeneracy issue. 
For the more complicated scenario that either the source side or the target side has multiple languages, the encoder and the decoder are also shared across each side languages for efficient deployment of translation between multiple languages. Experiments ::: Setup ::: Datasets. The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\rightarrow $Es and De$\rightarrow $Fr. For distant language pair Ro$\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33. For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus. Experiments ::: Main Results ::: Results on Europarl Dataset. Regarding comparison between the baselines in table TABREF19, we find that pivoting is the strongest baseline that has significant advantage over other two baselines. Cross-lingual transfer for languages without shared vocabularies BIBREF16 manifests the worst performance because of not using source$\leftrightarrow $pivot parallel data, which is utilized as beneficial supervised signal for the other two baselines. Our best approach of MLM+BRLM-SA achieves the significant superior performance to all baselines in the zero-shot directions, improving by 0.9-4.8 BLEU points over the strong pivoting. Meanwhile, in the supervised direction of pivot$\rightarrow $target, our approaches performs even better than the original supervised Transformer thanks to the shared encoder trained on both large-scale monolingual data and parallel data between multiple languages. MLM alone that does not use source$\leftrightarrow $pivot parallel data performs much better than the cross-lingual transfer, and achieves comparable results to pivoting. When MLM is combined with TLM or the proposed BRLM, the performance is further improved. MLM+BRLM-SA performs the best, and is better than MLM+BRLM-HA indicating that soft alignment is helpful than hard alignment for the cross-lingual pretraining. Experiments ::: Main Results ::: Results on MultiUN Dataset. Like experimental results on Europarl, MLM+BRLM-SA performs the best among all proposed cross-lingual pretraining based transfer approaches as shown in Table TABREF26. When comparing systems consisting of one encoder-decoder model for all zero-shot translation, our approaches performs significantly better than MNMT BIBREF19. 
Although it is challenging for one model to translate all zero-shot directions between multiple distant language pairs of MultiUN, MLM+BRLM-SA still achieves better performances on Es $\rightarrow $ Ar and Es $\rightarrow $ Ru than strong pivoting$_{\rm m}$, which uses MNMT to translate source to pivot then to target in two separate steps with each step receiving supervised signal of parallel corpora. Our approaches surpass pivoting$_{\rm m}$ in all zero-shot directions by adding back translation BIBREF33 to generate pseudo parallel sentences for all zero-shot directions based on our pretrained models such as MLM+BRLM-SA, and further training our universal encoder-decoder model with these pseudo data. BIBREF22 gu2019improved introduces back translation into MNMT, while we adopt it in our transfer approaches. Finally, our best MLM+BRLM-SA with back translation outperforms pivoting$_{\rm m}$ by 2.4 BLEU points averagely, and outperforms MNMT BIBREF22 by 4.6 BLEU points averagely. Again, in supervised translation directions, MLM+BRLM-SA with back translation also achieves better performance than the original supervised Transformer. Experiments ::: Analysis ::: Sentence Representation. We first evaluate the representational invariance across languages for all cross-lingual pre-training methods. Following BIBREF23, we adopt max-pooling operation to collect the sentence representation of each encoder layer for all source-pivot sentence pairs in the Europarl validation sets. Then we calculate the cosine similarity for each sentence pair and average all cosine scores. As shown in Figure FIGREF27, we can observe that, MLM+BRLM-SA has the most stable and similar cross-lingual representations of sentence pairs on all layers, while it achieves the best performance in zero-shot translation. This demonstrates that better cross-lingual representations can benefit for the process of transfer learning. Besides, MLM+BRLM-HA is not as superior as MLM+BRLM-SA and even worse than MLM+TLM on Fr-En, since MLM+BRLM-HA may suffer from the wrong alignment knowledge from an external aligner tool. We also find an interesting phenomenon that as the number of layers increases, the cosine similarity decreases. Experiments ::: Analysis ::: Contextualized Word Representation. We further sample an English-Russian sentence pair from the MultiUN validation sets and visualize the cosine similarity between hidden states of the top encoder layer to further investigate the difference of all cross-lingual pre-training methods. As shown in Figure FIGREF38, the hidden states generated by MLM+BRLM-SA have higher similarity for two aligned words. It indicates that MLM+BRLM-SA can gain better word-level representation alignment between source and pivot languages, which better relieves the burden of the domain shift problem. Conclusion In this paper, we propose a cross-lingual pretraining based transfer approach for the challenging zero-shot translation task, in which source and target languages have no parallel data, while they both have parallel data with a high resource pivot language. With the aim of building the language invariant representation between source and pivot languages for smooth transfer of the parent model of pivot$\rightarrow $target direction to the child model of source$\rightarrow $target direction, we introduce one monolingual pretraining method and two bilingual pretraining methods to construct an universal encoder for the source and pivot languages. 
Experiments on public datasets show that our approaches significantly outperforms several strong baseline systems, and manifest the language invariance characteristics in both sentence level and word level neural representations. Acknowledgments We would like to thank the anonymous reviewers for the helpful comments. This work was supported by National Key R&D Program of China (Grant No. 2016YFE0132100), National Natural Science Foundation of China (Grant No. 61525205, 61673289). This work was also partially supported by Alibaba Group through Alibaba Innovative Research Program and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions. Figure 1: The circle and triangle dots represent source sentences in different language l1 and l2, and the square dots means target sentences in language l3. A sample of translation pairs is connected by the dashed line. We would like to force each of the translation pairs has the same latent representation as the right part of the figure so as to transfer l1 → l3 model directly to l2 → l3 model. Figure 2: The overview of BRidge Language Modeling (BRLM). The BRLM extends MLM (Lample and Conneau 2019) to pairs of parallel sentences and leverages explicit alignment information obtained by external aligner tool or additional attention layer to encourage word representation alignment across different languages. Table 2: Results on Europarl test sets. Three pivot settings are conducted in our experiments. In each setting, the left column presents the zero-shot performances (source→target), and the right column denotes the performances in the supervised parent model direction (pivot→target). Table 3: Results on MultiUN test sets. The six zero-shot translation directions are evaluated. The column “A-ZST" reports averaged BLEU of zero-shot translation, while the column “A-ST" reports averaged BLEU of supervised pivot→target direction. Figure 4: Cosine similarity visualization at word level given an English-Russian sentence pair from the MultiUN validation sets. Brighter indicates higher similarity. Query: what language pairs are explored? JSON Response:
{"answer": "De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru", "evidence": ["For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus.", "The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33."]}
science.qasper_abstractive_qa.0.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nCross-lingual Pre-training Based Transfer for Zero-shot Neural Machine Translation\n\nTransfer learning between different language pairs has shown its effectiveness for Neural Machine Translation (NMT) in low-resource scenario. However, existing transfer methods involving a common target language are far from success in the extreme scenario of zero-shot translation, due to the language space mismatch problem between transferor (the parent model) and transferee (the child model) on the source side. To address this challenge, we propose an effective transfer learning approach based on cross-lingual pre-training. Our key idea is to make all source languages share the same feature space and thus enable a smooth transition for zero-shot translation. To this end, we introduce one monolingual pre-training method and two bilingual pre-training methods to obtain a universal encoder for different languages. Once the universal encoder is constructed, the parent model built on such encoder is trained with large-scale annotated data and then directly applied in zero-shot translation scenario. Experiments on two public datasets show that our approach significantly outperforms strong pivot-based baseline and various multilingual NMT approaches.\n\nIntroduction\nAlthough Neural Machine Translation (NMT) has dominated recent research on translation tasks BIBREF0, BIBREF1, BIBREF2, NMT heavily relies on large-scale parallel data, resulting in poor performance on low-resource or zero-resource language pairs BIBREF3. Translation between these low-resource languages (e.g., Arabic$\\rightarrow $Spanish) is usually accomplished with pivoting through a rich-resource language (such as English), i.e., Arabic (source) sentence is translated to English (pivot) first which is later translated to Spanish (target) BIBREF4, BIBREF5. However, the pivot-based method requires doubled decoding time and suffers from the propagation of translation errors.\nOne common alternative to avoid pivoting in NMT is transfer learning BIBREF6, BIBREF7, BIBREF8, BIBREF9 which leverages a high-resource pivot$\\rightarrow $target model (parent) to initialize a low-resource source$\\rightarrow $target model (child) that is further optimized with a small amount of available parallel data. Although this approach has achieved success in some low-resource language pairs, it still performs very poorly in extremely low-resource or zero-resource translation scenario. Specifically, BIBREF8 reports that without any child model training data, the performance of the parent model on the child test set is miserable.\nIn this work, we argue that the language space mismatch problem, also named domain shift problem BIBREF10, brings about the zero-shot translation failure in transfer learning. It is because transfer learning has no explicit training process to guarantee that the source and pivot languages share the same feature distributions, causing that the child model inherited from the parent model fails in such a situation. For instance, as illustrated in the left of Figure FIGREF1, the points of the sentence pair with the same semantics are not overlapping in source space, resulting in that the shared decoder will generate different translations denoted by different points in target space. 
Actually, transfer learning for NMT can be viewed as a multi-domain problem where each source language forms a new domain. Minimizing the discrepancy between the feature distributions of different source languages, i.e., different domains, will ensure the smooth transition between the parent and child models, as shown in the right of Figure FIGREF1. One way to achieve this goal is the fine-tuning technique, which forces the model to forget the specific knowledge from parent data and learn new features from child data. However, the domain shift problem still exists, and the demand of parallel child data for fine-tuning heavily hinders transfer learning for NMT towards the zero-resource setting.\nIn this paper, we explore the transfer learning in a common zero-shot scenario where there are a lot of source$\\leftrightarrow $pivot and pivot$\\leftrightarrow $target parallel data but no source$\\leftrightarrow $target parallel data. In this scenario, we propose a simple but effective transfer approach, the key idea of which is to relieve the burden of the domain shift problem by means of cross-lingual pre-training. To this end, we firstly investigate the performance of two existing cross-lingual pre-training methods proposed by BIBREF11 in zero-shot translation scenario. Besides, a novel pre-training method called BRidge Language Modeling (BRLM) is designed to make full use of the source$\\leftrightarrow $pivot bilingual data to obtain a universal encoder for different languages. Once the universal encoder is constructed, we only need to train the pivot$\\rightarrow $target model and then test this model in source$\\rightarrow $target direction directly. The main contributions of this paper are as follows:\nWe propose a new transfer learning approach for NMT which uses the cross-lingual language model pre-training to enable a high performance on zero-shot translation.\nWe propose a novel pre-training method called BRLM, which can effectively alleviates the distance between different source language spaces.\nOur proposed approach significantly improves zero-shot translation performance, consistently surpassing pivoting and multilingual approaches. Meanwhile, the performance on supervised translation direction remains the same level or even better when using our method.\n\nApproach ::: Transfer Protocol\nWe consider the typical zero-shot translation scenario in which a high resource pivot language has parallel data with both source and target languages, while source and target languages has no parallel data between themselves. Our proposed cross-lingual pretraining based transfer approach for source$\\rightarrow $target zero-shot translation is mainly divided into two phrases: the pretraining phase and the transfer phase.\nIn the pretraining phase, we first pretrain MLM on monolingual corpora of both source and pivot languages, and continue to pretrain TLM or the proposed BRLM on the available parallel data between source and pivot languages, in order to build a cross-lingual encoder shared by the source and pivot languages.\nIn the transfer phase, we train pivot$\\rightarrow $target NMT model initialized by the cross-lingually pre-trained encoder, and finally transfer the trained NMT model to source$\\rightarrow $target translation thanks to the shared encoder. 
Note that during training pivot$\\rightarrow $target NMT model, we freeze several layers of the cross-lingually pre-trained encoder to avoid the degeneracy issue.\nFor the more complicated scenario that either the source side or the target side has multiple languages, the encoder and the decoder are also shared across each side languages for efficient deployment of translation between multiple languages.\n\nExperiments ::: Setup ::: Datasets.\nThe statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.\nFor MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus.\n\nExperiments ::: Main Results ::: Results on Europarl Dataset.\nRegarding comparison between the baselines in table TABREF19, we find that pivoting is the strongest baseline that has significant advantage over other two baselines. Cross-lingual transfer for languages without shared vocabularies BIBREF16 manifests the worst performance because of not using source$\\leftrightarrow $pivot parallel data, which is utilized as beneficial supervised signal for the other two baselines.\nOur best approach of MLM+BRLM-SA achieves the significant superior performance to all baselines in the zero-shot directions, improving by 0.9-4.8 BLEU points over the strong pivoting. Meanwhile, in the supervised direction of pivot$\\rightarrow $target, our approaches performs even better than the original supervised Transformer thanks to the shared encoder trained on both large-scale monolingual data and parallel data between multiple languages.\nMLM alone that does not use source$\\leftrightarrow $pivot parallel data performs much better than the cross-lingual transfer, and achieves comparable results to pivoting. When MLM is combined with TLM or the proposed BRLM, the performance is further improved. MLM+BRLM-SA performs the best, and is better than MLM+BRLM-HA indicating that soft alignment is helpful than hard alignment for the cross-lingual pretraining.\n\nExperiments ::: Main Results ::: Results on MultiUN Dataset.\nLike experimental results on Europarl, MLM+BRLM-SA performs the best among all proposed cross-lingual pretraining based transfer approaches as shown in Table TABREF26. 
When comparing systems consisting of one encoder-decoder model for all zero-shot translation, our approaches performs significantly better than MNMT BIBREF19.\nAlthough it is challenging for one model to translate all zero-shot directions between multiple distant language pairs of MultiUN, MLM+BRLM-SA still achieves better performances on Es $\\rightarrow $ Ar and Es $\\rightarrow $ Ru than strong pivoting$_{\\rm m}$, which uses MNMT to translate source to pivot then to target in two separate steps with each step receiving supervised signal of parallel corpora. Our approaches surpass pivoting$_{\\rm m}$ in all zero-shot directions by adding back translation BIBREF33 to generate pseudo parallel sentences for all zero-shot directions based on our pretrained models such as MLM+BRLM-SA, and further training our universal encoder-decoder model with these pseudo data. BIBREF22 gu2019improved introduces back translation into MNMT, while we adopt it in our transfer approaches. Finally, our best MLM+BRLM-SA with back translation outperforms pivoting$_{\\rm m}$ by 2.4 BLEU points averagely, and outperforms MNMT BIBREF22 by 4.6 BLEU points averagely. Again, in supervised translation directions, MLM+BRLM-SA with back translation also achieves better performance than the original supervised Transformer.\n\nExperiments ::: Analysis ::: Sentence Representation.\nWe first evaluate the representational invariance across languages for all cross-lingual pre-training methods. Following BIBREF23, we adopt max-pooling operation to collect the sentence representation of each encoder layer for all source-pivot sentence pairs in the Europarl validation sets. Then we calculate the cosine similarity for each sentence pair and average all cosine scores. As shown in Figure FIGREF27, we can observe that, MLM+BRLM-SA has the most stable and similar cross-lingual representations of sentence pairs on all layers, while it achieves the best performance in zero-shot translation. This demonstrates that better cross-lingual representations can benefit for the process of transfer learning. Besides, MLM+BRLM-HA is not as superior as MLM+BRLM-SA and even worse than MLM+TLM on Fr-En, since MLM+BRLM-HA may suffer from the wrong alignment knowledge from an external aligner tool. We also find an interesting phenomenon that as the number of layers increases, the cosine similarity decreases.\n\nExperiments ::: Analysis ::: Contextualized Word Representation.\nWe further sample an English-Russian sentence pair from the MultiUN validation sets and visualize the cosine similarity between hidden states of the top encoder layer to further investigate the difference of all cross-lingual pre-training methods. As shown in Figure FIGREF38, the hidden states generated by MLM+BRLM-SA have higher similarity for two aligned words. It indicates that MLM+BRLM-SA can gain better word-level representation alignment between source and pivot languages, which better relieves the burden of the domain shift problem.\n\nConclusion\nIn this paper, we propose a cross-lingual pretraining based transfer approach for the challenging zero-shot translation task, in which source and target languages have no parallel data, while they both have parallel data with a high resource pivot language. 
With the aim of building the language invariant representation between source and pivot languages for smooth transfer of the parent model of pivot$\\rightarrow $target direction to the child model of source$\\rightarrow $target direction, we introduce one monolingual pretraining method and two bilingual pretraining methods to construct an universal encoder for the source and pivot languages. Experiments on public datasets show that our approaches significantly outperforms several strong baseline systems, and manifest the language invariance characteristics in both sentence level and word level neural representations.\n\nAcknowledgments\nWe would like to thank the anonymous reviewers for the helpful comments. This work was supported by National Key R&D Program of China (Grant No. 2016YFE0132100), National Natural Science Foundation of China (Grant No. 61525205, 61673289). This work was also partially supported by Alibaba Group through Alibaba Innovative Research Program and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions.\n\nFigure 1: The circle and triangle dots represent source sentences in different language l1 and l2, and the square dots means target sentences in language l3. A sample of translation pairs is connected by the dashed line. We would like to force each of the translation pairs has the same latent representation as the right part of the figure so as to transfer l1 → l3 model directly to l2 → l3 model.\n\nFigure 2: The overview of BRidge Language Modeling (BRLM). The BRLM extends MLM (Lample and Conneau 2019) to pairs of parallel sentences and leverages explicit alignment information obtained by external aligner tool or additional attention layer to encourage word representation alignment across different languages.\n\nTable 2: Results on Europarl test sets. Three pivot settings are conducted in our experiments. In each setting, the left column presents the zero-shot performances (source→target), and the right column denotes the performances in the supervised parent model direction (pivot→target).\n\nTable 3: Results on MultiUN test sets. The six zero-shot translation directions are evaluated. The column “A-ZST\" reports averaged BLEU of zero-shot translation, while the column “A-ST\" reports averaged BLEU of supervised pivot→target direction.\n\nFigure 4: Cosine similarity visualization at word level given an English-Russian sentence pair from the MultiUN validation sets. Brighter indicates higher similarity.\n\nQuery: what language pairs are explored?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru\", \"evidence\": [\"For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus.\", \"The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. 
For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\\\rightarrow $Es and De$\\\\rightarrow $Fr. For distant language pair Ro$\\\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.\"]}", "role": "assistant" } ]
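
For rows like the science.qasper_abstractive_qa example above, the output column is itself a JSON-encoded object with an answer string and a list of evidence passages. A minimal sketch of recovering that structure, using an abbreviated stand-in for the record shown above (when loading the full dataset, the string would come from example["output"]):

```python
import json

# Abbreviated stand-in for the output field of the row above.
raw_output = (
    '{"answer": "De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru", '
    '"evidence": ["For MultiUN corpus, we use four languages: ..."]}'
)

gold = json.loads(raw_output)
print(gold["answer"])           # comma-separated list of language pairs
print(len(gold["evidence"]))    # number of supporting passages
```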
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: This dataset <cite>[7]</cite> consists of about 121K (98K) images, 438K (220K) questions, and 4.4M (2.2M) answers in the train (test) set; and it is created so that the distribution of the answers between train and test splits differ, and hence the models cannot excessively rely on the language prior <cite>[7]</cite> . As expected, <cite>[7]</cite> show that performance of all Visual QA approaches they tested drops significantly between train to test sets.
["Background"]
science.multicite_intent_classification.1141
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThis dataset <cite>[7]</cite> consists of about 121K (98K) images, 438K (220K) questions, and 4.4M (2.2M) answers in the train (test) set; and it is created so that the distribution of the answers between train and test splits differ, and hence the models cannot excessively rely on the language prior <cite>[7]</cite> . As expected, <cite>[7]</cite> show that performance of all Visual QA approaches they tested drops significantly between train to test sets.", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.discomat_te
| Band position (cm-1) | Assignment | Reference | | 350 | Typical vibration of V-O-V bond | [20] | | 488 | The bending vibrations of the bridging V-O-V (doubly coordinated oxygen) | [21] | | 516 | Loose diborate groups and loose [BO4] groups | [22] | | 697 | Doubly coordinated oxygen (V2-O) stretching mode which results from corner-shared oxygens common to two pyramids | [21] | | 815 | Stretching vibrations of the V-O-V bridges | [23] | | 837 | Vibrations of pyroborate groups | [22] | | 917-931 | The VO vibrations of the tetrahedra [VO4] or trigonal bipyramidal [VO5] units inside the network | [24] | | 948 | Vibrations of orthoborate groups | [22] | | 1200-1500 | The B-O- vibration occurring in a large borate network and not to a specific group | [25,26] | Caption: Assignments of Raman bands present in the spectra Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "350", "type": "Other"} {"value": "488", "type": "Other"} {"value": "516", "type": "Other"} {"value": "697", "type": "Other"} {"value": "815", "type": "Other"} {"value": "837", "type": "Other"} {"value": "917", "type": "Other"} {"value": "931", "type": "Other"} {"value": "948", "type": "Other"} {"value": "1200", "type": "Other"} {"value": "1500", "type": "Other"}
science.discomat_te.403
[ { "content": "| Band position (cm-1) | Assignment | Reference |\n| 350 | Typical vibration of V-O-V bond | [20] |\n| 488 | The bending vibrations of the bridging V-O-V (doubly coordinated oxygen) | [21] |\n| 516 | Loose diborate groups and loose [BO4] groups | [22] |\n| 697 | Doubly coordinated oxygen (V2-O) stretching mode which results from corner-shared oxygens common to two pyramids | [21] |\n| 815 | Stretching vibrations of the V-O-V bridges | [23] |\n| 837 | Vibrations of pyroborate groups | [22] |\n| 917-931 | The VO vibrations of the tetrahedra [VO4] or trigonal bipyramidal [VO5] units inside the network | [24] |\n| 948 | Vibrations of orthoborate groups | [22] |\n| 1200-1500 | The B-O- vibration occurring in a large borate network and not to a specific group | [25,26] |\n\nCaption: Assignments of Raman bands present in the spectra\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"350\", \"type\": \"Other\"}\n{\"value\": \"488\", \"type\": \"Other\"}\n{\"value\": \"516\", \"type\": \"Other\"}\n{\"value\": \"697\", \"type\": \"Other\"}\n{\"value\": \"815\", \"type\": \"Other\"}\n{\"value\": \"837\", \"type\": \"Other\"}\n{\"value\": \"917\", \"type\": \"Other\"}\n{\"value\": \"931\", \"type\": \"Other\"}\n{\"value\": \"948\", \"type\": \"Other\"}\n{\"value\": \"1200\", \"type\": \"Other\"}\n{\"value\": \"1500\", \"type\": \"Other\"}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: We follow the implementation of coattention in<cite> (Zhong et al., 2019)</cite> .
["Uses"]
science.multicite_intent_classification.617
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWe follow the implementation of coattention in<cite> (Zhong et al., 2019)</cite> .", "role": "user" }, { "content": "[\"Uses\"]", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: Ufd1-Npl4 Recruit Cdc48 for Disassembly of Ubiquitylated CMG Helicase at the End of Chromosome Replication. We demonstrate that the ubiquitin-binding Ufd1-Npl4 complex recruits Cdc48 to ubiquitylated CMG. Mutation of K29 abrogates in vitro recruitment of Ufd1-Npl4-Cdc48 to the CMG helicase, supporting a model whereby Ufd1-Npl4 recruits Cdc48 to ubiquitylated CMG at the end of chromosome replication, thereby driving the disassembly reaction. Mutation of K29 abrogates in vitro recruitment of Ufd1-Npl4-Cdc48 to the CMG helicase, supporting a model whereby Ufd1-Npl4 recruits Cdc48 to ubiquitylated CMG at the end of chromosome replication, thereby driving the disassembly reaction. Question: List the partners of budding yeast Cdc48 that are important for disassembly of ubiquitylated CMG helicase at the end of chromosome replication
["Ufd1", "Npl4"]
science.bioasq_list_qa.489
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Ufd1-Npl4 Recruit Cdc48 for Disassembly of Ubiquitylated CMG Helicase at the End of Chromosome Replication.\nWe demonstrate that the ubiquitin-binding Ufd1-Npl4 complex recruits Cdc48 to ubiquitylated CMG.\nMutation of K29 abrogates in vitro recruitment of Ufd1-Npl4-Cdc48 to the CMG helicase, supporting a model whereby Ufd1-Npl4 recruits Cdc48 to ubiquitylated CMG at the end of chromosome replication, thereby driving the disassembly reaction.\nMutation of K29 abrogates in vitro recruitment of Ufd1-Npl4-Cdc48 to the CMG helicase, supporting a model whereby Ufd1-Npl4 recruits Cdc48 to ubiquitylated CMG at the end of chromosome replication, thereby driving the disassembly reaction.\n\nQuestion: List the partners of budding yeast Cdc48 that are important for disassembly of ubiquitylated CMG helicase at the end of chromosome replication", "role": "user" }, { "content": "[\"Ufd1\", \"Npl4\"]", "role": "assistant" } ]
science.discomat_te
| | SiO2 | Al2O3 | MgO | | Stoichiometry of cordierite | 51.46 | 34.86 | 13.78 | | Analysis by EPMA | 51.7+-0.5 | 33.8+-0.5 | 14.5+-0.5 | Caption: Composition of the glass (wt%) Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "51.46", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "xx"} {"value": "34.86", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "xx"} {"value": "13.78", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "xx"} {"value": "51.7", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "xx"} {"value": "33.8", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "xx"} {"value": "14.5", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "xx"}
science.discomat_te.273
[ { "content": "| | SiO2 | Al2O3 | MgO |\n| Stoichiometry of cordierite | 51.46 | 34.86 | 13.78 |\n| Analysis by EPMA | 51.7+-0.5 | 33.8+-0.5 | 14.5+-0.5 |\n\nCaption: Composition of the glass (wt%)\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"51.46\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"34.86\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"13.78\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"51.7\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"33.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"xx\"}\n{\"value\": \"14.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"xx\"}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: This joint probability distribution, that is illustrated by the part of Fig. 2 enclosed in the dashed box, is estimated by the robot in an ego-centric way through interaction with the environment, as in <cite>[10]</cite> .
["Background"]
science.multicite_intent_classification.56
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThis joint probability distribution, that is illustrated by the part of Fig. 2 enclosed in the dashed box, is estimated by the robot in an ego-centric way through interaction with the environment, as in <cite>[10]</cite> .", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Drosophila supracellular actomyosin structures are found at boundaries in wing imaginal discs. Research Abstract: As organisms develop, their tissues can become separated into distinct cell populations through the establishment of compartment boundaries. Compartment boundaries have been discovered in a wide variety of tissues, but in many cases the molecular mechanisms that separate cells remain poorly understood. In the Drosophila wing, a stripe of Notch activation maintains the dorsal-ventral compartment boundary, through a process that depends on the actin cytoskeleton. Here, we show that the dorsal-ventral boundary exhibits a distinct accumulation of Myosin II, and that this accumulation is regulated downstream of Notch signaling. Conversely, the dorsal-ventral boundary is depleted for the Par-3 homologue Bazooka. We further show that mutations in the Myosin heavy chain subunit encoded by zipper can impair dorsal-ventral compartmentalization without affecting anterior-posterior compartmentalization. These observations identify a distinct accumulation and requirement for Myosin activity in dorsal-ventral compartmentalization, and suggest a novel mechanism in which contractile tension along an F-actin cable at the compartment boundary contributes to compartmentalization.
{"verdict": "SUPPORT", "evidence": ["Here, we show that the dorsal-ventral boundary exhibits a distinct accumulation of Myosin II, and that this accumulation is regulated downstream of Notch signaling."]}
science.scifact_entailment.231
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nDrosophila supracellular actomyosin structures are found at boundaries in wing imaginal discs.\n\nResearch Abstract:\nAs organisms develop, their tissues can become separated into distinct cell populations through the establishment of compartment boundaries.\nCompartment boundaries have been discovered in a wide variety of tissues, but in many cases the molecular mechanisms that separate cells remain poorly understood.\nIn the Drosophila wing, a stripe of Notch activation maintains the dorsal-ventral compartment boundary, through a process that depends on the actin cytoskeleton.\nHere, we show that the dorsal-ventral boundary exhibits a distinct accumulation of Myosin II, and that this accumulation is regulated downstream of Notch signaling.\nConversely, the dorsal-ventral boundary is depleted for the Par-3 homologue Bazooka.\nWe further show that mutations in the Myosin heavy chain subunit encoded by zipper can impair dorsal-ventral compartmentalization without affecting anterior-posterior compartmentalization.\nThese observations identify a distinct accumulation and requirement for Myosin activity in dorsal-ventral compartmentalization, and suggest a novel mechanism in which contractile tension along an F-actin cable at the compartment boundary contributes to compartmentalization.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Here, we show that the dorsal-ventral boundary exhibits a distinct accumulation of Myosin II, and that this accumulation is regulated downstream of Notch signaling.\"]}", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Amisulpride related tic-like symptoms in an adolescent schizophrenic. Tic disorders can be effectively treated by atypical antipsychotics such as risperidone, olanzapine and ziprasidone. However, there are two case reports that show tic-like symptoms, including motor and phonic variants, occurring during treatment with quetiapine or clozapine. We present a 15-year-old girl schizophrenic who developed frequent involuntary eye-blinking movements after 5 months of amisulpride treatment (1000 mg per day). The tic-like symptoms resolved completely after we reduced the dose of amisulpride down to 800 mg per day. However, her psychosis recurred after the dose reduction. We then placed her on an additional 100 mg per day of quetiapine. She has been in complete remission under the combined medications for more than one year and maintains a fair role function. No more tic-like symptoms or other side effects have been reported. Together with previously reported cases, our patient suggests that tic-like symptoms might occur in certain vulnerable individuals during treatment with atypical antipsychotics such as quetiapine, clozapine, or amisulpride.
{"CellLine": [], "Chemical": ["ziprasidone", "olanzapine", "quetiapine", "antipsychotics", "clozapine", "risperidone", "Amisulpride", "amisulpride"], "Disease": ["involuntary eye-blinking movements", "schizophrenic", "tic-like symptoms", "psychosis", "Tic disorders"], "Gene": [], "Species": ["patient"], "Variant": []}
science.biored_ner.64
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nAmisulpride related tic-like symptoms in an adolescent schizophrenic. Tic disorders can be effectively treated by atypical antipsychotics such as risperidone, olanzapine and ziprasidone. However, there are two case reports that show tic-like symptoms, including motor and phonic variants, occurring during treatment with quetiapine or clozapine. We present a 15-year-old girl schizophrenic who developed frequent involuntary eye-blinking movements after 5 months of amisulpride treatment (1000 mg per day). The tic-like symptoms resolved completely after we reduced the dose of amisulpride down to 800 mg per day. However, her psychosis recurred after the dose reduction. We then placed her on an additional 100 mg per day of quetiapine. She has been in complete remission under the combined medications for more than one year and maintains a fair role function. No more tic-like symptoms or other side effects have been reported. Together with previously reported cases, our patient suggests that tic-like symptoms might occur in certain vulnerable individuals during treatment with atypical antipsychotics such as quetiapine, clozapine, or amisulpride.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"ziprasidone\", \"olanzapine\", \"quetiapine\", \"antipsychotics\", \"clozapine\", \"risperidone\", \"Amisulpride\", \"amisulpride\"], \"Disease\": [\"involuntary eye-blinking movements\", \"schizophrenic\", \"tic-like symptoms\", \"psychosis\", \"Tic disorders\"], \"Gene\": [], \"Species\": [\"patient\"], \"Variant\": []}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.INTRODUCTION: The aim of this study was to compare TOMOX versus FOLFOX4 as first-line treatment of advanced colorectal cancer (CRC). ABSTRACT.MATERIALS AND METHODS: 191 chemotherapy-naïve patients were randomized to receive TOMOX or FOLFOX4. Patients were evaluated every 3 months and chemotherapy was continued until disease progression or unacceptable toxicity. Overall response rate was the primary endpoint. ABSTRACT.RESULTS: 183 patients were included in the intent-to-treat analysis (92 TOMOX and 91 FOLFOX4). Overall response rate was 45.6 and 36.3 % (p = 0.003) for TOMOX and FOLFOX4, respectively. No statistically significant differences were observed in overall survival (15.6 and 17.2 months; p = 0.475); progression-free survival (7.7 and 8.7 months; p = 0.292), and response duration (6.4 and 7.6 months; p = 0.372) for TOMOX and FOLFOX4, respectively. Grades 3 and 4 neutropenia (p < 0.0001) and leukopenia (p = 0.028) were more common with the FOLFOX4 regimen, while hepatic disorders and asthenia were higher in TOMOX group (p = ns). There were two treatment-related deaths in the FOLFOX4 arm and one in the TOMOX arm. Quality of life analysis based on the SF-36 revealed differences between the two regimens for physical and mental composite scores after 6 weeks, and for body pain and emotional role functioning after 6 and 12 weeks; all of these favored the FOLFOX4 arm (p ≤ 0.05). ABSTRACT.CONCLUSIONS: TOMOX and FOLFOX4 seem to have similar efficacy and are well tolerated in the first-line treatment for advanced CRC with different profiles of toxicity. The convenient TOMOX regimen may offer an alternative to fluoropyrimidine-based regimens. Extracted Data:
[["TOMOX", "FOLFOX4", "Hepatic disorders and asthenia", "no significant difference", "hepatic disorders and asthenia were higher in TOMOX group (p = ns)."], ["TOMOX", "FOLFOX4", "Progression-free survival", "no significant difference", "No statistically significant differences were observed in overall survival (15.6 and 17.2 months; p = 0.475); progression-free survival (7.7 and 8.7 months; p = 0.292), and response duration (6.4 and 7.6 months; p = 0.372) for TOMOX and FOLFOX4, respectively"], ["TOMOX", "FOLFOX4", "Progression-free survival", "no significant difference", ". No statistically significant differences were observed in overall survival (15.6 and 17.2 months; p = 0.475); progression-free survival (7.7 and 8.7 months; p = 0.292)"], ["TOMOX", "FOLFOX4", "Survival rate", "no significant difference", "No statistically significant differences were observed in overall survival (15.6 and 17.2 months; p = 0.475); progression-free survival (7.7 and 8.7 months; p = 0.292), and response duration (6.4 and 7.6 months; p = 0.372) for TOMOX and FOLFOX4, respectively"], ["TOMOX", "FOLFOX4", "Response duration", "no significant difference", "No statistically significant differences were observed in overall survival (15.6 and 17.2 months; p = 0.475); progression-free survival (7.7 and 8.7 months; p = 0.292), and response duration (6.4 and 7.6 months; p = 0.372) for TOMOX and FOLFOX4, respectively."], ["TOMOX", "FOLFOX4", "Neutropenia and leukopenia", "significantly decreased", "Grades 3 and 4 neutropenia (p \u003c 0.0001) and leukopenia (p = 0.028) were more common with the FOLFOX4 regimen"]]
science.evidence_inference.48
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.INTRODUCTION: The aim of this study was to compare TOMOX versus FOLFOX4 as first-line treatment of advanced colorectal cancer (CRC).\n\nABSTRACT.MATERIALS AND METHODS: 191 chemotherapy-naïve patients were randomized to receive TOMOX or FOLFOX4. Patients were evaluated every 3 months and chemotherapy was continued until disease progression or unacceptable toxicity. Overall response rate was the primary endpoint.\n\nABSTRACT.RESULTS: 183 patients were included in the intent-to-treat analysis (92 TOMOX and 91 FOLFOX4). Overall response rate was 45.6 and 36.3 % (p = 0.003) for TOMOX and FOLFOX4, respectively. No statistically significant differences were observed in overall survival (15.6 and 17.2 months; p = 0.475); progression-free survival (7.7 and 8.7 months; p = 0.292), and response duration (6.4 and 7.6 months; p = 0.372) for TOMOX and FOLFOX4, respectively. Grades 3 and 4 neutropenia (p < 0.0001) and leukopenia (p = 0.028) were more common with the FOLFOX4 regimen, while hepatic disorders and asthenia were higher in TOMOX group (p = ns). There were two treatment-related deaths in the FOLFOX4 arm and one in the TOMOX arm. Quality of life analysis based on the SF-36 revealed differences between the two regimens for physical and mental composite scores after 6 weeks, and for body pain and emotional role functioning after 6 and 12 weeks; all of these favored the FOLFOX4 arm (p ≤ 0.05).\n\nABSTRACT.CONCLUSIONS: TOMOX and FOLFOX4 seem to have similar efficacy and are well tolerated in the first-line treatment for advanced CRC with different profiles of toxicity. The convenient TOMOX regimen may offer an alternative to fluoropyrimidine-based regimens.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"TOMOX\", \"FOLFOX4\", \"Hepatic disorders and asthenia\", \"no significant difference\", \"hepatic disorders and asthenia were higher in TOMOX group (p = ns).\"], [\"TOMOX\", \"FOLFOX4\", \"Progression-free survival\", \"no significant difference\", \"No statistically significant differences were observed in overall survival (15.6 and 17.2 months; p = 0.475); progression-free survival (7.7 and 8.7 months; p = 0.292), and response duration (6.4 and 7.6 months; p = 0.372) for TOMOX and FOLFOX4, respectively\"], [\"TOMOX\", \"FOLFOX4\", \"Progression-free survival\", \"no significant difference\", \". 
No statistically significant differences were observed in overall survival (15.6 and 17.2 months; p = 0.475); progression-free survival (7.7 and 8.7 months; p = 0.292)\"], [\"TOMOX\", \"FOLFOX4\", \"Survival rate\", \"no significant difference\", \"No statistically significant differences were observed in overall survival (15.6 and 17.2 months; p = 0.475); progression-free survival (7.7 and 8.7 months; p = 0.292), and response duration (6.4 and 7.6 months; p = 0.372) for TOMOX and FOLFOX4, respectively\"], [\"TOMOX\", \"FOLFOX4\", \"Response duration\", \"no significant difference\", \"No statistically significant differences were observed in overall survival (15.6 and 17.2 months; p = 0.475); progression-free survival (7.7 and 8.7 months; p = 0.292), and response duration (6.4 and 7.6 months; p = 0.372) for TOMOX and FOLFOX4, respectively.\"], [\"TOMOX\", \"FOLFOX4\", \"Neutropenia and leukopenia\", \"significantly decreased\", \"Grades 3 and 4 neutropenia (p \\u003c 0.0001) and leukopenia (p = 0.028) were more common with the FOLFOX4 regimen\"]]", "role": "assistant" } ]
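The evidence-inference row above asks for each finding as a five-element tuple [treatment, control, result, impact, justification]; note that the prompt lists the impact vocabulary as ("increased", "decreased", "no change") while the gold answer uses freer wording such as "no significant difference", so any check has to stay lenient. The sketch below is only an assumed illustration of loading and shape-checking such an answer; the helper name and the abbreviated example are not from the dataset.

```python
import json

def load_findings(raw: str) -> list[list[str]]:
    """Parse findings and check shape only:
    [treatment, control, result, impact, justification]."""
    findings = json.loads(raw)
    for i, entry in enumerate(findings):
        if len(entry) != 5:
            raise ValueError(f"entry {i} has {len(entry)} fields, expected 5")
        if not all(isinstance(field, str) for field in entry):
            raise ValueError(f"entry {i} contains a non-string field")
    return findings

# Abbreviated example in the format of the gold answer above.
raw = json.dumps([
    ["TOMOX", "FOLFOX4", "Progression-free survival", "no significant difference",
     "No statistically significant differences were observed ..."],
    ["TOMOX", "FOLFOX4", "Neutropenia and leukopenia", "significantly decreased",
     "Grades 3 and 4 neutropenia (p < 0.0001) ... were more common with FOLFOX4"],
])
for treatment, control, result, impact, _justification in load_findings(raw):
    print(f"{treatment} vs {control}: {result} -> {impact}")
```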
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: However, it is not flexible with regards to (C3), or the sense granularity problem, as it requires the users to specify the number of senses: Current systems (<cite>Wang et al. 2015</cite>; Chang, Pei, and Chen 2014) required to set the number of senses to a small number (set to 3 or 5 in the literature) to get a good accuracy, however many words may have a large number of senses, e.g. play in Figure 1 .
["Motivation"]
science.multicite_intent_classification.1068
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nHowever, it is not flexible with regards to (C3), or the sense granularity problem, as it requires the users to specify the number of senses: Current systems (<cite>Wang et al. 2015</cite>; Chang, Pei, and Chen 2014) required to set the number of senses to a small number (set to 3 or 5 in the literature) to get a good accuracy, however many words may have a large number of senses, e.g. play in Figure 1 .", "role": "user" }, { "content": "[\"Motivation\"]", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: Although these supplementations are increasingly used by master athletes, very few data are available on their effects on oxidative stress, muscle recovery, and physical performance. The potential benefits of supplement use in athletes are thus questionable. Some studies indicate no benefits, while others highlight potential negative side effects of vitamin supplementation. These data indicate that RQ significantly reduces exercise-induced lipid peroxidation without associated changes in inflammation or plasma antioxidant status. Cr supplementation inhibited the increase of inflammation markers TNF-α and CRP, but not oxidative stress markers, due to acute exercise. Quercetin and vitamin C supplementation may not be beneficial in lipid profile improvement, although it may reduce induce muscle damage and body fat percent. Vitamins C and E supplementation had no significant effect on any of the studied parameters. Effects of the two treatments relative to placebo on mean performance in the incremental test and time trial were unclear, but runners faster by 1 SD of peak speed demonstrated a possible improvement on peak running speed with BC juice (1.9%; ±2.5%). Training status correlates more strongly with antioxidant status than diet does Consequently, we can conclude that Biostimine supplementation reduces the postexercise level of TBARS by increasing the antioxidant activity of plasma but has no effect on inflammatory markers. Supplementation with Asx could prevent exercise induced free radical production and depletion of non-enzymatic antioxidant defense in young soccer players. In conclusion these results indicated that treatment with melatonin in acute sports exercise reversed oxidative stress, improved defenses and lipid metabolism, which would result in an improvement in fitness. CoQ(10) supplementation before strenuous exercise decreases the oxidative stress and modulates the inflammatory signaling, reducing the subsequent muscle damage. Heretofore, Cr's positive therapeutic benefits in various oxidative stress-associated diseases have been reported in the literature and, recently, Cr has also been shown to exert direct antioxidant effects. These results indicate that Cr supplementation reduced oxidative DNA damage and lipid peroxidation induced by a single bout of RE. Nevertheless, based upon the growing evidence that many athletic populations are vitamin D deficient or insufficient, it is recommended that athletes monitor their serum vitamin D concentration and consult with their health care professional and/or nutritionist to determine if they would derive health benefits from vitamin D supplementation. This study examined the effects of 1,000 mg quercetin + 1,000 mg vitamin C (QC); 1,000 mg quercetin, 1,000 mg vitamin C, 400 mg isoquercetin, 30 mg epigallocatechin gallate, and 400 mg n-3 fatty acids (QFO) The two strains, L. rhamnosus IMC 501(®) and L. paracasei IMC 502(®), exert strong antioxidant activity. Athletes and all those exposed to oxidative stress may benefit from the ability of these probiotics to increase antioxidant levels and neutralize the effects of reactive oxygen species. These results indicate that supplementation with n-3 LCPUFAs significantly increased oxidative stress at rest and after a judo-training session. 
Rhodiola Rosea, is an adaptogen plant which has been reported to promote fatty acids utilisation, to ameliorate antioxidant function, and to improve body resistance to physical strenuous efforts. Supplementation with antioxidant vitamin E to athletes at 267 mg (400 IUs) or greater has been reported to reduce levels of LP associated with exercise. Results obtained at the end of the study indicate that zinc supplementation prevents production of free radicals by activating the antioxidant system. The significant increase in the UI of erythrocyte membranes indicates the potential for harm, because a high intake of PUFA might increase susceptibility to lipid peroxidation not counterbalanced by a higher increase in TAA. Adherence to the Mediterranean diet seems to be the better choice. A number of clinical trials have successfully been performed using whey and/or soy proteins in the treatment of many diseases. They both have antioxidant properties, which appears to be a factor in aerobic physical performance as well. The antioxidant effect of the two proteins is based on different mechanisms of action. Question: Which are the supplemental antioxidant in athletes?
["Resveratrol", "Quercetin", "Vitamin C", "Biostimine", "Astaxanthin", "Melatonin", "coenzyme Q(10)", "Creatine", "Isoquercetin", "Epigallocatechin gallate", "n-3 fatty acids", "L. rhamnosus IMC 501", "L. paracasei IMC 502", "docosahexanoic acid (DHA)", "eicosapentanoic acid (EPA)", "Rhodiola Rosea", "Vitamin E", "Zinc", "whey and/or soy proteins"]
science.bioasq_list_qa.40
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Although these supplementations are increasingly used by master athletes, very few data are available on their effects on oxidative stress, muscle recovery, and physical performance. The potential benefits of supplement use in athletes are thus questionable. Some studies indicate no benefits, while others highlight potential negative side effects of vitamin supplementation.\nThese data indicate that RQ significantly reduces exercise-induced lipid peroxidation without associated changes in inflammation or plasma antioxidant status.\nCr supplementation inhibited the increase of inflammation markers TNF-α and CRP, but not oxidative stress markers, due to acute exercise.\nQuercetin and vitamin C supplementation may not be beneficial in lipid profile improvement, although it may reduce induce muscle damage and body fat percent.\nVitamins C and E supplementation had no significant effect on any of the studied parameters.\nEffects of the two treatments relative to placebo on mean performance in the incremental test and time trial were unclear, but runners faster by 1 SD of peak speed demonstrated a possible improvement on peak running speed with BC juice (1.9%; ±2.5%).\nTraining status correlates more strongly with antioxidant status than diet does\nConsequently, we can conclude that Biostimine supplementation reduces the postexercise level of TBARS by increasing the antioxidant activity of plasma but has no effect on inflammatory markers.\nSupplementation with Asx could prevent exercise induced free radical production and depletion of non-enzymatic antioxidant defense in young soccer players.\nIn conclusion these results indicated that treatment with melatonin in acute sports exercise reversed oxidative stress, improved defenses and lipid metabolism, which would result in an improvement in fitness.\nCoQ(10) supplementation before strenuous exercise decreases the oxidative stress and modulates the inflammatory signaling, reducing the subsequent muscle damage.\nHeretofore, Cr's positive therapeutic benefits in various oxidative stress-associated diseases have been reported in the literature and, recently, Cr has also been shown to exert direct antioxidant effects.\nThese results indicate that Cr supplementation reduced oxidative DNA damage and lipid peroxidation induced by a single bout of RE.\nNevertheless, based upon the growing evidence that many athletic populations are vitamin D deficient or insufficient, it is recommended that athletes monitor their serum vitamin D concentration and consult with their health care professional and/or nutritionist to determine if they would derive health benefits from vitamin D supplementation.\nThis study examined the effects of 1,000 mg quercetin + 1,000 mg vitamin C (QC); 1,000 mg quercetin, 1,000 mg vitamin C, 400 mg isoquercetin, 30 mg epigallocatechin gallate, and 400 mg n-3 fatty acids (QFO)\nThe two strains, L. rhamnosus IMC 501(®) and L. paracasei IMC 502(®), exert strong antioxidant activity. 
Athletes and all those exposed to oxidative stress may benefit from the ability of these probiotics to increase antioxidant levels and neutralize the effects of reactive oxygen species.\nThese results indicate that supplementation with n-3 LCPUFAs significantly increased oxidative stress at rest and after a judo-training session.\nRhodiola Rosea, is an adaptogen plant which has been reported to promote fatty acids utilisation, to ameliorate antioxidant function, and to improve body resistance to physical strenuous efforts.\nSupplementation with antioxidant vitamin E to athletes at 267 mg (400 IUs) or greater has been reported to reduce levels of LP associated with exercise.\nResults obtained at the end of the study indicate that zinc supplementation prevents production of free radicals by activating the antioxidant system.\nThe significant increase in the UI of erythrocyte membranes indicates the potential for harm, because a high intake of PUFA might increase susceptibility to lipid peroxidation not counterbalanced by a higher increase in TAA. Adherence to the Mediterranean diet seems to be the better choice.\nA number of clinical trials have successfully been performed using whey and/or soy proteins in the treatment of many diseases. They both have antioxidant properties, which appears to be a factor in aerobic physical performance as well.\nThe antioxidant effect of the two proteins is based on different mechanisms of action.\n\nQuestion: Which are the supplemental antioxidant in athletes?", "role": "user" }, { "content": "[\"Resveratrol\", \"Quercetin\", \"Vitamin C\", \"Biostimine\", \"Astaxanthin\", \"Melatonin\", \"coenzyme Q(10)\", \"Creatine\", \"Isoquercetin\", \"Epigallocatechin gallate\", \"n-3 fatty acids\", \"L. rhamnosus IMC 501\", \"L. paracasei IMC 502\", \"docosahexanoic acid (DHA)\", \"eicosapentanoic acid (EPA)\", \"Rhodiola Rosea\", \"Vitamin E\", \"Zinc\", \"whey and/or soy proteins\"]", "role": "assistant" } ]
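The list-QA row above returns its answer as a JSON array of short strings, some of which are surface variants of phrases in the context. Purely as an illustration — the helper and its normalization rules are assumptions, not part of the dataset or of BioASQ scoring — here is a sketch that lowercases and de-duplicates such a list before comparing it against a gold list.

```python
import json

def normalize(items: list[str]) -> set[str]:
    """Lowercase, trim, and collapse internal whitespace so that surface
    variants such as 'Vitamin C' and 'vitamin  c' compare equal."""
    return {" ".join(item.lower().split()) for item in items}

gold = json.loads('["Quercetin", "Vitamin C", "Melatonin", "Zinc"]')   # abbreviated
pred = json.loads('["vitamin C", "quercetin", "quercetin", "Creatine"]')

gold_set, pred_set = normalize(gold), normalize(pred)
overlap = gold_set & pred_set
print(f"precision={len(overlap) / len(pred_set):.2f} "
      f"recall={len(overlap) / len(gold_set):.2f}")
```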
science.discomat_te
| Sample no. | N 4 | IR absorption peaks (cm-1) | Crystal phases | | 4CR | 0.362+-0.008 | 451,472, 510, 543 * , 581, 655, 680, 704, 727, 772, 798, 844 * , 863(s), 893, 914, 944, 1039, 1070, 1090, 1126, 1157, 1248, 1289, 1317 | Bi3B5O12 | | 4CR | 0.362+-0.008 | 1196(s) | Bi2B8O15 | | 5CR | 0.399 +- 0.008 | 472, 494(s), 510, 541 * , 551(s), 580, 624(s), 656, 642(s), 681, 704, 728, 772, 799, 843 * , 863(s), 892(s), 912, 944, 1007(s), 1039, 1069, 1091, 1126, 1157, 1248, 1288, 1317 | Bi3B5O12 | | 8CR | 0.376 +- 0.007 | 473, 510, 540, 580, 656, 681, 704, 727, 771, 800, 844 ** , 863(s), 892, 910, 944, 1006(s), 1039, 1068, 1068, 1088, 1126, 1157, 1248, 1289, 1316 | Bi3B5O12 | | 8CR | 0.376 +- 0.007 | 489, 530, 607, 627(s) | Bi4B2O9 | | 8CR | 0.376 +- 0.007 | 1195(s) | Bi2B8O15 | | 8CR | 0.376 +- 0.007 | 1210(s) | BiB3O6 | | 14CR | 0.012 | 488, 528, 580, 607, 627(s), 683, 717, 909, 1100(s), 1147, 1171, 1206, 1256, 1310 | Bi4B2O9 | Caption: N 4 values and IR absorption peak positions in crystalline phases produced by devitrification of 4 bismuth borate glasses Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "0.362", "type": "Other"} {"value": "0.362", "type": "Other"} {"value": "1196", "type": "Other"} {"value": "0.399", "type": "Other"} {"value": "0.376", "type": "Other"} {"value": "0.376", "type": "Other"} {"value": "0.376", "type": "Other"} {"value": "1195", "type": "Other"} {"value": "0.376", "type": "Other"} {"value": "1210", "type": "Other"} {"value": "0.012", "type": "Other"}
science.discomat_te.45
[ { "content": "| Sample no. | N 4 | IR absorption peaks (cm-1) | Crystal phases |\n| 4CR | 0.362+-0.008 | 451,472, 510, 543 * , 581, 655, 680, 704, 727, 772, 798, 844 * , 863(s), 893, 914, 944, 1039, 1070, 1090, 1126, 1157, 1248, 1289, 1317 | Bi3B5O12 |\n| 4CR | 0.362+-0.008 | 1196(s) | Bi2B8O15 |\n| 5CR | 0.399 +- 0.008 | 472, 494(s), 510, 541 * , 551(s), 580, 624(s), 656, 642(s), 681, 704, 728, 772, 799, 843 * , 863(s), 892(s), 912, 944, 1007(s), 1039, 1069, 1091, 1126, 1157, 1248, 1288, 1317 | Bi3B5O12 |\n| 8CR | 0.376 +- 0.007 | 473, 510, 540, 580, 656, 681, 704, 727, 771, 800, 844 ** , 863(s), 892, 910, 944, 1006(s), 1039, 1068, 1068, 1088, 1126, 1157, 1248, 1289, 1316 | Bi3B5O12 |\n| 8CR | 0.376 +- 0.007 | 489, 530, 607, 627(s) | Bi4B2O9 |\n| 8CR | 0.376 +- 0.007 | 1195(s) | Bi2B8O15 |\n| 8CR | 0.376 +- 0.007 | 1210(s) | BiB3O6 |\n| 14CR | 0.012 | 488, 528, 580, 607, 627(s), 683, 717, 909, 1100(s), 1147, 1171, 1206, 1256, 1310 | Bi4B2O9 |\n\nCaption: N 4 values and IR absorption peak positions in crystalline phases produced by devitrification of 4 bismuth borate glasses\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"0.362\", \"type\": \"Other\"}\n{\"value\": \"0.362\", \"type\": \"Other\"}\n{\"value\": \"1196\", \"type\": \"Other\"}\n{\"value\": \"0.399\", \"type\": \"Other\"}\n{\"value\": \"0.376\", \"type\": \"Other\"}\n{\"value\": \"0.376\", \"type\": \"Other\"}\n{\"value\": \"0.376\", \"type\": \"Other\"}\n{\"value\": \"1195\", \"type\": \"Other\"}\n{\"value\": \"0.376\", \"type\": \"Other\"}\n{\"value\": \"1210\", \"type\": \"Other\"}\n{\"value\": \"0.012\", \"type\": \"Other\"}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Collapse by Conditioning: Training Class-conditional GANs with Limited Data Main Content: 1 INTRODUCTION . Since the introduction of generative adversarial networks ( GANs ) by Goodfellow et al . ( 2014 ) , there has been a substantial progress in realistic image and video generations . The contents of such generations are often controlled by conditioning the process by means of conditional GANs ( cGANS ) ( Mirza & Osindero , 2014 ) . In practice , cGANs are of high interest , as they can generate and control a wide variety of outputs using a single model . Some example applications of cGANs include ; category-conditioned generation ( Brock et al. , 2018 ) , image manipulation ( Yu et al. , 2018 ) , image-to-image translation ( Zhu et al. , 2017 ) , and text-to-image generation ( Xu et al. , 2018 ) . Despite the remarkable success , training cGANs requires large training data , including conditioning labels , for realistic generation and stable training behavior ( Tseng et al. , 2021 ) . Collecting large enough data is challenging in many frequent scenarios , due to the privacy , the quality and the diversity required , among other reasons . This difficulty is often worsened further for datasets for conditional training , where also labels need to be collected . The case of fine-grained conditioning adds an additional challenge for data collection , since the availability of the data samples and their variability are expected to deteriorate with the increasingly fine-grained details ( Gupta et al. , 2019 ) . While training GANs with limited data has recently received some attention ( Karras et al. , 2020a ; Wang et al. , 2018 ; Tseng et al. , 2021 ) , the influence of conditioning in this setting remains unexplored . Compared to the unconditional case , the conditional information provides additional supervision and input to the generator . Intuitively , this additional information can guide the generation process better and ensure the success of cGANs whenever its unconditional counterpart succeeds . In fact , one may even argue that the additional supervision by conditioning can even alleviate the problem of limited data , to an extent . Surprisingly however , we observe an opposite effect ( to our preliminary intuition ) in our experiments for class-conditional GANs . As visualized in Fig . 1 , the conditional GAN trained on limited data suffers from severe mode collapse . Its unconditional counterpart , on the other hand , trained on the same data , is able to generate diverse images of high fidelity with stable training process . To our knowledge , these counter-intuitive observations of cGANs have not been observed or reported in previous works . In this paper , we first study the behaviour of a state-of-the-art class-conditional GAN with varying number of classes and image samples per class , and contrast it to the unconditional case . Our study in the limited data regime reveals that the unconditional GANs compare favorable with conditional ones , in terms of generated image quality . We , however , are interested in the conditional case , so as to be able to control the image generation process using a single generative model . In this work , we therefore set out to mitigate the aforementioned mode collapse problem . 
Motivated by our experimental observations , we propose a method for training cGANs that leverages the stable training of the unconditional GANs . During the training process , we integrate a gradual transition from unconditional to conditional generative learning . The early stage of the proposed training method favours the unconditional objective for the sake of stability , whereas the later stage favours the conditional objective for the desired control over the output by conditioning . Our transitional training procedure only requires minimal changes in existing state-of-the-art generator and discriminator architectures . We demonstrate the advantage of the proposed method over the existing ones , by evaluating our method on four benchmark datasets under the limited data setups . The major contributions of this study are summarized as follows : • We identify and characterize the problem of conditioning-induced mode collapse when training class-conditional GANs under limited data setups . • We propose a training method for cGANs that exploits the training stability of unconditional GANs to mitigate the observed conditioning collapse . • The effectiveness of the proposed method is demonstrated on four benchmark datasets . The method is shown to significantly outperform the state-of-the art and the compared baselines . 2 CLASS-CONDITIONING MODE COLLAPSE . Learning conditional image generation networks is becoming an increasingly important task . The ability to control the generator is a fundamental feature in many applications . However , even in the context of unconditional GANs , previous studies suggest that class information as extra supervision can be used to improve the generated image quality ( Salimans et al. , 2016 ; Zhou et al. , 2018 ; Kavalerov et al. , 2020 ) . This , in turn , may set an expectation that , the extra supervision by conditioning must not lead to the mode collapse of cGANs in setups where the unconditional GANs succeed . Furthermore , one may also expect to resolve the issue of training cGANs on limited data , to an extent , due to the availability of the additional conditional labels . As the first part of our study , we investigate the effect of conditioning on GANs under the limited data setup . We base our experiments on StyleGAN2 with adaptive data augmentation ( ADA ) , which is a recent state-of-the-art method for unconditional and class conditional image generation under limited-data setup ( Karras et al. , 2020a ) . Both unconditional and conditional versions of StyleGAN2 are trained on four benchmark datasets ( more details in Section 4.1 ) , with the setup of this paper . The selected datasets are somewhat of fine-grained in nature , where the problem of limited data , concerning their availability and labelling difficulty , is often expected to be encountered . In Fig . 1 , we analyze the training of conditional and unconditional GANs by plotting the Fréchet inception distance ( FID ) ( Heusel et al. , 2017 ) during training . The analysis is performed on four different datasets , each containing between 1170 and 2410 images for training . Contrary to our initial expectation , the conditional version obtains consistently worse FID during training , compared to the unconditional training . To analyze its cause , in Fig . 1 we also visualize samples from the best model attained during training in terms of FID . 
For each dataset , the unconditional model learns to generate diverse and realistic images , while lacking the ability to perform class-conditional generation . On the other hand , the conditional model suffers from severe mode collapse . Specifically , the intra-class variations are very small and mainly limited to color changes , while retaining the same structure and pose . Moreover , the images lack realism and contain pronounced artifacts . Next , we further characterize the mode collapse problem observed in Fig . 1 by analyzing its dependence on the size of the training dataset . To this end , we employ the ImageNet Carnivores dataset ( Liu et al. , 2019 ) , which includes a larger number of classes and images . We perform the analysis by gradually reducing the size of the training set in two ways . In Fig . 2a , we reduce the number of classes while keeping a number of 100 training images per class . In Fig . 2b , we reduce the number of images per class while using a 50 classes in all cases . In both cases , the conditional GAN achieves a better FID for larger datasets , here above 5k images . This observation is in line with previous work ( Salimans et al. , 2016 ; Zhou et al. , 2018 ; Kavalerov et al. , 2020 ) . However , when reducing the dataset size , the order is inverted . Instead , the unconditional model achieves consistently better FID , while the conditional model degrades rapidly . Inspired by these observations , we set out to design a training strategy for conditional GANs that eliminates the mode collapse induced by the conditioning itself . Our proposed approach , presented next , achieves stable training that attains low FID , as visualized in Fig . 1 . The images generated from our model exhibit natural intra-class variations with substantial changes in pose , appearance , and color . Furthermore , our training strategy outperforms both the conditional and unconditional models in terms of FID in a wide range of dataset sizes , as shown in Fig . 2 . 3 METHOD . 3.1 FROM UNCONDITIONAL TO CONDITIONAL GANS . As the analysis in Section 2 reveals , conditional GAN training leads to mode collapse when training data is limited , while the unconditional counterpart achieves good performance for the same number of training samples . This discovery motivates us to design an approach capable of leveraging the advantages of unconditional training in order to prevent mode-collapse in conditional GANs . Our initial inspections of the generated images during conditional training indicated that the mode collapse appears from the very early stages of the training . On the contrary , the corresponding unconditional GAN learns to generate diverse images with gradually improved photo-realism during training . In order to avoid mode-collapse , we aim to exploit unconditional learning in the beginning of the training process . Unconditional training in early stages allows the GAN model to learn the distribution of the real images without the complications caused by conditioning . We then introduce the conditioning in the later stages of the training . This allows the network to adapt to conditional generation in a more stable way , by exploiting the partially learned data manifold . In general , we control the transition between unconditional and conditional generative learning during the training process with a function λt ≥ 0 . The subscript t denotes the iteration number during training . Specifically , a zero value λt = 0 implies a purely unconditional learning , i.e . 
the conditional information does not affect the generator or discriminator networks . Our goal is to design a training strategy capable of gradually incorporating conditioning during training by increasing the value of λt . While any monotonically increasing function λt may be chosen , we only consider a simple linear transition from 0 to 1 as , λt = min ( max ( t− Ts Te − Ts , 0 ) , 1 ) . ( 1 ) Here , Ts and Te denote the start and end time steps of the transition , respectively . We achieve the desired transition during training by introducing a method for controlling the behavior of the generator and discriminator using the function λt . An overview of our approach , detailed in next sections , is illustrated in Fig . 3 . 3.2 TRANSITION IN THE GENERATOR . The generator G ( z ) of a GAN is trained to map the latent vector z , drawn from a prior distribution p ( z ) , to a generated image Ig = G ( z ) belonging to the distribution of the training data . A conditional generator G ( z , c ) additionally receives the conditioning variable c as input , aiming to learn the image distribution of the training data conditioned on c. The transition from an unconditional G ( z ) to a conditional G ( z , c ) generator seemingly requires a discrete architectural change during the training process . We circumvent this by additionally conditioning our generator on the transition function λt as G ( z , c , λt ) . More specifically , we gradually incorporate conditional information during training by using a generator of the form , G ( z , c , λt ) = G ( S ( z ) + λt · E ( c ) ) . ( 2 ) Here , S and E are neural network modules that transform the latent and conditional vector respectively . In case of λt = 0 , the conditional information is masked out , leading to a purely unconditional generator G ( S ( z ) ) . During the transition step Ts < t < Te , the importance of the conditional information is gradually increased during training . To achieve the generator in equation 2 , we perform a minimal modification to the class-conditional StyleGAN2 ( Karras et al. , 2020b ) baseline . The generator consist of a style-mapping network and an image synthesis network . The style-mapping network maps the input latent vector z and the embedding of the condition c to the intermediate representation w , known as style code . The image synthesis network , then generates images from the style codes . As illustrated in Fig . 3 , we modify the conditioning in the generator by feeding the class embeddings to a fully-connected layer , which is then weighted by λt before adding to the output of the style-mapping network ’ s first layer .
In this paper, the authors work towards training conditional GANs with limited data. Based on the observation that conditional GAN training suffers worse mode collapse than unconditional training, the authors proposed a training strategy that gradually injects conditional information into unconditional training. In other words, a learning curriculum that gradually replaces unconditional GANs training (losses and structure) with conditional ones is proposed. Overall, the proposed learning curriculum is reasonable and achieved promising performance on data-efficient cGANs training.
science.mup_single_document_summarization.8292
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nCollapse by Conditioning: Training Class-conditional GANs with Limited Data\n\nMain Content:\n1 INTRODUCTION . Since the introduction of generative adversarial networks ( GANs ) by Goodfellow et al . ( 2014 ) , there has been a substantial progress in realistic image and video generations . The contents of such generations are often controlled by conditioning the process by means of conditional GANs ( cGANS ) ( Mirza & Osindero , 2014 ) . In practice , cGANs are of high interest , as they can generate and control a wide variety of outputs using a single model . Some example applications of cGANs include ; category-conditioned generation ( Brock et al. , 2018 ) , image manipulation ( Yu et al. , 2018 ) , image-to-image translation ( Zhu et al. , 2017 ) , and text-to-image generation ( Xu et al. , 2018 ) . Despite the remarkable success , training cGANs requires large training data , including conditioning labels , for realistic generation and stable training behavior ( Tseng et al. , 2021 ) . Collecting large enough data is challenging in many frequent scenarios , due to the privacy , the quality and the diversity required , among other reasons . This difficulty is often worsened further for datasets for conditional training , where also labels need to be collected . The case of fine-grained conditioning adds an additional challenge for data collection , since the availability of the data samples and their variability are expected to deteriorate with the increasingly fine-grained details ( Gupta et al. , 2019 ) . While training GANs with limited data has recently received some attention ( Karras et al. , 2020a ; Wang et al. , 2018 ; Tseng et al. , 2021 ) , the influence of conditioning in this setting remains unexplored . Compared to the unconditional case , the conditional information provides additional supervision and input to the generator . Intuitively , this additional information can guide the generation process better and ensure the success of cGANs whenever its unconditional counterpart succeeds . In fact , one may even argue that the additional supervision by conditioning can even alleviate the problem of limited data , to an extent . Surprisingly however , we observe an opposite effect ( to our preliminary intuition ) in our experiments for class-conditional GANs . As visualized in Fig . 1 , the conditional GAN trained on limited data suffers from severe mode collapse . Its unconditional counterpart , on the other hand , trained on the same data , is able to generate diverse images of high fidelity with stable training process . To our knowledge , these counter-intuitive observations of cGANs have not been observed or reported in previous works . In this paper , we first study the behaviour of a state-of-the-art class-conditional GAN with varying number of classes and image samples per class , and contrast it to the unconditional case . Our study in the limited data regime reveals that the unconditional GANs compare favorable with conditional ones , in terms of generated image quality . We , however , are interested in the conditional case , so as to be able to control the image generation process using a single generative model . In this work , we therefore set out to mitigate the aforementioned mode collapse problem . 
Motivated by our experimental observations , we propose a method for training cGANs that leverages the stable training of the unconditional GANs . During the training process , we integrate a gradual transition from unconditional to conditional generative learning . The early stage of the proposed training method favours the unconditional objective for the sake of stability , whereas the later stage favours the conditional objective for the desired control over the output by conditioning . Our transitional training procedure only requires minimal changes in existing state-of-the-art generator and discriminator architectures . We demonstrate the advantage of the proposed method over the existing ones , by evaluating our method on four benchmark datasets under the limited data setups . The major contributions of this study are summarized as follows : • We identify and characterize the problem of conditioning-induced mode collapse when training class-conditional GANs under limited data setups . • We propose a training method for cGANs that exploits the training stability of unconditional GANs to mitigate the observed conditioning collapse . • The effectiveness of the proposed method is demonstrated on four benchmark datasets . The method is shown to significantly outperform the state-of-the art and the compared baselines . 2 CLASS-CONDITIONING MODE COLLAPSE . Learning conditional image generation networks is becoming an increasingly important task . The ability to control the generator is a fundamental feature in many applications . However , even in the context of unconditional GANs , previous studies suggest that class information as extra supervision can be used to improve the generated image quality ( Salimans et al. , 2016 ; Zhou et al. , 2018 ; Kavalerov et al. , 2020 ) . This , in turn , may set an expectation that , the extra supervision by conditioning must not lead to the mode collapse of cGANs in setups where the unconditional GANs succeed . Furthermore , one may also expect to resolve the issue of training cGANs on limited data , to an extent , due to the availability of the additional conditional labels . As the first part of our study , we investigate the effect of conditioning on GANs under the limited data setup . We base our experiments on StyleGAN2 with adaptive data augmentation ( ADA ) , which is a recent state-of-the-art method for unconditional and class conditional image generation under limited-data setup ( Karras et al. , 2020a ) . Both unconditional and conditional versions of StyleGAN2 are trained on four benchmark datasets ( more details in Section 4.1 ) , with the setup of this paper . The selected datasets are somewhat of fine-grained in nature , where the problem of limited data , concerning their availability and labelling difficulty , is often expected to be encountered . In Fig . 1 , we analyze the training of conditional and unconditional GANs by plotting the Fréchet inception distance ( FID ) ( Heusel et al. , 2017 ) during training . The analysis is performed on four different datasets , each containing between 1170 and 2410 images for training . Contrary to our initial expectation , the conditional version obtains consistently worse FID during training , compared to the unconditional training . To analyze its cause , in Fig . 1 we also visualize samples from the best model attained during training in terms of FID . 
For each dataset , the unconditional model learns to generate diverse and realistic images , while lacking the ability to perform class-conditional generation . On the other hand , the conditional model suffers from severe mode collapse . Specifically , the intra-class variations are very small and mainly limited to color changes , while retaining the same structure and pose . Moreover , the images lack realism and contain pronounced artifacts . Next , we further characterize the mode collapse problem observed in Fig . 1 by analyzing its dependence on the size of the training dataset . To this end , we employ the ImageNet Carnivores dataset ( Liu et al. , 2019 ) , which includes a larger number of classes and images . We perform the analysis by gradually reducing the size of the training set in two ways . In Fig . 2a , we reduce the number of classes while keeping a number of 100 training images per class . In Fig . 2b , we reduce the number of images per class while using a 50 classes in all cases . In both cases , the conditional GAN achieves a better FID for larger datasets , here above 5k images . This observation is in line with previous work ( Salimans et al. , 2016 ; Zhou et al. , 2018 ; Kavalerov et al. , 2020 ) . However , when reducing the dataset size , the order is inverted . Instead , the unconditional model achieves consistently better FID , while the conditional model degrades rapidly . Inspired by these observations , we set out to design a training strategy for conditional GANs that eliminates the mode collapse induced by the conditioning itself . Our proposed approach , presented next , achieves stable training that attains low FID , as visualized in Fig . 1 . The images generated from our model exhibit natural intra-class variations with substantial changes in pose , appearance , and color . Furthermore , our training strategy outperforms both the conditional and unconditional models in terms of FID in a wide range of dataset sizes , as shown in Fig . 2 . 3 METHOD . 3.1 FROM UNCONDITIONAL TO CONDITIONAL GANS . As the analysis in Section 2 reveals , conditional GAN training leads to mode collapse when training data is limited , while the unconditional counterpart achieves good performance for the same number of training samples . This discovery motivates us to design an approach capable of leveraging the advantages of unconditional training in order to prevent mode-collapse in conditional GANs . Our initial inspections of the generated images during conditional training indicated that the mode collapse appears from the very early stages of the training . On the contrary , the corresponding unconditional GAN learns to generate diverse images with gradually improved photo-realism during training . In order to avoid mode-collapse , we aim to exploit unconditional learning in the beginning of the training process . Unconditional training in early stages allows the GAN model to learn the distribution of the real images without the complications caused by conditioning . We then introduce the conditioning in the later stages of the training . This allows the network to adapt to conditional generation in a more stable way , by exploiting the partially learned data manifold . In general , we control the transition between unconditional and conditional generative learning during the training process with a function λt ≥ 0 . The subscript t denotes the iteration number during training . Specifically , a zero value λt = 0 implies a purely unconditional learning , i.e . 
the conditional information does not affect the generator or discriminator networks . Our goal is to design a training strategy capable of gradually incorporating conditioning during training by increasing the value of λt . While any monotonically increasing function λt may be chosen , we only consider a simple linear transition from 0 to 1 as , λt = min ( max ( t− Ts Te − Ts , 0 ) , 1 ) . ( 1 ) Here , Ts and Te denote the start and end time steps of the transition , respectively . We achieve the desired transition during training by introducing a method for controlling the behavior of the generator and discriminator using the function λt . An overview of our approach , detailed in next sections , is illustrated in Fig . 3 . 3.2 TRANSITION IN THE GENERATOR . The generator G ( z ) of a GAN is trained to map the latent vector z , drawn from a prior distribution p ( z ) , to a generated image Ig = G ( z ) belonging to the distribution of the training data . A conditional generator G ( z , c ) additionally receives the conditioning variable c as input , aiming to learn the image distribution of the training data conditioned on c. The transition from an unconditional G ( z ) to a conditional G ( z , c ) generator seemingly requires a discrete architectural change during the training process . We circumvent this by additionally conditioning our generator on the transition function λt as G ( z , c , λt ) . More specifically , we gradually incorporate conditional information during training by using a generator of the form , G ( z , c , λt ) = G ( S ( z ) + λt · E ( c ) ) . ( 2 ) Here , S and E are neural network modules that transform the latent and conditional vector respectively . In case of λt = 0 , the conditional information is masked out , leading to a purely unconditional generator G ( S ( z ) ) . During the transition step Ts < t < Te , the importance of the conditional information is gradually increased during training . To achieve the generator in equation 2 , we perform a minimal modification to the class-conditional StyleGAN2 ( Karras et al. , 2020b ) baseline . The generator consist of a style-mapping network and an image synthesis network . The style-mapping network maps the input latent vector z and the embedding of the condition c to the intermediate representation w , known as style code . The image synthesis network , then generates images from the style codes . As illustrated in Fig . 3 , we modify the conditioning in the generator by feeding the class embeddings to a fully-connected layer , which is then weighted by λt before adding to the output of the style-mapping network ’ s first layer .", "role": "user" }, { "content": "In this paper, the authors work towards training conditional GANs with limited data. Based on the observation that conditional GAN training suffers worse mode collapse than unconditional training, the authors proposed a training strategy that gradually injects conditional information into unconditional training. In other words, a learning curriculum that gradually replaces unconditional GANs training (losses and structure) with conditional ones is proposed. Overall, the proposed learning curriculum is reasonable and achieved promising performance on data-efficient cGANs training.", "role": "assistant" } ]
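The paper excerpt quoted in this row defines its unconditional-to-conditional transition by Eq. (1), lambda_t = min(max((t - Ts) / (Te - Ts), 0), 1), and feeds conditioning into the generator as G(S(z) + lambda_t * E(c)) in Eq. (2). The sketch below reproduces that schedule and a toy version of the mixing step; the function names, the example Ts/Te values, and the use of plain Python lists in place of tensors are illustrative assumptions, not the authors' code.

```python
def transition_weight(t: int, t_start: int, t_end: int) -> float:
    """Linear ramp of Eq. (1) in the quoted excerpt: 0 before t_start
    (purely unconditional), 1 after t_end (fully conditional)."""
    return min(max((t - t_start) / (t_end - t_start), 0.0), 1.0)

def mix_style_input(s_z: list[float], e_c: list[float], lam: float) -> list[float]:
    """Element-wise S(z) + lambda_t * E(c) from Eq. (2), on lists standing in
    for the style-mapping and class-embedding outputs."""
    return [s + lam * e for s, e in zip(s_z, e_c)]

# Made-up schedule: ramp between training iterations 2000 and 8000.
T_START, T_END = 2000, 8000
for t in (0, 2000, 5000, 8000, 10000):
    lam = transition_weight(t, T_START, T_END)
    print(f"iteration {t:>5}: lambda_t = {lam:.2f}")

print(mix_style_input([0.5, -1.0], [1.0, 2.0], transition_weight(5000, T_START, T_END)))
```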
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: The tissue surrounding the granuloma in an immune cell induces an anti-inflammatory immune response. Research Abstract: Granulomas are the pathological hallmark of tuberculosis (TB). However, their function and mechanisms of formation remain poorly understood. To understand the role of granulomas in TB, we analyzed the proteomes of granulomas from subjects with tuberculosis in an unbiased manner. Using laser-capture microdissection, mass spectrometry and confocal microscopy, we generated detailed molecular maps of human granulomas. We found that the centers of granulomas have a pro-inflammatory environment that is characterized by the presence of antimicrobial peptides, reactive oxygen species and pro-inflammatory eicosanoids. Conversely, the tissue surrounding the caseum has a comparatively anti-inflammatory signature. These findings are consistent across a set of six human subjects and in rabbits. Although the balance between systemic pro- and anti-inflammatory signals is crucial to TB disease outcome, here we find that these signals are physically segregated within each granuloma. From the protein and lipid snapshots of human and rabbit lesions analyzed here, we hypothesize that the pathologic response to TB is shaped by the precise anatomical localization of these inflammatory pathways during the development of the granuloma.
{"verdict": "SUPPORT", "evidence": ["Conversely, the tissue surrounding the caseum has a comparatively anti-inflammatory signature."]}
science.scifact_entailment.828
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nThe tissue surrounding the granuloma in an immune cell induces an anti-inflammatory immune response.\n\nResearch Abstract:\nGranulomas are the pathological hallmark of tuberculosis (TB).\nHowever, their function and mechanisms of formation remain poorly understood.\nTo understand the role of granulomas in TB, we analyzed the proteomes of granulomas from subjects with tuberculosis in an unbiased manner.\nUsing laser-capture microdissection, mass spectrometry and confocal microscopy, we generated detailed molecular maps of human granulomas.\nWe found that the centers of granulomas have a pro-inflammatory environment that is characterized by the presence of antimicrobial peptides, reactive oxygen species and pro-inflammatory eicosanoids.\nConversely, the tissue surrounding the caseum has a comparatively anti-inflammatory signature.\nThese findings are consistent across a set of six human subjects and in rabbits.\nAlthough the balance between systemic pro- and anti-inflammatory signals is crucial to TB disease outcome, here we find that these signals are physically segregated within each granuloma.\nFrom the protein and lipid snapshots of human and rabbit lesions analyzed here, we hypothesize that the pathologic response to TB is shaped by the precise anatomical localization of these inflammatory pathways during the development of the granuloma.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Conversely, the tissue surrounding the caseum has a comparatively anti-inflammatory signature.\"]}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Deep Residual Learning in Spiking Neural Networks Main Content: Deep Spiking Neural Networks ( SNNs ) present optimization difficulties for gradient-based approaches due to discrete binary activation and complex spatialtemporal dynamics . Considering the huge success of ResNet in deep learning , it would be natural to train deep SNNs with residual learning . Previous Spiking ResNet mimics the standard residual block in ANNs and simply replaces ReLU activation layers with spiking neurons , which suffers the degradation problem and can hardly implement residual learning . In this paper , we propose the spikeelement-wise ( SEW ) ResNet to realize residual learning in deep SNNs . We prove that the SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems of Spiking ResNet . We evaluate our SEW ResNet on ImageNet , DVS Gesture , and CIFAR10-DVS datasets , and show that SEW ResNet outperforms the state-of-the-art directly trained SNNs in both accuracy and time-steps . Moreover , SEW ResNet can achieve higher performance by simply adding more layers , providing a simple method to train deep SNNs . To our best knowledge , this is the first time that directly training deep SNNs with more than 100 layers becomes possible . Our codes are available at https : //github.com/fangwei123456/Spike-Element-Wise-ResNet . 1 Introduction . Artificial Neural Networks ( ANNs ) have achieved great success in many tasks , including image classification [ 28 , 52 , 55 ] , object detection [ 9 , 34 , 44 ] , machine translation [ 2 ] , and gaming [ 37 , 51 ] . One of the critical factors for ANNs ’ success is deep learning [ 29 ] , which uses multi-layers to learn representations of data with multiple levels of abstraction . It has been proved that deeper networks have advantages over shallower networks in computation cost and generalization ability [ 3 ] . The function represented by a deep network can require an exponential number of hidden units by a shallow network with one hidden layer [ 38 ] . In addition , the depth of the network is closely related to the network ’ s performance in practical tasks [ 52 , 55 , 27 , 52 ] . Nevertheless , recent evidence [ 13 , 53 , 14 ] reveals that with the network depth increasing , the accuracy gets saturated and then degrades rapidly . To solve this degradation problem , residual learning is proposed [ 14 , 15 ] and the residual structure is widely exploited in “ very deep ” networks that achieve the leading performance [ 22 , 59 , 18 , 57 ] . Spiking Neural Networks ( SNNs ) are regarded as a potential competitor of ANNs for their high biological plausibility , event-driven property , and low power consumption [ 45 ] . Recently , deep learning methods are introduced into SNNs , and deep SNNs have achieved close performance as ANNs in some simple classification datasets [ 56 ] , but still worse than ANNs in complex tasks , e.g. , classifying the ImageNet dataset [ 47 ] . To obtain higher performance SNNs , it would be natural to ∗Corresponding author 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . explore deeper network structures like ResNet . 
Spiking ResNet [ 25 , 60 , 21 , 17 , 49 , 12 , 30 , 64 , 48 , 42 , 43 ] , as the spiking version of ResNet , is proposed by mimicking the residual block in ANNs and replacing ReLU activation layers with spiking neurons . Spiking ResNet converted from ANN achieves state-of-the-art accuracy on nearly all datasets , while the directly trained Spiking ResNet has not been validated to solve the degradation problem . In this paper , we show that Spiking ResNet is inapplicable to all neuron models to achieve identity mapping . Even if the identity mapping condition is met , Spiking ResNet suffers from the problems of vanishing/exploding gradient . Thus , we propose the Spike-Element-Wise ( SEW ) ResNet to realize residual learning in SNNs . We prove that the SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems at the same time . We evaluate Spiking ResNet and SEW ResNet on both the static ImageNet dataset and the neuromorphic DVS Gesture dataset [ 1 ] , CIFAR10-DVS dataset [ 32 ] . The experiment results are consistent with our analysis , indicating that the deeper Spiking ResNet suffers from the degradation problem — the deeper network has higher training loss than the shallower network , while SEW ResNet can achieve higher performance by simply increasing the network ’ s depth . Moreover , we show that SEW ResNet outperforms the state-of-the-art directly trained SNNs in both accuracy and time-steps . To the best of our knowledge , this is the first time to explore the directly-trained deep SNNs with more than 100 layers . 2 Related Work . 2.1 Learning Methods of Spiking Neural Networks . ANN to SNN conversion ( ANN2SNN ) [ 20 , 4 , 46 , 49 , 12 , 11 , 6 , 54 , 33 ] and backpropagation with surrogate gradient [ 40 ] are the two main methods to get deep SNNs . The ANN2SNN method firstly trains an ANN with ReLU activation , then converts the ANN to an SNN by replacing ReLU with spiking neurons and adding scaling operations like weight normalization and threshold balancing . Some recent conversion methods have achieved near loss-less accuracy with VGG-16 and ResNet [ 12 , 11 , 6 , 33 ] . However , the converted SNN needs a longer time to rival the original ANN in precision as the conversion is based on rate-coding [ 46 ] , which increases the SNN ’ s latency and restricts the practical application . The backpropagation methods can be classified into two categories [ 26 ] . The method in the first category computes the gradient by unfolding the network over the simulation timesteps [ 31 , 19 , 58 , 50 , 30 , 40 ] , which is similar to the idea of backpropagation through time ( BPTT ) . As the gradient with respect to the threshold-triggered firing is non-differentiable , the surrogate gradient is often used . The SNN trained by the surrogate method is not limited to rate-coding , and can also be applied on temporal tasks , e.g. , classifying neuromorphic datasets [ 58 , 8 , 16 ] . The second method computes the gradients of the timings of existing spikes with respect to the membrane potential at the spike timing [ 5 , 39 , 24 , 65 , 63 ] . 2.2 Spiking Residual Structure . Previous ANN2SNN methods noticed the distinction between plain feedforward ANNs and residual ANNs , and made specific normalization for conversion . Hu et al . [ 17 ] were the first to apply the residual structure in ANN2SNN with scaled shortcuts in SNN to match the activations of the original ANN . Sengupta et al . 
[ 49 ] proposed Spike-Norm to balance SNN ’ s threshold and verified their method by converting VGG and ResNet to SNNs . Existing backpropagation-based methods use nearly the same structure from ResNet . Lee et al . [ 30 ] evaluated their custom surrogate methods on shallow ResNets whose depths are no more than ResNet-11 . Zheng et al . [ 64 ] proposed the threshold-dependent batch normalization ( td-BN ) to replace naive batch normalization ( BN ) [ 23 ] and successfully trained Spiking ResNet-34 and Spiking ResNet-50 directly with surrogate gradient by adding td-BN in shortcuts . 3 Methods . 3.1 Spiking Neuron Model . The spiking neuron is the fundamental computing unit of SNNs . Similar to Fang et al . [ 8 ] , we use a unified model to describe the dynamics of all kinds of spiking neurons , which includes the following discrete-time equations : H [ t ] = f ( V [ t− 1 ] , X [ t ] ) , ( 1 ) S [ t ] = Θ ( H [ t ] − Vth ) , ( 2 ) V [ t ] = H [ t ] ( 1− S [ t ] ) + Vreset S [ t ] , ( 3 ) where X [ t ] is the input current at time-step t , H [ t ] and V [ t ] denote the membrane potential after neuronal dynamics and after the trigger of a spike at time-step t , respectively . Vth is the firing threshold , Θ ( x ) is the Heaviside step function and is defined by Θ ( x ) = 1 for x ≥ 0 and Θ ( x ) = 0 for x < 0 . S [ t ] is the output spike at time-step t , which equals 1 if there is a spike and 0 otherwise . Vreset denotes the reset potential . The function f ( · ) in Eq . ( 1 ) describes the neuronal dynamics and takes different forms for different spiking neuron models . For example , the function f ( · ) for the Integrate-and-Fire ( IF ) model and Leaky Integrate-and-Fire ( LIF ) model can be described by Eq . ( 4 ) and Eq . ( 5 ) , respectively . H [ t ] = V [ t− 1 ] +X [ t ] , ( 4 ) H [ t ] = V [ t− 1 ] + 1 τ ( X [ t ] − ( V [ t− 1 ] − Vreset ) ) , ( 5 ) where τ represents the membrane time constant . Eq . ( 2 ) and Eq . ( 3 ) describe the spike generation and resetting processes , which are the same for all kinds of spiking neuron models . In this paper , the surrogate gradient method is used to define Θ′ ( x ) , σ′ ( x ) during error back-propagation , with σ ( x ) denoting the surrogate function . 3.2 Drawbacks of Spiking ResNet . The residual block is the key component of ResNet . Fig . 1 ( a ) shows the basic block in ResNet [ 14 ] , whereX l , Y l are the input and output of the l-th block in ResNet , Conv is the convolutional layer , BN denotes batch normalization , and ReLU denotes the rectified linear unit activation layer . The basic block of Spiking ResNet used in [ 64 , 17 , 30 ] simply mimics the block in ANNs by replacing ReLU activation layers with spiking neurons ( SN ) , which is illustrated in Fig . 1 ( b ) . Here Sl [ t ] , Ol [ t ] are the input and output of the l-th block in Spiking ResNet at time-step t. Based on the above definition , we will analyze the drawbacks of Spiking ResNet below . Spiking ResNet is inapplicable to all neuron models to achieve identity mapping . One of the critical concepts in ResNet is identity mapping . He et al . [ 14 ] noted that if the added layers implement the identity mapping , a deeper model should have training error no greater than its shallower counterpart . However , it is unable to train the added layers to implement identity mapping in a feasible time , resulting in deeper models performing worse than shallower models ( the degradation problem ) . 
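To make the unified neuron model of Eqs. (1)-(5) above concrete, here is a minimal NumPy sketch that steps a single neuron over a binary input train. The threshold Vth = 1.0, reset Vreset = 0.0, membrane constant tau = 2.0 and the input values are assumptions chosen for illustration, not values taken from the paper.

    import numpy as np

    def simulate_neuron(x, v_th=1.0, v_reset=0.0, tau=None):
        # Steps one spiking neuron through Eqs. (1)-(3); tau=None gives the IF dynamics
        # of Eq. (4), a float tau gives the LIF dynamics of Eq. (5).
        v = v_reset                                  # membrane potential V[t-1], started at the reset value
        spikes = []
        for x_t in x:                                # x_t is the input current X[t]
            if tau is None:
                h = v + x_t                          # Eq. (4)
            else:
                h = v + (x_t - (v - v_reset)) / tau  # Eq. (5)
            s = 1.0 if h >= v_th else 0.0            # Eq. (2): Theta(H[t] - Vth)
            v = h * (1.0 - s) + v_reset * s          # Eq. (3): hard reset after a spike
            spikes.append(s)
        return np.array(spikes)

    x = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])     # assumed binary input train
    print(simulate_neuron(x))                        # IF neuron: relays every input spike since 0 < Vth <= 1
    print(simulate_neuron(x, tau=2.0))               # LIF neuron: never reaches Vth = 1 on this input

With these illustrative settings the IF neuron reproduces its input exactly, while the LIF neuron stays silent, which is the behaviour that the identity-mapping discussion of Section 3.2 builds on.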
To solve this problem , the residual learning is proposed by adding a shortcut connection ( shown in Fig . 1 ( a ) ) . If we use F l to denote the residual mapping , e.g. , a stack of two convolutional layers , of the l-th residual block in ResNet and Spiking ResNet , then the residual block in Fig.1 ( a ) and Fig.1 ( b ) can be formulated as Y l = ReLU ( F l ( X l ) +X l ) , ( 6 ) Ol [ t ] = SN ( F l ( Sl [ t ] ) + Sl [ t ] ) . ( 7 ) The residual block of Eq . ( 6 ) make it easy to implement identity mapping in ANNs . To see this , when F l ( X l ) ≡ 0 , Y l = ReLU ( Xl ) . In most cases , X l is the activation of the previous ReLU layer and X l ≥ 0 . Thus , Y l = ReLU ( X l ) = X l , which is identity mapping . Different from ResNet , the residual block in Spiking ResNet ( Eq . ( 7 ) ) restricts the models of spiking neuron to implement identity mapping . When F l ( Sl [ t ] ) ≡ 0 , Ol [ t ] = SN ( Sl [ t ] ) 6= Sl [ t ] . To transmit Sl [ t ] and make SN ( Sl [ t ] ) = Sl [ t ] , the last spiking neuron ( SN ) in the l-th residual block needs to fire a spike after receiving a spike , and keep silent after receiving no spike at time-step t. It works for IF neuron described by Eq . ( 4 ) . Specifically , we can set 0 < Vth ≤ 1 and V [ t− 1 ] = 0 to ensure that X [ t ] = 1 leads to H [ t ] ≥ Vth , and X [ t ] = 0 leads to H [ t ] < Vth . However , when considering some spiking neuron models with complex neuronal dynamics , it is hard to achieve SN ( Sl [ t ] ) = Sl [ t ] . For example , the LIF neuron used in [ 66 , 8 , 61 ] considers a learnable membrane time constant τ , the neuronal dynamics of which can be described with Eq . ( 5 ) . When X [ t ] = 1 and V [ t− 1 ] = 0 , H [ t ] = 1τ . It is difficult to find a firing threshold that ensures H [ t ] > Vth as τ is being changed in training by the optimizer . Spiking ResNet suffers from the problems of vanishing/exploding gradient . Consider a spiking ResNet with k sequential blocks to transmit Sl [ t ] , and the identity mapping condition is met , e.g. , the spiking neurons are the IF neurons with 0 < Vth ≤ 1 , then we have Sl [ t ] = Sl+1 [ t ] = ... = Sl+k−1 [ t ] = Ol+k−1 [ t ] . Denote the j-th element in Sl [ t ] and Ol [ t ] as Slj [ t ] and O l j [ t ] respectively , the gradient of the output of the ( l + k − 1 ) -th residual block with respect to the input of the l-th residual block can be calculated layer by layer : ∂Ol+k−1j [ t ] ∂Slj [ t ] = k−1∏ i=0 ∂Ol+ij [ t ] ∂Sl+ij [ t ] = k−1∏ i=0 Θ′ ( Sl+ij [ t ] − Vth ) → 0 , if 0 < Θ′ ( Slj [ t ] − Vth ) < 1 1 , if Θ′ ( Slj [ t ] − Vth ) = 1 +∞ , if Θ′ ( Slj [ t ] − Vth ) > 1 , ( 8 ) where Θ ( x ) is the Heaviside step function and Θ′ ( x ) is defined by the surrogate gradient . The second equality hold as Ol+ij [ t ] = SN ( S l+i j [ t ] ) . In view of the fact that S l j [ t ] can only take 0 or 1 , Θ′ ( Slj [ t ] − Vth ) = 1 is not satisfied for commonly used surrogate functions mentioned in [ 40 ] . Thus , the vanishing/exploding gradient problems are prone to happen in deeper Spiking ResNet . Based on the above analysis , we believe that the previous Spiking ResNet ignores the highly nonlinear caused by spiking neurons , and can hardly implement residual learning . Nonetheless , the basic block in Fig . 1 ( b ) is still decent for ANN2SNN with extra normalization [ 17 , 49 ] , as the SNN converted from ANN aims to use firing rates to match the origin ANN ’ s activations .
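The identity-mapping and vanishing-gradient arguments above can be checked numerically. The sketch below stacks k Spiking ResNet blocks of Eq. (7) with IF neurons and F^l = 0, verifies that spikes are transmitted unchanged, and accumulates the product of surrogate derivatives from Eq. (8). The sigmoid-style surrogate and its sharpness alpha are assumptions made for the example; the text only states that commonly used surrogates do not give Theta'(S - Vth) = 1.

    import numpy as np

    V_TH = 1.0  # IF threshold chosen in (0, 1] so that SN(S) = S when F == 0, as in Section 3.2

    def if_neuron(x):
        # One time-step of an IF neuron started from V[t-1] = 0: it spikes iff its input reaches V_TH.
        return (x >= V_TH).astype(float)

    def surrogate_grad(x, alpha=2.0):
        # Assumed sigmoid-style surrogate for Theta'(x); its peak value here is alpha / 4 = 0.5 < 1.
        s = 1.0 / (1.0 + np.exp(-alpha * x))
        return alpha * s * (1.0 - s)

    s = np.array([1.0, 0.0, 1.0, 0.0])      # spikes S^l[t] entering the l-th block
    k = 8                                   # number of stacked residual blocks
    out, grad = s.copy(), np.ones_like(s)
    for _ in range(k):
        residual = np.zeros_like(out)       # F^l(S) == 0, so every block should act as an identity
        pre = residual + out                # F^l(S^l[t]) + S^l[t]
        grad *= surrogate_grad(pre - V_TH)  # one factor Theta'(S - Vth) of the product in Eq. (8)
        out = if_neuron(pre)                # Eq. (7): O^l[t] = SN(F^l(S^l[t]) + S^l[t])

    print(np.array_equal(out, s))           # True: the spikes pass through all k blocks unchanged
    print(grad)                             # every entry shrinks toward 0: the vanishing-gradient case of Eq. (8)

A surrogate whose value at S - Vth exceeded 1 would instead make the product blow up, which is the exploding-gradient case of the same equation.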
This paper applies residual learning to deep spiking neural networks to achieve better classification accuracy. The authors propose a spike-element-wise (SEW) residual block to realize residual learning. The topic and idea are important for training robust deep spiking neural networks, and the residual block has proved useful in the deep learning field. The authors evaluate SEW ResNet on several datasets, such as ImageNet and DVS Gesture, and the experimental results show that the proposed method achieves better performance.
science.mup_single_document_summarization.15155
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nDeep Residual Learning in Spiking Neural Networks\n\nMain Content:\nDeep Spiking Neural Networks ( SNNs ) present optimization difficulties for gradient-based approaches due to discrete binary activation and complex spatialtemporal dynamics . Considering the huge success of ResNet in deep learning , it would be natural to train deep SNNs with residual learning . Previous Spiking ResNet mimics the standard residual block in ANNs and simply replaces ReLU activation layers with spiking neurons , which suffers the degradation problem and can hardly implement residual learning . In this paper , we propose the spikeelement-wise ( SEW ) ResNet to realize residual learning in deep SNNs . We prove that the SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems of Spiking ResNet . We evaluate our SEW ResNet on ImageNet , DVS Gesture , and CIFAR10-DVS datasets , and show that SEW ResNet outperforms the state-of-the-art directly trained SNNs in both accuracy and time-steps . Moreover , SEW ResNet can achieve higher performance by simply adding more layers , providing a simple method to train deep SNNs . To our best knowledge , this is the first time that directly training deep SNNs with more than 100 layers becomes possible . Our codes are available at https : //github.com/fangwei123456/Spike-Element-Wise-ResNet . 1 Introduction . Artificial Neural Networks ( ANNs ) have achieved great success in many tasks , including image classification [ 28 , 52 , 55 ] , object detection [ 9 , 34 , 44 ] , machine translation [ 2 ] , and gaming [ 37 , 51 ] . One of the critical factors for ANNs ’ success is deep learning [ 29 ] , which uses multi-layers to learn representations of data with multiple levels of abstraction . It has been proved that deeper networks have advantages over shallower networks in computation cost and generalization ability [ 3 ] . The function represented by a deep network can require an exponential number of hidden units by a shallow network with one hidden layer [ 38 ] . In addition , the depth of the network is closely related to the network ’ s performance in practical tasks [ 52 , 55 , 27 , 52 ] . Nevertheless , recent evidence [ 13 , 53 , 14 ] reveals that with the network depth increasing , the accuracy gets saturated and then degrades rapidly . To solve this degradation problem , residual learning is proposed [ 14 , 15 ] and the residual structure is widely exploited in “ very deep ” networks that achieve the leading performance [ 22 , 59 , 18 , 57 ] . Spiking Neural Networks ( SNNs ) are regarded as a potential competitor of ANNs for their high biological plausibility , event-driven property , and low power consumption [ 45 ] . Recently , deep learning methods are introduced into SNNs , and deep SNNs have achieved close performance as ANNs in some simple classification datasets [ 56 ] , but still worse than ANNs in complex tasks , e.g. , classifying the ImageNet dataset [ 47 ] . To obtain higher performance SNNs , it would be natural to ∗Corresponding author 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . explore deeper network structures like ResNet . 
Spiking ResNet [ 25 , 60 , 21 , 17 , 49 , 12 , 30 , 64 , 48 , 42 , 43 ] , as the spiking version of ResNet , is proposed by mimicking the residual block in ANNs and replacing ReLU activation layers with spiking neurons . Spiking ResNet converted from ANN achieves state-of-the-art accuracy on nearly all datasets , while the directly trained Spiking ResNet has not been validated to solve the degradation problem . In this paper , we show that Spiking ResNet is inapplicable to all neuron models to achieve identity mapping . Even if the identity mapping condition is met , Spiking ResNet suffers from the problems of vanishing/exploding gradient . Thus , we propose the Spike-Element-Wise ( SEW ) ResNet to realize residual learning in SNNs . We prove that the SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems at the same time . We evaluate Spiking ResNet and SEW ResNet on both the static ImageNet dataset and the neuromorphic DVS Gesture dataset [ 1 ] , CIFAR10-DVS dataset [ 32 ] . The experiment results are consistent with our analysis , indicating that the deeper Spiking ResNet suffers from the degradation problem — the deeper network has higher training loss than the shallower network , while SEW ResNet can achieve higher performance by simply increasing the network ’ s depth . Moreover , we show that SEW ResNet outperforms the state-of-the-art directly trained SNNs in both accuracy and time-steps . To the best of our knowledge , this is the first time to explore the directly-trained deep SNNs with more than 100 layers . 2 Related Work . 2.1 Learning Methods of Spiking Neural Networks . ANN to SNN conversion ( ANN2SNN ) [ 20 , 4 , 46 , 49 , 12 , 11 , 6 , 54 , 33 ] and backpropagation with surrogate gradient [ 40 ] are the two main methods to get deep SNNs . The ANN2SNN method firstly trains an ANN with ReLU activation , then converts the ANN to an SNN by replacing ReLU with spiking neurons and adding scaling operations like weight normalization and threshold balancing . Some recent conversion methods have achieved near loss-less accuracy with VGG-16 and ResNet [ 12 , 11 , 6 , 33 ] . However , the converted SNN needs a longer time to rival the original ANN in precision as the conversion is based on rate-coding [ 46 ] , which increases the SNN ’ s latency and restricts the practical application . The backpropagation methods can be classified into two categories [ 26 ] . The method in the first category computes the gradient by unfolding the network over the simulation timesteps [ 31 , 19 , 58 , 50 , 30 , 40 ] , which is similar to the idea of backpropagation through time ( BPTT ) . As the gradient with respect to the threshold-triggered firing is non-differentiable , the surrogate gradient is often used . The SNN trained by the surrogate method is not limited to rate-coding , and can also be applied on temporal tasks , e.g. , classifying neuromorphic datasets [ 58 , 8 , 16 ] . The second method computes the gradients of the timings of existing spikes with respect to the membrane potential at the spike timing [ 5 , 39 , 24 , 65 , 63 ] . 2.2 Spiking Residual Structure . Previous ANN2SNN methods noticed the distinction between plain feedforward ANNs and residual ANNs , and made specific normalization for conversion . Hu et al . [ 17 ] were the first to apply the residual structure in ANN2SNN with scaled shortcuts in SNN to match the activations of the original ANN . Sengupta et al . 
[ 49 ] proposed Spike-Norm to balance SNN ’ s threshold and verified their method by converting VGG and ResNet to SNNs . Existing backpropagation-based methods use nearly the same structure from ResNet . Lee et al . [ 30 ] evaluated their custom surrogate methods on shallow ResNets whose depths are no more than ResNet-11 . Zheng et al . [ 64 ] proposed the threshold-dependent batch normalization ( td-BN ) to replace naive batch normalization ( BN ) [ 23 ] and successfully trained Spiking ResNet-34 and Spiking ResNet-50 directly with surrogate gradient by adding td-BN in shortcuts . 3 Methods . 3.1 Spiking Neuron Model . The spiking neuron is the fundamental computing unit of SNNs . Similar to Fang et al . [ 8 ] , we use a unified model to describe the dynamics of all kinds of spiking neurons , which includes the following discrete-time equations : H [ t ] = f ( V [ t− 1 ] , X [ t ] ) , ( 1 ) S [ t ] = Θ ( H [ t ] − Vth ) , ( 2 ) V [ t ] = H [ t ] ( 1− S [ t ] ) + Vreset S [ t ] , ( 3 ) where X [ t ] is the input current at time-step t , H [ t ] and V [ t ] denote the membrane potential after neuronal dynamics and after the trigger of a spike at time-step t , respectively . Vth is the firing threshold , Θ ( x ) is the Heaviside step function and is defined by Θ ( x ) = 1 for x ≥ 0 and Θ ( x ) = 0 for x < 0 . S [ t ] is the output spike at time-step t , which equals 1 if there is a spike and 0 otherwise . Vreset denotes the reset potential . The function f ( · ) in Eq . ( 1 ) describes the neuronal dynamics and takes different forms for different spiking neuron models . For example , the function f ( · ) for the Integrate-and-Fire ( IF ) model and Leaky Integrate-and-Fire ( LIF ) model can be described by Eq . ( 4 ) and Eq . ( 5 ) , respectively . H [ t ] = V [ t− 1 ] +X [ t ] , ( 4 ) H [ t ] = V [ t− 1 ] + 1 τ ( X [ t ] − ( V [ t− 1 ] − Vreset ) ) , ( 5 ) where τ represents the membrane time constant . Eq . ( 2 ) and Eq . ( 3 ) describe the spike generation and resetting processes , which are the same for all kinds of spiking neuron models . In this paper , the surrogate gradient method is used to define Θ′ ( x ) , σ′ ( x ) during error back-propagation , with σ ( x ) denoting the surrogate function . 3.2 Drawbacks of Spiking ResNet . The residual block is the key component of ResNet . Fig . 1 ( a ) shows the basic block in ResNet [ 14 ] , whereX l , Y l are the input and output of the l-th block in ResNet , Conv is the convolutional layer , BN denotes batch normalization , and ReLU denotes the rectified linear unit activation layer . The basic block of Spiking ResNet used in [ 64 , 17 , 30 ] simply mimics the block in ANNs by replacing ReLU activation layers with spiking neurons ( SN ) , which is illustrated in Fig . 1 ( b ) . Here Sl [ t ] , Ol [ t ] are the input and output of the l-th block in Spiking ResNet at time-step t. Based on the above definition , we will analyze the drawbacks of Spiking ResNet below . Spiking ResNet is inapplicable to all neuron models to achieve identity mapping . One of the critical concepts in ResNet is identity mapping . He et al . [ 14 ] noted that if the added layers implement the identity mapping , a deeper model should have training error no greater than its shallower counterpart . However , it is unable to train the added layers to implement identity mapping in a feasible time , resulting in deeper models performing worse than shallower models ( the degradation problem ) . 
To solve this problem , the residual learning is proposed by adding a shortcut connection ( shown in Fig . 1 ( a ) ) . If we use F l to denote the residual mapping , e.g. , a stack of two convolutional layers , of the l-th residual block in ResNet and Spiking ResNet , then the residual block in Fig.1 ( a ) and Fig.1 ( b ) can be formulated as Y l = ReLU ( F l ( X l ) +X l ) , ( 6 ) Ol [ t ] = SN ( F l ( Sl [ t ] ) + Sl [ t ] ) . ( 7 ) The residual block of Eq . ( 6 ) make it easy to implement identity mapping in ANNs . To see this , when F l ( X l ) ≡ 0 , Y l = ReLU ( Xl ) . In most cases , X l is the activation of the previous ReLU layer and X l ≥ 0 . Thus , Y l = ReLU ( X l ) = X l , which is identity mapping . Different from ResNet , the residual block in Spiking ResNet ( Eq . ( 7 ) ) restricts the models of spiking neuron to implement identity mapping . When F l ( Sl [ t ] ) ≡ 0 , Ol [ t ] = SN ( Sl [ t ] ) 6= Sl [ t ] . To transmit Sl [ t ] and make SN ( Sl [ t ] ) = Sl [ t ] , the last spiking neuron ( SN ) in the l-th residual block needs to fire a spike after receiving a spike , and keep silent after receiving no spike at time-step t. It works for IF neuron described by Eq . ( 4 ) . Specifically , we can set 0 < Vth ≤ 1 and V [ t− 1 ] = 0 to ensure that X [ t ] = 1 leads to H [ t ] ≥ Vth , and X [ t ] = 0 leads to H [ t ] < Vth . However , when considering some spiking neuron models with complex neuronal dynamics , it is hard to achieve SN ( Sl [ t ] ) = Sl [ t ] . For example , the LIF neuron used in [ 66 , 8 , 61 ] considers a learnable membrane time constant τ , the neuronal dynamics of which can be described with Eq . ( 5 ) . When X [ t ] = 1 and V [ t− 1 ] = 0 , H [ t ] = 1τ . It is difficult to find a firing threshold that ensures H [ t ] > Vth as τ is being changed in training by the optimizer . Spiking ResNet suffers from the problems of vanishing/exploding gradient . Consider a spiking ResNet with k sequential blocks to transmit Sl [ t ] , and the identity mapping condition is met , e.g. , the spiking neurons are the IF neurons with 0 < Vth ≤ 1 , then we have Sl [ t ] = Sl+1 [ t ] = ... = Sl+k−1 [ t ] = Ol+k−1 [ t ] . Denote the j-th element in Sl [ t ] and Ol [ t ] as Slj [ t ] and O l j [ t ] respectively , the gradient of the output of the ( l + k − 1 ) -th residual block with respect to the input of the l-th residual block can be calculated layer by layer : ∂Ol+k−1j [ t ] ∂Slj [ t ] = k−1∏ i=0 ∂Ol+ij [ t ] ∂Sl+ij [ t ] = k−1∏ i=0 Θ′ ( Sl+ij [ t ] − Vth ) → 0 , if 0 < Θ′ ( Slj [ t ] − Vth ) < 1 1 , if Θ′ ( Slj [ t ] − Vth ) = 1 +∞ , if Θ′ ( Slj [ t ] − Vth ) > 1 , ( 8 ) where Θ ( x ) is the Heaviside step function and Θ′ ( x ) is defined by the surrogate gradient . The second equality hold as Ol+ij [ t ] = SN ( S l+i j [ t ] ) . In view of the fact that S l j [ t ] can only take 0 or 1 , Θ′ ( Slj [ t ] − Vth ) = 1 is not satisfied for commonly used surrogate functions mentioned in [ 40 ] . Thus , the vanishing/exploding gradient problems are prone to happen in deeper Spiking ResNet . Based on the above analysis , we believe that the previous Spiking ResNet ignores the highly nonlinear caused by spiking neurons , and can hardly implement residual learning . Nonetheless , the basic block in Fig . 
1 ( b ) is still decent for ANN2SNN with extra normalization [ 17 , 49 ] , as the SNN converted from ANN aims to use firing rates to match the origin ANN ’ s activations .", "role": "user" }, { "content": "This paper wants to apply the residual block into deep spiking neural networks to achieve better classification accuracy. They proposed a spike element-wise (SEW) to realize residual learning. This top and idea are very important in training a robust deep spiking neural network and the residual block is approved useful in the deep learning field. The authors evaluated the SEW on several datasets such as ImageNet and DVS gesture datasets, the experimental results show the proposed method can achieve better performance.", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Comparison of sequential cytomegalovirus isolates in a patient with lymphoma and failing antiviral therapy. BACKGROUND: Long-term anti-cytomegalovirus (CMV) treatments in immunocompromised patients are hampered by resistance to antiviral drugs. Longitudinal changes in the resistance genotype may depend on changes in selective pressure and the complexity of CMV isolates. OBJECTIVE: To evaluate longitudinal changes in the CMV resistance genotype and phenotype along with strain-specific variability in a patient with non-Hodgkin's lymphoma in whom successive anti-CMV treatments failed. STUDY DESIGN: The resistance phenotype and genotype of seven CMV isolates collected from one patient during a 2-year follow-up period were retrospectively analysed. In parallel, we used glycoprotein B (gB) genotyping, and a- and UL10-13-sequence analysis to study CMV interstrain variability. RESULTS: The patient was infected by at least three CMV strains plus variants of the parental strains. Resistance to ganciclovir, cidofovir and foscarnet was successively detected during the follow-up period. UL97 protein kinase changes responsible for resistance to ganciclovir were initially detected at residues 591 and 592, and then at position 594. Decreased sensitivity to foscarnet coincided with the appearance of amino acid substitution N495K in DNA polymerase, whereas cross-resistance to ganciclovir and cidofovir was due to the L501I substitution. CONCLUSIONS: The CMV isolates obtained from our patient were complex mixtures of strains. Changes in resistance genotypes depended on resistance selective pressure and were not linked to interstrain variation.
{"CellLine": [], "Chemical": ["cidofovir", "ganciclovir", "foscarnet"], "Disease": ["Long-term anti-cytomegalovirus", "lymphoma", "CMV", "non-Hodgkin\u0027s lymphoma"], "Gene": ["UL97 protein kinase", "gB", "glycoprotein B"], "Species": ["cytomegalovirus", "patient", "patients"], "Variant": ["L501I", "N495K"]}
science.biored_ner.49
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nComparison of sequential cytomegalovirus isolates in a patient with lymphoma and failing antiviral therapy. BACKGROUND: Long-term anti-cytomegalovirus (CMV) treatments in immunocompromised patients are hampered by resistance to antiviral drugs. Longitudinal changes in the resistance genotype may depend on changes in selective pressure and the complexity of CMV isolates. OBJECTIVE: To evaluate longitudinal changes in the CMV resistance genotype and phenotype along with strain-specific variability in a patient with non-Hodgkin's lymphoma in whom successive anti-CMV treatments failed. STUDY DESIGN: The resistance phenotype and genotype of seven CMV isolates collected from one patient during a 2-year follow-up period were retrospectively analysed. In parallel, we used glycoprotein B (gB) genotyping, and a- and UL10-13-sequence analysis to study CMV interstrain variability. RESULTS: The patient was infected by at least three CMV strains plus variants of the parental strains. Resistance to ganciclovir, cidofovir and foscarnet was successively detected during the follow-up period. UL97 protein kinase changes responsible for resistance to ganciclovir were initially detected at residues 591 and 592, and then at position 594. Decreased sensitivity to foscarnet coincided with the appearance of amino acid substitution N495K in DNA polymerase, whereas cross-resistance to ganciclovir and cidofovir was due to the L501I substitution. CONCLUSIONS: The CMV isolates obtained from our patient were complex mixtures of strains. Changes in resistance genotypes depended on resistance selective pressure and were not linked to interstrain variation.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"cidofovir\", \"ganciclovir\", \"foscarnet\"], \"Disease\": [\"Long-term anti-cytomegalovirus\", \"lymphoma\", \"CMV\", \"non-Hodgkin\\u0027s lymphoma\"], \"Gene\": [\"UL97 protein kinase\", \"gB\", \"glycoprotein B\"], \"Species\": [\"cytomegalovirus\", \"patient\", \"patients\"], \"Variant\": [\"L501I\", \"N495K\"]}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: Patient participation in medical consultations has been demonstrated to benefit their subsequent psychological well being. Question asking is one way in which patients can be active. We investigated 2 means of promoting cancer patient question asking. One was the provision of a question prompt sheet to patients prior to their initial consultation with their oncologist. The second was the active endorsement and systematic review of the question prompt sheet by their oncologist. 318 patients with heterogeneous cancers, seeing one of 5 medical and 4 radiation oncologists for the first time, were randomised to either receive or not receive a question prompt sheet. Doctors were randomised to either proactively address or passively respond to the question prompt sheet in the subsequent consultation. Anxiety was assessed prior to the consultation. Consultations were audiotaped and content analysed. Anxiety was assessed again immediately following the consultation. Within the next 10 days patients completed questionnaires assessing information needs, anxiety and satisfaction and were given a structured telephone interview assessing information recall. Patients provided with a question prompt sheet asked more questions about prognosis compared with controls and oncologists gave significantly more prognostic information to these patients. Provision of the question prompt sheet prolonged consultations and increased patient anxiety; however, when oncologists specifically addressed the prompt sheet, anxiety levels were significantly reduced, consultation duration was decreased and recall was significantly improved. A patient question prompt sheet, used proactively by the doctor, is a powerful addition to the oncology consultation. © 2001 Cancer Research Campaign Extracted Data:
[["Patients who received a question prompt sheet", "Patients who didn\u0027t receive a question prompt sheet", "Information recall", "significantly increased", "Provision of the question prompt sheet prolonged consultations and increased patient anxiety; however, when oncologists specifically addressed the prompt sheet, anxiety levels were significantly reduced, consultation duration was decreased and recall was significantly improved."], ["Patients who received a question prompt sheet", "Patients who didn\u0027t receive a question prompt sheet", "Anxiety", "significantly decreased", "Provision of the question prompt sheet prolonged consultations and increased patient anxiety; however, when oncologists specifically addressed the prompt sheet, anxiety levels were significantly reduced, consultation duration was decreased and recall was significantly improved."], ["Patients who received a question prompt sheet", "Patients who didn\u0027t receive a question prompt sheet", "Prognosis information given", "significantly increased", "Patients provided with a question prompt sheet asked more questions about prognosis compared with controls and oncologists gave significantly more prognostic information to these patients."], ["Patients who received a question prompt sheet", "Patients who didn\u0027t receive a question prompt sheet", "Prognosis enquiry", "significantly increased", "Patients provided with a question prompt sheet asked more questions about prognosis compared with controls and oncologists gave significantly more prognostic information to these patients."]]
science.evidence_inference.0
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Patient participation in medical consultations has been demonstrated to benefit their subsequent psychological well being. Question asking is one way in which patients can be active. We investigated 2 means of promoting cancer patient question asking. One was the provision of a question prompt sheet to patients prior to their initial consultation with their oncologist. The second was the active endorsement and systematic review of the question prompt sheet by their oncologist. 318 patients with heterogeneous cancers, seeing one of 5 medical and 4 radiation oncologists for the first time, were randomised to either receive or not receive a question prompt sheet. Doctors were randomised to either proactively address or passively respond to the question prompt sheet in the subsequent consultation. Anxiety was assessed prior to the consultation. Consultations were audiotaped and content analysed. Anxiety was assessed again immediately following the consultation. Within the next 10 days patients completed questionnaires assessing information needs, anxiety and satisfaction and were given a structured telephone interview assessing information recall. Patients provided with a question prompt sheet asked more questions about prognosis compared with controls and oncologists gave significantly more prognostic information to these patients. Provision of the question prompt sheet prolonged consultations and increased patient anxiety; however, when oncologists specifically addressed the prompt sheet, anxiety levels were significantly reduced, consultation duration was decreased and recall was significantly improved. A patient question prompt sheet, used proactively by the doctor, is a powerful addition to the oncology consultation. 
© 2001 Cancer Research Campaign\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Patients who received a question prompt sheet\", \"Patients who didn\\u0027t receive a question prompt sheet\", \"Information recall\", \"significantly increased\", \"Provision of the question prompt sheet prolonged consultations and increased patient anxiety; however, when oncologists specifically addressed the prompt sheet, anxiety levels were significantly reduced, consultation duration was decreased and recall was significantly improved.\"], [\"Patients who received a question prompt sheet\", \"Patients who didn\\u0027t receive a question prompt sheet\", \"Anxiety\", \"significantly decreased\", \"Provision of the question prompt sheet prolonged consultations and increased patient anxiety; however, when oncologists specifically addressed the prompt sheet, anxiety levels were significantly reduced, consultation duration was decreased and recall was significantly improved.\"], [\"Patients who received a question prompt sheet\", \"Patients who didn\\u0027t receive a question prompt sheet\", \"Prognosis information given\", \"significantly increased\", \"Patients provided with a question prompt sheet asked more questions about prognosis compared with controls and oncologists gave significantly more prognostic information to these patients.\"], [\"Patients who received a question prompt sheet\", \"Patients who didn\\u0027t receive a question prompt sheet\", \"Prognosis enquiry\", \"significantly increased\", \"Patients provided with a question prompt sheet asked more questions about prognosis compared with controls and oncologists gave significantly more prognostic information to these patients.\"]]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Accurate Point Cloud Registration with Robust Optimal Transport Main Content: 1 Introduction . Shape registration is a fundamental but difficult problem in computer vision . The task is to determine plausible spatial correspondences between pairs of shapes , with use cases that range from pose estimation for noisy point clouds [ 14 ] to the nonparametric registration of high-resolution medical images [ 17 ] . As illustrated in Fig . 1 , most existing approaches to this problem consist of a combination of three steps , possibly fused together by some deep learning ( DL ) methods : ( 1 ) feature extraction ; ( 2 ) feature matching ; and ( 3 ) regularization using a class of acceptable transformations that is specified through a parametric or nonparametric model . This work discusses how tools derived from ∗Equal Contribution . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . optimal transport ( OT ) theory [ 87 ] can improve the second step of this pipeline ( feature matching ) on challenging problems . To put these results in context , we first present an overview of related methods . 1 . Feature extraction . To establish spatial correspondences , one first computes descriptive local features . When dealing with ( possibly annotated ) point clouds , a simple choice is to rely on Cartesian coordinates ( x , y , z ) [ 3 , 26 ] . Going further , stronger descriptors capture local geometric and topological properties : examples include shape orientation and curvatures [ 21 , 96 ] , shape contexts [ 6 ] , spectral eigenvalues [ 84 , 70 ] and annotations such as color [ 76 ] or chemical fingerprints [ 43 , 113 ] . Recently , expressive feature representations have also been learned using deep neural networks ( DNN ) : see [ 16 ] and subsequent works on geometric deep learning . Generally , feature extractors are designed to make shape registration as unambiguous as possible . In order to get closer to the ideal case of landmark matching [ 11 ] , we associate discriminative features to the salient points of our shapes : this increases the robustness of the subsequent matching and regularization steps . 2 . Feature matching . Once computed on both of the source and target shapes , feature vectors are put in correspondence with each other . This assignment is often encoded as an explicit mapping between the two shapes ; alternatively , the vector field relating the shapes can be defined implicitly as the gradient of a geometric loss function that quantifies discrepancies between two distributions of features [ 35 ] : a ) A first major approach is to rely on nearest neighbor projections and the related chamfer [ 12 ] and Hausdorff distances [ 13 ] , as in the Iterative Closest Point ( ICP ) algorithm [ 7 ] . This method can be softened through the use of a softmax ( log-sum-exp ) operator as in the many variants of the Coherent Point Drift ( CPD ) method [ 81 , 73 , 72 , 44 ] , or made robust to outliers in the specific context of rigid and affine registrations [ 41 , 129 , 128 , 14 ] . b ) Alternatively , a second approach is to rely on convolutional kernel norms such as the Energy Distance [ 94 ] , which are also known as Maximum Mean Discrepancies ( MMD ) in statistics [ 48 ] . 
These loss functions are common in imaging science [ 88 ] and computational anatomy [ 115 , 21 ] but are prone to vanishing gradients [ 40 , 39 ] . c ) Finally , a third type of approach is to rely on optimal transport ( OT ) theory [ 87 ] and solutions of the earth mover ’ s problem [ 97 ] . This method is equivalent to a nearest neighbor projection under a global constraint of bijectivity that enforces consistency in the matching . On the one hand , OT has been known to provide reliable correspondences in computer vision for more than two decades [ 26 , 47 , 64 ] . On the other hand , it has often faced major issues of scalability and robustness to outliers on noisy data . As detailed below , the main purpose of this work is to overcome these limitations and enable the widespread use of OT tools for challenging registration problems . 3 . Regularization with a deformation model . The output of the two steps above is a non-smooth vector field that may not be suitable for downstream tasks due to e.g . tears and compression artifacts . As a third step , most registration methods thus rely on regularization to obtain plausible deformations . This process is task-specific , with applications that range from rigid registration [ 133 , 46 , 30 , 2 , 121 , 122 ] to free-form motion estimation [ 91 , 126 , 69 , 49 ] . In Sec . 3 , we address the interaction of OT matching layers with a varied collection of regularization strategies – from optimization-based spline and diffeomorphic models to DNNs . Recent progresses . Research works on shape registration combine ideas from the three paragraphs above to best fit the characteristics of computer vision problems [ 71 , 31 , 107 ] . Over the past few years , significant progress has been made on all fronts . On the one hand , ( geometric ) deep learning networks have been used to define data-driven feature maps [ 92 , 92 , 123 ] and multiscale regularization modules [ 126 , 68 , 108 ] , sometimes fused within end-to-end architectures [ 133 , 91 , 132 , 30 ] . On the other hand , nearest neighbor projections , kernel convolutions and transport-based matching strategies have all been generalized to take advantage of these modern descriptors : they can now be used in high-dimensional feature spaces [ 59 , 37 ] . Challenges . Nevertheless , state-of-the-art ( SOTA ) methods in the field still have important limitations . First , modern deep learning pipelines are often hard to train to “ pixel-perfect ” accuracy on nonsmooth shapes , with diminishing returns in terms of model size and training data [ 2 ] . Second , scaling up point neural networks to finely sampled shapes ( N > 10k points ) remains a challenging research topic [ 49 , 135 , 37 ] . Third , the impact of the choice of a specific feature matching method on the performance of deep learning models remains only partially understood [ 58 ] . Related works . Following major progress on computational OT in the mathematical literature [ 79 , 27 , 66 , 103 ] , improved modules for feature matching have attracted interest as a possible solution to these challenges . Works on sliced partial OT [ 9 ] and dustbin-OT [ 28 ] have shown that outliers can be handled effectively by OT methods for rigid registration , beyond the classic Robust Point Matching method ( RPM ) [ 47 , 26 ] . Going further , the Sinkhorn algorithm for entropy-regularized OT [ 27 , 64 , 104 , 65 ] has been studied extensively for shape registration in computational anatomy [ 36 , 45 ] and computer graphics [ 74 , 33 , 85 ] . 
The Gromov–Wasserstein distance has also been used for shape analysis [ 110 , 118 ] , albeit at a higher computational cost . These applications have driven interest in the development of a complete theory for Robust Optimal Transport ( RobOT ) , outlined in Sec . 2 , which handles both sampling artifacts and outliers [ 25 , 105 , 67 , 24 , 106 , 80 ] . Most recently , this framework has started to be used in shape analysis with applications to shape matching [ 36 ] , the segmentation of brain tractograms [ 38 ] and deep deformation estimation with the FLOT architecture [ 91 ] . Contributions . We build upon the work above to tackle challenging point cloud registration problems for scene flow estimation and computational anatomy . Our key contributions are : 1 . Accurate feature matching with scalable OT solvers . For the first time , we scale up RobOT for deep feature matching to high-resolution shapes with more than 10k points . To this end , we leverage the latest generation of OT solvers [ 39 , 35 ] and overcome significant issues of memory usage and numerical stability . This allows us to handle fine-grained details effectively , which is key for e.g . most medical applications . 2 . Interaction with task-specific regularization strategies . We show how to interface RobOT matchings with advanced deformation models . This is in contrast with e.g . the FLOT architecture , which focuses on the direct prediction of a vector field and can not be used for applications that require guarantees on the smoothness of the registration . 3 . Challenging new dataset . We release a large dataset of lung vascular trees that should be registered between inhalation and exhalation . This relevant medical problem involves large and complex deformations of high-resolution 3D point clouds . As a new benchmark for the community , we provide two strong baselines that rely respectively on global feature matching and on deep deformation estimation . 4 . Consistent SOTA performance . Our proposed models achieve SOTA results for scene flow on Kitti [ 78 , 77 ] and for point-cloud-based lung registration on DirLab-COPDGene [ 19 ] . Notably , we show that RobOT is highly suited to fine-tuning tasks : it consistently turns “ good ” matchings into nearly perfect registrations at an affordable numerical cost . Main experimental observations . Is OT relevant in the deep learning era ? To answer this question decisively , we perform extensive numerical experiments and ablation studies . We fully document “ underwhelming ” results in the Supplementary Material and distill the key lessons that we learned in the Deep-RobOT architecture ( Section 3.2 ) . This model relies on fast RobOT layers to cover for the main weaknesses of point neural networks for shape registration . It is remarkably easy to deploy and generalizes well from synthetic training data to real test samples . We thus believe that it will have a stimulating impact on both of the computer vision and medical imaging literature . 2 Robust optimal transport . This section introduces the mathematical foundations of our work . After a brief overview of Robust Optimal Transport ( RobOT ) theory , we discuss the main challenges that one encounters when using this framework for computer vision . To avoid memory overflows and numerical stability issues , we introduce the weighted “ RobOT matching ” : a vector field that summarizes the information of a full transport plan with a linear memory footprint . 
As detailed in the next sections , this representation lets us scale up to high-resolution shapes without compromising on accuracy . 2.1 Mathematical background . The assignment problem . If A = ( x1 , . . . , xN ) and B = ( y1 , . . . , yM ) are two point clouds in R3 with N = M , the assignment problem between A and B reads : Assignment ( A , B ) = min s : [ [ 1 , N ] ] → [ [ 1 , N ] ] 1 2N N∑ i=1 ‖xi − ys ( i ) ‖2R3 , where s is a permutation . ( 1 ) This problem generalizes sorting to R3 : if the points xi and yj all belong to a line , the optimal permutation s∗ corresponds to a non-decreasing re-ordering of the point sets A and B [ 87 ] . Robust optimal transport . Further , OT theory allows us to consider problems where N 6= M. Non-negative weights α1 , . . . , αN , β1 , . . . , βM > 0 are attached to the points xi , yj and account for variations of the sampling densities , while feature vectors p1 , . . . , pN and q1 , . . . , qM in RD may advantageously replace raw point coordinates xi and yj in R3 . Following [ 67 , 25 ] , the robust OT problem between the shapes A = ( αi , xi , pi ) and B = ( βj , yj , qj ) reads : OTσ , τ ( A , B ) = min ( πi , j ) ∈RN×M > 0 N∑ i=1 M∑ j=1 πi , j · 12‖pi − qj‖ 2 RD ( 2 ) + σ2 KL ( πi , j ||αi ⊗ βj ) ︸ ︷︷ ︸ Entropic blur at scale σ . + τ2 KL ( ∑ j πi , j ||αi ) ︸ ︷︷ ︸ π should match A. . . + τ2 KL ( ∑ i πi , j ||βj ) ︸ ︷︷ ︸ . . . onto B. , for any choice of the regularization parameters σ > 0 and τ > 0 . In the equation above , the Kullback-Leibler divergence KL ( ai||bi ) = ∑ ai log ( ai/bi ) − ai + bi is a relative entropy that penalizes deviations of a non-negative vector of weights ( ai ) to a reference measure ( bi ) . Parameters . The first regularization term is scaled by the square of a blur radius σ . This characteristic length quantifies the fuzziness of the probabilistic assignment ( πi , j ) between points xi and yj [ 35 ] . The last two regularization terms promote the matching of the full distribution of points A onto the target shape B : they generalize the constraints of injectivity and surjectivity of Eq . ( 1 ) to the probabilistic setting . They are scaled by the square of a maximum reach distance τ : this parameter acts as a soft upper bound on the distance between feature vectors pi and qj that should be matched with each other [ 38 , 105 ] . For shape registration , we use simple heuristics for the values of these two characteristic scales : the blur σ should be equal to the average sampling distance in feature space RD while the reach τ should be equal to the largest plausible displacement for any given feature vector pi . These rules are easy to follow if point features correspond to Cartesian coordinates xi and yj in R3 but may lead to unexpected behaviors if features are output by a DNN . In the latter case , we thus normalize our feature vectors so that ‖pi‖RD = ‖qj‖RD = 1 and pick values for σ and τ between 0 and 2 . Working with a soft , probabilistic transport plan . As detailed in [ 35 ] , scalable OT solvers for Eq . ( 2 ) return a pair of dual vectors ( fi ) ∈ RN and ( gj ) ∈ RM that encode implicitly an optimal transport plan ( πi , j ) ∈ RN×M with coefficients : πi , j = αiβj · exp 1σ2 [ fi + gj − 12‖pi − qj‖ 2 RD ] > 0 . ( 3 ) In the limit case where σ tends to 0 and τ tends to +∞ , for generic point clouds ( xi ) and ( yj ) with N = M and equal weights αi = βj = 1/N , πi , j is a permutation matrix [ 87 ] . We retrieve the simple assignment problem of Eq . ( 1 ) : πi , j = 1/N if j = s∗ ( i ) and 0 otherwise . 
However , in general the transport plan must be understood as a probabilistic map between the point distributions A and B that assigns a weight πi , j to the coupling “ xi ↔ yj ” . For shape registration , this implies that the main difficulties for using robust OT are two-fold : first , the coupling π is not one-to-one , but one-to-many ; second , the lines and columns of the transport plan π do not sum up to one . Notably , this implies that when τ < +∞ , the gradient of the OT cost with respect to the point positions xi is not homogeneous : we observe vanishing and inflated values across the domain [ 105 ] .
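As a rough illustration of Section 2.1, the sketch below solves the robust OT problem of Eq. (2) on two small synthetic point clouds with a plain dense unbalanced Sinkhorn loop, rebuilds the plan of Eq. (3) from the resulting scalings, and summarizes it as a weighted barycentric map, which is one plausible reading of the linear-memory "RobOT matching" mentioned above. Everything in it is an assumption made for the example: the synthetic clouds, the chosen sigma and tau, the naive O(NM) iterations (the paper relies on the scalable solvers of [35, 39] instead), and the barycentric summary.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two small 3D point clouds with uniform weights; the coordinates double as the features p_i, q_j.
    N, M = 80, 100
    x = rng.normal(size=(N, 3))
    y = rng.normal(size=(M, 3)) + np.array([0.5, 0.0, 0.0])   # assumed shift between the shapes
    alpha = np.full(N, 1.0 / N)
    beta = np.full(M, 1.0 / M)

    sigma, tau = 0.5, 2.0            # blur and reach, following the heuristics of Section 2.1
    eps, rho = sigma ** 2, tau ** 2
    C = 0.5 * ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # cost 0.5 * ||p_i - q_j||^2
    K = np.exp(-C / eps)

    # Scaling-form iterations for the unbalanced problem; f = eps*log(u) and g = eps*log(v)
    # play the role of the dual vectors in Eq. (3).
    u, v = np.ones(N), np.ones(M)
    for _ in range(200):
        u = (1.0 / (K @ (beta * v))) ** (rho / (rho + eps))
        v = (1.0 / (K.T @ (alpha * u))) ** (rho / (rho + eps))

    pi = (alpha * u)[:, None] * K * (beta * v)[None, :]       # Eq. (3)

    # With tau < infinity the rows of pi need not sum exactly to alpha, as discussed above.
    print(np.abs(pi.sum(1) - alpha).max())

    # Linear-memory summary of the plan: a weighted barycentric target for each source point.
    w = pi.sum(1)
    yhat = (pi @ y) / w[:, None]
    print(yhat.shape)                                         # (N, 3)

In a real pipeline the dense N x M arrays above would be the limiting factor; the solvers cited in the excerpt avoid materializing K and pi and return only the dual vectors, which is what makes the representation scale.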
This paper discusses the application of optimal transport theory to shape matching and contributes several useful practices. First, the proposed RobOT method implements traditional entropy-regularized unbalanced optimal transport using the kernel method and reduces the memory footprint to linear, enabling the processing of over 60k points. Second, the paper introduces several effective strategies to regularize/project the transport vectors given a deformation model (RobOTP). Third, the deep deformation framework (DRobOT) is proposed by sandwiching the parameter-prediction module between pre-alignment and post-optimization steps, which leads to SoTA performance on a new lung dataset and on scene flow datasets. Apart from the algorithmic side, the authors also give a clear summary that covers most of the vision algorithms using optimal transport.
science.mup_single_document_summarization.13858
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAccurate Point Cloud Registration with Robust Optimal Transport\n\nMain Content:\n1 Introduction . Shape registration is a fundamental but difficult problem in computer vision . The task is to determine plausible spatial correspondences between pairs of shapes , with use cases that range from pose estimation for noisy point clouds [ 14 ] to the nonparametric registration of high-resolution medical images [ 17 ] . As illustrated in Fig . 1 , most existing approaches to this problem consist of a combination of three steps , possibly fused together by some deep learning ( DL ) methods : ( 1 ) feature extraction ; ( 2 ) feature matching ; and ( 3 ) regularization using a class of acceptable transformations that is specified through a parametric or nonparametric model . This work discusses how tools derived from ∗Equal Contribution . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . optimal transport ( OT ) theory [ 87 ] can improve the second step of this pipeline ( feature matching ) on challenging problems . To put these results in context , we first present an overview of related methods . 1 . Feature extraction . To establish spatial correspondences , one first computes descriptive local features . When dealing with ( possibly annotated ) point clouds , a simple choice is to rely on Cartesian coordinates ( x , y , z ) [ 3 , 26 ] . Going further , stronger descriptors capture local geometric and topological properties : examples include shape orientation and curvatures [ 21 , 96 ] , shape contexts [ 6 ] , spectral eigenvalues [ 84 , 70 ] and annotations such as color [ 76 ] or chemical fingerprints [ 43 , 113 ] . Recently , expressive feature representations have also been learned using deep neural networks ( DNN ) : see [ 16 ] and subsequent works on geometric deep learning . Generally , feature extractors are designed to make shape registration as unambiguous as possible . In order to get closer to the ideal case of landmark matching [ 11 ] , we associate discriminative features to the salient points of our shapes : this increases the robustness of the subsequent matching and regularization steps . 2 . Feature matching . Once computed on both of the source and target shapes , feature vectors are put in correspondence with each other . This assignment is often encoded as an explicit mapping between the two shapes ; alternatively , the vector field relating the shapes can be defined implicitly as the gradient of a geometric loss function that quantifies discrepancies between two distributions of features [ 35 ] : a ) A first major approach is to rely on nearest neighbor projections and the related chamfer [ 12 ] and Hausdorff distances [ 13 ] , as in the Iterative Closest Point ( ICP ) algorithm [ 7 ] . This method can be softened through the use of a softmax ( log-sum-exp ) operator as in the many variants of the Coherent Point Drift ( CPD ) method [ 81 , 73 , 72 , 44 ] , or made robust to outliers in the specific context of rigid and affine registrations [ 41 , 129 , 128 , 14 ] . b ) Alternatively , a second approach is to rely on convolutional kernel norms such as the Energy Distance [ 94 ] , which are also known as Maximum Mean Discrepancies ( MMD ) in statistics [ 48 ] . 
These loss functions are common in imaging science [ 88 ] and computational anatomy [ 115 , 21 ] but are prone to vanishing gradients [ 40 , 39 ] . c ) Finally , a third type of approach is to rely on optimal transport ( OT ) theory [ 87 ] and solutions of the earth mover ’ s problem [ 97 ] . This method is equivalent to a nearest neighbor projection under a global constraint of bijectivity that enforces consistency in the matching . On the one hand , OT has been known to provide reliable correspondences in computer vision for more than two decades [ 26 , 47 , 64 ] . On the other hand , it has often faced major issues of scalability and robustness to outliers on noisy data . As detailed below , the main purpose of this work is to overcome these limitations and enable the widespread use of OT tools for challenging registration problems . 3 . Regularization with a deformation model . The output of the two steps above is a non-smooth vector field that may not be suitable for downstream tasks due to e.g . tears and compression artifacts . As a third step , most registration methods thus rely on regularization to obtain plausible deformations . This process is task-specific , with applications that range from rigid registration [ 133 , 46 , 30 , 2 , 121 , 122 ] to free-form motion estimation [ 91 , 126 , 69 , 49 ] . In Sec . 3 , we address the interaction of OT matching layers with a varied collection of regularization strategies – from optimization-based spline and diffeomorphic models to DNNs . Recent progresses . Research works on shape registration combine ideas from the three paragraphs above to best fit the characteristics of computer vision problems [ 71 , 31 , 107 ] . Over the past few years , significant progress has been made on all fronts . On the one hand , ( geometric ) deep learning networks have been used to define data-driven feature maps [ 92 , 92 , 123 ] and multiscale regularization modules [ 126 , 68 , 108 ] , sometimes fused within end-to-end architectures [ 133 , 91 , 132 , 30 ] . On the other hand , nearest neighbor projections , kernel convolutions and transport-based matching strategies have all been generalized to take advantage of these modern descriptors : they can now be used in high-dimensional feature spaces [ 59 , 37 ] . Challenges . Nevertheless , state-of-the-art ( SOTA ) methods in the field still have important limitations . First , modern deep learning pipelines are often hard to train to “ pixel-perfect ” accuracy on nonsmooth shapes , with diminishing returns in terms of model size and training data [ 2 ] . Second , scaling up point neural networks to finely sampled shapes ( N > 10k points ) remains a challenging research topic [ 49 , 135 , 37 ] . Third , the impact of the choice of a specific feature matching method on the performance of deep learning models remains only partially understood [ 58 ] . Related works . Following major progress on computational OT in the mathematical literature [ 79 , 27 , 66 , 103 ] , improved modules for feature matching have attracted interest as a possible solution to these challenges . Works on sliced partial OT [ 9 ] and dustbin-OT [ 28 ] have shown that outliers can be handled effectively by OT methods for rigid registration , beyond the classic Robust Point Matching method ( RPM ) [ 47 , 26 ] . Going further , the Sinkhorn algorithm for entropy-regularized OT [ 27 , 64 , 104 , 65 ] has been studied extensively for shape registration in computational anatomy [ 36 , 45 ] and computer graphics [ 74 , 33 , 85 ] . 
The Gromov–Wasserstein distance has also been used for shape analysis [ 110 , 118 ] , albeit at a higher computational cost . These applications have driven interest in the development of a complete theory for Robust Optimal Transport ( RobOT ) , outlined in Sec . 2 , which handles both sampling artifacts and outliers [ 25 , 105 , 67 , 24 , 106 , 80 ] . Most recently , this framework has started to be used in shape analysis with applications to shape matching [ 36 ] , the segmentation of brain tractograms [ 38 ] and deep deformation estimation with the FLOT architecture [ 91 ] . Contributions . We build upon the work above to tackle challenging point cloud registration problems for scene flow estimation and computational anatomy . Our key contributions are : 1 . Accurate feature matching with scalable OT solvers . For the first time , we scale up RobOT for deep feature matching to high-resolution shapes with more than 10k points . To this end , we leverage the latest generation of OT solvers [ 39 , 35 ] and overcome significant issues of memory usage and numerical stability . This allows us to handle fine-grained details effectively , which is key for e.g . most medical applications . 2 . Interaction with task-specific regularization strategies . We show how to interface RobOT matchings with advanced deformation models . This is in contrast with e.g . the FLOT architecture , which focuses on the direct prediction of a vector field and can not be used for applications that require guarantees on the smoothness of the registration . 3 . Challenging new dataset . We release a large dataset of lung vascular trees that should be registered between inhalation and exhalation . This relevant medical problem involves large and complex deformations of high-resolution 3D point clouds . As a new benchmark for the community , we provide two strong baselines that rely respectively on global feature matching and on deep deformation estimation . 4 . Consistent SOTA performance . Our proposed models achieve SOTA results for scene flow on Kitti [ 78 , 77 ] and for point-cloud-based lung registration on DirLab-COPDGene [ 19 ] . Notably , we show that RobOT is highly suited to fine-tuning tasks : it consistently turns “ good ” matchings into nearly perfect registrations at an affordable numerical cost . Main experimental observations . Is OT relevant in the deep learning era ? To answer this question decisively , we perform extensive numerical experiments and ablation studies . We fully document “ underwhelming ” results in the Supplementary Material and distill the key lessons that we learned in the Deep-RobOT architecture ( Section 3.2 ) . This model relies on fast RobOT layers to cover for the main weaknesses of point neural networks for shape registration . It is remarkably easy to deploy and generalizes well from synthetic training data to real test samples . We thus believe that it will have a stimulating impact on both of the computer vision and medical imaging literature . 2 Robust optimal transport . This section introduces the mathematical foundations of our work . After a brief overview of Robust Optimal Transport ( RobOT ) theory , we discuss the main challenges that one encounters when using this framework for computer vision . To avoid memory overflows and numerical stability issues , we introduce the weighted “ RobOT matching ” : a vector field that summarizes the information of a full transport plan with a linear memory footprint . 
As detailed in the next sections , this representation lets us scale up to high-resolution shapes without compromising on accuracy . 2.1 Mathematical background . The assignment problem . If A = ( x1 , . . . , xN ) and B = ( y1 , . . . , yM ) are two point clouds in R3 with N = M , the assignment problem between A and B reads : Assignment ( A , B ) = min s : [ [ 1 , N ] ] → [ [ 1 , N ] ] 1 2N N∑ i=1 ‖xi − ys ( i ) ‖2R3 , where s is a permutation . ( 1 ) This problem generalizes sorting to R3 : if the points xi and yj all belong to a line , the optimal permutation s∗ corresponds to a non-decreasing re-ordering of the point sets A and B [ 87 ] . Robust optimal transport . Further , OT theory allows us to consider problems where N 6= M. Non-negative weights α1 , . . . , αN , β1 , . . . , βM > 0 are attached to the points xi , yj and account for variations of the sampling densities , while feature vectors p1 , . . . , pN and q1 , . . . , qM in RD may advantageously replace raw point coordinates xi and yj in R3 . Following [ 67 , 25 ] , the robust OT problem between the shapes A = ( αi , xi , pi ) and B = ( βj , yj , qj ) reads : OTσ , τ ( A , B ) = min ( πi , j ) ∈RN×M > 0 N∑ i=1 M∑ j=1 πi , j · 12‖pi − qj‖ 2 RD ( 2 ) + σ2 KL ( πi , j ||αi ⊗ βj ) ︸ ︷︷ ︸ Entropic blur at scale σ . + τ2 KL ( ∑ j πi , j ||αi ) ︸ ︷︷ ︸ π should match A. . . + τ2 KL ( ∑ i πi , j ||βj ) ︸ ︷︷ ︸ . . . onto B. , for any choice of the regularization parameters σ > 0 and τ > 0 . In the equation above , the Kullback-Leibler divergence KL ( ai||bi ) = ∑ ai log ( ai/bi ) − ai + bi is a relative entropy that penalizes deviations of a non-negative vector of weights ( ai ) to a reference measure ( bi ) . Parameters . The first regularization term is scaled by the square of a blur radius σ . This characteristic length quantifies the fuzziness of the probabilistic assignment ( πi , j ) between points xi and yj [ 35 ] . The last two regularization terms promote the matching of the full distribution of points A onto the target shape B : they generalize the constraints of injectivity and surjectivity of Eq . ( 1 ) to the probabilistic setting . They are scaled by the square of a maximum reach distance τ : this parameter acts as a soft upper bound on the distance between feature vectors pi and qj that should be matched with each other [ 38 , 105 ] . For shape registration , we use simple heuristics for the values of these two characteristic scales : the blur σ should be equal to the average sampling distance in feature space RD while the reach τ should be equal to the largest plausible displacement for any given feature vector pi . These rules are easy to follow if point features correspond to Cartesian coordinates xi and yj in R3 but may lead to unexpected behaviors if features are output by a DNN . In the latter case , we thus normalize our feature vectors so that ‖pi‖RD = ‖qj‖RD = 1 and pick values for σ and τ between 0 and 2 . Working with a soft , probabilistic transport plan . As detailed in [ 35 ] , scalable OT solvers for Eq . ( 2 ) return a pair of dual vectors ( fi ) ∈ RN and ( gj ) ∈ RM that encode implicitly an optimal transport plan ( πi , j ) ∈ RN×M with coefficients : πi , j = αiβj · exp 1σ2 [ fi + gj − 12‖pi − qj‖ 2 RD ] > 0 . ( 3 ) In the limit case where σ tends to 0 and τ tends to +∞ , for generic point clouds ( xi ) and ( yj ) with N = M and equal weights αi = βj = 1/N , πi , j is a permutation matrix [ 87 ] . We retrieve the simple assignment problem of Eq . ( 1 ) : πi , j = 1/N if j = s∗ ( i ) and 0 otherwise . 
However , in general the transport plan must be understood as a probabilistic map between the point distributions A and B that assigns a weight πi , j to the coupling “ xi ↔ yj ” . For shape registration , this implies that the main difficulties for using robust OT are two-fold : first , the coupling π is not one-to-one , but one-to-many ; second , the lines and columns of the transport plan π do not sum up to one . Notably , this implies that when τ < +∞ , the gradient of the OT cost with respect to the point positions xi is not homogeneous : we observe vanishing and inflated values across the domain [ 105 ] .", "role": "user" }, { "content": "This paper discusses the application of optimal transport theory in and contributed several useful practices to shape matching. First, the proposed RobOT method implements the traditional entropy-regularized unbalanced optimal transport using the kernel method and reduces the memory footprint to linear, enabling the processing of over 60k points. Second, the paper introduces several effective strategies to regularize/project the transport vectors given a deformation model (RobOTP). Third, the deep deformation framework (DRobOT) is proposed by sandwiching the parameter prediction module with pre-alignment and post-optimization steps, which lead to SoTA performance on a new lung dataset and scene flow datasets. Apart from the algorithmic side, the authors also give a clear summary which cover most of the vision algorithms using optimal transport.", "role": "assistant" } ]
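The excerpt above states the entropy-regularized objective of Eq. (2) and the dual form of the transport plan in Eq. (3), pi_ij = alpha_i * beta_j * exp((f_i + g_j - 0.5*||p_i - q_j||^2) / sigma^2). As a rough illustration only, the sketch below runs a plain log-domain Sinkhorn loop for the balanced special case (the tau-scaled marginal penalties of robust OT are omitted) and then reads the resulting plan as a soft vector field via a barycentric projection; this is our own minimal reading of the "RobOT matching" idea, not the scalable solver used in the paper, and every name in the snippet is an assumption.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_log(a, b, C, sigma, n_iters=200):
    """Balanced entropic OT in the log domain (the tau -> infinity case of Eq. (2)).
    a: (N,) source weights, b: (M,) target weights, C: (N, M) costs 0.5*||p_i - q_j||^2.
    Returns dual potentials (f, g) such that pi_ij = a_i * b_j * exp((f_i + g_j - C_ij) / sigma^2),
    which is the form of Eq. (3) in the excerpt above."""
    eps = sigma ** 2
    f, g = np.zeros_like(a), np.zeros_like(b)
    log_a, log_b = np.log(a), np.log(b)
    for _ in range(n_iters):
        f = -eps * logsumexp(log_b[None, :] + (g[None, :] - C) / eps, axis=1)
        g = -eps * logsumexp(log_a[:, None] + (f[:, None] - C) / eps, axis=0)
    return f, g

# Toy example: two small 3D point clouds with uniform weights.
rng = np.random.default_rng(0)
x, y = rng.standard_normal((8, 3)), rng.standard_normal((10, 3))
a, b = np.full(8, 1 / 8), np.full(10, 1 / 10)
C = 0.5 * ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

sigma = 0.5
f, g = sinkhorn_log(a, b, C, sigma)
pi = a[:, None] * b[None, :] * np.exp((f[:, None] + g[None, :] - C) / sigma ** 2)

# One way to summarize the plan as a vector field: barycentric targets for each x_i.
targets = (pi @ y) / pi.sum(axis=1, keepdims=True)
flow = targets - x  # soft displacement assigned to each source point
```

In practice the paper relies on dedicated solvers that avoid materializing the dense N x M plan; the toy version here is only meant to make Eq. (3) concrete.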
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: FacTweet: Profiling Fake News Twitter Accounts We present an approach to detect fake news in Twitter at the account level using a neural recurrent model and a variety of different semantic and stylistic features. Our method extracts a set of features from the timelines of news Twitter accounts by reading their posts as chunks, rather than dealing with each tweet independently. We show the experimental benefits of modeling latent stylistic signatures of mixed fake and real news with a sequential model over a wide range of strong baselines. Introduction Social media platforms have made the spreading of fake news easier, faster as well as able to reach a wider audience. Social media offer another feature which is the anonymity for the authors, and this opens the door to many suspicious individuals or organizations to utilize these platforms. Recently, there has been an increased number of spreading fake news and rumors over the web and social media BIBREF0. Fake news in social media vary considering the intention to mislead. Some of these news are spread with the intention to be ironic or to deliver the news in an ironic way (satirical news). Others, such as propaganda, hoaxes, and clickbaits, are spread to mislead the audience or to manipulate their opinions. In the case of Twitter, suspicious news annotations should be done on a tweet rather than an account level, since some accounts mix fake with real news. However, these annotations are extremely costly and time consuming – i.e., due to high volume of available tweets Consequently, a first step in this direction, e.g., as a pre-filtering step, can be viewed as the task of detecting fake news at the account level. The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts. In this work, we investigate the problem of discriminating between factual and non-factual accounts in Twitter. To this end, we collect a large dataset of tweets using a list of propaganda, hoax and clickbait accounts and compare different versions of sequential chunk-based approaches using a variety of feature sets against several baselines. Several approaches have been proposed for news verification, whether in social media (rumors detection) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, or in news claims BIBREF5, BIBREF6, BIBREF7, BIBREF8. The main orientation in the previous works is to verify the textual claims/tweets but not their sources. To the best of our knowledge, this is the first work aiming to detect factuality at the account level, and especially from a textual perspective. Our contributions are: [leftmargin=4mm] We propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution. 
We benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts. Methodology Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks. Input Representation. Let $t$ be a Twitter account that contains $m$ tweets. These tweets are sorted by date and split into a sequence of chunks $ck = \langle ck_1, \ldots , ck_n \rangle $, where each $ck_i$ contains $s$ tweets. Each tweet in $ck_i$ is represented by a vector $v \in {\rm I\!R}^d$ , where $v$ is the concatenation of a set of features' vectors, that is $v = \langle f_1, \ldots , f_n \rangle $. Each feature vector $f_i$ is built by counting the presence of tweet's words in a set of lexical lists. The final representation of the tweet is built by averaging the single word vectors. Features. We argue that different kinds of features like the sentiment of the text, morality, and other text-based features are critical to detect the nonfactual Twitter accounts by utilizing their occurrence during reporting the news in an account's timeline. We employ a rich set of features borrowed from previous works in fake news, bias, and rumors detection BIBREF0, BIBREF1, BIBREF8, BIBREF9. [leftmargin=4mm] Emotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. We use the NRC lexicon BIBREF10, which contains $\sim $14K words labeled using the eight Plutchik's emotions BIBREF11. The other lexicon is SentiSense BIBREF12 which is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. It has $\sim $5.5 words labeled with emotions from a set of 14 emotional categories We use the categories that do not exist in the NRC lexicon. Sentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative. Morality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation). Style: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length. Words embeddings: We extract words embeddings of the words of the tweet using $Glove\-840B-300d$ BIBREF17 pretrained model. The tweet final representation is obtained by averaging its words embeddings. Model. 
To account for chunk sequences we make use of a de facto standard approach and opt for a recurrent neural model using long short-term memory (LSTM) BIBREF18. In our model, the sequence consists of a sequence of tweets belonging to one chunk (Figure FIGREF11). The LSTM learns the hidden state $h\textsubscript {t}$ by capturing the previous timesteps (past tweets). The produced hidden state $h\textsubscript {t}$ at each time step is passed to the attention layer which computes a `context' vector $c\textsubscript {t}$ as the weighted mean of the state sequence $h$ by: Where $T$ is the total number of timesteps in the input sequence and $\alpha \textsubscript {tj}$ is a weight computed at each time step $j$ for each state hj. Experiments and Results Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset. Baselines. We compare our approach (FacTweet) to the following set of baselines: [leftmargin=4mm] LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier. Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts. LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently. LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier. FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets. Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline. Experimental Setup. We apply a 5 cross-validation on the account's level. For the FacTweet model, we experiment with 25% of the accounts for validation and parameters selection. 
We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16). The validation split is extracted on the class level using stratified sampling: we took a random 25% of the accounts from each class since the dataset is unbalanced. Discarding the classes' size in the splitting process may affect the minority classes (e.g. hoax). For the baselines' classifier, we tested many classifiers and the LR showed the best overall performance. Results. Table TABREF25 presents the results. We present the results using a chunk size of 20, which was found to be the best size on the held-out data. Figure FIGREF24 shows the results of different chunks sizes. FacTweet performs better than the proposed baselines and obtains the highest macro-F1 value of $0.565$. Our results indicate the importance of taking into account the sequence of the tweets in the accounts' timelines. The sequence of these tweets is better captured by our proposed model sequence-agnostic or non-neural classifiers. Moreover, the results demonstrate that the features at tweet-level do not perform well to detect the Twitter accounts factuality, since they obtain a result near to the majority class ($0.18$). Another finding from our experiments shows that the performance of the Tweet2vec is weak. This demonstrates that tweets' hashtags are not informative to detect non-factual accounts. In Table TABREF25, we present ablation tests so as to quantify the contribution of subset of features. The results indicate that most performance gains come from words embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance: nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement). These performance figures suggest that non-factual accounts use semantic and stylistic hidden signatures mostly while tweeting news, so as to be able to mislead the readers and behave as reputable (i.e., factual) sources. We leave a more fine-grained, diachronic analysis of semantic and stylistic features – how semantic and stylistic signature evolve across time and change across the accounts' timelines – for future work. Conclusions In this paper, we proposed a model that utilizes chunked timelines of tweets and a recurrent neural model in order to infer the factuality of a Twitter news account. Our experimental results indicate the importance of analyzing tweet stream into chunks, as well as the benefits of heterogeneous knowledge source (i.e., lexica as well as text) in order to capture factuality. In future work, we would like to extend this line of research with further in-depth analysis to understand the flow change of the used features in the accounts' streams. Moreover, we would like to take our approach one step further incorporating explicit temporal information, e.g., using timestamps. Crucially, we are also interested in developing a multilingual version of our approach, for instance by leveraging the now ubiquitous cross-lingual embeddings BIBREF22, BIBREF23. Fig. 1: The FacTweet’s architecture. Table 1: Statistics on the data with respect to each account type: propaganda (P), clickbait (C), hoax (H), and real news (R). Fig. 3: The FacTweet performance on difference chunk sizes. Table 2: FacTweet: experimental results. 
Query: How are chunks defined? JSON Response:
{"answer": "Chunks is group of tweets from single account that is consecutive in time - idea is that this group can show secret intention of malicious accounts.", "evidence": ["Given a news Twitter account, we read its tweets from the account\u0027s timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks\u0027 tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account\u0027s chunks.", "The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts."]}
science.qasper_abstractive_qa.124.1
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nFacTweet: Profiling Fake News Twitter Accounts\n\nWe present an approach to detect fake news in Twitter at the account level using a neural recurrent model and a variety of different semantic and stylistic features. Our method extracts a set of features from the timelines of news Twitter accounts by reading their posts as chunks, rather than dealing with each tweet independently. We show the experimental benefits of modeling latent stylistic signatures of mixed fake and real news with a sequential model over a wide range of strong baselines.\n\nIntroduction\nSocial media platforms have made the spreading of fake news easier, faster as well as able to reach a wider audience. Social media offer another feature which is the anonymity for the authors, and this opens the door to many suspicious individuals or organizations to utilize these platforms. Recently, there has been an increased number of spreading fake news and rumors over the web and social media BIBREF0. Fake news in social media vary considering the intention to mislead. Some of these news are spread with the intention to be ironic or to deliver the news in an ironic way (satirical news). Others, such as propaganda, hoaxes, and clickbaits, are spread to mislead the audience or to manipulate their opinions. In the case of Twitter, suspicious news annotations should be done on a tweet rather than an account level, since some accounts mix fake with real news. However, these annotations are extremely costly and time consuming – i.e., due to high volume of available tweets Consequently, a first step in this direction, e.g., as a pre-filtering step, can be viewed as the task of detecting fake news at the account level.\nThe main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts.\nIn this work, we investigate the problem of discriminating between factual and non-factual accounts in Twitter. To this end, we collect a large dataset of tweets using a list of propaganda, hoax and clickbait accounts and compare different versions of sequential chunk-based approaches using a variety of feature sets against several baselines. Several approaches have been proposed for news verification, whether in social media (rumors detection) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, or in news claims BIBREF5, BIBREF6, BIBREF7, BIBREF8. The main orientation in the previous works is to verify the textual claims/tweets but not their sources. To the best of our knowledge, this is the first work aiming to detect factuality at the account level, and especially from a textual perspective. Our contributions are:\n[leftmargin=4mm]\nWe propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. 
We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution.\nWe benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts.\n\nMethodology\nGiven a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks.\nInput Representation. Let $t$ be a Twitter account that contains $m$ tweets. These tweets are sorted by date and split into a sequence of chunks $ck = \\langle ck_1, \\ldots , ck_n \\rangle $, where each $ck_i$ contains $s$ tweets. Each tweet in $ck_i$ is represented by a vector $v \\in {\\rm I\\!R}^d$ , where $v$ is the concatenation of a set of features' vectors, that is $v = \\langle f_1, \\ldots , f_n \\rangle $. Each feature vector $f_i$ is built by counting the presence of tweet's words in a set of lexical lists. The final representation of the tweet is built by averaging the single word vectors.\nFeatures. We argue that different kinds of features like the sentiment of the text, morality, and other text-based features are critical to detect the nonfactual Twitter accounts by utilizing their occurrence during reporting the news in an account's timeline. We employ a rich set of features borrowed from previous works in fake news, bias, and rumors detection BIBREF0, BIBREF1, BIBREF8, BIBREF9.\n[leftmargin=4mm]\nEmotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. We use the NRC lexicon BIBREF10, which contains $\\sim $14K words labeled using the eight Plutchik's emotions BIBREF11. The other lexicon is SentiSense BIBREF12 which is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. It has $\\sim $5.5 words labeled with emotions from a set of 14 emotional categories We use the categories that do not exist in the NRC lexicon.\nSentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative.\nMorality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation).\nStyle: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. 
In addition, we extract the uppercase ratio and the tweet length.\nWords embeddings: We extract words embeddings of the words of the tweet using $Glove\\-840B-300d$ BIBREF17 pretrained model. The tweet final representation is obtained by averaging its words embeddings.\nModel. To account for chunk sequences we make use of a de facto standard approach and opt for a recurrent neural model using long short-term memory (LSTM) BIBREF18. In our model, the sequence consists of a sequence of tweets belonging to one chunk (Figure FIGREF11). The LSTM learns the hidden state $h\\textsubscript {t}$ by capturing the previous timesteps (past tweets). The produced hidden state $h\\textsubscript {t}$ at each time step is passed to the attention layer which computes a `context' vector $c\\textsubscript {t}$ as the weighted mean of the state sequence $h$ by:\nWhere $T$ is the total number of timesteps in the input sequence and $\\alpha \\textsubscript {tj}$ is a weight computed at each time step $j$ for each state hj.\n\nExperiments and Results\nData. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset.\nBaselines. We compare our approach (FacTweet) to the following set of baselines:\n[leftmargin=4mm]\nLR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.\nTweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.\nLR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.\nLR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.\nFacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.\nTop-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. 
It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline.\nExperimental Setup. We apply a 5 cross-validation on the account's level. For the FacTweet model, we experiment with 25% of the accounts for validation and parameters selection. We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16). The validation split is extracted on the class level using stratified sampling: we took a random 25% of the accounts from each class since the dataset is unbalanced. Discarding the classes' size in the splitting process may affect the minority classes (e.g. hoax). For the baselines' classifier, we tested many classifiers and the LR showed the best overall performance.\nResults. Table TABREF25 presents the results. We present the results using a chunk size of 20, which was found to be the best size on the held-out data. Figure FIGREF24 shows the results of different chunks sizes. FacTweet performs better than the proposed baselines and obtains the highest macro-F1 value of $0.565$. Our results indicate the importance of taking into account the sequence of the tweets in the accounts' timelines. The sequence of these tweets is better captured by our proposed model sequence-agnostic or non-neural classifiers. Moreover, the results demonstrate that the features at tweet-level do not perform well to detect the Twitter accounts factuality, since they obtain a result near to the majority class ($0.18$). Another finding from our experiments shows that the performance of the Tweet2vec is weak. This demonstrates that tweets' hashtags are not informative to detect non-factual accounts. In Table TABREF25, we present ablation tests so as to quantify the contribution of subset of features. The results indicate that most performance gains come from words embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance: nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement). These performance figures suggest that non-factual accounts use semantic and stylistic hidden signatures mostly while tweeting news, so as to be able to mislead the readers and behave as reputable (i.e., factual) sources. We leave a more fine-grained, diachronic analysis of semantic and stylistic features – how semantic and stylistic signature evolve across time and change across the accounts' timelines – for future work.\n\nConclusions\nIn this paper, we proposed a model that utilizes chunked timelines of tweets and a recurrent neural model in order to infer the factuality of a Twitter news account. Our experimental results indicate the importance of analyzing tweet stream into chunks, as well as the benefits of heterogeneous knowledge source (i.e., lexica as well as text) in order to capture factuality. In future work, we would like to extend this line of research with further in-depth analysis to understand the flow change of the used features in the accounts' streams. Moreover, we would like to take our approach one step further incorporating explicit temporal information, e.g., using timestamps. 
Crucially, we are also interested in developing a multilingual version of our approach, for instance by leveraging the now ubiquitous cross-lingual embeddings BIBREF22, BIBREF23.\n\nFig. 1: The FacTweet’s architecture.\n\nTable 1: Statistics on the data with respect to each account type: propaganda (P), clickbait (C), hoax (H), and real news (R).\n\nFig. 3: The FacTweet performance on difference chunk sizes.\n\nTable 2: FacTweet: experimental results.\n\nQuery: How are chunks defined?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"Chunks is group of tweets from single account that is consecutive in time - idea is that this group can show secret intention of malicious accounts.\", \"evidence\": [\"Given a news Twitter account, we read its tweets from the account\\u0027s timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks\\u0027 tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account\\u0027s chunks.\", \"The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts.\"]}", "role": "assistant" } ]
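The FacTweet row above pins down a few mechanical steps precisely: sort an account's tweets by posting date, split them into fixed-size chunks (a chunk size of 20 is reported as best on held-out data), classify each chunk, and label the account by the majority class over its chunks. The sketch below covers only that bookkeeping; the lexicon/embedding feature extraction and the LSTM-with-attention classifier are represented by a placeholder callable, and the (timestamp, feature_vector) tuple format plus all names are illustrative assumptions rather than the authors' code.

```python
from collections import Counter

def make_chunks(tweets, chunk_size=20):
    """Sort one account's tweets by posting date (ascending) and split them into
    consecutive chunks, as in the FacTweet methodology above.
    `tweets` is a list of (timestamp, feature_vector) pairs; feature extraction is out of scope here."""
    ordered = [features for _, features in sorted(tweets, key=lambda t: t[0])]
    return [ordered[i:i + chunk_size] for i in range(0, len(ordered), chunk_size)]

def predict_account(tweets, chunk_classifier, chunk_size=20):
    """Label an account by the majority class over its chunk-level predictions.
    `chunk_classifier` stands in for the LSTM + attention model described above."""
    predictions = [chunk_classifier(chunk) for chunk in make_chunks(tweets, chunk_size)]
    return Counter(predictions).most_common(1)[0][0]

# Usage sketch: predict_account(account_tweets, chunk_classifier=my_model.predict_label)
```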
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: There is no known interaction between Pioneer factor OCT3/4 and major chromatin remodeling factors. Research Abstract: Transcription factors, such as Oct4, are critical for establishing and maintaining pluripotent cell identity. Whereas the genomic locations of several pluripotency transcription factors have been reported, the spectrum of their interaction partners is underexplored. Here, we use an improved affinity protocol to purify Oct4-interacting proteins from mouse embryonic stem cells (ESCs). Subsequent purification of Oct4 partners Sall4, Tcfcp2l1, Dax1, and Esrrb resulted in an Oct4 interactome of 166 proteins, including transcription factors and chromatin-modifying complexes with documented roles in self-renewal, but also many factors not previously associated with the ESC network. We find that Esrrb associated with the basal transcription machinery and also detect interactions between transcription factors and components of the TGF-beta, Notch, and Wnt signaling pathways. Acute depletion of Oct4 reduced binding of Tcfcp2l1, Dax1, and Esrrb to several target genes. In conclusion, our purification protocol allowed us to bring greater definition to the circuitry controlling pluripotent cell identity.
{"verdict": "CONTRADICT", "evidence": ["Subsequent purification of Oct4 partners Sall4, Tcfcp2l1, Dax1, and Esrrb resulted in an Oct4 interactome of 166 proteins, including transcription factors and chromatin-modifying complexes with documented roles in self-renewal, but also many factors not previously associated with the ESC network."]}
science.scifact_entailment.835
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nThere is no known interaction between Pioneer factor OCT3/4 and major chromatin remodeling factors.\n\nResearch Abstract:\nTranscription factors, such as Oct4, are critical for establishing and maintaining pluripotent cell identity.\nWhereas the genomic locations of several pluripotency transcription factors have been reported, the spectrum of their interaction partners is underexplored.\nHere, we use an improved affinity protocol to purify Oct4-interacting proteins from mouse embryonic stem cells (ESCs).\nSubsequent purification of Oct4 partners Sall4, Tcfcp2l1, Dax1, and Esrrb resulted in an Oct4 interactome of 166 proteins, including transcription factors and chromatin-modifying complexes with documented roles in self-renewal, but also many factors not previously associated with the ESC network.\nWe find that Esrrb associated with the basal transcription machinery and also detect interactions between transcription factors and components of the TGF-beta, Notch, and Wnt signaling pathways.\nAcute depletion of Oct4 reduced binding of Tcfcp2l1, Dax1, and Esrrb to several target genes.\nIn conclusion, our purification protocol allowed us to bring greater definition to the circuitry controlling pluripotent cell identity.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Subsequent purification of Oct4 partners Sall4, Tcfcp2l1, Dax1, and Esrrb resulted in an Oct4 interactome of 166 proteins, including transcription factors and chromatin-modifying complexes with documented roles in self-renewal, but also many factors not previously associated with the ESC network.\"]}", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Single nucleotide polymorphisms of the HNF4alpha gene are associated with the conversion to type 2 diabetes mellitus: the STOP-NIDDM trial. Hepatocyte nuclear factor 4alpha (HNF4alpha) is a transcription factor, which is necessary for normal function of human liver and pancreatic islets. We investigated whether single nucleotide polymorphisms (SNPs) of HNF4A, encoding HNF4alpha, influenced the conversion from impaired glucose tolerance (IGT) to type 2 diabetes mellitus in subjects of the STOP-NIDDM trial. This trial aimed at evaluating the effect of acarbose compared to placebo in the prevention of type 2 diabetes mellitus. Eight SNPs covering the intragenic and alternate P2 promoter regions of HNF4A were genotyped in study samples using the TaqMan Allelic Discrimination Assays. Three SNPs in the P2 promoter region (rs4810424, rs1884614, and rs2144908) were in almost complete association (D'>0.97, r (2)>0.95) and, therefore, only rs4810424 was included in further analyses. Female carriers of the less frequent C allele of rs4810424 had a 1.7-fold elevated risk [95% confidence interval (CI) 1.09-2.66; P=0.020] for the conversion to diabetes compared to women with the common genotype after the adjustment for age, treatment group (placebo or acarbose), smoking, weight at baseline, and weight change. No association was found in men. Haplotype analysis based on three SNPs (rs4810424, rs2071197, and rs3818247) representing the linkage disequilibrium blocks in our study population indicated that the conversion to type 2 diabetes mellitus was dependent on the number of risk alleles in different haplotypes in women. Our results suggest that SNPs of HNF4A and their haplotypes predispose to type 2 diabetes mellitus in female subjects of the STOP-NIDDM study population.
{"CellLine": [], "Chemical": ["acarbose"], "Disease": ["diabetes", "IGT", "type 2 diabetes mellitus", "impaired glucose tolerance"], "Gene": ["HNF4A", "Hepatocyte nuclear factor 4alpha", "HNF4alpha"], "Species": ["women", "men", "human"], "Variant": ["rs2144908", "rs4810424", "rs3818247", "rs1884614", "rs2071197"]}
science.biored_ner.18
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nSingle nucleotide polymorphisms of the HNF4alpha gene are associated with the conversion to type 2 diabetes mellitus: the STOP-NIDDM trial. Hepatocyte nuclear factor 4alpha (HNF4alpha) is a transcription factor, which is necessary for normal function of human liver and pancreatic islets. We investigated whether single nucleotide polymorphisms (SNPs) of HNF4A, encoding HNF4alpha, influenced the conversion from impaired glucose tolerance (IGT) to type 2 diabetes mellitus in subjects of the STOP-NIDDM trial. This trial aimed at evaluating the effect of acarbose compared to placebo in the prevention of type 2 diabetes mellitus. Eight SNPs covering the intragenic and alternate P2 promoter regions of HNF4A were genotyped in study samples using the TaqMan Allelic Discrimination Assays. Three SNPs in the P2 promoter region (rs4810424, rs1884614, and rs2144908) were in almost complete association (D'>0.97, r (2)>0.95) and, therefore, only rs4810424 was included in further analyses. Female carriers of the less frequent C allele of rs4810424 had a 1.7-fold elevated risk [95% confidence interval (CI) 1.09-2.66; P=0.020] for the conversion to diabetes compared to women with the common genotype after the adjustment for age, treatment group (placebo or acarbose), smoking, weight at baseline, and weight change. No association was found in men. Haplotype analysis based on three SNPs (rs4810424, rs2071197, and rs3818247) representing the linkage disequilibrium blocks in our study population indicated that the conversion to type 2 diabetes mellitus was dependent on the number of risk alleles in different haplotypes in women. Our results suggest that SNPs of HNF4A and their haplotypes predispose to type 2 diabetes mellitus in female subjects of the STOP-NIDDM study population.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"acarbose\"], \"Disease\": [\"diabetes\", \"IGT\", \"type 2 diabetes mellitus\", \"impaired glucose tolerance\"], \"Gene\": [\"HNF4A\", \"Hepatocyte nuclear factor 4alpha\", \"HNF4alpha\"], \"Species\": [\"women\", \"men\", \"human\"], \"Variant\": [\"rs2144908\", \"rs4810424\", \"rs3818247\", \"rs1884614\", \"rs2071197\"]}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: Sickness absence from work due to experienced distress and mental health issues has continuously increased over the past years in Germany. To investigate how this alarming development can be counteracted, we conducted a randomized controlled trial evaluating a job coaching intervention to maintain the working capacity of members of staff and ultimately prevent sickness absence. Our sample included N = 99 employees who reported mental distress due to work-related problems. The intervention group (n = 58) received between 8 and 12 individual job coaching sessions in which they worked with a professional job coach to reduce their mental distress. The control group (n = 41) received a brochure about mental distress. Data were collected before the start of the study, at the end of the job coaching intervention, and at a 3-month follow-up. These data included the number of sickness absence days as the primary outcome and questionnaire measures to assess burnout indicators, life satisfaction, and work-related experiences and behaviors. Compared with the control group, the results indicated no reduction in sickness absence in the intervention group but fewer depressive symptoms, a heightened ability of the participants to distance themselves from work, more experience of work-related success, less depletion of emotional resources, and a greater satisfaction with life when participants had received the job coaching. Thus, although we could not detect a reduction in sickness absence between the groups, job coaching was shown to be a viable intervention technique to benefit employees by contributing to re-establish their mental health. We discuss the implications of the study and outline future research. Extracted Data:
[["Job coaching sessions", "Control", "Sickness absence", "no significant difference", "Compared with the control group, the results indicated no reduction in sickness absence in the intervention group"]]
science.evidence_inference.138
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Sickness absence from work due to experienced distress and mental health issues has continuously increased over the past years in Germany. To investigate how this alarming development can be counteracted, we conducted a randomized controlled trial evaluating a job coaching intervention to maintain the working capacity of members of staff and ultimately prevent sickness absence. Our sample included N = 99 employees who reported mental distress due to work-related problems. The intervention group (n = 58) received between 8 and 12 individual job coaching sessions in which they worked with a professional job coach to reduce their mental distress. The control group (n = 41) received a brochure about mental distress. Data were collected before the start of the study, at the end of the job coaching intervention, and at a 3-month follow-up. These data included the number of sickness absence days as the primary outcome and questionnaire measures to assess burnout indicators, life satisfaction, and work-related experiences and behaviors. Compared with the control group, the results indicated no reduction in sickness absence in the intervention group but fewer depressive symptoms, a heightened ability of the participants to distance themselves from work, more experience of work-related success, less depletion of emotional resources, and a greater satisfaction with life when participants had received the job coaching. Thus, although we could not detect a reduction in sickness absence between the groups, job coaching was shown to be a viable intervention technique to benefit employees by contributing to re-establish their mental health. We discuss the implications of the study and outline future research.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Job coaching sessions\", \"Control\", \"Sickness absence\", \"no significant difference\", \"Compared with the control group, the results indicated no reduction in sickness absence in the intervention group\"]]", "role": "assistant" } ]
science.discomat_te
| Glasses | W 0 (s-1) | a (cm) | W mp (s-1) | ho (cm-1) | | Borate [15] | 2.90E+12 | 3.80E-03 | 47.4147 | 1400 | | Silicate [15] | 1.40E+12 | 4.70E-03 | 0.0638 | 1100 | | Tellurite [15] | 6.30E+10 | 4.70E-03 | 0.0029 | 700 | | Phosphate [15] | 5.40E+12 | 4.70E-03 | 0.2462 | 1200 | | ZBLA [15] | 1.90E+10 | 5.77E-03 | 0.0000 | 460-500 | | Germanate [15] | 3.40E+10 | 4.90E-03 | 0.0004 | 900 | | PbO-Bi2O3-Ga2O3 [15] | 1.10E+09 | 4.20E-03 | 0.0013 | 500 | Caption: Multiphonon relaxation rates of some glass systems Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "2.90E+12", "type": "Other"} {"value": "47.4147", "type": "Other"} {"value": "1400", "type": "Other"} {"value": "1.40E+12", "type": "Other"} {"value": "0.0638", "type": "Other"} {"value": "1100", "type": "Other"} {"value": "6.30E+10", "type": "Other"} {"value": "0.0029", "type": "Other"} {"value": "700", "type": "Other"} {"value": "5.40E+12", "type": "Other"} {"value": "0.2462", "type": "Other"} {"value": "1200", "type": "Other"} {"value": "1.90E+10", "type": "Other"} {"value": "0.0000", "type": "Other"} {"value": "460", "type": "Other"} {"value": "500", "type": "Other"} {"value": "3.40E+10", "type": "Other"} {"value": "0.0004", "type": "Other"} {"value": "900", "type": "Other"} {"value": "1.10E+09", "type": "Other"} {"value": "0.0013", "type": "Other"} {"value": "500", "type": "Other"}
science.discomat_te.443
[ { "content": "| Glasses | W 0 (s-1) | a (cm) | W mp (s-1) | ho (cm-1) |\n| Borate [15] | 2.90E+12 | 3.80E-03 | 47.4147 | 1400 |\n| Silicate [15] | 1.40E+12 | 4.70E-03 | 0.0638 | 1100 |\n| Tellurite [15] | 6.30E+10 | 4.70E-03 | 0.0029 | 700 |\n| Phosphate [15] | 5.40E+12 | 4.70E-03 | 0.2462 | 1200 |\n| ZBLA [15] | 1.90E+10 | 5.77E-03 | 0.0000 | 460-500 |\n| Germanate [15] | 3.40E+10 | 4.90E-03 | 0.0004 | 900 |\n| PbO-Bi2O3-Ga2O3 [15] | 1.10E+09 | 4.20E-03 | 0.0013 | 500 |\n\nCaption: Multiphonon relaxation rates of some glass systems\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"2.90E+12\", \"type\": \"Other\"}\n{\"value\": \"47.4147\", \"type\": \"Other\"}\n{\"value\": \"1400\", \"type\": \"Other\"}\n{\"value\": \"1.40E+12\", \"type\": \"Other\"}\n{\"value\": \"0.0638\", \"type\": \"Other\"}\n{\"value\": \"1100\", \"type\": \"Other\"}\n{\"value\": \"6.30E+10\", \"type\": \"Other\"}\n{\"value\": \"0.0029\", \"type\": \"Other\"}\n{\"value\": \"700\", \"type\": \"Other\"}\n{\"value\": \"5.40E+12\", \"type\": \"Other\"}\n{\"value\": \"0.2462\", \"type\": \"Other\"}\n{\"value\": \"1200\", \"type\": \"Other\"}\n{\"value\": \"1.90E+10\", \"type\": \"Other\"}\n{\"value\": \"0.0000\", \"type\": \"Other\"}\n{\"value\": \"460\", \"type\": \"Other\"}\n{\"value\": \"500\", \"type\": \"Other\"}\n{\"value\": \"3.40E+10\", \"type\": \"Other\"}\n{\"value\": \"0.0004\", \"type\": \"Other\"}\n{\"value\": \"900\", \"type\": \"Other\"}\n{\"value\": \"1.10E+09\", \"type\": \"Other\"}\n{\"value\": \"0.0013\", \"type\": \"Other\"}\n{\"value\": \"500\", \"type\": \"Other\"}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: LLBoost: Last Layer Perturbation to Boost Pre-trained Neural Networks Main Content: In this work , we present LLBoost , a theoretically-grounded , computationallyefficient method to boost the validation accuracy of pre-trained overparameterized models without impacting the original training accuracy . LLBoost adjusts the last layer of a neural network by adding a term that is orthogonal to the training feature matrix , which is constructed by applying all layers but the last to the training data . We provide an efficient implementation of LLBoost on the GPU and demonstrate that LLBoost , run using only 1 GPU , improves the test/validation accuracy of pre-trained models on CIFAR10 , ImageNet32 , and ImageNet . In the over-parameterized linear regression setting , we prove that LLBoost reduces the generalization error of any interpolating solution with high probability without affecting training error . 1 INTRODUCTION . Over the past decade , deep networks have produced a number of state-of-the-art results including surpassing human performance on the ImageNet classification task ( 26 ; 14 ) . However , tuning hyperparameters to produce the best possible neural network in a given application remains a computationally expensive task . State-of-the-art results often involve selecting the best model using multiple random seeds ( 14 ; 27 ; 12 ; 28 ) or ensembling ( 15 ) , and training even a single model can take several days even when using multiple GPUs . Hence , it is critical to identify computationally efficient approaches to improve the performance of pre-trained models without the need of re-training them . In this work , we present LLBoost , a theoretically-grounded , computationally-efficient method to boost the validation accuracy of pre-trained , over-parameterized models without impacting the training accuracy . Figure 1 provides an overview of our method as well as the main results . As shown in Figure 1A , our method adjusts the last fully-connected layer of a neural network by selecting the best performing perturbation out of several orthogonal to the training feature matrix , which is constructed by applying all but the last layer to the training data . In Figure 1B , we provide an example showing how our method applied to a model trained under a poorly chosen random seed can boost validation accuracy comparable to that of a model trained under a better seed . Lastly , Figure 1C shows that our method can significantly improve the validation accuracy of pre-trained neural networks on large datasets using a fraction of the training time . The intuition for our method is based on characterizing the benefit of random initialization in overparameterized linear regression . In particular , consider a dataset ( X , y ) ⊂ Rd×n×R1×n with n < d for which there exists w∗ ∈ R1×d such that y = w∗X . In order to estimate w∗ from the data , we use gradient descent with learning rate η and initialization w ( 0 ) to minimize the squared loss , i.e . to solve : arg min w∈Rd 1 2 ‖y − wX‖2 . It is well-known ( 8 ) that gradient descent converges to the following closed-form solution : wr = w ( 0 ) ( I −X ( XTX ) †XT ) + yX† , where A† is the pseudo-inverse of A ( see Appendix A for a proof ) . 
In this work , we prove that when n < d , sampling w ( 0 ) on a hyper-sphere leads to wr with lower generalization error than the minimum norm solution yX† with constant probability . Since the last layer of modern deep networks is usually linear , we apply this result from regression to the last layer of networks to arrive at our method , LLBoost . We end the introduction with a summary of our main contributions : 1 . We present LLBoost , a computationally efficient method for boosting the validation accuracy of pre-trained neural networks without affecting training accuracy . Additionally , we provide an efficient implementation of LLBoost on the GPU . 2 . We provide a wealth of empirical evidence that LLBoost improves the validation accuracy of pre-trained neural networks and prove that it does not affect training accuracy for overparameterized models . 3 . We provide evidence that LLBoost yields a computational advantage over random seed search for training neural networks . 4 . In the over-parameterized linear regression setting , we prove that LLBoost reduces the test error of any interpolating solution with high probability without affecting training error . 2 RELATED WORK . Understanding the benefits of over-parameterization is a recent topic of major interest in machine learning . ( 2 ) introduced the double descent curve showing that when increasing model capacity past the interpolation threshold , test error can drop a second time . ( 20 ) noted that this phenomenon had been noticed empirically several decades prior to the work of ( 2 ) . A line of recent work have provided a theoretical analysis of double descent in linear regression ( 13 ; 3 ; 1 ; 22 ; 5 ; 21 ) . In particular , ( 13 ; 3 ) analyzed the generalization error for the minimum norm solution in over-parameterized linear models . These works proved that the minimum norm solution can yield lower generalization error in the over-parameterized regime than in the under-parameterized regime . Our theoretical analysis primarily builds on that of ( 3 ) , but we analyze the generalization error of random-initialization interpolating solutions instead of the minimum-norm solution . Various methods , including random seed search and ensembling , are standardly used to improve the performance of machine learning models through training . ( 4 ) recommends using 5-10 random seeds if computational power is available , and this is typically done in state-of-the-art models ; for example , ( 14 ) considered 5 random seeds for ResNet-110 and ( 12 ) considered at least 2 random seeds on CIFAR10 ( 17 ) . ( 9 ) rigorously analyzed the impact of random seeds on model performance by considering between 50-400 random seeds for models of varying depth on MNIST ( 18 ) . Figure 1 from their work demonstrates that random seeds can affect the validation accuracy of models by up to a few percentage points on MNIST . Since this comes at a significant computational price , it is critical to identify methods that can obtain such boosts without having to perform random seed search , which is the topic of this paper . Another popular approach to improving neural network performance is ensembling , for example via bagging ( 6 ) or boosting ( 10 ) . Recently , ( 15 ) presented an approach to ensembling , which involved training a single network and saving the model parameters each time training stopped at a local minimum . ( 15 ) built upon ( 30 ) , which introduced horizontal and vertical voting to ensemble a model across training epochs . 
LLBoost can be used in combination with all the above approaches to further boost their performance , since it only adjusts the last layer of a pre-trained model without requiring any training . 3 PRELIMINARIES AND METHOD . We now present the preliminaries relevant for formally introducing LLBoost . Since our method is derived from over-parameterized linear regression , we first describe this problem setting . Noiseless Regression . Let ( X , y ) ⊂ Rd×n×R1×n denote the training dataset where d is the number of dimensions and n is the number of samples . In noiseless linear regression there exists w∗ ∈ R1×d such that y = w∗X . In order to estimate w∗ from the data , we use gradient descent to minimize the squared loss . Formally , we solve : arg min w∈Rd 1 2 ‖y − wX‖2 ( 1 ) using gradient descent with learning rate η , which proceeds according to the updates : w ( t+1 ) = w ( t ) + η ( y − w ( t ) X ) XT . The following well-known theorem ( 8 ) gives the solution , w ( ∞ ) , for the objective ( 1 ) given by gradient descent . Theorem 1 . Let ( X , y ) ⊂ Rd×n × R1×n with X of rank r < d , let λmax denote the largest eigenvalue ofXXT , and letA† denote the pseudo-inverse of a matrixA . Given initializationw ( 0 ) ∈ Rd , gradient descent used to solve : arg min w∈Rd 1 2 ‖y − wX‖2 with learning rate η < λ−1max converges to w ( ∞ ) = w ( 0 ) ( I −X ( XTX ) †XT ) + yX† . When w ( 0 ) = 0 ( zero initialization ) , w ( ∞ ) is the minimum ` 2-norm solution for solving the linear system wX = y . The following property of w ( ∞ ) is used to demonstrate that LLBoost does not impact training error . Lemma 1 . Let w ( ∞ ) = w ( 0 ) ( I−X ( XTX ) †XT ) + yX† . Then , w ( 0 ) ( I−X ( XTX ) †XT ) ⊥ yX† . The proof follows directly by substituting inX = UΣV T given by the singular value decomposition ( SVD ) . Lemma 1 implies that LLBoost does not affect training predictions since it only ever adds a component orthogonal to the span of the training feature matrix . We primarily focus on the noiseless regression setting , but we note that our theory directly extends to the Gaussian model from ( 3 ) and ( 7 ) . We now present a result from random matrix theory , which is at the basis of our results . The proof of the following lemma relies on the rotation invariance of the multivariate Gaussian distribution and is presented in Appendix B. Lemma 2 . If X ∈ Rd×n with columns x ( i ) ∼ N ( 0 , Id×d ) , then the singular vectors of X are uniformly distributed on the unit sphere Sd−1 . In particular , if x ∼ N ( 0 , Id×d ) , then x‖x‖ is uniformly distributed on the unit sphere Sd−1 . Method Overview . Given the above preliminaries , we present LLBoost in Algorithm 1 . LLBoost takes in the following inputs : ( 1 ) the training feature matrix X ; ( 2 ) the validation feature matrix Xt ; ( 3 ) the validation labels yt ; ( 4 ) the last layer weights w ; ( 5 ) the number of samples to consider T ; and ( 6 ) the radius of the hyper-sphere to sample from γ. LLBoost begins by constructing the projection matrix P = I −X ( XTX ) †XT . Then , LLBoost samples z ∼ N ( 0 , I ) , normalizes z and adds the perturbation γzP to w. LLBoost repeats this process and returns the perturbation that most improves the validation accuracy . Remarks . As shown in Figure 1A , X and Xt are constructed by applying all but the last layer of the neural network to the training/test data . 
If X is not full rank ( as is the case for neural networks that are not over-parameterized ) , we can create a low rank X by truncating the lower singular values of X . This correction only mildly affects training error as is shown in Section 4 . For classification settings , yt should be converted to 1-hot vectors . When the last layer has a bias , one can simply append a 1 to w and append an extra row of 1 ’ s to X and Xt . In most of our experiments , we consider up to T = 100 , 000 samples . When selecting γ , one can utilize a binary search approach starting with γ = 1√ d and either doubling or halving γ depending on whether the validation accuracy increased or decreased respectively . However , as a simple heuristic one can start with γ = 1√ d ‖w‖ ( as is recommended by Theorem 3 ) . Lastly , when the network has multiple outputs ( as in multiclass problems ) , we let the entries of the weight matrix be i.i.d . standard normal variables and then normalize by the Frobenius norm . LLBoost does not impact training error/accuracy . Since the perturbations constructed by LLBoost are always projected into the space orthogonal to the training features X , Lemma 1 implies that LLBoost does not alter the training predictions . This is an important aspect of our algorithm since otherwise , LLBoost could just overfit the test data . Algorithm 1 LLBoost ( X , Xt , yt , w , T , γ ) Input : X : = rank r training feature matrix ; Xt : = val . feature matrix ; yt : = val . labels ; w : = last layer weights ; T : = number of samples ; γ : = radius of hyper-sphere Output : wbest : = adjusted last layer weights 1 : wbest = None 2 : accbest = 0 ; ( 1 , d ) = w.shape ; U , Σ , V T ← SVD ( X ) 3 : P ← Id×d − UIr×rUT 4 : for t← 1 to T do 5 : z ← N ( 0 , Id×d ) 6 : z ← z‖z‖ 7 : wr ← γzP + w 8 : acc = GetV alAcc ( Xt , yt , wr ) 9 : if acc > accbest then 10 : wbest = wr ; accbest = acc 11 : end if 12 : end for 13 : return wbest GPU Implementation . We can easily remove the loop in Algorithm 1 by using the GPU since the dominating computations are vector-matrix products . A full implementation is provided using PyTorch ( 25 ) and NumPy ( 23 ; 29 ) in the footnote below1 . In practice , computing the validation accuracy for each sample of wr is the limiting factor in LLBoost . For large datasets , the functions used to compute the SVD and the projection matrix P need to be selected carefully . Note that with respect to SVD , the reduced SVD is sufficient for our algorithm . While for the ImageNet training features from FixResNext-101 the full SVD would require constructing a matrix with roughly 1012 entries , the reduced SVD only requires 108 entries .
This paper proposes LLBoost that enables adjusting the last linear layer without impacting the training accuracy under the assumption that the last linear layer is in an over-parametrized situation. When the last layer is not over-parametrized, LLBoost first applies the low rank approximation to the training feature matrix through the SVD decomposition, which may affect the original training accuracy. The reason why LLBoost does not change the training accuracy is explained as follows: In an over-parametrized noiseless linear regression, a solution of a linear system $y = wX$ obtained by the gradient descent with an initial value of $w^{(0)}$ is given in a closed form of $\hat{w} = w^{(0)} (I - X(X^\top X)^\dagger X^\top) + yX^\dagger$. Therefore, we can compute a solution of $y = wX$ by simply generating $w^{(0)}$ randomly and applying this formula. It is also experimentally verified that LLBoost can adjust the last linear layer without impacting the training accuracy (after approximating with SVD when necessary). The authors also present theoretical results that sampling $w^{(0)}$ uniformly on the hyper-sphere of appropriate radius leads to a solution that is better than the minimum norm solution ($yX^\dagger$) with constant probability.
science.mup_single_document_summarization.584
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nLLBoost: Last Layer Perturbation to Boost Pre-trained Neural Networks\n\nMain Content:\nIn this work , we present LLBoost , a theoretically-grounded , computationallyefficient method to boost the validation accuracy of pre-trained overparameterized models without impacting the original training accuracy . LLBoost adjusts the last layer of a neural network by adding a term that is orthogonal to the training feature matrix , which is constructed by applying all layers but the last to the training data . We provide an efficient implementation of LLBoost on the GPU and demonstrate that LLBoost , run using only 1 GPU , improves the test/validation accuracy of pre-trained models on CIFAR10 , ImageNet32 , and ImageNet . In the over-parameterized linear regression setting , we prove that LLBoost reduces the generalization error of any interpolating solution with high probability without affecting training error . 1 INTRODUCTION . Over the past decade , deep networks have produced a number of state-of-the-art results including surpassing human performance on the ImageNet classification task ( 26 ; 14 ) . However , tuning hyperparameters to produce the best possible neural network in a given application remains a computationally expensive task . State-of-the-art results often involve selecting the best model using multiple random seeds ( 14 ; 27 ; 12 ; 28 ) or ensembling ( 15 ) , and training even a single model can take several days even when using multiple GPUs . Hence , it is critical to identify computationally efficient approaches to improve the performance of pre-trained models without the need of re-training them . In this work , we present LLBoost , a theoretically-grounded , computationally-efficient method to boost the validation accuracy of pre-trained , over-parameterized models without impacting the training accuracy . Figure 1 provides an overview of our method as well as the main results . As shown in Figure 1A , our method adjusts the last fully-connected layer of a neural network by selecting the best performing perturbation out of several orthogonal to the training feature matrix , which is constructed by applying all but the last layer to the training data . In Figure 1B , we provide an example showing how our method applied to a model trained under a poorly chosen random seed can boost validation accuracy comparable to that of a model trained under a better seed . Lastly , Figure 1C shows that our method can significantly improve the validation accuracy of pre-trained neural networks on large datasets using a fraction of the training time . The intuition for our method is based on characterizing the benefit of random initialization in overparameterized linear regression . In particular , consider a dataset ( X , y ) ⊂ Rd×n×R1×n with n < d for which there exists w∗ ∈ R1×d such that y = w∗X . In order to estimate w∗ from the data , we use gradient descent with learning rate η and initialization w ( 0 ) to minimize the squared loss , i.e . to solve : arg min w∈Rd 1 2 ‖y − wX‖2 . It is well-known ( 8 ) that gradient descent converges to the following closed-form solution : wr = w ( 0 ) ( I −X ( XTX ) †XT ) + yX† , where A† is the pseudo-inverse of A ( see Appendix A for a proof ) . 
In this work , we prove that when n < d , sampling w ( 0 ) on a hyper-sphere leads to wr with lower generalization error than the minimum norm solution yX† with constant probability . Since the last layer of modern deep networks is usually linear , we apply this result from regression to the last layer of networks to arrive at our method , LLBoost . We end the introduction with a summary of our main contributions : 1 . We present LLBoost , a computationally efficient method for boosting the validation accuracy of pre-trained neural networks without affecting training accuracy . Additionally , we provide an efficient implementation of LLBoost on the GPU . 2 . We provide a wealth of empirical evidence that LLBoost improves the validation accuracy of pre-trained neural networks and prove that it does not affect training accuracy for overparameterized models . 3 . We provide evidence that LLBoost yields a computational advantage over random seed search for training neural networks . 4 . In the over-parameterized linear regression setting , we prove that LLBoost reduces the test error of any interpolating solution with high probability without affecting training error . 2 RELATED WORK . Understanding the benefits of over-parameterization is a recent topic of major interest in machine learning . ( 2 ) introduced the double descent curve showing that when increasing model capacity past the interpolation threshold , test error can drop a second time . ( 20 ) noted that this phenomenon had been noticed empirically several decades prior to the work of ( 2 ) . A line of recent work have provided a theoretical analysis of double descent in linear regression ( 13 ; 3 ; 1 ; 22 ; 5 ; 21 ) . In particular , ( 13 ; 3 ) analyzed the generalization error for the minimum norm solution in over-parameterized linear models . These works proved that the minimum norm solution can yield lower generalization error in the over-parameterized regime than in the under-parameterized regime . Our theoretical analysis primarily builds on that of ( 3 ) , but we analyze the generalization error of random-initialization interpolating solutions instead of the minimum-norm solution . Various methods , including random seed search and ensembling , are standardly used to improve the performance of machine learning models through training . ( 4 ) recommends using 5-10 random seeds if computational power is available , and this is typically done in state-of-the-art models ; for example , ( 14 ) considered 5 random seeds for ResNet-110 and ( 12 ) considered at least 2 random seeds on CIFAR10 ( 17 ) . ( 9 ) rigorously analyzed the impact of random seeds on model performance by considering between 50-400 random seeds for models of varying depth on MNIST ( 18 ) . Figure 1 from their work demonstrates that random seeds can affect the validation accuracy of models by up to a few percentage points on MNIST . Since this comes at a significant computational price , it is critical to identify methods that can obtain such boosts without having to perform random seed search , which is the topic of this paper . Another popular approach to improving neural network performance is ensembling , for example via bagging ( 6 ) or boosting ( 10 ) . Recently , ( 15 ) presented an approach to ensembling , which involved training a single network and saving the model parameters each time training stopped at a local minimum . ( 15 ) built upon ( 30 ) , which introduced horizontal and vertical voting to ensemble a model across training epochs . 
LLBoost can be used in combination with all the above approaches to further boost their performance , since it only adjusts the last layer of a pre-trained model without requiring any training . 3 PRELIMINARIES AND METHOD . We now present the preliminaries relevant for formally introducing LLBoost . Since our method is derived from over-parameterized linear regression , we first describe this problem setting . Noiseless Regression . Let ( X , y ) ⊂ Rd×n×R1×n denote the training dataset where d is the number of dimensions and n is the number of samples . In noiseless linear regression there exists w∗ ∈ R1×d such that y = w∗X . In order to estimate w∗ from the data , we use gradient descent to minimize the squared loss . Formally , we solve : arg min w∈Rd 1 2 ‖y − wX‖2 ( 1 ) using gradient descent with learning rate η , which proceeds according to the updates : w ( t+1 ) = w ( t ) + η ( y − w ( t ) X ) XT . The following well-known theorem ( 8 ) gives the solution , w ( ∞ ) , for the objective ( 1 ) given by gradient descent . Theorem 1 . Let ( X , y ) ⊂ Rd×n × R1×n with X of rank r < d , let λmax denote the largest eigenvalue ofXXT , and letA† denote the pseudo-inverse of a matrixA . Given initializationw ( 0 ) ∈ Rd , gradient descent used to solve : arg min w∈Rd 1 2 ‖y − wX‖2 with learning rate η < λ−1max converges to w ( ∞ ) = w ( 0 ) ( I −X ( XTX ) †XT ) + yX† . When w ( 0 ) = 0 ( zero initialization ) , w ( ∞ ) is the minimum ` 2-norm solution for solving the linear system wX = y . The following property of w ( ∞ ) is used to demonstrate that LLBoost does not impact training error . Lemma 1 . Let w ( ∞ ) = w ( 0 ) ( I−X ( XTX ) †XT ) + yX† . Then , w ( 0 ) ( I−X ( XTX ) †XT ) ⊥ yX† . The proof follows directly by substituting inX = UΣV T given by the singular value decomposition ( SVD ) . Lemma 1 implies that LLBoost does not affect training predictions since it only ever adds a component orthogonal to the span of the training feature matrix . We primarily focus on the noiseless regression setting , but we note that our theory directly extends to the Gaussian model from ( 3 ) and ( 7 ) . We now present a result from random matrix theory , which is at the basis of our results . The proof of the following lemma relies on the rotation invariance of the multivariate Gaussian distribution and is presented in Appendix B. Lemma 2 . If X ∈ Rd×n with columns x ( i ) ∼ N ( 0 , Id×d ) , then the singular vectors of X are uniformly distributed on the unit sphere Sd−1 . In particular , if x ∼ N ( 0 , Id×d ) , then x‖x‖ is uniformly distributed on the unit sphere Sd−1 . Method Overview . Given the above preliminaries , we present LLBoost in Algorithm 1 . LLBoost takes in the following inputs : ( 1 ) the training feature matrix X ; ( 2 ) the validation feature matrix Xt ; ( 3 ) the validation labels yt ; ( 4 ) the last layer weights w ; ( 5 ) the number of samples to consider T ; and ( 6 ) the radius of the hyper-sphere to sample from γ. LLBoost begins by constructing the projection matrix P = I −X ( XTX ) †XT . Then , LLBoost samples z ∼ N ( 0 , I ) , normalizes z and adds the perturbation γzP to w. LLBoost repeats this process and returns the perturbation that most improves the validation accuracy . Remarks . As shown in Figure 1A , X and Xt are constructed by applying all but the last layer of the neural network to the training/test data . 
If X is not full rank ( as is the case for neural networks that are not over-parameterized ) , we can create a low rank X by truncating the lower singular values of X . This correction only mildly affects training error as is shown in Section 4 . For classification settings , yt should be converted to 1-hot vectors . When the last layer has a bias , one can simply append a 1 to w and append an extra row of 1 ’ s to X and Xt . In most of our experiments , we consider up to T = 100 , 000 samples . When selecting γ , one can utilize a binary search approach starting with γ = 1√ d and either doubling or halving γ depending on whether the validation accuracy increased or decreased respectively . However , as a simple heuristic one can start with γ = 1√ d ‖w‖ ( as is recommended by Theorem 3 ) . Lastly , when the network has multiple outputs ( as in multiclass problems ) , we let the entries of the weight matrix be i.i.d . standard normal variables and then normalize by the Frobenius norm . LLBoost does not impact training error/accuracy . Since the perturbations constructed by LLBoost are always projected into the space orthogonal to the training features X , Lemma 1 implies that LLBoost does not alter the training predictions . This is an important aspect of our algorithm since otherwise , LLBoost could just overfit the test data . Algorithm 1 LLBoost ( X , Xt , yt , w , T , γ ) Input : X : = rank r training feature matrix ; Xt : = val . feature matrix ; yt : = val . labels ; w : = last layer weights ; T : = number of samples ; γ : = radius of hyper-sphere Output : wbest : = adjusted last layer weights 1 : wbest = None 2 : accbest = 0 ; ( 1 , d ) = w.shape ; U , Σ , V T ← SVD ( X ) 3 : P ← Id×d − UIr×rUT 4 : for t← 1 to T do 5 : z ← N ( 0 , Id×d ) 6 : z ← z‖z‖ 7 : wr ← γzP + w 8 : acc = GetV alAcc ( Xt , yt , wr ) 9 : if acc > accbest then 10 : wbest = wr ; accbest = acc 11 : end if 12 : end for 13 : return wbest GPU Implementation . We can easily remove the loop in Algorithm 1 by using the GPU since the dominating computations are vector-matrix products . A full implementation is provided using PyTorch ( 25 ) and NumPy ( 23 ; 29 ) in the footnote below1 . In practice , computing the validation accuracy for each sample of wr is the limiting factor in LLBoost . For large datasets , the functions used to compute the SVD and the projection matrix P need to be selected carefully . Note that with respect to SVD , the reduced SVD is sufficient for our algorithm . While for the ImageNet training features from FixResNext-101 the full SVD would require constructing a matrix with roughly 1012 entries , the reduced SVD only requires 108 entries .", "role": "user" }, { "content": "This paper proposes LLBoost that enables adjusting the last linear layer without impacting the training accuracy under the assumption that the last linear layer is in an over-parametrized situation. When the last layer is not over-parametrized, LLBoost first applies the low rank approximation to the training feature matrix through the SVD decomposition, which may affect the original training accuracy. The reason why LLBoost does not change the training accuracy is explained as follows: In an over-parametrized noiseless linear regression, a solution of a linear system $y = wX$ obtained by the gradient descent with an initial value of $w^{(0)}$ is given in a closed form of $\\hat{w} = w^{(0)} (I - X(X^\\top X)^\\dagger X^\\top) + yX^\\dagger$. 
Therefore, we can compute a solution of $y = wX$ by simply generating $w^{(0)}$ randomly and applying this formula. It is also experimentally verified that LLBoost can adjust the last linear layer without impacting the training accuracy (after approximating with SVD when necessary). The authors also present theoretical results that sampling $w^{(0)}$ uniformly on the hyper-sphere of appropriate radius leads to a solution that is better than the minimum norm solution ($yX^\\dagger$) with constant probability.", "role": "assistant" } ]
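Algorithm 1 in the row above (the LLBoost pseudocode) boils down to a projection plus a random search, which is easy to sanity-check numerically. The NumPy sketch below is a reading of that pseudocode on synthetic regression data, not the authors' released implementation: the toy shapes, the seed, and the use of validation MSE in place of validation accuracy are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy over-parameterized setup: d features, n < d training points.
d, n, n_val = 64, 20, 200
X = rng.standard_normal((d, n))           # training feature matrix (features x samples)
w_star = rng.standard_normal((1, d))
y = w_star @ X                            # noiseless targets
w = y @ np.linalg.pinv(X)                 # minimum-norm "pre-trained" last layer

X_val = rng.standard_normal((d, n_val))
y_val = w_star @ X_val

# Projection onto the orthogonal complement of the training features,
# built from the reduced SVD as in Algorithm 1: P = I - U U^T.
U, _, _ = np.linalg.svd(X, full_matrices=False)
P = np.eye(d) - U @ U.T

def val_error(wr):
    return float(np.mean((wr @ X_val - y_val) ** 2))

gamma = np.linalg.norm(w) / np.sqrt(d)    # heuristic radius suggested in the text
best_w, best_err = w, val_error(w)
for _ in range(1000):                     # T candidate perturbations
    z = rng.standard_normal((1, d))
    z /= np.linalg.norm(z)
    wr = gamma * (z @ P) + w              # perturbation lies in null(X^T)
    err = val_error(wr)
    if err < best_err:
        best_w, best_err = wr, err

# Training predictions are untouched, mirroring Lemma 1:
assert np.allclose(best_w @ X, w @ X)
print(f"val MSE: {val_error(w):.4f} -> {best_err:.4f}")
```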
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:

- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.

Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].

Abstract:
ABSTRACT: Chronic obstructive pulmonary disease (COPD) is the leading cause of death worldwide, and poses a substantial economic and social burden. Telemonitoring has been proposed as a solution to this growing problem, but its impact on patient outcome is equivocal. This randomized controlled trial aimed to investigate effectiveness of telemonitoring in improving COPD patient outcome. In total, 106 subjects were randomly assigned to the telemonitoring (n = 53) or usual care (n = 53) group. During the two months following discharge, telemonitoring group patients had to report their symptoms daily using an electronic diary. The primary outcome measure was time to first re-admission for COPD exacerbation within six months of discharge. During the follow-up period, time to first re-admission for COPD exacerbation was significantly increased in the telemonitoring group than in the usual care group (p = 0.026). Telemonitoring was also associated with a reduced number of all-cause re-admissions (0.23 vs. 0.68/patient; p = 0.002) and emergency room visits (0.36 vs. 0.91/patient; p = 0.006). In conclusion, telemonitoring intervention was associated with improved outcomes among COPD patients admitted for exacerbation in a country characterized by a small territory and high accessibility to medical services. The findings are encouraging and add further support to implementation of telemonitoring as part of COPD care.

Extracted Data:
[["telemonitoring group", "usual group", "ER visits for all causes", "significantly decreased", "Telemonitoring was also associated with a reduced number of all-cause re-admissions (0.23 vs. 0.68/patient; p = 0.002) and emergency room visits (0.36 vs. 0.91/patient; p = 0.006)."], ["telemonitoring group", "usual group", "the time to first re-admission for COPD exacerbation", "significantly increased", ", time to first re-admission for COPD exacerbation was significantly increased in the telemonitoring group than in the usual care group (p = 0.026)."], ["telemonitoring group", "usual group", "the number of all-cause re-admissions over a period of six months", "significantly decreased", "Telemonitoring was also associated with a reduced number of all-cause re-admissions (0.23 vs. 0.68/patient; p = 0.002) and emergency room visits (0.36 vs. 0.91/patient; p = 0.006)"], ["telemonitoring group", "usual group", "episodes of COPD-related re-admissions or ER visits per capita", "significantly decreased", "Telemonitoring was also associated with a reduced number of all-cause re-admissions (0.23 vs. 0.68/patient; p = 0.002) and emergency room visits (0.36 vs. 0.91/patient; p = 0.006)."]]
science.evidence_inference.29
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Chronic obstructive pulmonary disease (COPD) is the leading cause of death worldwide, and poses a substantial economic and social burden. Telemonitoring has been proposed as a solution to this growing problem, but its impact on patient outcome is equivocal. This randomized controlled trial aimed to investigate effectiveness of telemonitoring in improving COPD patient outcome. In total, 106 subjects were randomly assigned to the telemonitoring (n = 53) or usual care (n = 53) group. During the two months following discharge, telemonitoring group patients had to report their symptoms daily using an electronic diary. The primary outcome measure was time to first re-admission for COPD exacerbation within six months of discharge. During the follow-up period, time to first re-admission for COPD exacerbation was significantly increased in the telemonitoring group than in the usual care group (p = 0.026). Telemonitoring was also associated with a reduced number of all-cause re-admissions (0.23 vs. 0.68/patient; p = 0.002) and emergency room visits (0.36 vs. 0.91/patient; p = 0.006). In conclusion, telemonitoring intervention was associated with improved outcomes among COPD patients admitted for exacerbation in a country characterized by a small territory and high accessibility to medical services. The findings are encouraging and add further support to implementation of telemonitoring as part of COPD care.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"telemonitoring group\", \"usual group\", \"ER visits for all causes\", \"significantly decreased\", \"Telemonitoring was also associated with a reduced number of all-cause re-admissions (0.23 vs. 0.68/patient; p = 0.002) and emergency room visits (0.36 vs. 0.91/patient; p = 0.006).\"], [\"telemonitoring group\", \"usual group\", \"the time to first re-admission for COPD exacerbation\", \"significantly increased\", \", time to first re-admission for COPD exacerbation was significantly increased in the telemonitoring group than in the usual care group (p = 0.026).\"], [\"telemonitoring group\", \"usual group\", \"the number of all-cause re-admissions over a period of six months\", \"significantly decreased\", \"Telemonitoring was also associated with a reduced number of all-cause re-admissions (0.23 vs. 0.68/patient; p = 0.002) and emergency room visits (0.36 vs. 0.91/patient; p = 0.006)\"], [\"telemonitoring group\", \"usual group\", \"episodes of COPD-related re-admissions or ER visits per capita\", \"significantly decreased\", \"Telemonitoring was also associated with a reduced number of all-cause re-admissions (0.23 vs. 0.68/patient; p = 0.002) and emergency room visits (0.36 vs. 0.91/patient; p = 0.006).\"]]", "role": "assistant" } ]
science.discomat_te
| Sample no. | Composition (mol%) | Composition (mol%) | Density d (gcm-3) | Glass molar volume (V m) | Additive oxide volume (V o) | Volume deviation (V x) |
| Sample no. | Bi2O3 | B2O3 | Density d (gcm-3) | Glass molar volume (V m) | Additive oxide volume (V o) | Volume deviation (V x) |
| 1 | 20 | 80 | 4.457+-0.002 | 33.40 | 41.07 | -7.66 |
| 2 | 25 | 75 | 4.815+-0.002 | 35.04 | 41.78 | -6.74 |
| 3 | 30 | 70 | 5.470+-0.003 | 34.46 | 42.48 | -8.01 |
| 4 | 33 | 67 | 5.636+-0.003 | 35.56 | 42.91 | -7.34 |
| 5 | 37.5 | 62.5 | 6.006+-0.003 | 36.34 | 43.54 | -7.20 |
| 6 | 40 | 60 | 6.246+-0.001 | 36.51 | 43.89 | -7.36 |
| 7 | 41 | 59 | 6.359+-0.006 | 36.50 | 44.03 | -7.53 |
| 8 | 42 | 58 | 6.389+-0.003 | 36.95 | 44.17 | -7.22 |
| 9 | 47 | 53 | 6.737+-0.004 | 37.98 | 44.88 | -6.89 |
| 10 | 50 | 50 | 6.874+-0.005 | 38.96 | 45.30 | -6.54 |
| 11 | 53 | 47 | 7.074+-0.005 | 39.54 | 45.73 | -6.19 |
| 12 | 55 | 45 | 7.214 +- 0.004 | 39.85 | 46.01 | -6.15 |
| 13 | 60 | 40 | 7.550+-0.011 | 40.72 | 46.71 | -5.99 |
| 14 | 66 | 34 | 7.765+-0.020 | 42.65 | 47.56 | -4.90 |

Caption: Density, molar volume, additive oxide volume and volume deviations for bismuth borate glasses

Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}

Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.

Output:
{"value": "1", "type": "Other"} {"value": "20", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "1"} {"value": "80", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "1"} {"value": "4.457", "type": "Other"} {"value": "33.40", "type": "Other"} {"value": "41.07", "type": "Other"} {"value": "2", "type": "Other"} {"value": "25", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "2"} {"value": "75", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "2"} {"value": "4.815", "type": "Other"} {"value": "35.04", "type": "Other"} {"value": "41.78", "type": "Other"} {"value": "3", "type": "Other"} {"value": "30", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "3"} {"value": "70", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "3"} {"value": "5.470", "type": "Other"} {"value": "34.46", "type": "Other"} {"value": "42.48", "type": "Other"} {"value": "4", "type": "Other"} {"value": "33", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "4"} {"value": "67", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "4"} {"value": "5.636", "type": "Other"} {"value": "35.56", "type": "Other"} {"value": "42.91", "type": "Other"} {"value": "5", "type": "Other"} {"value": "37.5", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "5"} {"value": "62.5", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "5"} {"value": "6.006", "type": "Other"} {"value": "36.34", "type": "Other"} {"value": "43.54", "type": "Other"} {"value": "6", "type": "Other"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "6"} {"value": "60", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "6"} {"value": "6.246", "type": "Other"} {"value": "36.51", "type": "Other"} {"value": "43.89", "type": "Other"} {"value": "7", "type": "Other"} {"value": "41", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "7"} {"value": "59", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "7"} {"value": "6.359", "type": "Other"} {"value": "36.50", "type": "Other"} {"value": "44.03", "type": "Other"} {"value": "8", "type": "Other"} {"value": "42", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "8"} {"value": "58", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "8"} {"value": "6.389", "type": "Other"} {"value": "36.95", "type": "Other"} {"value": "44.17", "type": "Other"} {"value": "9", "type": "Other"} {"value": "47", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "9"} {"value": "53", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "9"} {"value": "6.737", "type": "Other"} {"value": "37.98", "type": "Other"} {"value": "44.88", "type": "Other"} {"value": "10", "type": "Other"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "10"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "10"} {"value": "6.874", "type": "Other"} {"value": "38.96", "type": "Other"} {"value": "45.30", "type": "Other"} {"value": "11", "type": "Other"} {"value": "53", "type": 
"Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "11"} {"value": "47", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "11"} {"value": "7.074", "type": "Other"} {"value": "39.54", "type": "Other"} {"value": "45.73", "type": "Other"} {"value": "12", "type": "Other"} {"value": "55", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "12"} {"value": "45", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "12"} {"value": "7.214", "type": "Other"} {"value": "39.85", "type": "Other"} {"value": "46.01", "type": "Other"} {"value": "13", "type": "Other"} {"value": "60", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "13"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "13"} {"value": "7.550", "type": "Other"} {"value": "40.72", "type": "Other"} {"value": "46.71", "type": "Other"} {"value": "14", "type": "Other"} {"value": "66", "type": "Glass_Compound_Amount", "constituent": "Bi2O3", "unit": "mol", "material": "14"} {"value": "34", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "14"} {"value": "7.765", "type": "Other"} {"value": "42.65", "type": "Other"} {"value": "47.56", "type": "Other"}
science.discomat_te.41
[ { "content": "| Sample no. | Composition (mol%) | Composition (mol%) | Density d (gcm-3) | Glass molar volume (V m) | Additive oxide volume (V o) | Volume deviation (V x) |\n| Sample no. | Bi2O3 | B2O3 | Density d (gcm-3) | Glass molar volume (V m) | Additive oxide volume (V o) | Volume deviation (V x) |\n| 1 | 20 | 80 | 4.457+-0.002 | 33.40 | 41.07 | -7.66 |\n| 2 | 25 | 75 | 4.815+-0.002 | 35.04 | 41.78 | -6.74 |\n| 3 | 30 | 70 | 5.470+-0.003 | 34.46 | 42.48 | -8.01 |\n| 4 | 33 | 67 | 5.636+-0.003 | 35.56 | 42.91 | -7.34 |\n| 5 | 37.5 | 62.5 | 6.006+-0.003 | 36.34 | 43.54 | -7.20 |\n| 6 | 40 | 60 | 6.246+-0.001 | 36.51 | 43.89 | -7.36 |\n| 7 | 41 | 59 | 6.359+-0.006 | 36.50 | 44.03 | -7.53 |\n| 8 | 42 | 58 | 6.389+-0.003 | 36.95 | 44.17 | -7.22 |\n| 9 | 47 | 53 | 6.737+-0.004 | 37.98 | 44.88 | -6.89 |\n| 10 | 50 | 50 | 6.874+-0.005 | 38.96 | 45.30 | -6.54 |\n| 11 | 53 | 47 | 7.074+-0.005 | 39.54 | 45.73 | -6.19 |\n| 12 | 55 | 45 | 7.214 +- 0.004 | 39.85 | 46.01 | -6.15 |\n| 13 | 60 | 40 | 7.550+-0.011 | 40.72 | 46.71 | -5.99 |\n| 14 | 66 | 34 | 7.765+-0.020 | 42.65 | 47.56 | -4.90 |\n\nCaption: Density, molar volume, additive oxide volume and volume deviations for bismuth borate glasses\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"1\"}\n{\"value\": \"80\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"1\"}\n{\"value\": \"4.457\", \"type\": \"Other\"}\n{\"value\": \"33.40\", \"type\": \"Other\"}\n{\"value\": \"41.07\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"2\"}\n{\"value\": \"75\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"2\"}\n{\"value\": \"4.815\", \"type\": \"Other\"}\n{\"value\": \"35.04\", \"type\": \"Other\"}\n{\"value\": \"41.78\", \"type\": \"Other\"}\n{\"value\": \"3\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"3\"}\n{\"value\": \"70\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"3\"}\n{\"value\": \"5.470\", \"type\": \"Other\"}\n{\"value\": \"34.46\", \"type\": \"Other\"}\n{\"value\": \"42.48\", \"type\": \"Other\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"33\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"4\"}\n{\"value\": \"67\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"4\"}\n{\"value\": \"5.636\", \"type\": \"Other\"}\n{\"value\": \"35.56\", \"type\": \"Other\"}\n{\"value\": \"42.91\", \"type\": \"Other\"}\n{\"value\": \"5\", \"type\": \"Other\"}\n{\"value\": \"37.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", 
\"unit\": \"mol\", \"material\": \"5\"}\n{\"value\": \"62.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"5\"}\n{\"value\": \"6.006\", \"type\": \"Other\"}\n{\"value\": \"36.34\", \"type\": \"Other\"}\n{\"value\": \"43.54\", \"type\": \"Other\"}\n{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"6\"}\n{\"value\": \"60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"6\"}\n{\"value\": \"6.246\", \"type\": \"Other\"}\n{\"value\": \"36.51\", \"type\": \"Other\"}\n{\"value\": \"43.89\", \"type\": \"Other\"}\n{\"value\": \"7\", \"type\": \"Other\"}\n{\"value\": \"41\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"7\"}\n{\"value\": \"59\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"7\"}\n{\"value\": \"6.359\", \"type\": \"Other\"}\n{\"value\": \"36.50\", \"type\": \"Other\"}\n{\"value\": \"44.03\", \"type\": \"Other\"}\n{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"42\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"8\"}\n{\"value\": \"58\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"8\"}\n{\"value\": \"6.389\", \"type\": \"Other\"}\n{\"value\": \"36.95\", \"type\": \"Other\"}\n{\"value\": \"44.17\", \"type\": \"Other\"}\n{\"value\": \"9\", \"type\": \"Other\"}\n{\"value\": \"47\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"9\"}\n{\"value\": \"53\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"9\"}\n{\"value\": \"6.737\", \"type\": \"Other\"}\n{\"value\": \"37.98\", \"type\": \"Other\"}\n{\"value\": \"44.88\", \"type\": \"Other\"}\n{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"10\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"10\"}\n{\"value\": \"6.874\", \"type\": \"Other\"}\n{\"value\": \"38.96\", \"type\": \"Other\"}\n{\"value\": \"45.30\", \"type\": \"Other\"}\n{\"value\": \"11\", \"type\": \"Other\"}\n{\"value\": \"53\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"11\"}\n{\"value\": \"47\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"11\"}\n{\"value\": \"7.074\", \"type\": \"Other\"}\n{\"value\": \"39.54\", \"type\": \"Other\"}\n{\"value\": \"45.73\", \"type\": \"Other\"}\n{\"value\": \"12\", \"type\": \"Other\"}\n{\"value\": \"55\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"12\"}\n{\"value\": \"45\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"12\"}\n{\"value\": \"7.214\", \"type\": \"Other\"}\n{\"value\": \"39.85\", \"type\": \"Other\"}\n{\"value\": \"46.01\", \"type\": \"Other\"}\n{\"value\": \"13\", \"type\": \"Other\"}\n{\"value\": \"60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"13\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", 
\"unit\": \"mol\", \"material\": \"13\"}\n{\"value\": \"7.550\", \"type\": \"Other\"}\n{\"value\": \"40.72\", \"type\": \"Other\"}\n{\"value\": \"46.71\", \"type\": \"Other\"}\n{\"value\": \"14\", \"type\": \"Other\"}\n{\"value\": \"66\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Bi2O3\", \"unit\": \"mol\", \"material\": \"14\"}\n{\"value\": \"34\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"mol\", \"material\": \"14\"}\n{\"value\": \"7.765\", \"type\": \"Other\"}\n{\"value\": \"42.65\", \"type\": \"Other\"}\n{\"value\": \"47.56\", \"type\": \"Other\"}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Cost effectiveness evaluations based on cRCT data accurately reflect prices for patients in actual clinical practice. Research Abstract: BACKGROUND Data on absolute risks of outcomes and patterns of drug use in cost-effectiveness analyses are often based on randomised clinical trials (RCTs). The objective of this study was to evaluate the external validity of published cost-effectiveness studies by comparing the data used in these studies (typically based on RCTs) to observational data from actual clinical practice. Selective Cox-2 inhibitors (coxibs) were used as an example. METHODS AND FINDINGS The UK General Practice Research Database (GPRD) was used to estimate the exposure characteristics and individual probabilities of upper gastrointestinal (GI) events during current exposure to nonsteroidal anti-inflammatory drugs (NSAIDs) or coxibs. A basic cost-effectiveness model was developed evaluating two alternative strategies: prescription of a conventional NSAID or coxib. Outcomes included upper GI events as recorded in GPRD and hospitalisation for upper GI events recorded in the national registry of hospitalisations (Hospital Episode Statistics) linked to GPRD. Prescription costs were based on the prescribed number of tables as recorded in GPRD and the 2006 cost data from the British National Formulary. The study population included over 1 million patients prescribed conventional NSAIDs or coxibs. Only a minority of patients used the drugs long-term and daily (34.5% of conventional NSAIDs and 44.2% of coxibs), whereas coxib RCTs required daily use for at least 6-9 months. The mean cost of preventing one upper GI event as recorded in GPRD was US$104k (ranging from US$64k with long-term daily use to US$182k with intermittent use) and US$298k for hospitalizations. The mean costs (for GPRD events) over calendar time were US$58k during 1990-1993 and US$174k during 2002-2005. Using RCT data rather than GPRD data for event probabilities, the mean cost was US$16k with the VIGOR RCT and US$20k with the CLASS RCT. CONCLUSIONS The published cost-effectiveness analyses of coxibs lacked external validity, did not represent patients in actual clinical practice, and should not have been used to inform prescribing policies. External validity should be an explicit requirement for cost-effectiveness analyses.
{"verdict": "CONTRADICT", "evidence": ["The mean costs (for GPRD events) over calendar time were US$58k during 1990-1993 and US$174k during 2002-2005.", "Using RCT data rather than GPRD data for event probabilities, the mean cost was US$16k with the VIGOR RCT and US$20k with the CLASS RCT.", "CONCLUSIONS The published cost-effectiveness analyses of coxibs lacked external validity, did not represent patients in actual clinical practice, and should not have been used to inform prescribing policies."]}
science.scifact_entailment.192
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nCost effectiveness evaluations based on cRCT data accurately reflect prices for patients in actual clinical practice.\n\nResearch Abstract:\nBACKGROUND Data on absolute risks of outcomes and patterns of drug use in cost-effectiveness analyses are often based on randomised clinical trials (RCTs).\nThe objective of this study was to evaluate the external validity of published cost-effectiveness studies by comparing the data used in these studies (typically based on RCTs) to observational data from actual clinical practice.\nSelective Cox-2 inhibitors (coxibs) were used as an example.\nMETHODS AND FINDINGS The UK General Practice Research Database (GPRD) was used to estimate the exposure characteristics and individual probabilities of upper gastrointestinal (GI) events during current exposure to nonsteroidal anti-inflammatory drugs (NSAIDs) or coxibs.\nA basic cost-effectiveness model was developed evaluating two alternative strategies: prescription of a conventional NSAID or coxib.\nOutcomes included upper GI events as recorded in GPRD and hospitalisation for upper GI events recorded in the national registry of hospitalisations (Hospital Episode Statistics) linked to GPRD.\nPrescription costs were based on the prescribed number of tables as recorded in GPRD and the 2006 cost data from the British National Formulary.\nThe study population included over 1 million patients prescribed conventional NSAIDs or coxibs.\nOnly a minority of patients used the drugs long-term and daily (34.5% of conventional NSAIDs and 44.2% of coxibs), whereas coxib RCTs required daily use for at least 6-9 months.\nThe mean cost of preventing one upper GI event as recorded in GPRD was US$104k (ranging from US$64k with long-term daily use to US$182k with intermittent use) and US$298k for hospitalizations.\nThe mean costs (for GPRD events) over calendar time were US$58k during 1990-1993 and US$174k during 2002-2005.\nUsing RCT data rather than GPRD data for event probabilities, the mean cost was US$16k with the VIGOR RCT and US$20k with the CLASS RCT.\nCONCLUSIONS The published cost-effectiveness analyses of coxibs lacked external validity, did not represent patients in actual clinical practice, and should not have been used to inform prescribing policies.\nExternal validity should be an explicit requirement for cost-effectiveness analyses.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"The mean costs (for GPRD events) over calendar time were US$58k during 1990-1993 and US$174k during 2002-2005.\", \"Using RCT data rather than GPRD data for event probabilities, the mean cost was US$16k with the VIGOR RCT and US$20k with the CLASS RCT.\", \"CONCLUSIONS The published cost-effectiveness analyses of coxibs lacked external validity, did not represent patients in actual clinical practice, and should not have been used to inform prescribing policies.\"]}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.

Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.

Text with Citation:
Thus, it is useful to per-7 Our experimental setup is designed to support a comparison between Negra and TüBa-D/Z for the three evaluation metrics and is intended to be comparable to the setup of <cite>Kübler et al. (2006)</cite> .
["Similarities"]
science.multicite_intent_classification.431
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThus, it is useful to per-7 Our experimental setup is designed to support a comparison between Negra and TüBa-D/Z for the three evaluation metrics and is intended to be comparable to the setup of <cite>Kübler et al. (2006)</cite> .", "role": "user" }, { "content": "[\"Similarities\"]", "role": "assistant" } ]
science.discomat_te
| Oxides | xm | xo | gm | Lo +-0.01 |
| Li2O | 1 | 2.91 | 1 | 1 |
| B2O3 | 2 | 3.66 | 2.36 | 0.42 |
| ZnO | 1.65 | 3.19 | 1.87 | 0.53 |
| TiO2 | 1.54 | 3.28 | 1.72 | 0.58 |
| V2O5 | 1.63 | 2.62 | 1.84 | 0.54 |

Caption: The metal ion electronegativity (xm), oxide ion electronegativity (xo), basicity moderating parameter (gm) and optical basicity of individual oxides (Lo).

Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}

Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.

Output:
{"value": "1", "type": "Other"} {"value": "2.91", "type": "Other"} {"value": "1", "type": "Other"} {"value": "1", "type": "Other"} {"value": "2", "type": "Other"} {"value": "3.66", "type": "Other"} {"value": "2.36", "type": "Other"} {"value": "0.42", "type": "Other"} {"value": "1.65", "type": "Other"} {"value": "3.19", "type": "Other"} {"value": "1.87", "type": "Other"} {"value": "0.53", "type": "Other"} {"value": "1.54", "type": "Other"} {"value": "3.28", "type": "Other"} {"value": "1.72", "type": "Other"} {"value": "0.58", "type": "Other"} {"value": "1.63", "type": "Other"} {"value": "2.62", "type": "Other"} {"value": "1.84", "type": "Other"} {"value": "0.54", "type": "Other"}
science.discomat_te.26
[ { "content": "| Oxides | xm | xo | gm | Lo +-0.01 |\n| Li2O | 1 | 2.91 | 1 | 1 |\n| B2O3 | 2 | 3.66 | 2.36 | 0.42 |\n| ZnO | 1.65 | 3.19 | 1.87 | 0.53 |\n| TiO2 | 1.54 | 3.28 | 1.72 | 0.58 |\n| V2O5 | 1.63 | 2.62 | 1.84 | 0.54 |\n\nCaption: The metal ion electronegativity (xm), oxide ion electronegativity (xo), basicity moderating parameter (gm) and optical basicity of individual oxides (Lo).\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"2.91\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"3.66\", \"type\": \"Other\"}\n{\"value\": \"2.36\", \"type\": \"Other\"}\n{\"value\": \"0.42\", \"type\": \"Other\"}\n{\"value\": \"1.65\", \"type\": \"Other\"}\n{\"value\": \"3.19\", \"type\": \"Other\"}\n{\"value\": \"1.87\", \"type\": \"Other\"}\n{\"value\": \"0.53\", \"type\": \"Other\"}\n{\"value\": \"1.54\", \"type\": \"Other\"}\n{\"value\": \"3.28\", \"type\": \"Other\"}\n{\"value\": \"1.72\", \"type\": \"Other\"}\n{\"value\": \"0.58\", \"type\": \"Other\"}\n{\"value\": \"1.63\", \"type\": \"Other\"}\n{\"value\": \"2.62\", \"type\": \"Other\"}\n{\"value\": \"1.84\", \"type\": \"Other\"}\n{\"value\": \"0.54\", \"type\": \"Other\"}", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Lack of association of C-C chemokine receptor 5 DD32 deletion status with rheumatoid arthritis, systemic lupus erythematosus, lupus nephritis, and disease severity. OBJECTIVE: C-C chemokine receptor 5 (CCR5) plays an important role in inflammation. A 32 base-pair (DD32) deletion in the CCR5 gene leads to a nonfunctional receptor. This deletion has been reported to have a protective effect on the development and progression of several autoimmune diseases. We investigated whether the DD32 deletion is associated with disease susceptibility in a population of patients with rheumatoid arthritis (RA), systemic lupus erythematosus (SLE), and lupus nephritis (LN); and whether it is associated with disease severity. METHODS: DNA samples from 405 RA patients, 97 SLE patients, 113 LN patients, and 431 healthy controls were genotyped for the CCR5 DD32 deletion. Differences in genotype frequencies were tested between patients and controls. Association of genotypes with disease severity was analyzed. RESULTS: Genotype frequencies of each group were in Hardy-Weinberg equilibrium. The genotype frequencies of patients did not differ significantly from controls (CCR5/DD32, DD32/DD32: RA 18.3% and 1.2%, respectively; SLE 17.5% and 2.1%; LN 13.3% and 1.8%; controls 20.0% and 2.8%). However, there was a trend for lower DD32 deletion allele frequency in LN patients compared to controls (p = 0.08). There was no significant association between the CCR5 status and disease severity in RA, SLE, or LN. CONCLUSION: Although an association with LN cannot be excluded, the CCR5 DD32 deletion does not seem to be a disease susceptibility genotype for RA, SLE, or LN. No significant effect of the DD32 deletion on disease severity was demonstrated.
{"CellLine": [], "Chemical": [], "Disease": ["lupus nephritis", "inflammation", "rheumatoid arthritis", "RA", "LN", "SLE", "systemic lupus erythematosus", "autoimmune diseases"], "Gene": ["C-C chemokine receptor 5", "CCR5"], "Species": ["patients"], "Variant": ["32 base-pair (DD32) deletion", "DD32", "DD32 deletion"]}
science.biored_ner.71
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nLack of association of C-C chemokine receptor 5 DD32 deletion status with rheumatoid arthritis, systemic lupus erythematosus, lupus nephritis, and disease severity. OBJECTIVE: C-C chemokine receptor 5 (CCR5) plays an important role in inflammation. A 32 base-pair (DD32) deletion in the CCR5 gene leads to a nonfunctional receptor. This deletion has been reported to have a protective effect on the development and progression of several autoimmune diseases. We investigated whether the DD32 deletion is associated with disease susceptibility in a population of patients with rheumatoid arthritis (RA), systemic lupus erythematosus (SLE), and lupus nephritis (LN); and whether it is associated with disease severity. METHODS: DNA samples from 405 RA patients, 97 SLE patients, 113 LN patients, and 431 healthy controls were genotyped for the CCR5 DD32 deletion. Differences in genotype frequencies were tested between patients and controls. Association of genotypes with disease severity was analyzed. RESULTS: Genotype frequencies of each group were in Hardy-Weinberg equilibrium. The genotype frequencies of patients did not differ significantly from controls (CCR5/DD32, DD32/DD32: RA 18.3% and 1.2%, respectively; SLE 17.5% and 2.1%; LN 13.3% and 1.8%; controls 20.0% and 2.8%). However, there was a trend for lower DD32 deletion allele frequency in LN patients compared to controls (p = 0.08). There was no significant association between the CCR5 status and disease severity in RA, SLE, or LN. CONCLUSION: Although an association with LN cannot be excluded, the CCR5 DD32 deletion does not seem to be a disease susceptibility genotype for RA, SLE, or LN. No significant effect of the DD32 deletion on disease severity was demonstrated.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [], \"Disease\": [\"lupus nephritis\", \"inflammation\", \"rheumatoid arthritis\", \"RA\", \"LN\", \"SLE\", \"systemic lupus erythematosus\", \"autoimmune diseases\"], \"Gene\": [\"C-C chemokine receptor 5\", \"CCR5\"], \"Species\": [\"patients\"], \"Variant\": [\"32 base-pair (DD32) deletion\", \"DD32\", \"DD32 deletion\"]}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Liposome bupivacaine is a novel formulation of the local anesthetic bupivacaine, designed to provide prolonged postsurgical analgesia. This analysis examined pooled efficacy data as reflected in cumulative pain scores from 10 randomized, double-blind liposome bupivacaine clinical studies in which the study drug was administered via local wound infiltration. ABSTRACT.METHODS: A total of 823 patients were exposed to liposome bupivacaine in 10 local wound infiltration studies at doses ranging from 66 mg to 532 mg in five surgical settings; 446 patients received bupivacaine HCl (dose: 75–200 mg) and 190 received placebo. Efficacy measures were assessed through 72 hours after surgery. ABSTRACT.RESULTS: Overall, 45% of patients were male and 19% were ≥65 years of age. In the analysis of cumulative pain intensity scores through 72 hours, liposome bupivacaine was associated with lower pain scores than the comparator in 16 of 19 treatment arms assessed, achieving statistically significant differences compared with bupivacaine HCl (P < 0.05) in five of 17 treatment arms. These results were supported by results of other efficacy measures, including time to first use of opioid rescue medication, proportion of patients avoiding opioid rescue medication, total postsurgical consumption of opioid rescue medication, and patient/care provider satisfaction with postoperative analgesia. Local infiltration of liposome bupivacaine resulted in significant systemic plasma levels of bupivacaine, which could persist for 96 hours; systemic plasma levels of bupivacaine following administration of liposome bupivacaine were not correlated with local efficacy. Liposome bupivacaine and bupivacaine HCl were generally well tolerated. ABSTRACT.CONCLUSION: Based on this integrated analysis of multiple efficacy measures, liposome bupivacaine appears to be a potentially useful therapeutic option for prolonged reduction of postsurgical pain in soft tissue and orthopedic surgeries. Extracted Data:
[["liposome bupivacaine", "bupivacaine HCl", "cumulative pain", "significantly decreased", "In the analysis of cumulative pain intensity scores through 72 hours, liposome bupivacaine was associated with lower pain scores than the comparator in 16 of 19 treatment arms assessed, achieving statistically significant differences compared with bupivacaine HCl (P \u003c 0.05) in five of 17 treatment arms"], ["liposome bupivacaine 266 mg ", "placebo", "\"satisfied\ufffd or \"extremely satisfied\ufffd with their postoperative analgesia at 24 hours after surgery", "significantly increased", "At 24 hours after surgery in this study, 95% of patients in the liposome bupivacaine 266 mg group were \"satisfied\" or \"extremely satisfied\" with their postoperative analgesia compared with 72% in the placebo group (P = 0.0007). "], ["liposome bupivacaine 266 mg ", "placebo", "\"satisfied\ufffd or \"extremely satisfied\ufffd with their postoperative analgesia at 72 hours after surgery", "significantly increased", "At 24 hours after surgery in this study, 95% of patients in the liposome bupivacaine 266 mg group were \"satisfied\" or \"extremely satisfied\" with their postoperative analgesia compared with 72% in the placebo group (P = 0.0007). At 72 hours after surgery, percentages were 95% and 73%, respectively (P = 0.0007)."]]
science.evidence_inference.61
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Liposome bupivacaine is a novel formulation of the local anesthetic bupivacaine, designed to provide prolonged postsurgical analgesia. This analysis examined pooled efficacy data as reflected in cumulative pain scores from 10 randomized, double-blind liposome bupivacaine clinical studies in which the study drug was administered via local wound infiltration.\n\nABSTRACT.METHODS: A total of 823 patients were exposed to liposome bupivacaine in 10 local wound infiltration studies at doses ranging from 66 mg to 532 mg in five surgical settings; 446 patients received bupivacaine HCl (dose: 75–200 mg) and 190 received placebo. Efficacy measures were assessed through 72 hours after surgery.\n\nABSTRACT.RESULTS: Overall, 45% of patients were male and 19% were ≥65 years of age. In the analysis of cumulative pain intensity scores through 72 hours, liposome bupivacaine was associated with lower pain scores than the comparator in 16 of 19 treatment arms assessed, achieving statistically significant differences compared with bupivacaine HCl (P < 0.05) in five of 17 treatment arms. These results were supported by results of other efficacy measures, including time to first use of opioid rescue medication, proportion of patients avoiding opioid rescue medication, total postsurgical consumption of opioid rescue medication, and patient/care provider satisfaction with postoperative analgesia. Local infiltration of liposome bupivacaine resulted in significant systemic plasma levels of bupivacaine, which could persist for 96 hours; systemic plasma levels of bupivacaine following administration of liposome bupivacaine were not correlated with local efficacy. 
Liposome bupivacaine and bupivacaine HCl were generally well tolerated.\n\nABSTRACT.CONCLUSION: Based on this integrated analysis of multiple efficacy measures, liposome bupivacaine appears to be a potentially useful therapeutic option for prolonged reduction of postsurgical pain in soft tissue and orthopedic surgeries.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"liposome bupivacaine\", \"bupivacaine HCl\", \"cumulative pain\", \"significantly decreased\", \"In the analysis of cumulative pain intensity scores through 72 hours, liposome bupivacaine was associated with lower pain scores than the comparator in 16 of 19 treatment arms assessed, achieving statistically significant differences compared with bupivacaine HCl (P \\u003c 0.05) in five of 17 treatment arms\"], [\"liposome bupivacaine 266 mg \", \"placebo\", \"\\\"satisfied\\ufffd or \\\"extremely satisfied\\ufffd with their postoperative analgesia at 24 hours after surgery\", \"significantly increased\", \"At 24 hours after surgery in this study, 95% of patients in the liposome bupivacaine 266 mg group were \\\"satisfied\\\" or \\\"extremely satisfied\\\" with their postoperative analgesia compared with 72% in the placebo group (P = 0.0007). \"], [\"liposome bupivacaine 266 mg \", \"placebo\", \"\\\"satisfied\\ufffd or \\\"extremely satisfied\\ufffd with their postoperative analgesia at 72 hours after surgery\", \"significantly increased\", \"At 24 hours after surgery in this study, 95% of patients in the liposome bupivacaine 266 mg group were \\\"satisfied\\\" or \\\"extremely satisfied\\\" with their postoperative analgesia compared with 72% in the placebo group (P = 0.0007). At 72 hours after surgery, percentages were 95% and 73%, respectively (P = 0.0007).\"]]", "role": "assistant" } ]
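Both evidence-inference records in this section return a JSON array of five-field tuples ([intervention, comparator, outcome, effect, justification]). The following sketch is a small structural validator for that shape; the accepted effect labels are copied from the reference outputs shown here rather than from an official schema, so treat the label set as an assumption.

# Accepted effect labels, taken from the reference outputs in this dump.
ALLOWED_EFFECTS = {
    "significantly increased",
    "significantly decreased",
    "no significant difference",
}

def validate_tuples(tuples, abstract):
    """Return (index, problem) pairs; an empty list means every extracted
    tuple is structurally sound."""
    problems = []
    for i, item in enumerate(tuples):
        if len(item) != 5:
            problems.append((i, "expected 5 fields, got %d" % len(item)))
            continue
        _intervention, _comparator, _outcome, effect, justification = item
        if effect not in ALLOWED_EFFECTS:
            problems.append((i, "unknown effect label: %r" % effect))
        if justification and justification not in abstract:
            problems.append((i, "justification is not a verbatim span of the abstract"))
    return problems

if __name__ == "__main__":
    abstract = "Blood transfusion and length of stay were lower in the study group."
    extraction = [["epoetin-alpha", "control", "length of stay",
                   "significantly decreased", abstract]]
    print(validate_tuples(extraction, abstract))  # prints []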
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: Introduction. To evaluate the efficacy of epoetin-α prior to revision total knee arthroplasty, we hypothesized that epoetin-α will reduce blood transfusion. Methods. Eighty-one patients were compared in this retrospective review; twenty-eight patients received our dosing regimen. All patients were mildly anemic. Epoetin-α to control (1 : 2) patient matching occurred so that one of two attending surgeons, gender, BMI, complexity of surgery, ASA score, and age were similar between groups. The clinical triggers for blood transfusion during or after the procedure were determined based on peri- and postoperative hemoglobin levels, ASA score, and/or clinical symptoms consistent with anemia. Blood salvage was not used. Results. Blood transfusion and length of stay were lower in the study group. None of the patients who received epoetin-α underwent transfusion. Hemoglobin increased from 11.97 to 13.8, preoperatively. Hemoglobin at day of surgery and time of discharge were higher. Gender, BMI, ASA score, total and hidden blood losses, calculated blood loss, preop PLT, PT, PTT, and INR were similar between groups. One Epogen patient had an uncomplicated DVT (3.6%). Conclusions. Epoetin-α may have a role in the mildly anemic revision knee patient. It may also decrease patient length of stay allowing for earlier readiness to resume normal activities and/or meet short-term milestones. A randomized study to evaluate the direct and indirect costs of such a treatment methodology in the mildly anemic revision patient may be warranted. Extracted Data:
[["epoetin-\u03b1", "control", "average total blood loss", "no significant difference", "Gender, BMI, ASA score, total and hidden blood losses, calculated blood loss, preop PLT, PT, PTT, and INR were similar between groups."], ["epoetin-\u03b1", "control", "hidden blood loss", "no significant difference", "Gender, BMI, ASA score, total and hidden blood losses, calculated blood loss, preop PLT, PT, PTT, and INR were similar between groups."]]
science.evidence_inference.56
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: \nIntroduction. To evaluate the efficacy of epoetin-α prior to revision total knee arthroplasty, we hypothesized that epoetin-α will reduce blood transfusion. Methods. Eighty-one patients were compared in this retrospective review; twenty-eight patients received our dosing regimen. All patients were mildly anemic. Epoetin-α to control (1 : 2) patient matching occurred so that one of two attending surgeons, gender, BMI, complexity of surgery, ASA score, and age were similar between groups. The clinical triggers for blood transfusion during or after the procedure were determined based on peri- and postoperative hemoglobin levels, ASA score, and/or clinical symptoms consistent with anemia. Blood salvage was not used. Results. Blood transfusion and length of stay were lower in the study group. None of the patients who received epoetin-α underwent transfusion. Hemoglobin increased from 11.97 to 13.8, preoperatively. Hemoglobin at day of surgery and time of discharge were higher. Gender, BMI, ASA score, total and hidden blood losses, calculated blood loss, preop PLT, PT, PTT, and INR were similar between groups. One Epogen patient had an uncomplicated DVT (3.6%). Conclusions. Epoetin-α may have a role in the mildly anemic revision knee patient. It may also decrease patient length of stay allowing for earlier readiness to resume normal activities and/or meet short-term milestones. A randomized study to evaluate the direct and indirect costs of such a treatment methodology in the mildly anemic revision patient may be warranted.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"epoetin-\\u03b1\", \"control\", \"average total blood loss\", \"no significant difference\", \"Gender, BMI, ASA score, total and hidden blood losses, calculated blood loss, preop PLT, PT, PTT, and INR were similar between groups.\"], [\"epoetin-\\u03b1\", \"control\", \"hidden blood loss\", \"no significant difference\", \"Gender, BMI, ASA score, total and hidden blood losses, calculated blood loss, preop PLT, PT, PTT, and INR were similar between groups.\"]]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Treatment with a protein named FN impairs regenerative abilities of aged muscles. Research Abstract: Age-related changes in the niche have long been postulated to impair the function of somatic stem cells. Here we demonstrate that the aged stem cell niche in skeletal muscle contains substantially reduced levels of fibronectin (FN), leading to detrimental consequences for the function and maintenance of muscle stem cells (MuSCs). Deletion of the gene encoding FN from young regenerating muscles replicates the aging phenotype and leads to a loss of MuSC numbers. By using an extracellular matrix (ECM) library screen and pathway profiling, we characterize FN as a preferred adhesion substrate for MuSCs and demonstrate that integrin-mediated signaling through focal adhesion kinase and the p38 mitogen-activated protein kinase pathway is strongly de-regulated in MuSCs from aged mice because of insufficient attachment to the niche. Reconstitution of FN levels in the aged niche remobilizes stem cells and restores youth-like muscle regeneration. Taken together, we identify the loss of stem cell adhesion to FN in the niche ECM as a previously unknown aging mechanism.
{"verdict": "CONTRADICT", "evidence": ["Deletion of the gene encoding FN from young regenerating muscles replicates the aging phenotype and leads to a loss of MuSC numbers.", "Reconstitution of FN levels in the aged niche remobilizes stem cells and restores youth-like muscle regeneration."]}
science.scifact_entailment.855
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nTreatment with a protein named FN impairs regenerative abilities of aged muscles.\n\nResearch Abstract:\nAge-related changes in the niche have long been postulated to impair the function of somatic stem cells.\nHere we demonstrate that the aged stem cell niche in skeletal muscle contains substantially reduced levels of fibronectin (FN), leading to detrimental consequences for the function and maintenance of muscle stem cells (MuSCs).\nDeletion of the gene encoding FN from young regenerating muscles replicates the aging phenotype and leads to a loss of MuSC numbers.\nBy using an extracellular matrix (ECM) library screen and pathway profiling, we characterize FN as a preferred adhesion substrate for MuSCs and demonstrate that integrin-mediated signaling through focal adhesion kinase and the p38 mitogen-activated protein kinase pathway is strongly de-regulated in MuSCs from aged mice because of insufficient attachment to the niche.\nReconstitution of FN levels in the aged niche remobilizes stem cells and restores youth-like muscle regeneration.\nTaken together, we identify the loss of stem cell adhesion to FN in the niche ECM as a previously unknown aging mechanism.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Deletion of the gene encoding FN from young regenerating muscles replicates the aging phenotype and leads to a loss of MuSC numbers.\", \"Reconstitution of FN levels in the aged niche remobilizes stem cells and restores youth-like muscle regeneration.\"]}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Crowdsourcing for Beyond Polarity Sentiment Analysis A Pure Emotion Lexicon Sentiment analysis aims to uncover emotions conveyed through information. In its simplest form, it is performed on a polarity basis, where the goal is to classify information with positive or negative emotion. Recent research has explored more nuanced ways to capture emotions that go beyond polarity. For these methods to work, they require a critical resource: a lexicon that is appropriate for the task at hand, in terms of the range of emotions it captures diversity. In the past, sentiment analysis lexicons have been created by experts, such as linguists and behavioural scientists, with strict rules. Lexicon evaluation was also performed by experts or gold standards. In our paper, we propose a crowdsourcing method for lexicon acquisition, which is scalable, cost-effective, and doesn't require experts or gold standards. We also compare crowd and expert evaluations of the lexicon, to assess the overall lexicon quality, and the evaluation capabilities of the crowd. Data During January 2017, we performed a keyword based crawl for articles and comments in the Europe subreddit and tweets in Twitter, which contained the word "Brexit". The use of a political and controversial term in the query is deliberate, to capture the emotional diversity of a politically oriented corpus. We crawled one hundred articles from Reddit, with more than forty thousand comments and more than three thousand tweets. For the Reddit data, we collected information on location, time, and the number of upvotes. For the Twitter data, we stored the number or re-tweets and favourites, time and location information. Our focus is on single term (also known as unigram) sentiment, thus posts in both networks were processed to a single term list. In total, the number of unique terms in our corpus was 30227. Based on the enchant python library, used in BIBREF26 , and the supported Great British English dictionary, 19193 were validated and 11034 were invalidated. Our analysis will focus on the 19193 valid terms, that follow Zipf's Law with scaling-law coefficient INLINEFORM0 is a good fit. After validation, terms were stemmed with Porter Stemming Algorithm BIBREF27 . Stemming identified 10953 distinct term groups with one or more terms. Stop-words, intensifiers and negators are also included in the valid term groups. Both term validation and stemming are unsupervised, since our goal is to maintain scalability in beyond polarity lexicon acquisition and sentiment classification. Crowdsourcing The crowdsourcing task, hosted in CrowdFlower, required contributors to label term groups in three different main classes, emotion, intensifier and none, without a golden standard, rules or any participation restrictions . Emotion labelling included the 8 basic emotions as defined by Plutchik. Intensifier class included intensifiers and negators. Finally, none referred to stop-words or words with no particular emotion. Each of the eleven options for the main classes, will be referred to as "subclass". Terms are grouped based on their stem. Each term group has a main annotation class defined by majority, and several sub annotation classes, defined by the non majority annotations. However, to aid multi class analysis of the results, every annotation is logged in the lexicon database. 
Assessment Based on a set of experiments, we identified 136 term groups that would test the ability of a contributor in all of the three main classes, emotion evoking, intensifying context, and none. As the assessment term groups had more than ten thousand annotations, we analyse it separately from the lexicon. In order for a worker to gain the ability to contribute to the crowdsourcing task and eventually get paid, he/she had to properly annotate 80% of the assessment term groups encountered. The annotations should be within the dominant classes, and not subclasses, as defined from the assessment annotators. E.g., for an assessment term group that received 100 annotations in various emotions, we check if the worker annotates the term group as emotion evoking. Let INLINEFORM0 be the set of workers INLINEFORM1 and and INLINEFORM2 the set of eleven INLINEFORM3 subclasses: eight emotions , two intensifiers, and none of the former. We define INLINEFORM4 as the number of total annotations for each worker INLINEFORM5 . Then: DISPLAYFORM0 We define INLINEFORM0 be the set of workers INLINEFORM1 in the assessment process, INLINEFORM2 the set of workers INLINEFORM3 in the acquisition process. Then, for INLINEFORM4 we define: DISPLAYFORM0 DISPLAYFORM1 and: DISPLAYFORM0 DISPLAYFORM1 The optimal INLINEFORM0 is found for INLINEFORM1 . For this study, the optimal filtering percentage was found at 40%, INLINEFORM2 . Workers from India and Venezuela, who contributed 92% of the task, have annotated more than 30% of the term groups with joy. However, annotations from countries with more than 300 annotations, don't follow the same distribution. Specifically, workers from Philippines, United States, Colombia, Poland, United Kingdom, Russia, and Egypt, performed a smoother distributed emotion annotation. In comparison, the most annotated emotion in BIBREF21 was fear in 18% of the total terms. By further analysing worker annotation distribution, we identified workers that had a significant fraction of their total annotations in a single subclass. E.g. one specific worker annotated 99% of the assessment term groups he encountered as joy. Dishonesty or spamming is a known problem in crowdsourcing BIBREF28 and multiple proposed solutions exist BIBREF28 , but they require gold standards or objective crowdsourcing tasks. As we don't have a gold standard, and the task is more subjective, these spamming elimination methods are not applicable. Our solution is the implementation of a fast and efficient filter, which only relies on the obtained annotations and the assessment. If workers' answers were above a certain percentage on a single subclass for both the assessment and the annotation process, then the user would be flagged as dishonest and the total of their annotations would be discarded. This rule was applied to the whole range of possible answers, including the 8 emotions, 2 intensifiers and "none". Prior to implementing the filter, we analysed how it would affect the number of eligible, not considered as spamming, workers. The thick line in Figure FIGREF22 shows the percentage during the assessment term groups, and the dotted line shows the percentage during the lexicon acquisition. The higher the single annotation percentage, the higher the certainty of spamming behaviour. In large part, the exclusion rate was similar for both the assessment and the lexicon annotations. 
A number of workers had a more cautious behaviour in the test questions, resulting in reduced percentage of exclusions during assessment process. This behaviour is justified, as the first set of questions encounter by a worker are knowingly a set of assessment questions. Each assessment term group was annotated more than 120 times, by 187 annotators. This is a critical mass of contributors and provides valuable findings with regards to task. These are: Workers rarely clicked the informative dictionary link. As a result, they would annotate emotional words to none, probably due to misinterpretation. We avoided the direct inclusion of the dictionary definition, as it could be considered as a form of leverage. E.g. "vehement, halcyon" , two -uncommon- emotion baring words, were both annotated as none, and less than 0.2% of the workers (on average) clicked a dictionary link. The concept of intensifiers is understandable but requires critical mass BIBREF29 . A small number of annotators would initially annotate intensifiers/negators with an emotion, but the distribution would slowly shift towards the correct class. E.g. "reduce, little, plethora" , were initially annotated as sad sad joy, but after tens of annotations they were annotated as weakening weakening intensifying. All words should be evaluated, even those that seemingly don't carry a specific emotion. As times change, words and emotions acquire new links. E.g. "anti, serious" , were both annotated as fear evoking with a great emotional diversity. Reliability Single annotation reliability agreement is the degree of agreement between annotators, for term groups that have annotation majority in exactly one sub class. In our lexicon, single annotation reliability agreement was low, mainly due to the low number of annotators for each term group in relation to the high number of possible categories. Based on Fleiss Kappa BIBREF30 (simply referred as k), and as seen in Table TABREF35 , term groups with 2 annotations had the lowest reliability agreement, while term groups with 6 annotations the highest reliability agreement. As the number of annotators rises, the number of possible agreement permutations increases but the number of major annotated subclasses decreases. More annotators have a positive effect in both k and certainty of classification. As we restrict our lexicon to emotions, reliability increases for any number of annotators except two. This is explained by the decrease in the number of possible categories. When we restrict our analysis on emotion related annotation the probability for agreement in annotations increases, resulting in a high emotional k. The best way to increase k is to provide additional annotations that will eventually converge to a majority class or a limited group of classes. Crowd and experts comparison We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). The experts are two Ph.D. linguists, while the crowd is made up of random high quality contributors that choose to participate in the task. As a reference, the cost of hiring two experts is equal to the cost of employing nineteen contributors in Crowdflower. 
Evaluators were given a summary of the annotations received for the term group in the form of:The term group "inequality inequity" received annotations as 50.0% sadness, 33.33% disgust, 16.67% anger. Then, they were asked to evaluate on a scale from 1 to 5, how valid these annotations were considered. The summary of the evaluation for both experts and crowd can be seen in Figure FIGREF36 . The first graph presents the validity over the number of annotations in the main class of the term group. Although this information is hidden from the evaluators, high annotational agreement results in high evaluation scores. Both experts and the crowd follow that positive trend. Crowd contributors are more strict in their evaluations, but after four annotations we observe a significant validity increase on both crowd and experts. Likewise, the annotation percentage for the majority class has a positive influence to the evaluation score, with the exception of 100% agreement, second graph Figure FIGREF36 . The weighing factor for term groups with 100% annotation agreement is the reduced number of total annotations, as the mean number of total annotations drops abruptly on the 100%, and total agreement is more frequent in term groups with low number of total annotations. It's worth noting that certain percentages can only occur on specific number of total annotations, e.g. 17% and 83% can only occur when the number of total annotations is six. In emotion annotations, as seen on the third graph of Figure FIGREF36 crowd and experts follow a similar evaluation pattern. Anticipation and joy had the exact same evaluation, while every other emotion and stop words were evaluated lower from the crowd. The only subclasses evaluated higher from the crowd were intensifiers and negators, with a significant difference in the evaluations for the latter. Section 6.3 provides a more detailed evaluation for term groups that received at least one annotation as intensifiers or negators. The final graph in Figure FIGREF36 presents a clear negative correlation of subclass agreement and evaluation scores. The highest number of subclasses that do not affect evaluation scores is three, above that there is a steady decline of the evaluation scores, for both the crowd and the experts. The evaluation results provide some key insights in the importance of the number of annotations. The evaluation scores start to improve after four annotations. Annotational agreement and majority voting are less important. Subclass agreement has a negative effect on three or more subclasses. Most importantly and compared to experts, the crowd is a stricter evaluator with significantly lower costs, and higher scalability. Since strict evaluation leads to higher quality annotations, the evaluation can be performed by the crowd instead of experts. Crowd contributors can be found in high numbers and multiple platforms, compared to expert linguists. Evaluation of intensifiers and negators, was also a batch of evaluation and annotation tasks, as mentioned in Section 6.2. However, the difference was that now evaluators had to answer if a term group included at least one valid intensifier or negator. The evaluation was again performed by experts and the crowd, as described in Section 6.2.1. Based on the annotations received in PEL, we used 541 term groups that had at least one annotation in any of the intensifying subclasses. Although, the particular selection of term groups is statistically significant, we expect relatively low evaluation scores. 
That is because the number of intensifying annotations is low in most of the selected term groups. In Figure FIGREF40 , we define varying levels of agreement on the validity of the intensifying class, based on the agreement of evaluations. For the experts group, low agreement refers to term groups that received at least one out of two evaluations as valid, while high agreement requires the evaluation agreement of both experts. Similarly for the crowd, low agreement refers to a minimum of two valid evaluations, mid agreement corresponds to at least three, and high agreement requires an absolute agreement of all four evaluators. Experts are far more strict than the crowd in the evaluation of intensifiers and negators. When the validity agreement is low on both evaluation groups, the average valid term group difference is more than 40%, but the high validity agreement the difference is just 5.33%. When high agreement evaluation is applied, the crowd and expert evaluations are almost identical. The number of crowd evaluations is the factor that provides a degree of freedom in the evaluation strictness. Acknowledgement This work was supported by the EU project "QROWD - Because Big Data Integration is Humanly Possible". Figure 1: The circumplex of emotions Figure 2: PEL creation work-flow Figure 3: Subclasses based on class selection Figure 6: Intensifying class evaluations Query: How do they compare lexicons? JSON Response:
{"answer": "Human evaluators were asked to evaluate on a scale from 1 to 5 the validity of the lexicon annotations made by the experts and crowd contributors.", "evidence": ["We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). The experts are two Ph.D. linguists, while the crowd is made up of random high quality contributors that choose to participate in the task. As a reference, the cost of hiring two experts is equal to the cost of employing nineteen contributors in Crowdflower.", "Evaluators were given a summary of the annotations received for the term group in the form of:The term group \"inequality inequity\" received annotations as 50.0% sadness, 33.33% disgust, 16.67% anger. Then, they were asked to evaluate on a scale from 1 to 5, how valid these annotations were considered."]}
science.qasper_abstractive_qa.103.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nCrowdsourcing for Beyond Polarity Sentiment Analysis A Pure Emotion Lexicon\n\nSentiment analysis aims to uncover emotions conveyed through information. In its simplest form, it is performed on a polarity basis, where the goal is to classify information with positive or negative emotion. Recent research has explored more nuanced ways to capture emotions that go beyond polarity. For these methods to work, they require a critical resource: a lexicon that is appropriate for the task at hand, in terms of the range of emotions it captures diversity. In the past, sentiment analysis lexicons have been created by experts, such as linguists and behavioural scientists, with strict rules. Lexicon evaluation was also performed by experts or gold standards. In our paper, we propose a crowdsourcing method for lexicon acquisition, which is scalable, cost-effective, and doesn't require experts or gold standards. We also compare crowd and expert evaluations of the lexicon, to assess the overall lexicon quality, and the evaluation capabilities of the crowd.\n\nData\nDuring January 2017, we performed a keyword based crawl for articles and comments in the Europe subreddit and tweets in Twitter, which contained the word \"Brexit\". The use of a political and controversial term in the query is deliberate, to capture the emotional diversity of a politically oriented corpus.\nWe crawled one hundred articles from Reddit, with more than forty thousand comments and more than three thousand tweets. For the Reddit data, we collected information on location, time, and the number of upvotes. For the Twitter data, we stored the number or re-tweets and favourites, time and location information.\nOur focus is on single term (also known as unigram) sentiment, thus posts in both networks were processed to a single term list. In total, the number of unique terms in our corpus was 30227. Based on the enchant python library, used in BIBREF26 , and the supported Great British English dictionary, 19193 were validated and 11034 were invalidated. Our analysis will focus on the 19193 valid terms, that follow Zipf's Law with scaling-law coefficient INLINEFORM0 is a good fit.\nAfter validation, terms were stemmed with Porter Stemming Algorithm BIBREF27 . Stemming identified 10953 distinct term groups with one or more terms. Stop-words, intensifiers and negators are also included in the valid term groups. Both term validation and stemming are unsupervised, since our goal is to maintain scalability in beyond polarity lexicon acquisition and sentiment classification.\n\nCrowdsourcing\nThe crowdsourcing task, hosted in CrowdFlower, required contributors to label term groups in three different main classes, emotion, intensifier and none, without a golden standard, rules or any participation restrictions . Emotion labelling included the 8 basic emotions as defined by Plutchik. Intensifier class included intensifiers and negators. Finally, none referred to stop-words or words with no particular emotion.\nEach of the eleven options for the main classes, will be referred to as \"subclass\". Terms are grouped based on their stem. Each term group has a main annotation class defined by majority, and several sub annotation classes, defined by the non majority annotations. 
However, to aid multi class analysis of the results, every annotation is logged in the lexicon database.\n\nAssessment\nBased on a set of experiments, we identified 136 term groups that would test the ability of a contributor in all of the three main classes, emotion evoking, intensifying context, and none. As the assessment term groups had more than ten thousand annotations, we analyse it separately from the lexicon.\nIn order for a worker to gain the ability to contribute to the crowdsourcing task and eventually get paid, he/she had to properly annotate 80% of the assessment term groups encountered. The annotations should be within the dominant classes, and not subclasses, as defined from the assessment annotators. E.g., for an assessment term group that received 100 annotations in various emotions, we check if the worker annotates the term group as emotion evoking.\nLet INLINEFORM0 be the set of workers INLINEFORM1 and and INLINEFORM2 the set of eleven INLINEFORM3 subclasses: eight emotions , two intensifiers, and none of the former. We define INLINEFORM4 as the number of total annotations for each worker INLINEFORM5 . Then: DISPLAYFORM0 \nWe define INLINEFORM0 be the set of workers INLINEFORM1 in the assessment process, INLINEFORM2 the set of workers INLINEFORM3 in the acquisition process. Then, for INLINEFORM4 we define: DISPLAYFORM0 DISPLAYFORM1 \nand: DISPLAYFORM0 DISPLAYFORM1 \nThe optimal INLINEFORM0 is found for INLINEFORM1 . For this study, the optimal filtering percentage was found at 40%, INLINEFORM2 .\nWorkers from India and Venezuela, who contributed 92% of the task, have annotated more than 30% of the term groups with joy. However, annotations from countries with more than 300 annotations, don't follow the same distribution. Specifically, workers from Philippines, United States, Colombia, Poland, United Kingdom, Russia, and Egypt, performed a smoother distributed emotion annotation. In comparison, the most annotated emotion in BIBREF21 was fear in 18% of the total terms.\nBy further analysing worker annotation distribution, we identified workers that had a significant fraction of their total annotations in a single subclass. E.g. one specific worker annotated 99% of the assessment term groups he encountered as joy. Dishonesty or spamming is a known problem in crowdsourcing BIBREF28 and multiple proposed solutions exist BIBREF28 , but they require gold standards or objective crowdsourcing tasks.\nAs we don't have a gold standard, and the task is more subjective, these spamming elimination methods are not applicable. Our solution is the implementation of a fast and efficient filter, which only relies on the obtained annotations and the assessment. If workers' answers were above a certain percentage on a single subclass for both the assessment and the annotation process, then the user would be flagged as dishonest and the total of their annotations would be discarded. This rule was applied to the whole range of possible answers, including the 8 emotions, 2 intensifiers and \"none\".\nPrior to implementing the filter, we analysed how it would affect the number of eligible, not considered as spamming, workers. The thick line in Figure FIGREF22 shows the percentage during the assessment term groups, and the dotted line shows the percentage during the lexicon acquisition.\nThe higher the single annotation percentage, the higher the certainty of spamming behaviour. In large part, the exclusion rate was similar for both the assessment and the lexicon annotations. 
A number of workers had a more cautious behaviour in the test questions, resulting in reduced percentage of exclusions during assessment process. This behaviour is justified, as the first set of questions encounter by a worker are knowingly a set of assessment questions.\nEach assessment term group was annotated more than 120 times, by 187 annotators. This is a critical mass of contributors and provides valuable findings with regards to task. These are:\nWorkers rarely clicked the informative dictionary link. As a result, they would annotate emotional words to none, probably due to misinterpretation. We avoided the direct inclusion of the dictionary definition, as it could be considered as a form of leverage. E.g. \"vehement, halcyon\" , two -uncommon- emotion baring words, were both annotated as none, and less than 0.2% of the workers (on average) clicked a dictionary link.\nThe concept of intensifiers is understandable but requires critical mass BIBREF29 . A small number of annotators would initially annotate intensifiers/negators with an emotion, but the distribution would slowly shift towards the correct class. E.g. \"reduce, little, plethora\" , were initially annotated as sad sad joy, but after tens of annotations they were annotated as weakening weakening intensifying.\nAll words should be evaluated, even those that seemingly don't carry a specific emotion. As times change, words and emotions acquire new links. E.g. \"anti, serious\" , were both annotated as fear evoking with a great emotional diversity.\n\nReliability\nSingle annotation reliability agreement is the degree of agreement between annotators, for term groups that have annotation majority in exactly one sub class. In our lexicon, single annotation reliability agreement was low, mainly due to the low number of annotators for each term group in relation to the high number of possible categories.\nBased on Fleiss Kappa BIBREF30 (simply referred as k), and as seen in Table TABREF35 , term groups with 2 annotations had the lowest reliability agreement, while term groups with 6 annotations the highest reliability agreement. As the number of annotators rises, the number of possible agreement permutations increases but the number of major annotated subclasses decreases. More annotators have a positive effect in both k and certainty of classification.\nAs we restrict our lexicon to emotions, reliability increases for any number of annotators except two. This is explained by the decrease in the number of possible categories. When we restrict our analysis on emotion related annotation the probability for agreement in annotations increases, resulting in a high emotional k. The best way to increase k is to provide additional annotations that will eventually converge to a majority class or a limited group of classes.\n\nCrowd and experts comparison\nWe perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). The experts are two Ph.D. linguists, while the crowd is made up of random high quality contributors that choose to participate in the task. 
As a reference, the cost of hiring two experts is equal to the cost of employing nineteen contributors in Crowdflower.\nEvaluators were given a summary of the annotations received for the term group in the form of:The term group \"inequality inequity\" received annotations as 50.0% sadness, 33.33% disgust, 16.67% anger. Then, they were asked to evaluate on a scale from 1 to 5, how valid these annotations were considered.\nThe summary of the evaluation for both experts and crowd can be seen in Figure FIGREF36 . The first graph presents the validity over the number of annotations in the main class of the term group. Although this information is hidden from the evaluators, high annotational agreement results in high evaluation scores. Both experts and the crowd follow that positive trend. Crowd contributors are more strict in their evaluations, but after four annotations we observe a significant validity increase on both crowd and experts.\nLikewise, the annotation percentage for the majority class has a positive influence to the evaluation score, with the exception of 100% agreement, second graph Figure FIGREF36 . The weighing factor for term groups with 100% annotation agreement is the reduced number of total annotations, as the mean number of total annotations drops abruptly on the 100%, and total agreement is more frequent in term groups with low number of total annotations. It's worth noting that certain percentages can only occur on specific number of total annotations, e.g. 17% and 83% can only occur when the number of total annotations is six.\nIn emotion annotations, as seen on the third graph of Figure FIGREF36 crowd and experts follow a similar evaluation pattern. Anticipation and joy had the exact same evaluation, while every other emotion and stop words were evaluated lower from the crowd. The only subclasses evaluated higher from the crowd were intensifiers and negators, with a significant difference in the evaluations for the latter. Section 6.3 provides a more detailed evaluation for term groups that received at least one annotation as intensifiers or negators.\nThe final graph in Figure FIGREF36 presents a clear negative correlation of subclass agreement and evaluation scores. The highest number of subclasses that do not affect evaluation scores is three, above that there is a steady decline of the evaluation scores, for both the crowd and the experts.\nThe evaluation results provide some key insights in the importance of the number of annotations. The evaluation scores start to improve after four annotations. Annotational agreement and majority voting are less important. Subclass agreement has a negative effect on three or more subclasses. Most importantly and compared to experts, the crowd is a stricter evaluator with significantly lower costs, and higher scalability. Since strict evaluation leads to higher quality annotations, the evaluation can be performed by the crowd instead of experts. Crowd contributors can be found in high numbers and multiple platforms, compared to expert linguists.\nEvaluation of intensifiers and negators, was also a batch of evaluation and annotation tasks, as mentioned in Section 6.2. However, the difference was that now evaluators had to answer if a term group included at least one valid intensifier or negator. The evaluation was again performed by experts and the crowd, as described in Section 6.2.1. Based on the annotations received in PEL, we used 541 term groups that had at least one annotation in any of the intensifying subclasses. 
Although, the particular selection of term groups is statistically significant, we expect relatively low evaluation scores. That is because the number of intensifying annotations is low in most of the selected term groups.\nIn Figure FIGREF40 , we define varying levels of agreement on the validity of the intensifying class, based on the agreement of evaluations. For the experts group, low agreement refers to term groups that received at least one out of two evaluations as valid, while high agreement requires the evaluation agreement of both experts. Similarly for the crowd, low agreement refers to a minimum of two valid evaluations, mid agreement corresponds to at least three, and high agreement requires an absolute agreement of all four evaluators.\nExperts are far more strict than the crowd in the evaluation of intensifiers and negators. When the validity agreement is low on both evaluation groups, the average valid term group difference is more than 40%, but the high validity agreement the difference is just 5.33%. When high agreement evaluation is applied, the crowd and expert evaluations are almost identical. The number of crowd evaluations is the factor that provides a degree of freedom in the evaluation strictness.\n\nAcknowledgement\nThis work was supported by the EU project \"QROWD - Because Big Data Integration is Humanly Possible\".\n\nFigure 1: The circumplex of emotions\n\nFigure 2: PEL creation work-flow\n\nFigure 3: Subclasses based on class selection\n\nFigure 6: Intensifying class evaluations\n\nQuery: How do they compare lexicons?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"Human evaluators were asked to evaluate on a scale from 1 to 5 the validity of the lexicon annotations made by the experts and crowd contributors.\", \"evidence\": [\"We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). The experts are two Ph.D. linguists, while the crowd is made up of random high quality contributors that choose to participate in the task. As a reference, the cost of hiring two experts is equal to the cost of employing nineteen contributors in Crowdflower.\", \"Evaluators were given a summary of the annotations received for the term group in the form of:The term group \\\"inequality inequity\\\" received annotations as 50.0% sadness, 33.33% disgust, 16.67% anger. Then, they were asked to evaluate on a scale from 1 to 5, how valid these annotations were considered.\"]}", "role": "assistant" } ]
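The paper excerpt quoted in the reading-comprehension record above describes a concrete filter for dishonest crowd workers: discard a worker whose annotations concentrate above a threshold in a single subclass in both the assessment phase and the lexicon-acquisition phase, with 40% reported as the optimal threshold. The sketch below is one plausible reading of that rule; the per-worker data layout and helper names are assumptions made for illustration, not the authors' implementation.

from collections import Counter

def max_subclass_fraction(labels):
    """Largest share of a worker's annotations taken by any single subclass."""
    if not labels:
        return 0.0
    counts = Counter(labels)
    return max(counts.values()) / len(labels)

def flag_spammers(assessment_labels, acquisition_labels, threshold=0.40):
    """Return worker ids whose annotations should be discarded.

    A worker is flagged only when the single-subclass concentration exceeds
    the threshold in BOTH phases, mirroring the rule in the excerpt.
    """
    flagged = set()
    for worker, assessment in assessment_labels.items():
        acquisition = acquisition_labels.get(worker, [])
        if (max_subclass_fraction(assessment) > threshold
                and max_subclass_fraction(acquisition) > threshold):
            flagged.add(worker)
    return flagged

if __name__ == "__main__":
    assessment = {"w1": ["joy"] * 99 + ["fear"], "w2": ["joy", "fear", "none", "sadness"]}
    acquisition = {"w1": ["joy"] * 50, "w2": ["anger", "trust", "joy", "none"]}
    print(flag_spammers(assessment, acquisition))  # prints {'w1'}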
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Patients with microcytosis and higher erythrocyte count were more resistant to severe malarial anaemia when infected with Plasmodium falciparum. Research Abstract: BACKGROUND The heritable haemoglobinopathy alpha(+)-thalassaemia is caused by the reduced synthesis of alpha-globin chains that form part of normal adult haemoglobin (Hb). Individuals homozygous for alpha(+)-thalassaemia have microcytosis and an increased erythrocyte count. Alpha(+)-thalassaemia homozygosity confers considerable protection against severe malaria, including severe malarial anaemia (SMA) (Hb concentration < 50 g/l), but does not influence parasite count. We tested the hypothesis that the erythrocyte indices associated with alpha(+)-thalassaemia homozygosity provide a haematological benefit during acute malaria. METHODS AND FINDINGS Data from children living on the north coast of Papua New Guinea who had participated in a case-control study of the protection afforded by alpha(+)-thalassaemia against severe malaria were reanalysed to assess the genotype-specific reduction in erythrocyte count and Hb levels associated with acute malarial disease. We observed a reduction in median erythrocyte count of approximately 1.5 x 10(12)/l in all children with acute falciparum malaria relative to values in community children (p < 0.001). We developed a simple mathematical model of the linear relationship between Hb concentration and erythrocyte count. This model predicted that children homozygous for alpha(+)-thalassaemia lose less Hb than children of normal genotype for a reduction in erythrocyte count of >1.1 x 10(12)/l as a result of the reduced mean cell Hb in homozygous alpha(+)-thalassaemia. In addition, children homozygous for alpha(+)-thalassaemia require a 10% greater reduction in erythrocyte count than children of normal genotype (p = 0.02) for Hb concentration to fall to 50 g/l, the cutoff for SMA. We estimated that the haematological profile in children homozygous for alpha(+)-thalassaemia reduces the risk of SMA during acute malaria compared to children of normal genotype (relative risk 0.52; 95% confidence interval [CI] 0.24-1.12, p = 0.09). CONCLUSIONS The increased erythrocyte count and microcytosis in children homozygous for alpha(+)-thalassaemia may contribute substantially to their protection against SMA. A lower concentration of Hb per erythrocyte and a larger population of erythrocytes may be a biologically advantageous strategy against the significant reduction in erythrocyte count that occurs during acute infection with the malaria parasite Plasmodium falciparum. This haematological profile may reduce the risk of anaemia by other Plasmodium species, as well as other causes of anaemia. Other host polymorphisms that induce an increased erythrocyte count and microcytosis may confer a similar advantage.
{"verdict": "SUPPORT", "evidence": ["Alpha(+)-thalassaemia homozygosity confers considerable protection against severe malaria, including severe malarial anaemia (SMA) (Hb concentration < 50 g/l), but does not influence parasite count.", "We estimated that the haematological profile in children homozygous for alpha(+)-thalassaemia reduces the risk of SMA during acute malaria compared to children of normal genotype (relative risk 0.52; 95% confidence interval [CI] 0.24-1.12, p = 0.09).", "CONCLUSIONS The increased erythrocyte count and microcytosis in children homozygous for alpha(+)-thalassaemia may contribute substantially to their protection against SMA."]}
science.scifact_entailment.584
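The record above pairs a claim with an abstract and a JSON verdict whose evidence entries are copied verbatim from the abstract's sentences. As a rough editorial sketch only — the field names are taken from the record, while the helper name, the one-sentence-per-line assumption about the abstract, and the decision to check structure rather than scientific correctness are all assumptions — such a response could be sanity-checked like this:

```python
import json

def check_verdict_response(response_json, abstract):
    """Return a list of structural problems in a verdict/evidence response (empty list = consistent)."""
    problems = []
    data = json.loads(response_json)

    # The record shows two fields: a string verdict and a list of evidence sentences.
    if not isinstance(data.get("verdict"), str) or not data["verdict"].strip():
        problems.append("missing or empty 'verdict' field")
    if not isinstance(data.get("evidence"), list):
        problems.append("'evidence' is not a list")
        return problems

    # In the source data the abstract is laid out one sentence per line.
    sentences = {line.strip() for line in abstract.splitlines() if line.strip()}
    for item in data["evidence"]:
        if item.strip() not in sentences:
            problems.append("evidence sentence not found verbatim in abstract: " + item[:60])
    return problems
```

Run on the SUPPORT response above, with the abstract passed in its original one-sentence-per-line layout, this returns an empty list.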
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nPatients with microcytosis and higher erythrocyte count were more resistant to severe malarial anaemia when infected with Plasmodium falciparum.\n\nResearch Abstract:\nBACKGROUND The heritable haemoglobinopathy alpha(+)-thalassaemia is caused by the reduced synthesis of alpha-globin chains that form part of normal adult haemoglobin (Hb).\nIndividuals homozygous for alpha(+)-thalassaemia have microcytosis and an increased erythrocyte count.\nAlpha(+)-thalassaemia homozygosity confers considerable protection against severe malaria, including severe malarial anaemia (SMA) (Hb concentration < 50 g/l), but does not influence parasite count.\nWe tested the hypothesis that the erythrocyte indices associated with alpha(+)-thalassaemia homozygosity provide a haematological benefit during acute malaria.\nMETHODS AND FINDINGS Data from children living on the north coast of Papua New Guinea who had participated in a case-control study of the protection afforded by alpha(+)-thalassaemia against severe malaria were reanalysed to assess the genotype-specific reduction in erythrocyte count and Hb levels associated with acute malarial disease.\nWe observed a reduction in median erythrocyte count of approximately 1.5 x 10(12)/l in all children with acute falciparum malaria relative to values in community children (p < 0.001).\nWe developed a simple mathematical model of the linear relationship between Hb concentration and erythrocyte count.\nThis model predicted that children homozygous for alpha(+)-thalassaemia lose less Hb than children of normal genotype for a reduction in erythrocyte count of >1.1 x 10(12)/l as a result of the reduced mean cell Hb in homozygous alpha(+)-thalassaemia.\nIn addition, children homozygous for alpha(+)-thalassaemia require a 10% greater reduction in erythrocyte count than children of normal genotype (p = 0.02) for Hb concentration to fall to 50 g/l, the cutoff for SMA.\nWe estimated that the haematological profile in children homozygous for alpha(+)-thalassaemia reduces the risk of SMA during acute malaria compared to children of normal genotype (relative risk 0.52; 95% confidence interval [CI] 0.24-1.12, p = 0.09).\nCONCLUSIONS The increased erythrocyte count and microcytosis in children homozygous for alpha(+)-thalassaemia may contribute substantially to their protection against SMA.\nA lower concentration of Hb per erythrocyte and a larger population of erythrocytes may be a biologically advantageous strategy against the significant reduction in erythrocyte count that occurs during acute infection with the malaria parasite Plasmodium falciparum.\nThis haematological profile may reduce the risk of anaemia by other Plasmodium species, as well as other causes of anaemia.\nOther host polymorphisms that induce an increased erythrocyte count and microcytosis may confer a similar advantage.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Alpha(+)-thalassaemia homozygosity confers considerable protection against severe malaria, including severe malarial anaemia (SMA) (Hb concentration < 50 g/l), but does not influence parasite count.\", \"We estimated that the haematological profile in children homozygous for alpha(+)-thalassaemia reduces the risk of SMA during acute malaria compared to children of normal genotype (relative risk 0.52; 95% confidence interval [CI] 0.24-1.12, p = 0.09).\", \"CONCLUSIONS The increased erythrocyte count and 
microcytosis in children homozygous for alpha(+)-thalassaemia may contribute substantially to their protection against SMA.\"]}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: A multi-layer approach to disinformation detection on Twitter We tackle the problem of classifying news articles pertaining to disinformation vs mainstream news by solely inspecting their diffusion mechanisms on Twitter. Our technique is inherently simple compared to existing text-based approaches, as it allows to by-pass the multiple levels of complexity which are found in news content (e.g. grammar, syntax, style). We employ a multi-layer representation of Twitter diffusion networks, and we compute for each layer a set of global network features which quantify different aspects of the sharing process. Experimental results with two large-scale datasets, corresponding to diffusion cascades of news shared respectively in the United States and Italy, show that a simple Logistic Regression model is able to classify disinformation vs mainstream networks with high accuracy (AUROC up to 94%), also when considering the political bias of different sources in the classification task. We also highlight differences in the sharing patterns of the two news domains which appear to be country-independent. We believe that our network-based approach provides useful insights which pave the way to the future development of a system to detect misleading and harmful information spreading on social media. Methodology ::: Disinformation and mainstream news In this work we formulate our classification problem as follows: given two classes of news articles, respectively $D$ (disinformation) and $M$ (mainstream), a set of news articles $A_i$ and associated class labels $C_i \in \lbrace D,M\rbrace $, and a set of tweets $\Pi _i=\lbrace T_i^1, T_i^2, ...\rbrace $ each of which contains an Uniform Resource Locator (URL) pointing explicitly to article $A_i$, predict the class $C_i$ of each article $A_i$. There is huge debate and controversy on a proper taxonomy of malicious and deceptive information BIBREF1BIBREF2BIBREF15BIBREF16BIBREF17BIBREF3BIBREF11. In this work we prefer the term disinformation to the more specific fake news to refer to a variety of misleading and harmful information. Therefore, we follow a source-based approach, a consolidated strategy also adopted by BIBREF6BIBREF16BIBREF2BIBREF1, in order to obtain relevant data for our analysis. We collected: Disinformation articles, published by websites which are well-known for producing low-credibility content, false and misleading news reports as well as extreme propaganda and hoaxes and flagged as such by reputable journalists and fact-checkers; Mainstream news, referring to traditional news outlets which deliver factual and credible information. We believe that this is currently the most reliable classification approach, but it entails obvious limitations, as disinformation outlets may also publish true stories and likewise misinformation is sometimes reported on mainstream media. Also, given the choice of news sources, we cannot test whether our methodology is able to classify disinformation vs factual but not mainstream news which are published on niche, non-disinformation outlets. Methodology ::: US dataset We collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. 
We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains. As it is reported that conservatives and liberals exhibit different behaviors on online social platforms BIBREF19BIBREF20BIBREF21, we further assigned a political bias label to different US outlets (and therefore news articles) following the procedure described in BIBREF2. In order to assess the robustness of our method, we performed classification experiments by training only on left-biased (or right-biased) outlets of both disinformation and mainstream domains and testing on the entire set of sources, as well as excluding particular sources that outweigh the others in terms of samples to avoid over-fitting. Methodology ::: Global network properties We used a set of global network indicators which allow us to encode each network layer by a tuple of features. Then we simply concatenated tuples as to represent each multi-layer network with a single feature vector. We used the following global network properties: Number of Strongly Connected Components (SCC): a Strongly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $u,v$ there is a path in each direction ($u\rightarrow v$, $v\rightarrow u$). Size of the Largest Strongly Connected Component (LSCC): the number of nodes in the largest strongly connected component of a given graph. Number of Weakly Connected Components (WCC): a Weakly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $(u, v)$ there is a path $u \leftrightarrow v$ ignoring edge directions. Size of the Largest Weakly Connected Component (LWCC): the number of nodes in the largest weakly connected component of a given graph. Diameter of the Largest Weakly Connected Component (DWCC): the largest distance (length of the shortest path) between two nodes in the (undirected version of) largest weakly connected component of a graph. Average Clustering Coefficient (CC): the average of the local clustering coefficients of all nodes in a graph; the local clustering coefficient of a node quantifies how close its neighbours are to being a complete graph (or a clique). It is computed according to BIBREF28. Main K-core Number (KC): a K-core BIBREF13 of a graph is a maximal sub-graph that contains nodes of internal degree $k$ or more; the main K-core number is the highest value of $k$ (in directed graphs the total degree is considered). Density (d): the density for directed graphs is $d=\frac{|E|}{|V||V-1|}$, where $|E|$ is the number of edges and $|N|$ is the number of vertices in the graph; the density equals 0 for a graph without edges and 1 for a complete graph. 
Structural virality of the largest weakly connected component (SV): this measure is defined in BIBREF14 as the average distance between all pairs of nodes in a cascade tree or, equivalently, as the average depth of nodes, averaged over all nodes in turn acting as a root; for $|V| > 1$ vertices, $SV=\frac{1}{|V||V-1|}\sum _i\sum _j d_{ij}$ where $d_{ij}$ denotes the length of the shortest path between nodes $i$ and $j$. This is equivalent to compute the Wiener's index BIBREF29 of the graph and multiply it by a factor $\frac{1}{|V||V-1|}$. In our case we computed it for the undirected equivalent graph of the largest weakly connected component, setting it to 0 whenever $V=1$. We used networkx Python package BIBREF30 to compute all features. Whenever a layer is empty. we simply set to 0 all its features. In addition to computing the above nine features for each layer, we added two indicators for encoding information about pure tweets, namely the number T of pure tweets (containing URLs to a given news article) and the number U of unique users authoring those tweets. Therefore, a single diffusion network is represented by a vector with $9\cdot 4+2=38$ entries. Experiments ::: Layer importance analysis In order to understand the impact of each layer on the performances of classifiers, we performed additional experiments considering separately each layer (we ignored T and U features relative to pure tweets). In Table 5 we show metrics for each layer and all size classes, computed with a 10-fold stratified shuffle split cross validation, evaluated on the US dataset; in Figure 5 we show AUROC values for each layer compared with the general multi-layer approach. We can notice that both Q and M layers alone capture adequately the discrepancies of the two distinct news domains in the United States as they obtain good results with AUROC values in the range 75%-86%; these are comparable with those of the multi-layer approach which, nevertheless, outperforms them across all size classes. We obtained similar performances for the Italian dataset, as the M layer obtains comparable performances w.r.t multi-layer approach with AUROC values in the range 72%-82%. We do not show these results for sake of conciseness. Experiments ::: Feature importance analysis We further investigated the importance of each feature by performing a $\chi ^2$ test, with 10-fold stratified shuffle split cross validation, considering the entire range of network sizes $[0, +\infty )$. We show the Top-5 most discriminative features for each country in Table 6. We can notice the exact same set of features (with different relative orderings in the Top-3) in both countries; these correspond to two global network propertie–LWCC, which indicates the size of the largest cascade in the layer, and SCC, which correlates with the size of the network–associated to the same set of layers (Quotes, Retweets and Mentions). We further performed a $\chi ^2$ test to highlight the most discriminative features in the M layer of both countries, which performed equally well in the classification task as previously highlighted; also in this case we focused on the entire range of network sizes $[0, +\infty )$. Interestingly, we discovered exactly the same set of Top-3 features in both countries, namely LWCC, SCC and DWCC (which indicates the depth of the largest cascade in the layer). An inspection of the distributions of all aforementioned features revealed that disinformation news exhibit on average larger values than mainstream news. 
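The nine indicators listed above map closely onto standard networkx calls, and the excerpt states that networkx was used to compute them. The sketch below is an editorial illustration rather than the authors' code: the function name, the handling of empty layers, and the choice to compute clustering and structural virality on the undirected view of the layer are assumptions.

```python
import networkx as nx

def global_layer_features(layer: nx.DiGraph) -> dict:
    """Nine global indicators for one diffusion layer, following the definitions in the excerpt."""
    if layer.number_of_nodes() == 0:
        # Empty layers are encoded as all-zero feature tuples.
        return {k: 0 for k in ("SCC", "LSCC", "WCC", "LWCC", "DWCC", "CC", "KC", "d", "SV")}

    G = nx.DiGraph(layer)
    G.remove_edges_from(nx.selfloop_edges(G))  # core_number does not accept self-loops

    lwcc_nodes = max(nx.weakly_connected_components(G), key=len)
    lwcc = G.subgraph(lwcc_nodes).to_undirected()
    big = len(lwcc_nodes) > 1

    return {
        "SCC": nx.number_strongly_connected_components(G),
        "LSCC": len(max(nx.strongly_connected_components(G), key=len)),
        "WCC": nx.number_weakly_connected_components(G),
        "LWCC": len(lwcc_nodes),
        "DWCC": nx.diameter(lwcc) if big else 0,
        "CC": nx.average_clustering(G.to_undirected()),  # undirected convention is an assumption
        "KC": max(nx.core_number(G).values()),
        "d": nx.density(G),  # |E| / (|V|(|V|-1)) for directed graphs
        # Structural virality: mean shortest-path distance over node pairs of the LWCC, 0 if |V| = 1.
        "SV": nx.average_shortest_path_length(lwcc) if big else 0,
    }
```

Concatenating these nine values over the four layers and appending the pure-tweet counts T and U would give the 38-entry vector described above.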
We can qualitatively sum up these results as follows: Sharing patterns in the two news domains exhibit discrepancies which might be country-independent and due to the content that is being shared. Interactions in disinformation sharing cascades tends to be broader and deeper than in mainstream news, as widely reported in the literature BIBREF8BIBREF2BIBREF7. Users likely make a different usage of mentions when sharing news belonging to the two domains, consequently shaping different sharing patterns. Experiments ::: Temporal analysis Similar to BIBREF9, we carried out additional experiments to answer the following question: how long do we need to observe a news spreading on Twitter in order to accurately classify it as disinformation or mainstream? With this goal, we built several versions of our original dataset of multi-layer networks by considering in turn the following lifetimes: 1 hour, 6 hours, 12 hours, 1 day, 2 days, 3 days and 7 days; for each case, we computed the global network properties of the corresponding network and evaluated the LR classifier with 10-fold cross validation, separately for each lifetime (and considering always the entire set of networks). We show corresponding AUROC values for both US and IT datasets in Figure 6. We can see that in both countries news diffusion networks can be accurately classified after just a few hours of spreading, with AUROC values which are larger than 80% after only 6 hours of diffusion. These results are very promising and suggest that articles pertaining to the two news domains exhibit discrepancies in their sharing patterns that can be timely exploited in order to rapidly detect misleading items from factual information. Conclusions In this work we tackled the problem of the automatic classification of news articles in two domains, namely mainstream and disinformation news, with a language-independent approach which is based solely on the diffusion of news items on Twitter social platform. We disentangled different types of interactions on Twitter to accordingly build a multi-layer representation of news diffusion networks, and we computed a set of global network properties–separately for each layer–in order to encode each network with a tuple of features. Our goal was to investigate whether a multi-layer representation performs better than one layer BIBREF11, and to understand which of the features, observed at given layers, are most effective in the classification task. Experiments with an off-the-shelf classifier such as Logistic Regression on datasets pertaining to two different media landscapes (US and Italy) yield very accurate classification results (AUROC up to 94%), even when accounting for the different political bias of news sources, which are far better than our baseline BIBREF11 with improvements up to 20%. Classification performances using single layers show that the layer of mentions alone entails better performance w.r.t other layers in both countries. We also highlighted the most discriminative features across different layers in both countries; the results suggest that differences between the two news domains might be country-independent but rather due only to the typology of content shared, and that disinformation news shape broader and deeper cascades. Additional experiments involving the temporal evolution of Twitter diffusion networks show that our methodology can accurate classify mainstream and disinformation news after a few hours of propagation on the platform. 
Overall, our results prove that the topological features of multi-layer diffusion networks might be effectively exploited to detect online disinformation. We do not deny the presence of deceptive efforts to orchestrate the regular spread of information on social media via content amplification and manipulation BIBREF37BIBREF38. On the contrary, we postulate that such hidden forces might play to accentuate the discrepancies between the diffusion patterns of disinformation and mainstream news (and thus to make our methodology effective).
In the future we aim to further investigate three directions: (1) employ temporal networks to represent news diffusion and apply classification techniques that take into account the sequential aspect of data (e.g. recurrent neural networks); (2) carry out an extensive comparison of the diffusion of disinformation and mainstream news across countries to investigate deeper the presence of differences and similarities in sharing patterns; (3) leverage our network-based features in addition to state-of-the-art text-based approaches for "fake news" detection in order to deliver a real-world system to detect misleading and harmful information spreading on social media.

Table 1: Composition of the US dataset according to class, size and political bias.

Table 4: Comparison of performances of our multi-layer approach vs the baseline (single-layer). We show AUROC values for the LR classifier evaluated on different size classes of both US and IT datasets.

Table 5: Different evaluation metrics for LR classifier evaluated on different size classes of the US dataset and trained using features separately for each layer. Best scores for each row are written in bold.

Query: How is the political bias of different sources included in the model?

JSON Response:
{"answer": "By assigning a political bias label to each news article and training only on left-biased or right-biased outlets of both disinformation and mainstream domains", "evidence": ["As it is reported that conservatives and liberals exhibit different behaviors on online social platforms BIBREF19BIBREF20BIBREF21, we further assigned a political bias label to different US outlets (and therefore news articles) following the procedure described in BIBREF2. In order to assess the robustness of our method, we performed classification experiments by training only on left-biased (or right-biased) outlets of both disinformation and mainstream domains and testing on the entire set of sources, as well as excluding particular sources that outweigh the others in terms of samples to avoid over-fitting."]}
science.qasper_abstractive_qa.164.1
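Since the classifier in the record above is described only as a plain Logistic Regression evaluated by AUROC under a 10-fold stratified shuffle split, a minimal reproduction sketch might look like the following; X is assumed to hold the 38-entry network feature vectors and y the binary mainstream/disinformation labels, and the test_size and random_state values are illustrative choices that the excerpt does not report.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedShuffleSplit

def mean_auroc(X: np.ndarray, y: np.ndarray, n_splits: int = 10, test_size: float = 0.1) -> float:
    """Mean AUROC of a plain Logistic Regression over a stratified shuffle split."""
    splitter = StratifiedShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=0)
    scores = []
    for train_idx, test_idx in splitter.split(X, y):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        # Rank test items by the predicted probability of the positive (disinformation) class.
        scores.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
    return float(np.mean(scores))
```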
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nA multi-layer approach to disinformation detection on Twitter\n\nWe tackle the problem of classifying news articles pertaining to disinformation vs mainstream news by solely inspecting their diffusion mechanisms on Twitter. Our technique is inherently simple compared to existing text-based approaches, as it allows to by-pass the multiple levels of complexity which are found in news content (e.g. grammar, syntax, style). We employ a multi-layer representation of Twitter diffusion networks, and we compute for each layer a set of global network features which quantify different aspects of the sharing process. Experimental results with two large-scale datasets, corresponding to diffusion cascades of news shared respectively in the United States and Italy, show that a simple Logistic Regression model is able to classify disinformation vs mainstream networks with high accuracy (AUROC up to 94%), also when considering the political bias of different sources in the classification task. We also highlight differences in the sharing patterns of the two news domains which appear to be country-independent. We believe that our network-based approach provides useful insights which pave the way to the future development of a system to detect misleading and harmful information spreading on social media.\n\nMethodology ::: Disinformation and mainstream news\nIn this work we formulate our classification problem as follows: given two classes of news articles, respectively $D$ (disinformation) and $M$ (mainstream), a set of news articles $A_i$ and associated class labels $C_i \\in \\lbrace D,M\\rbrace $, and a set of tweets $\\Pi _i=\\lbrace T_i^1, T_i^2, ...\\rbrace $ each of which contains an Uniform Resource Locator (URL) pointing explicitly to article $A_i$, predict the class $C_i$ of each article $A_i$. There is huge debate and controversy on a proper taxonomy of malicious and deceptive information BIBREF1BIBREF2BIBREF15BIBREF16BIBREF17BIBREF3BIBREF11. In this work we prefer the term disinformation to the more specific fake news to refer to a variety of misleading and harmful information. Therefore, we follow a source-based approach, a consolidated strategy also adopted by BIBREF6BIBREF16BIBREF2BIBREF1, in order to obtain relevant data for our analysis. We collected:\nDisinformation articles, published by websites which are well-known for producing low-credibility content, false and misleading news reports as well as extreme propaganda and hoaxes and flagged as such by reputable journalists and fact-checkers;\nMainstream news, referring to traditional news outlets which deliver factual and credible information.\nWe believe that this is currently the most reliable classification approach, but it entails obvious limitations, as disinformation outlets may also publish true stories and likewise misinformation is sometimes reported on mainstream media. Also, given the choice of news sources, we cannot test whether our methodology is able to classify disinformation vs factual but not mainstream news which are published on niche, non-disinformation outlets.\n\nMethodology ::: US dataset\nWe collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. 
We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains.\nAs it is reported that conservatives and liberals exhibit different behaviors on online social platforms BIBREF19BIBREF20BIBREF21, we further assigned a political bias label to different US outlets (and therefore news articles) following the procedure described in BIBREF2. In order to assess the robustness of our method, we performed classification experiments by training only on left-biased (or right-biased) outlets of both disinformation and mainstream domains and testing on the entire set of sources, as well as excluding particular sources that outweigh the others in terms of samples to avoid over-fitting.\n\nMethodology ::: Global network properties\nWe used a set of global network indicators which allow us to encode each network layer by a tuple of features. Then we simply concatenated tuples as to represent each multi-layer network with a single feature vector. We used the following global network properties:\nNumber of Strongly Connected Components (SCC): a Strongly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $u,v$ there is a path in each direction ($u\\rightarrow v$, $v\\rightarrow u$).\nSize of the Largest Strongly Connected Component (LSCC): the number of nodes in the largest strongly connected component of a given graph.\nNumber of Weakly Connected Components (WCC): a Weakly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $(u, v)$ there is a path $u \\leftrightarrow v$ ignoring edge directions.\nSize of the Largest Weakly Connected Component (LWCC): the number of nodes in the largest weakly connected component of a given graph.\nDiameter of the Largest Weakly Connected Component (DWCC): the largest distance (length of the shortest path) between two nodes in the (undirected version of) largest weakly connected component of a graph.\nAverage Clustering Coefficient (CC): the average of the local clustering coefficients of all nodes in a graph; the local clustering coefficient of a node quantifies how close its neighbours are to being a complete graph (or a clique). 
It is computed according to BIBREF28.\nMain K-core Number (KC): a K-core BIBREF13 of a graph is a maximal sub-graph that contains nodes of internal degree $k$ or more; the main K-core number is the highest value of $k$ (in directed graphs the total degree is considered).\nDensity (d): the density for directed graphs is $d=\\frac{|E|}{|V||V-1|}$, where $|E|$ is the number of edges and $|N|$ is the number of vertices in the graph; the density equals 0 for a graph without edges and 1 for a complete graph.\nStructural virality of the largest weakly connected component (SV): this measure is defined in BIBREF14 as the average distance between all pairs of nodes in a cascade tree or, equivalently, as the average depth of nodes, averaged over all nodes in turn acting as a root; for $|V| > 1$ vertices, $SV=\\frac{1}{|V||V-1|}\\sum _i\\sum _j d_{ij}$ where $d_{ij}$ denotes the length of the shortest path between nodes $i$ and $j$. This is equivalent to compute the Wiener's index BIBREF29 of the graph and multiply it by a factor $\\frac{1}{|V||V-1|}$. In our case we computed it for the undirected equivalent graph of the largest weakly connected component, setting it to 0 whenever $V=1$.\nWe used networkx Python package BIBREF30 to compute all features. Whenever a layer is empty. we simply set to 0 all its features. In addition to computing the above nine features for each layer, we added two indicators for encoding information about pure tweets, namely the number T of pure tweets (containing URLs to a given news article) and the number U of unique users authoring those tweets. Therefore, a single diffusion network is represented by a vector with $9\\cdot 4+2=38$ entries.\n\nExperiments ::: Layer importance analysis\nIn order to understand the impact of each layer on the performances of classifiers, we performed additional experiments considering separately each layer (we ignored T and U features relative to pure tweets). In Table 5 we show metrics for each layer and all size classes, computed with a 10-fold stratified shuffle split cross validation, evaluated on the US dataset; in Figure 5 we show AUROC values for each layer compared with the general multi-layer approach. We can notice that both Q and M layers alone capture adequately the discrepancies of the two distinct news domains in the United States as they obtain good results with AUROC values in the range 75%-86%; these are comparable with those of the multi-layer approach which, nevertheless, outperforms them across all size classes.\nWe obtained similar performances for the Italian dataset, as the M layer obtains comparable performances w.r.t multi-layer approach with AUROC values in the range 72%-82%. We do not show these results for sake of conciseness.\n\nExperiments ::: Feature importance analysis\nWe further investigated the importance of each feature by performing a $\\chi ^2$ test, with 10-fold stratified shuffle split cross validation, considering the entire range of network sizes $[0, +\\infty )$. 
We show the Top-5 most discriminative features for each country in Table 6.\nWe can notice the exact same set of features (with different relative orderings in the Top-3) in both countries; these correspond to two global network propertie–LWCC, which indicates the size of the largest cascade in the layer, and SCC, which correlates with the size of the network–associated to the same set of layers (Quotes, Retweets and Mentions).\nWe further performed a $\\chi ^2$ test to highlight the most discriminative features in the M layer of both countries, which performed equally well in the classification task as previously highlighted; also in this case we focused on the entire range of network sizes $[0, +\\infty )$. Interestingly, we discovered exactly the same set of Top-3 features in both countries, namely LWCC, SCC and DWCC (which indicates the depth of the largest cascade in the layer).\nAn inspection of the distributions of all aforementioned features revealed that disinformation news exhibit on average larger values than mainstream news.\nWe can qualitatively sum up these results as follows:\nSharing patterns in the two news domains exhibit discrepancies which might be country-independent and due to the content that is being shared.\nInteractions in disinformation sharing cascades tends to be broader and deeper than in mainstream news, as widely reported in the literature BIBREF8BIBREF2BIBREF7.\nUsers likely make a different usage of mentions when sharing news belonging to the two domains, consequently shaping different sharing patterns.\n\nExperiments ::: Temporal analysis\nSimilar to BIBREF9, we carried out additional experiments to answer the following question: how long do we need to observe a news spreading on Twitter in order to accurately classify it as disinformation or mainstream?\nWith this goal, we built several versions of our original dataset of multi-layer networks by considering in turn the following lifetimes: 1 hour, 6 hours, 12 hours, 1 day, 2 days, 3 days and 7 days; for each case, we computed the global network properties of the corresponding network and evaluated the LR classifier with 10-fold cross validation, separately for each lifetime (and considering always the entire set of networks). We show corresponding AUROC values for both US and IT datasets in Figure 6.\nWe can see that in both countries news diffusion networks can be accurately classified after just a few hours of spreading, with AUROC values which are larger than 80% after only 6 hours of diffusion. These results are very promising and suggest that articles pertaining to the two news domains exhibit discrepancies in their sharing patterns that can be timely exploited in order to rapidly detect misleading items from factual information.\n\nConclusions\nIn this work we tackled the problem of the automatic classification of news articles in two domains, namely mainstream and disinformation news, with a language-independent approach which is based solely on the diffusion of news items on Twitter social platform. We disentangled different types of interactions on Twitter to accordingly build a multi-layer representation of news diffusion networks, and we computed a set of global network properties–separately for each layer–in order to encode each network with a tuple of features. 
Our goal was to investigate whether a multi-layer representation performs better than one layer BIBREF11, and to understand which of the features, observed at given layers, are most effective in the classification task.\nExperiments with an off-the-shelf classifier such as Logistic Regression on datasets pertaining to two different media landscapes (US and Italy) yield very accurate classification results (AUROC up to 94%), even when accounting for the different political bias of news sources, which are far better than our baseline BIBREF11 with improvements up to 20%. Classification performances using single layers show that the layer of mentions alone entails better performance w.r.t other layers in both countries.\nWe also highlighted the most discriminative features across different layers in both countries; the results suggest that differences between the two news domains might be country-independent but rather due only to the typology of content shared, and that disinformation news shape broader and deeper cascades.\nAdditional experiments involving the temporal evolution of Twitter diffusion networks show that our methodology can accurate classify mainstream and disinformation news after a few hours of propagation on the platform.\nOverall, our results prove that the topological features of multi-layer diffusion networks might be effectively exploited to detect online disinformation. We do not deny the presence of deceptive efforts to orchestrate the regular spread of information on social media via content amplification and manipulation BIBREF37BIBREF38. On the contrary, we postulate that such hidden forces might play to accentuate the discrepancies between the diffusion patterns of disinformation and mainstream news (and thus to make our methodology effective).\nIn the future we aim to further investigate three directions: (1) employ temporal networks to represent news diffusion and apply classification techniques that take into account the sequential aspect of data (e.g. recurrent neural networks); (2) carry out an extensive comparison of the diffusion of disinformation and mainstream news across countries to investigate deeper the presence of differences and similarities in sharing patterns; (3) leverage our network-based features in addition to state-of-the-art text-based approaches for \"fake news\" dete ction in order to deliver a real-world system to detect misleading and harmful information spreading on social media.\n\nTable 1: Composition of the US dataset according to class, size and political bias.\n\nTable 4: Comparison of performances of our multi-layer approach vs the baseline (single-layer). We show AUROC values for the LR classifier evaluated on different size classes of both US and IT datasets.\n\nTable 5: Different evaluations metrics for LR classifier evaluated on different size classes of the US dataset and trained using features separately for each layer. 
Best scores for each row are written in bold.\n\nQuery: How is the political bias of different sources included in the model?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"By assigning a political bias label to each news article and training only on left-biased or right-biased outlets of both disinformation and mainstream domains\", \"evidence\": [\"As it is reported that conservatives and liberals exhibit different behaviors on online social platforms BIBREF19BIBREF20BIBREF21, we further assigned a political bias label to different US outlets (and therefore news articles) following the procedure described in BIBREF2. In order to assess the robustness of our method, we performed classification experiments by training only on left-biased (or right-biased) outlets of both disinformation and mainstream domains and testing on the entire set of sources, as well as excluding particular sources that outweigh the others in terms of samples to avoid over-fitting.\"]}", "role": "assistant" } ]
science.discomat_te
| Composition | Lasocka parameters (K) | Lasocka parameters (K) | Et (kJ/mol) | Et (kJ/mol) |
| Composition | A | B | Moynihan | Kissinger |
| 20C | 585+-2 | 8.4+-0.9 | 359+-36 | 346+-37 |
| 40C | 562+-1 | 6.7+-0.5 | 416+-27 | 408+-28 |

Caption: Lasocka parameters A and B of Eq. (1) and the glass transition activation energy (Et) of the two studied glasses.

Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}

Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.

Output:
{"value": "585", "type": "Other"} {"value": "8.4", "type": "Other"} {"value": "359", "type": "Other"} {"value": "346", "type": "Other"} {"value": "562", "type": "Other"} {"value": "6.7", "type": "Other"} {"value": "416", "type": "Other"} {"value": "408", "type": "Other"}
science.discomat_te.368
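With the table split back into rows (as in the reconstructed layout above), emitting the required one-JSON-per-line output is mostly mechanical. The sketch below rests on two assumptions not spelled out in the record: the first column is treated as a composition label rather than a measurement, and every numeric cell of this particular table is typed as "Other", matching the gold output shown.

```python
import json
import re

def table_cells_to_jsonl(rows):
    """Emit one JSON object per numeric data cell, in reading order."""
    lines = []
    for row in rows:
        for cell in row[1:]:  # first column holds the composition label ("20C", "40C")
            match = re.match(r"^\s*(\d+(?:\.\d+)?)", cell)
            if match is None:
                continue  # non-numeric cell
            lines.append(json.dumps({"value": match.group(1), "type": "Other"}))
    return "\n".join(lines)

rows = [
    ["20C", "585+-2", "8.4+-0.9", "359+-36", "346+-37"],
    ["40C", "562+-1", "6.7+-0.5", "416+-27", "408+-28"],
]
print(table_cells_to_jsonl(rows))  # reproduces the eight "Other"-typed values above
```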
[ { "content": "| Composition | Lasocka parameters (K) | Lasocka parameters (K) | Et (kJ/mol) | Et (kJ/mol) |\n| Composition | A | B | Moynihan | Kissinger |\n| 20C | 585+-2 | 8.4+-0.9 | 359+-36 | 346+-37 |\n| 40C | 562+-1 | 6.7+-0.5 | 416+-27 | 408+-28 |\n\nCaption: Lasocka parameters A and B of Eq. (1) and the glass transition activation energy (Et) of the two studied glasses.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"585\", \"type\": \"Other\"}\n{\"value\": \"8.4\", \"type\": \"Other\"}\n{\"value\": \"359\", \"type\": \"Other\"}\n{\"value\": \"346\", \"type\": \"Other\"}\n{\"value\": \"562\", \"type\": \"Other\"}\n{\"value\": \"6.7\", \"type\": \"Other\"}\n{\"value\": \"416\", \"type\": \"Other\"}\n{\"value\": \"408\", \"type\": \"Other\"}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: A Gang of Adversarial Bandits Main Content: √ ln ( NK/Ψ ) ΨKT ) and per-trial time complexity of O ( K ln ( N ) ) , whilst GABA-II has a weaker O ( √ ln ( N/Ψ ) ln ( NK/Ψ ) ΨKT ) regret , but a better O ( ln ( K ) ln ( N ) ) per-trial time complexity . We highlight improvements of both algorithms over running independent standard MABs across users . 1 Introduction . During the last decade multi-armed bandits ( MAB ) have received a great deal of attention in machine learning and related fields , due to their wide practical and theoretical importance . The central problem is to design a decision strategy whereby a learner explores sequentially the environment in order to find the best item ( arm ) within a prescribed set . At each step in the exploration the learner chooses an arm , after which feedback ( typically a loss or reward corresponding to the selected arm ) is observed from the environment . Then the next decision is made by the learner based on past interactions , and the process repeats . The goal is to design efficient exploration strategies which incur a small cumulative loss in comparison to the cumulative loss that would have been obtained by always * Equal contribution . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . selecting the best arm in hindsight . Applications of MAB are numerous , including recommender systems [ 1 ] , clinical trials [ 2 ] , and adaptive routing [ 3 ] , among others . In this paper we study the problem in which the learner is facing several MAB problems that are related according to a prescribed interaction graph . A main motivation behind this problem are online recommendation systems , whereby each of several users is associated with a MAB problem ( task ) , where the arms correspond to a finite set of products , and the graph represents a social network among users . The goal is to exploit users ’ similarity in order to improve the efficiency of learning users ’ preferences via online exploration of products . In the standard full information setting , a lot of work has been done showing that techniques from multitask learning are effective in reducing the amount of data needed to learn each of the individual tasks , both in the statistical and adversarial settings , see [ 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 ] and references therein . Graphs have been used to model task relationships , with different tasks ’ parameters encouraged to be close according to the graph topology . In contrast , multitask learning in the bandit setting is much less explored . The algorithms that we present exploit the network homophily principle [ 12 ] which formulates that users that are connected in the network have similar preferences , that is , they tend to share preferred recommendations . We will show that our algorithms exploit graph structure and enjoy potentially much smaller regret bounds than the cumulative regret of standard MAB run independently on each user . Since the original graph may be dense , we exploit a randomized sparsification technique to develop fast prediction algorithms . Our approach builds upon previous work on online learning over graphs [ 13 , 14 ] to generate a perfect full oriented binary tree , whose leaves are in one-to-one correspondence with the nodes of the original graph . 
This construction approximately preserves the relevant graph properties in expectation , and provides the starting point for designing our efficient algorithms . A further ingredient in our algorithm is provided by the method of specialists [ 15 , 16 ] . Our learning strategies combine the above ingredients to devise efficient online algorithms under partial feedback . Contributions . We introduce two Gang of Adversarial BAndit algorithms , GABA-I and GABAII that learn jointly MAB models for N users over K possible actions . Both algorithms are designed to exploit network structure while being ( extremely ) computationally efficient . We derive expected ( over the algorithms ’ randomizations ) regret bounds . The bounds scale with the dispersion measure Ψ ∈ [ 1 , N ] of the best actions over the graph . For GABA-I the bound1 is of order of O ( √ ln ( NK/Ψ ) ΨKT ) , where T is the number of trials , and has a per-trial time complexity of O ( K ln ( N ) ) . On the other hand GABA-II has a weaker expected regret bound of O ( √ ln ( N/Ψ ) ln ( NK/Ψ ) ΨKT ) but is faster , having a per-trial time complexity of O ( ln ( K ) ln ( N ) ) . Thus the GABA-I algorithm improves on algorithms that treat each user independently , as in the best case the regret improves from O ( √ N ) to O ( √ lnN ) and in the worst case the regret degrades by at most a constant factor . GABA-II has slightly weaker regret bounds ; however , it is more computationally efficient . Outline of Main Results . The social network graph G is determined by a set of undirected links between users { ωu , v } Nu < v where ωu , v ∈ [ 0 , ∞ ) indicates the magnitude of the link between user u and v. For all t ∈ [ T ] we have a user ut ∈ [ N ] and a loss vector ` t ∈ [ 0 , 1 ] K which are selected by Nature before learning begins and are unknown to Learner ; i.e. , Nature is a deterministic oblivious adversary ( see e.g. , [ 17 , Section 5.1 ] ) . Learning then proceeds in trials t = 1 , 2 , . . . , T . On trial t : 1 . Nature reveals user ut ∈ [ N ] to Learner , 2 . Learner selects action at ∈ [ K ] , 3 . Nature reveals loss ` t , at ∈ [ 0 , 1 ] to Learner . Before reflecting on the N -user case we review the well-known results for the single user ( N = 1 ) . The seminal EXP3 algorithm [ 18 ] obtains the following ( uniform ) regret bound2 , E [ ∑ t∈ [ T ] ` t , at ] − min a∈ [ K ] ∑ t∈ [ T ] ` t , a ∈ O ( √ K ln ( K ) T ) , ( 1 ) 1The bounds of GABA-I and GABA-II however depend on oracular knowledge of optimal tuning parameters . We discuss this as well as a means of lessening this dependency following Corollary 5 . 2An algorithm was given in [ 19 ] that removed the lnK term from the regret . where the expectation is with respect to the internal randomization of the EXP3 algorithm . In the N -user setting , by running a copy of EXP3 independently for each user , we may obtain a uniform regret bound of ( see e.g. , [ 20 ] ) E [ ∑ t∈ [ T ] ` t , at ] − min y : [ N ] → [ K ] ∑ t∈ [ T ] ` t , y ( ut ) ∈ O ( √ K ln ( K ) NT ) , ( 2 ) i.e. , for each user u the best action is y ( u ) and we now pay an additional constant factor of √ N in our regret . In this work we exploit the social network structure to prove a non-uniform regret bound for the GABA-I algorithm ( see Corollary 4 ) of R ( y ) : = E [ ∑ t∈ [ T ] ` t , at ] − ∑ t∈ [ T ] ` t , y ( ut ) ∈ O ( √ K ln ( KN Ψ ( y ) ) Ψ ( y ) T ) , ( 3 ) for any mapping of users to actions y : [ N ] → [ K ] . 
The non-uniform regret now depends on Ψ ( y ) ∈ [ 1 , N ] ( see ( 5 ) ) which measures dispersion of users ’ ‘ best ’ actions across the network . Thus , by taking network structure into account , we may upper bound the scaling in the regret with respect to the number of users by O ( √ ln ( eNΨ ( y ) ) Ψ ( y ) ) . When the best action across the network is nearly uniform then the dispersion Ψ ( y ) ∈ O ( 1 ) , in contrast when the dispersion is maximal then Ψ ( y ) = N thus in the best case the regret with respect to the number of users improves from O ( √ N ) to O ( √ lnN ) and in the worst case the regret only increases by a constant factor . The first algorithm GABA-I obtains the regret ( 3 ) while requiring O ( K lnN ) time to predict and update . The second algorithm GABA-II ’ s regret ( see Corollary 5 ) is larger by a O ( √ lnN/Ψ ( y ) ) factor but now prediction is at an even faster O ( ln ( K ) ln ( N ) ) time per trial , that is , prediction time improves exponentially with respect to the cardinality of the action set [ K ] . Thus both algorithms support very large user networks and the second algorithm allows efficient prediction with very large action sets . Related Work . We mention here some of the key papers that are more closely related to ours and refer the reader to the technical appendices for an expanded literature review . There has been much work in the heterogenous multi-user setting for linear-stochastic bandits . Out of these works , those closest to us are when the users are in a known social network and it is assumed that neighbouring users respond to context vectors in a similar way [ 21 , 22 , 23 , 24 , 25 , 26 ] but as far as we are aware no works on this model have so far been done in the adversarial setting . Other works on this topic include those in which it is assumed that there is an unknown clustering of the users , where users in the same cluster are homogenous [ 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ] ; as well as other models [ 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 ] . There are also works on networked , homogenous multi-user bandit models with limited communication between users [ 44 , 45 , 46 , 47 , 48 , 49 , 50 ] . Related to the multiuser setting are works on transfer learning and meta-learning with linear-stochastic bandits [ 51 , 52 ] . Whilst our work assumes a known network structure over the users , there is a wide literature on bandit problems in which the actions are structured in a network and it is assumed that neighbouring actions give similar losses [ 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 ] , as well as other networked-action models [ 61 , 62 , 63 ] . In addition to the seminal paper on adversarial bandits [ 18 ] , our work utilises ideas from several different papers [ 13 , 14 , 15 , 16 , 64 , 65 ] . Notation . Given a set X we define 2X to be its power-set , that is : the set of all subsets of X . For any positive integer m , we define [ m ] : = { 1 , 2 , . . . , m } . For any predicate PRED , JPREDK : = 1 if PRED is true and equals 0 otherwise . Given vectors x , x′ ∈ RK we define x · x′ to be their inner product ( i.e. , equal to ∑ i∈ [ K ] xixi′ ) and we define x x′ to be their component-wise product ( i.e. , ( x x′ ) i : = xix′i for all i ∈ [ K ] ) . We define ‘ 1 ’ to be the K-dimensional vector in which each component is equal to 1 . Given a full oriented binary tree B we denote the set of its vertices also by B . Given a non-leaf vertex n ∈ B let / ( n ) and . ( n ) be its left child and right child respectively . 
Given a non-root vertex n ∈ B let ↑ ( n ) be its parent . Given a vertex n ∈ B let ⇑ ( n ) and ⇓ ( n ) be its set of ancestors and leaf-descendants ( i.e . descendants which are leaves ) respectively . Given a vertex n ∈ B we define J ( n ) and I ( n ) as the left-most and right-most descendants ( which are leaves ) of n respectively . Finally , we denote the user graph by G , which is an undirected connected graph with edge weights { ωu , v : 1 ≤ u < v ≤ N } . For convenience we assume N is a power of two.3 3This assumption does not limit our results because to run our algorithms one can always add dummy vertices without altering input weights , so as to force N to be a power of two .
In this paper the authors consider the problem of adversarial contextual bandits, where the contexts (finite) and the reward function are adversarial, but there exists a graph structure among the various context vectors. The goal of the online learning algorithm is to select a mapping from the context to an action that minimizes the total regret. Here, the aim is to achieve results better than naively treating the cross product of contexts and actions as an arm and running an EXP3-type algorithm. The authors present regret bounds that depend on an interesting quantity of the graph: weighted effective resistance. This quantity improves the regret to logarithmic when the max-cut size in the graph is small and recovers the worst-case bound (treating the number of contexts times the number of actions as the arms).
science.mup_single_document_summarization.16143
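The excerpt above reviews the single-user EXP3 baseline and its regret bound (1) before introducing GABA-I and GABA-II. The sketch below implements only that baseline — a loss-based exponential-weights learner with importance-weighted loss estimates — not the tree- or specialist-based GABA algorithms; the learning-rate formula is the usual textbook tuning and is an assumption, not a value taken from the paper.

```python
import numpy as np

class Exp3:
    """Loss-based EXP3 for K arms with losses in [0, 1]."""

    def __init__(self, n_arms, horizon, seed=0):
        self.rng = np.random.default_rng(seed)
        self.cum_loss_est = np.zeros(n_arms)
        # Standard tuning giving O(sqrt(T K ln K)) expected regret against an oblivious adversary.
        self.eta = np.sqrt(2.0 * np.log(n_arms) / (horizon * n_arms))
        self.probs = np.full(n_arms, 1.0 / n_arms)

    def draw(self):
        logits = -self.eta * self.cum_loss_est
        logits -= logits.max()  # numerical stability
        weights = np.exp(logits)
        self.probs = weights / weights.sum()
        return int(self.rng.choice(len(self.probs), p=self.probs))

    def update(self, arm, loss):
        # Unbiased importance-weighted estimate: only the played arm's loss is observed.
        self.cum_loss_est[arm] += loss / self.probs[arm]
```

Running one independent copy per user is exactly the baseline whose regret, Eq. (2) in the excerpt, the GABA algorithms improve on by exploiting the user graph.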
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nA Gang of Adversarial Bandits\n\nMain Content:\n√ ln ( NK/Ψ ) ΨKT ) and per-trial time complexity of O ( K ln ( N ) ) , whilst GABA-II has a weaker O ( √ ln ( N/Ψ ) ln ( NK/Ψ ) ΨKT ) regret , but a better O ( ln ( K ) ln ( N ) ) per-trial time complexity . We highlight improvements of both algorithms over running independent standard MABs across users . 1 Introduction . During the last decade multi-armed bandits ( MAB ) have received a great deal of attention in machine learning and related fields , due to their wide practical and theoretical importance . The central problem is to design a decision strategy whereby a learner explores sequentially the environment in order to find the best item ( arm ) within a prescribed set . At each step in the exploration the learner chooses an arm , after which feedback ( typically a loss or reward corresponding to the selected arm ) is observed from the environment . Then the next decision is made by the learner based on past interactions , and the process repeats . The goal is to design efficient exploration strategies which incur a small cumulative loss in comparison to the cumulative loss that would have been obtained by always * Equal contribution . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . selecting the best arm in hindsight . Applications of MAB are numerous , including recommender systems [ 1 ] , clinical trials [ 2 ] , and adaptive routing [ 3 ] , among others . In this paper we study the problem in which the learner is facing several MAB problems that are related according to a prescribed interaction graph . A main motivation behind this problem are online recommendation systems , whereby each of several users is associated with a MAB problem ( task ) , where the arms correspond to a finite set of products , and the graph represents a social network among users . The goal is to exploit users ’ similarity in order to improve the efficiency of learning users ’ preferences via online exploration of products . In the standard full information setting , a lot of work has been done showing that techniques from multitask learning are effective in reducing the amount of data needed to learn each of the individual tasks , both in the statistical and adversarial settings , see [ 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 ] and references therein . Graphs have been used to model task relationships , with different tasks ’ parameters encouraged to be close according to the graph topology . In contrast , multitask learning in the bandit setting is much less explored . The algorithms that we present exploit the network homophily principle [ 12 ] which formulates that users that are connected in the network have similar preferences , that is , they tend to share preferred recommendations . We will show that our algorithms exploit graph structure and enjoy potentially much smaller regret bounds than the cumulative regret of standard MAB run independently on each user . Since the original graph may be dense , we exploit a randomized sparsification technique to develop fast prediction algorithms . Our approach builds upon previous work on online learning over graphs [ 13 , 14 ] to generate a perfect full oriented binary tree , whose leaves are in one-to-one correspondence with the nodes of the original graph . 
This construction approximately preserves the relevant graph properties in expectation , and provides the starting point for designing our efficient algorithms . A further ingredient in our algorithm is provided by the method of specialists [ 15 , 16 ] . Our learning strategies combine the above ingredients to devise efficient online algorithms under partial feedback . Contributions . We introduce two Gang of Adversarial BAndit algorithms , GABA-I and GABAII that learn jointly MAB models for N users over K possible actions . Both algorithms are designed to exploit network structure while being ( extremely ) computationally efficient . We derive expected ( over the algorithms ’ randomizations ) regret bounds . The bounds scale with the dispersion measure Ψ ∈ [ 1 , N ] of the best actions over the graph . For GABA-I the bound1 is of order of O ( √ ln ( NK/Ψ ) ΨKT ) , where T is the number of trials , and has a per-trial time complexity of O ( K ln ( N ) ) . On the other hand GABA-II has a weaker expected regret bound of O ( √ ln ( N/Ψ ) ln ( NK/Ψ ) ΨKT ) but is faster , having a per-trial time complexity of O ( ln ( K ) ln ( N ) ) . Thus the GABA-I algorithm improves on algorithms that treat each user independently , as in the best case the regret improves from O ( √ N ) to O ( √ lnN ) and in the worst case the regret degrades by at most a constant factor . GABA-II has slightly weaker regret bounds ; however , it is more computationally efficient . Outline of Main Results . The social network graph G is determined by a set of undirected links between users { ωu , v } Nu < v where ωu , v ∈ [ 0 , ∞ ) indicates the magnitude of the link between user u and v. For all t ∈ [ T ] we have a user ut ∈ [ N ] and a loss vector ` t ∈ [ 0 , 1 ] K which are selected by Nature before learning begins and are unknown to Learner ; i.e. , Nature is a deterministic oblivious adversary ( see e.g. , [ 17 , Section 5.1 ] ) . Learning then proceeds in trials t = 1 , 2 , . . . , T . On trial t : 1 . Nature reveals user ut ∈ [ N ] to Learner , 2 . Learner selects action at ∈ [ K ] , 3 . Nature reveals loss ` t , at ∈ [ 0 , 1 ] to Learner . Before reflecting on the N -user case we review the well-known results for the single user ( N = 1 ) . The seminal EXP3 algorithm [ 18 ] obtains the following ( uniform ) regret bound2 , E [ ∑ t∈ [ T ] ` t , at ] − min a∈ [ K ] ∑ t∈ [ T ] ` t , a ∈ O ( √ K ln ( K ) T ) , ( 1 ) 1The bounds of GABA-I and GABA-II however depend on oracular knowledge of optimal tuning parameters . We discuss this as well as a means of lessening this dependency following Corollary 5 . 2An algorithm was given in [ 19 ] that removed the lnK term from the regret . where the expectation is with respect to the internal randomization of the EXP3 algorithm . In the N -user setting , by running a copy of EXP3 independently for each user , we may obtain a uniform regret bound of ( see e.g. , [ 20 ] ) E [ ∑ t∈ [ T ] ` t , at ] − min y : [ N ] → [ K ] ∑ t∈ [ T ] ` t , y ( ut ) ∈ O ( √ K ln ( K ) NT ) , ( 2 ) i.e. , for each user u the best action is y ( u ) and we now pay an additional constant factor of √ N in our regret . In this work we exploit the social network structure to prove a non-uniform regret bound for the GABA-I algorithm ( see Corollary 4 ) of R ( y ) : = E [ ∑ t∈ [ T ] ` t , at ] − ∑ t∈ [ T ] ` t , y ( ut ) ∈ O ( √ K ln ( KN Ψ ( y ) ) Ψ ( y ) T ) , ( 3 ) for any mapping of users to actions y : [ N ] → [ K ] . 
The non-uniform regret now depends on Ψ ( y ) ∈ [ 1 , N ] ( see ( 5 ) ) which measures dispersion of users ’ ‘ best ’ actions across the network . Thus , by taking network structure into account , we may upper bound the scaling in the regret with respect to the number of users by O ( √ ln ( eNΨ ( y ) ) Ψ ( y ) ) . When the best action across the network is nearly uniform then the dispersion Ψ ( y ) ∈ O ( 1 ) , in contrast when the dispersion is maximal then Ψ ( y ) = N thus in the best case the regret with respect to the number of users improves from O ( √ N ) to O ( √ lnN ) and in the worst case the regret only increases by a constant factor . The first algorithm GABA-I obtains the regret ( 3 ) while requiring O ( K lnN ) time to predict and update . The second algorithm GABA-II ’ s regret ( see Corollary 5 ) is larger by a O ( √ lnN/Ψ ( y ) ) factor but now prediction is at an even faster O ( ln ( K ) ln ( N ) ) time per trial , that is , prediction time improves exponentially with respect to the cardinality of the action set [ K ] . Thus both algorithms support very large user networks and the second algorithm allows efficient prediction with very large action sets . Related Work . We mention here some of the key papers that are more closely related to ours and refer the reader to the technical appendices for an expanded literature review . There has been much work in the heterogenous multi-user setting for linear-stochastic bandits . Out of these works , those closest to us are when the users are in a known social network and it is assumed that neighbouring users respond to context vectors in a similar way [ 21 , 22 , 23 , 24 , 25 , 26 ] but as far as we are aware no works on this model have so far been done in the adversarial setting . Other works on this topic include those in which it is assumed that there is an unknown clustering of the users , where users in the same cluster are homogenous [ 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ] ; as well as other models [ 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 ] . There are also works on networked , homogenous multi-user bandit models with limited communication between users [ 44 , 45 , 46 , 47 , 48 , 49 , 50 ] . Related to the multiuser setting are works on transfer learning and meta-learning with linear-stochastic bandits [ 51 , 52 ] . Whilst our work assumes a known network structure over the users , there is a wide literature on bandit problems in which the actions are structured in a network and it is assumed that neighbouring actions give similar losses [ 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 ] , as well as other networked-action models [ 61 , 62 , 63 ] . In addition to the seminal paper on adversarial bandits [ 18 ] , our work utilises ideas from several different papers [ 13 , 14 , 15 , 16 , 64 , 65 ] . Notation . Given a set X we define 2X to be its power-set , that is : the set of all subsets of X . For any positive integer m , we define [ m ] : = { 1 , 2 , . . . , m } . For any predicate PRED , JPREDK : = 1 if PRED is true and equals 0 otherwise . Given vectors x , x′ ∈ RK we define x · x′ to be their inner product ( i.e. , equal to ∑ i∈ [ K ] xixi′ ) and we define x x′ to be their component-wise product ( i.e. , ( x x′ ) i : = xix′i for all i ∈ [ K ] ) . We define ‘ 1 ’ to be the K-dimensional vector in which each component is equal to 1 . Given a full oriented binary tree B we denote the set of its vertices also by B . Given a non-leaf vertex n ∈ B let / ( n ) and . ( n ) be its left child and right child respectively . 
Given a non-root vertex n ∈ B let ↑ ( n ) be its parent . Given a vertex n ∈ B let ⇑ ( n ) and ⇓ ( n ) be its set of ancestors and leaf-descendants ( i.e . descendants which are leaves ) respectively . Given a vertex n ∈ B we define J ( n ) and I ( n ) as the left-most and right-most descendants ( which are leaves ) of n respectively . Finally , we denote the user graph by G , which is an undirected connected graph with edge weights { ωu , v : 1 ≤ u < v ≤ N } . For convenience we assume N is a power of two.3 3This assumption does not limit our results because to run our algorithms one can always add dummy vertices without altering input weights , so as to force N to be a power of two .", "role": "user" }, { "content": "In this paper the authors consider the problem of adversarial contextual bandits, where the contexts (finite) and the reward function are adversarial, but there exists a graph structure among the various context vectors. The goal of the online learning algorithm is to select a mapping from the context to an action that minimizes the total regret. Here, the aim is to achieve results better than naively treating the cross product of contexts and actions as an arm and running a EXP3 type algorithm. The authors present regret bounds that depend on an interesting quantity of the graph: weighted effective resistance. This quantity improves the regret to logartihmic when the max-cut size in the graph is small and recovers the worst-case bound (treating the number of context times the number of actions as the arms).", "role": "assistant" } ]
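The record above summarizes a paper whose stated baseline is the EXP3 adversarial-bandit algorithm, with regret bounds compared against running one independent EXP3 copy per user. As a point of reference only, here is a minimal sketch of vanilla EXP3 in its loss formulation (this is not the paper's GABA-I/GABA-II algorithms); the learning-rate choice and the `loss_fn` interface are illustrative assumptions, not taken from the record.

```python
import math
import random

def exp3(n_arms, n_rounds, loss_fn, eta=None, seed=0):
    """Minimal EXP3 for adversarial losses in [0, 1].

    loss_fn(t, arm) -> loss of `arm` at round t (oblivious adversary).
    Returns the sequence of arms played and the total loss incurred.
    """
    # A standard learning-rate choice for the loss-based formulation (assumed, tunable).
    eta = eta or math.sqrt(2.0 * math.log(n_arms) / (n_rounds * n_arms))
    rng = random.Random(seed)
    weights = [1.0] * n_arms
    played, total_loss = [], 0.0

    for t in range(n_rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        arm = rng.choices(range(n_arms), weights=probs, k=1)[0]
        loss = loss_fn(t, arm)

        # Importance-weighted (inverse-propensity) loss estimate, applied to the played arm only.
        est_loss = loss / probs[arm]
        weights[arm] *= math.exp(-eta * est_loss)

        played.append(arm)
        total_loss += loss

    return played, total_loss
```

A per-user version of the independent baseline mentioned in the record would simply keep one `weights` vector per user and update only the vector of the user observed at each round.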
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: siRNA knockdown of A20 slows tumor progression in an in vivo murine xenograft model. Research Abstract: Glioblastomas are deadly cancers that display a functional cellular hierarchy maintained by self-renewing glioblastoma stem cells (GSCs). GSCs are regulated by molecular pathways distinct from the bulk tumor that may be useful therapeutic targets. We determined that A20 (TNFAIP3), a regulator of cell survival and the NF-kappaB pathway, is overexpressed in GSCs relative to non-stem glioblastoma cells at both the mRNA and protein levels. To determine the functional significance of A20 in GSCs, we targeted A20 expression with lentiviral-mediated delivery of short hairpin RNA (shRNA). Inhibiting A20 expression decreased GSC growth and survival through mechanisms associated with decreased cell-cycle progression and decreased phosphorylation of p65/RelA. Elevated levels of A20 in GSCs contributed to apoptotic resistance: GSCs were less susceptible to TNFalpha-induced cell death than matched non-stem glioma cells, but A20 knockdown sensitized GSCs to TNFalpha-mediated apoptosis. The decreased survival of GSCs upon A20 knockdown contributed to the reduced ability of these cells to self-renew in primary and secondary neurosphere formation assays. The tumorigenic potential of GSCs was decreased with A20 targeting, resulting in increased survival of mice bearing human glioma xenografts. In silico analysis of a glioma patient genomic database indicates that A20 overexpression and amplification is inversely correlated with survival. Together these data indicate that A20 contributes to glioma maintenance through effects on the glioma stem cell subpopulation. Although inactivating mutations in A20 in lymphoma suggest A20 can act as a tumor suppressor, similar point mutations have not been identified through glioma genomic sequencing: in fact, our data suggest A20 may function as a tumor enhancer in glioma through promotion of GSC survival. A20 anticancer therapies should therefore be viewed with caution as effects will likely differ depending on the tumor type.
{"verdict": "SUPPORT", "evidence": ["The tumorigenic potential of GSCs was decreased with A20 targeting, resulting in increased survival of mice bearing human glioma xenografts."]}
science.scifact_entailment.913
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nsiRNA knockdown of A20 slows tumor progression in an in vivo murine xenograft model.\n\nResearch Abstract:\nGlioblastomas are deadly cancers that display a functional cellular hierarchy maintained by self-renewing glioblastoma stem cells (GSCs).\nGSCs are regulated by molecular pathways distinct from the bulk tumor that may be useful therapeutic targets.\nWe determined that A20 (TNFAIP3), a regulator of cell survival and the NF-kappaB pathway, is overexpressed in GSCs relative to non-stem glioblastoma cells at both the mRNA and protein levels.\nTo determine the functional significance of A20 in GSCs, we targeted A20 expression with lentiviral-mediated delivery of short hairpin RNA (shRNA).\nInhibiting A20 expression decreased GSC growth and survival through mechanisms associated with decreased cell-cycle progression and decreased phosphorylation of p65/RelA. Elevated levels of A20 in GSCs contributed to apoptotic resistance: GSCs were less susceptible to TNFalpha-induced cell death than matched non-stem glioma cells, but A20 knockdown sensitized GSCs to TNFalpha-mediated apoptosis.\nThe decreased survival of GSCs upon A20 knockdown contributed to the reduced ability of these cells to self-renew in primary and secondary neurosphere formation assays.\nThe tumorigenic potential of GSCs was decreased with A20 targeting, resulting in increased survival of mice bearing human glioma xenografts.\nIn silico analysis of a glioma patient genomic database indicates that A20 overexpression and amplification is inversely correlated with survival.\nTogether these data indicate that A20 contributes to glioma maintenance through effects on the glioma stem cell subpopulation.\nAlthough inactivating mutations in A20 in lymphoma suggest A20 can act as a tumor suppressor, similar point mutations have not been identified through glioma genomic sequencing: in fact, our data suggest A20 may function as a tumor enhancer in glioma through promotion of GSC survival.\nA20 anticancer therapies should therefore be viewed with caution as effects will likely differ depending on the tumor type.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"The tumorigenic potential of GSCs was decreased with A20 targeting, resulting in increased survival of mice bearing human glioma xenografts.\"]}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: In particular, <cite>Naseem et al.</cite> showed that by selectively sharing parameters based on typological features of each language, substantial improvements can be achieved, compared to using a single set of parameters for all languages.
["Background"]
science.multicite_intent_classification.684
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nIn particular, <cite>Naseem et al.</cite> showed that by selectively sharing parameters based on typological features of each language, substantial improvements can be achieved, compared to using a single set of parameters for all languages.", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: M. stadtmanae induces ASC speck formation in BlaER1 monocytes. Research Abstract: Interleukin-1β (IL-1β) is a cytokine whose bioactivity is controlled by activation of the inflammasome. However, in response to lipopolysaccharide, human monocytes secrete IL-1β independently of classical inflammasome stimuli. Here, we report that this constituted a species-specific response that is not observed in the murine system. Indeed, in human monocytes, lipopolysaccharide triggered an "alternative inflammasome" that relied on NLRP3-ASC-caspase-1 signaling, yet was devoid of any classical inflammasome characteristics including pyroptosome formation, pyroptosis induction, and K(+) efflux dependency. Genetic dissection of the underlying signaling pathway in a monocyte transdifferentiation system revealed that alternative inflammasome activation was propagated by TLR4-TRIF-RIPK1-FADD-CASP8 signaling upstream of NLRP3. Importantly, involvement of this signaling cascade was limited to alternative inflammasome activation and did not extend to classical NLRP3 activation. Because alternative inflammasome activation embraces both sensitivity and promiscuity of TLR4, we propose a pivotal role for this signaling cascade in TLR4-driven, IL-1β-mediated immune responses and immunopathology in humans.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.467
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nM. stadtmanae induces ASC speck formation in BlaER1 monocytes.\n\nResearch Abstract:\nInterleukin-1β (IL-1β) is a cytokine whose bioactivity is controlled by activation of the inflammasome.\nHowever, in response to lipopolysaccharide, human monocytes secrete IL-1β independently of classical inflammasome stimuli.\nHere, we report that this constituted a species-specific response that is not observed in the murine system.\nIndeed, in human monocytes, lipopolysaccharide triggered an \"alternative inflammasome\" that relied on NLRP3-ASC-caspase-1 signaling, yet was devoid of any classical inflammasome characteristics including pyroptosome formation, pyroptosis induction, and K(+) efflux dependency.\nGenetic dissection of the underlying signaling pathway in a monocyte transdifferentiation system revealed that alternative inflammasome activation was propagated by TLR4-TRIF-RIPK1-FADD-CASP8 signaling upstream of NLRP3.\nImportantly, involvement of this signaling cascade was limited to alternative inflammasome activation and did not extend to classical NLRP3 activation.\nBecause alternative inflammasome activation embraces both sensitivity and promiscuity of TLR4, we propose a pivotal role for this signaling cascade in TLR4-driven, IL-1β-mediated immune responses and immunopathology in humans.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: Semantic Web documents that encode facts about entities on the Web have been growing rapidly in size and evolving over time. Creating summaries on lengthy Semantic Web documents for quick identification of the corresponding entity has been of great contemporary interest. In this paper, we explore automatic summa-rization techniques that characterize and enable identification of an entity and create summaries that are human friendly. Specifically, we highlight the importance of diversified (faceted) summaries by combining three dimensions: diversity, uniqueness, and popularity. Our novel diversity-aware entity summarization approach mimics human conceptual clustering techniques to group facts, and picks representative facts from each group to form concise (i.e., short) and comprehensive (i.e., improved coverage through diversity) summaries. We evaluate our approach against the state-of-the-art techniques and show that our work improves both the quality and the efficiency of entity summarization.
Output Entities: {"Generic": ["state-of-the-art techniques"], "Material": ["lengthy Semantic Web documents", "Semantic Web documents"], "Method": ["human conceptual clustering techniques", "diversity-aware entity summarization approach", "automatic summa-rization techniques"], "Metric": ["quality", "efficiency"], "OtherScientificTerm": ["uniqueness", "diversified (faceted) summaries", "popularity", "diversity"], "Task": ["entity summarization", "identification of the corresponding entity", "Creating summaries"]}
science.scierc_ner.17
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nSemantic Web documents that encode facts about entities on the Web have been growing rapidly in size and evolving over time. Creating summaries on lengthy Semantic Web documents for quick identification of the corresponding entity has been of great contemporary interest. In this paper, we explore automatic summa-rization techniques that characterize and enable identification of an entity and create summaries that are human friendly. Specifically, we highlight the importance of diversified (faceted) summaries by combining three dimensions: diversity, uniqueness, and popularity. Our novel diversity-aware entity summarization approach mimics human conceptual clustering techniques to group facts, and picks representative facts from each group to form concise (i.e., short) and comprehensive (i.e., improved coverage through diversity) summaries. We evaluate our approach against the state-of-the-art techniques and show that our work improves both the quality and the efficiency of entity summarization.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [\"state-of-the-art techniques\"], \"Material\": [\"lengthy Semantic Web documents\", \"Semantic Web documents\"], \"Method\": [\"human conceptual clustering techniques\", \"diversity-aware entity summarization approach\", \"automatic summa-rization techniques\"], \"Metric\": [\"quality\", \"efficiency\"], \"OtherScientificTerm\": [\"uniqueness\", \"diversified (faceted) summaries\", \"popularity\", \"diversity\"], \"Task\": [\"entity summarization\", \"identification of the corresponding entity\", \"Creating summaries\"]}", "role": "assistant" } ]
science.discomat_te
| Glass | lc (nm) | Positions of transitions in nm from 4A2g (F) to | Positions of transitions in nm from 4A2g (F) to | Positions of transitions in nm from 4A2g (F) to | Positions of transitions in nm from 4A2g (F) to | Positions of transitions in nm from 4A2g (F) to | Eg (eV)+-0.01 | DE (eV)+-0.001 | Lth +-0.0001 | | Glass | lc (nm) | 4T1g (P) | 4T1g (F) | 4T2g (F) | 2T1g (G) | 2Eg (G) | Eg (eV)+-0.01 | DE (eV)+-0.001 | Lth +-0.0001 | | C0 | 299 | - | - | - | - | - | 4.29 | 0.254 | 0.4474 | | C1 | 315 | 351 | 438 | 625 | 644 | 691 | 4.02 | 0.481 | 0.4480 | | C2 | 322 | 362 | 439 | 622 | 646 | 694 | 3.94 | 0.518 | 0.4493 | | C3 | 328 | 365 | 435 | 624 | 649 | 690 | 3.66 | 0.528 | 0.4506 | | C4 | 335 | 367 | 438 | 626 | 650 | 690 | 3.59 | 0.575 | 0.4524 | | C5 | 342 | 368 | 439 | 628 | 652 | 691 | 3.37 | 0.602 | 0.4537 | Caption: Cut off wavelength (lc), absorption band positions, optical band gap (Eg), Urbach energy (DE) and theoretical optical basicity (Lth) of the Li2O-PbO-B2O3-P2O5: Cr2O3 glasses. Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "299", "type": "Other"} {"value": "4.29", "type": "Other"} {"value": "0.254", "type": "Other"} {"value": "0.4474", "type": "Other"} {"value": "315", "type": "Other"} {"value": "351", "type": "Other"} {"value": "438", "type": "Other"} {"value": "625", "type": "Other"} {"value": "644", "type": "Other"} {"value": "691", "type": "Other"} {"value": "4.02", "type": "Other"} {"value": "0.481", "type": "Other"} {"value": "0.4480", "type": "Other"} {"value": "322", "type": "Other"} {"value": "362", "type": "Other"} {"value": "439", "type": "Other"} {"value": "622", "type": "Other"} {"value": "646", "type": "Other"} {"value": "694", "type": "Other"} {"value": "3.94", "type": "Other"} {"value": "0.518", "type": "Other"} {"value": "0.4493", "type": "Other"} {"value": "328", "type": "Other"} {"value": "365", "type": "Other"} {"value": "435", "type": "Other"} {"value": "624", "type": "Other"} {"value": "649", "type": "Other"} {"value": "690", "type": "Other"} {"value": "3.66", "type": "Other"} {"value": "0.528", "type": "Other"} {"value": "0.4506", "type": "Other"} {"value": "335", "type": "Other"} {"value": "367", "type": "Other"} {"value": "438", "type": "Other"} {"value": "626", "type": "Other"} {"value": "650", "type": "Other"} {"value": "690", "type": "Other"} {"value": "3.59", "type": "Other"} {"value": "0.575", "type": "Other"} {"value": "0.4524", "type": "Other"} {"value": "342", "type": "Other"} {"value": "368", "type": "Other"} {"value": "439", "type": "Other"} {"value": "628", "type": "Other"} {"value": "652", "type": "Other"} {"value": "691", "type": "Other"} {"value": "3.37", "type": "Other"} {"value": "0.602", "type": "Other"} {"value": "0.4537", "type": "Other"}
science.discomat_te.464
[ { "content": "| Glass | lc (nm) | Positions of transitions in nm from 4A2g (F) to | Positions of transitions in nm from 4A2g (F) to | Positions of transitions in nm from 4A2g (F) to | Positions of transitions in nm from 4A2g (F) to | Positions of transitions in nm from 4A2g (F) to | Eg (eV)+-0.01 | DE (eV)+-0.001 | Lth +-0.0001 |\n| Glass | lc (nm) | 4T1g (P) | 4T1g (F) | 4T2g (F) | 2T1g (G) | 2Eg (G) | Eg (eV)+-0.01 | DE (eV)+-0.001 | Lth +-0.0001 |\n| C0 | 299 | - | - | - | - | - | 4.29 | 0.254 | 0.4474 |\n| C1 | 315 | 351 | 438 | 625 | 644 | 691 | 4.02 | 0.481 | 0.4480 |\n| C2 | 322 | 362 | 439 | 622 | 646 | 694 | 3.94 | 0.518 | 0.4493 |\n| C3 | 328 | 365 | 435 | 624 | 649 | 690 | 3.66 | 0.528 | 0.4506 |\n| C4 | 335 | 367 | 438 | 626 | 650 | 690 | 3.59 | 0.575 | 0.4524 |\n| C5 | 342 | 368 | 439 | 628 | 652 | 691 | 3.37 | 0.602 | 0.4537 |\n\nCaption: Cut off wavelength (lc), absorption band positions, optical band gap (Eg), Urbach energy (DE) and theoretical optical basicity (Lth) of the Li2O-PbO-B2O3-P2O5: Cr2O3 glasses.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"299\", \"type\": \"Other\"}\n{\"value\": \"4.29\", \"type\": \"Other\"}\n{\"value\": \"0.254\", \"type\": \"Other\"}\n{\"value\": \"0.4474\", \"type\": \"Other\"}\n{\"value\": \"315\", \"type\": \"Other\"}\n{\"value\": \"351\", \"type\": \"Other\"}\n{\"value\": \"438\", \"type\": \"Other\"}\n{\"value\": \"625\", \"type\": \"Other\"}\n{\"value\": \"644\", \"type\": \"Other\"}\n{\"value\": \"691\", \"type\": \"Other\"}\n{\"value\": \"4.02\", \"type\": \"Other\"}\n{\"value\": \"0.481\", \"type\": \"Other\"}\n{\"value\": \"0.4480\", \"type\": \"Other\"}\n{\"value\": \"322\", \"type\": \"Other\"}\n{\"value\": \"362\", \"type\": \"Other\"}\n{\"value\": \"439\", \"type\": \"Other\"}\n{\"value\": \"622\", \"type\": \"Other\"}\n{\"value\": \"646\", \"type\": \"Other\"}\n{\"value\": \"694\", \"type\": \"Other\"}\n{\"value\": \"3.94\", \"type\": \"Other\"}\n{\"value\": \"0.518\", \"type\": \"Other\"}\n{\"value\": \"0.4493\", \"type\": \"Other\"}\n{\"value\": \"328\", \"type\": \"Other\"}\n{\"value\": \"365\", \"type\": \"Other\"}\n{\"value\": \"435\", \"type\": \"Other\"}\n{\"value\": \"624\", \"type\": \"Other\"}\n{\"value\": \"649\", \"type\": \"Other\"}\n{\"value\": \"690\", \"type\": \"Other\"}\n{\"value\": \"3.66\", \"type\": \"Other\"}\n{\"value\": \"0.528\", \"type\": \"Other\"}\n{\"value\": \"0.4506\", \"type\": \"Other\"}\n{\"value\": \"335\", \"type\": \"Other\"}\n{\"value\": \"367\", \"type\": \"Other\"}\n{\"value\": \"438\", \"type\": \"Other\"}\n{\"value\": \"626\", \"type\": \"Other\"}\n{\"value\": \"650\", \"type\": \"Other\"}\n{\"value\": \"690\", \"type\": \"Other\"}\n{\"value\": \"3.59\", \"type\": \"Other\"}\n{\"value\": \"0.575\", \"type\": \"Other\"}\n{\"value\": \"0.4524\", \"type\": \"Other\"}\n{\"value\": \"342\", \"type\": \"Other\"}\n{\"value\": \"368\", \"type\": \"Other\"}\n{\"value\": \"439\", \"type\": \"Other\"}\n{\"value\": \"628\", \"type\": \"Other\"}\n{\"value\": \"652\", \"type\": \"Other\"}\n{\"value\": \"691\", \"type\": 
\"Other\"}\n{\"value\": \"3.37\", \"type\": \"Other\"}\n{\"value\": \"0.602\", \"type\": \"Other\"}\n{\"value\": \"0.4537\", \"type\": \"Other\"}", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: RESULTS: The correct diagnosis based on the classic triad of shock, acute abdominal pain, and pulsatile abdominal mass was made in only one of 19 (5.3%) patients. Only 50% of abdominal aortic aneurysms present with the classic triad of hypotension, back pain and a pulsatile abdominal mass. Some of these patients present with the classic triad of symptoms such as abdominal pain, pulsatile abdominal mass and shock. The clinical onset was characterized by the unique syndrome: continuous abdominal bruit, abdominal and left flank pain with an associated pulsatile mass (Mansour Triad). Most AAAs rupture into the retroperitoneal cavity, which results in the classical triad of pain, hypotension, and a pulsatile mass. Rupture of an abdominal aortic aneurysm is readily diagnosed when the triad of abdominal or back pain, shock and a pulsatile abdominal mass are present. Rupture of an abdominal aortic aneurysm is readily diagnosed when the triad of abdominal or back pain , shock and a pulsatile abdominal mass are present Rupture of an abdominal aortic aneurysm is readily diagnosed when the triad of abdominal or back pain, shock and a pulsatile abdominal mass are present. Only 50% of abdominal aortic aneurysms present with the classic triad of hypotension, back pain and a pulsatile abdominal mass. Some of these patients present with the classic triad of symptoms such as abdominal pain, pulsatile abdominal mass and shock. Question: Which symptoms comprise Abdominal aortic aneurysm rupture Triad?
["shock", "acute abdominal pain", "pulsatile abdominal mass"]
science.bioasq_list_qa.688
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: RESULTS: The correct diagnosis based on the classic triad of shock, acute abdominal pain, and pulsatile abdominal mass was made in only one of 19 (5.3%) patients.\nOnly 50% of abdominal aortic aneurysms present with the classic triad of hypotension, back pain and a pulsatile abdominal mass.\nSome of these patients present with the classic triad of symptoms such as abdominal pain, pulsatile abdominal mass and shock.\nThe clinical onset was characterized by the unique syndrome: continuous abdominal bruit, abdominal and left flank pain with an associated pulsatile mass (Mansour Triad).\nMost AAAs rupture into the retroperitoneal cavity, which results in the classical triad of pain, hypotension, and a pulsatile mass.\nRupture of an abdominal aortic aneurysm is readily diagnosed when the triad of abdominal or back pain, shock and a pulsatile abdominal mass are present.\nRupture of an abdominal aortic aneurysm is readily diagnosed when the triad of abdominal or back pain , shock and a pulsatile abdominal mass are present\nRupture of an abdominal aortic aneurysm is readily diagnosed when the triad of abdominal or back pain, shock and a pulsatile abdominal mass are present.\nOnly 50% of abdominal aortic aneurysms present with the classic triad of hypotension, back pain and a pulsatile abdominal mass.\nSome of these patients present with the classic triad of symptoms such as abdominal pain, pulsatile abdominal mass and shock.\n\nQuestion: Which symptoms comprise Abdominal aortic aneurysm rupture Triad?", "role": "user" }, { "content": "[\"shock\", \"acute abdominal pain\", \"pulsatile abdominal mass\"]", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Datasets To evaluate our model on the word description task on WordNet, we followed <cite>Noraset et al. (2017)</cite> and extracted data from WordNet using the dict-definition 9 toolkit.
["Uses"]
science.multicite_intent_classification.667
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nDatasets To evaluate our model on the word description task on WordNet, we followed <cite>Noraset et al. (2017)</cite> and extracted data from WordNet using the dict-definition 9 toolkit.", "role": "user" }, { "content": "[\"Uses\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Indacaterol is a novel, once-daily (o.d.) inhaled, long-acting β2-agonist in development for chronic obstructive pulmonary disease (COPD). This 12-week, double-blind study compared the efficacy, safety, and tolerability of indacaterol to that of placebo in patients with moderate-to-severe COPD. ABSTRACT.METHODS: Efficacy variables included 24-h trough FEV1 (mean of 23 h 10 min and 23 h 45 min post-dose) at Week 12 (primary endpoint) and after Day 1, and the percentage of COPD days with poor control (i.e., worsening symptoms). Safety was assessed by adverse events (AEs), mean serum potassium and blood glucose, QTc (Fridericia), and vital signs. ABSTRACT.RESULTS: Patients were randomised (n = 416, mean age 63 years) to receive either indacaterol 150 μg o.d. (n = 211) or placebo (n = 205) via a single-dose dry-powder inhaler; 87.5% completed the study. Trough FEV1 (LSM ± SEM) at Week 12 was 1.48 ± 0.018 L for indacaterol and 1.35 ± 0.019 L for placebo, a clinically relevant difference of 130 ± 24 mL (p < 0.001). Trough FEV1 after one dose was significantly higher with indacaterol than placebo (p < 0.001). Indacaterol demonstrated significantly higher peak FEV1 than placebo, both on Day 1 and at Week 12, with indacaterol-placebo differences (LSM ± SEM) of 190 ± 28 (p < 0.001) and 160 ± 28 mL (p < 0.001), respectively. Standardised AUC measurements for FEV1 (between 5 min and 4 h, 5 min and 1 h, and 1 and 4 h post-dose) at Week 12 were all significantly greater with indacaterol than placebo (p < 0.001), with LSM (± SEM) differences of 170 ± 24, 180 ± 24, and 170 ± 24 mL, respectively. Indacaterol significantly reduced the percentage of days of poor control versus placebo by 22.5% (p < 0.001) and was also associated with significantly reduced use of rescue medication (p < 0.001). The overall rates of AEs were comparable between the groups (indacaterol 49.3%, placebo 46.8%), with the most common AEs being COPD worsening (indacaterol 8.5%, placebo 12.2%) and cough (indacaterol 6.2%, placebo 7.3%). One patient died in the placebo group. Serum potassium and blood glucose levels did not differ significantly between the two groups, and no patient had QTc >500 ms. ABSTRACT.CONCLUSIONS: Indacaterol 150 μg o.d. provided clinically significant and sustained bronchodilation, reduced rescue medication use, and had a safety and tolerability profile similar to placebo. ABSTRACT.TRIAL REGISTRATION: NCT00624286 Extracted Data:
[["indacaterol 150 \u03bcg o.d", "placebo", "FEV1 after one dose", "significantly increased", "Trough FEV1 after one dose was significantly higher with indacaterol than placebo (p \u003c 0.001)."], ["indacaterol 150 \u03bcg o.d", "placebo", "Standardised AUC measurements for FEV1 at Week 12", "significantly increased", "Standardised AUC measurements for FEV1 (between 5 min and 4 h, 5 min and 1 h, and 1 and 4 h post-dose) at Week 12 were all significantly greater with indacaterol than placebo (p \u003c 0.001), with LSM (\u00b1 SEM) differences of 170 \u00b1 24, 180 \u00b1 24, and 170 \u00b1 24 mL, respectively."], ["indacaterol 150 \u03bcg o.d", "placebo", "forced expiratory volume in 1 s (FEV1) at Week 12", "significantly increased", "Trough FEV1 (LSM \u00b1 SEM) at Week 12 was 1.48 \u00b1 0.018 L for indacaterol and 1.35 \u00b1 0.019 L for placebo, a clinically relevant difference of 130 \u00b1 24 mL (p \u003c 0.001)."], ["indacaterol 150 \u03bcg o.d", "placebo", "The overall rate of adverse events (ASs)", "no significant difference", "The overall rates of AEs were comparable between the groups (indacaterol 49.3%, placebo 46.8%)"]]
science.evidence_inference.67
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Indacaterol is a novel, once-daily (o.d.) inhaled, long-acting β2-agonist in development for chronic obstructive pulmonary disease (COPD). This 12-week, double-blind study compared the efficacy, safety, and tolerability of indacaterol to that of placebo in patients with moderate-to-severe COPD.\n\nABSTRACT.METHODS: Efficacy variables included 24-h trough FEV1 (mean of 23 h 10 min and 23 h 45 min post-dose) at Week 12 (primary endpoint) and after Day 1, and the percentage of COPD days with poor control (i.e., worsening symptoms). Safety was assessed by adverse events (AEs), mean serum potassium and blood glucose, QTc (Fridericia), and vital signs.\n\nABSTRACT.RESULTS: Patients were randomised (n = 416, mean age 63 years) to receive either indacaterol 150 μg o.d. (n = 211) or placebo (n = 205) via a single-dose dry-powder inhaler; 87.5% completed the study. Trough FEV1 (LSM ± SEM) at Week 12 was 1.48 ± 0.018 L for indacaterol and 1.35 ± 0.019 L for placebo, a clinically relevant difference of 130 ± 24 mL (p < 0.001). Trough FEV1 after one dose was significantly higher with indacaterol than placebo (p < 0.001). Indacaterol demonstrated significantly higher peak FEV1 than placebo, both on Day 1 and at Week 12, with indacaterol-placebo differences (LSM ± SEM) of 190 ± 28 (p < 0.001) and 160 ± 28 mL (p < 0.001), respectively. Standardised AUC measurements for FEV1 (between 5 min and 4 h, 5 min and 1 h, and 1 and 4 h post-dose) at Week 12 were all significantly greater with indacaterol than placebo (p < 0.001), with LSM (± SEM) differences of 170 ± 24, 180 ± 24, and 170 ± 24 mL, respectively. Indacaterol significantly reduced the percentage of days of poor control versus placebo by 22.5% (p < 0.001) and was also associated with significantly reduced use of rescue medication (p < 0.001). The overall rates of AEs were comparable between the groups (indacaterol 49.3%, placebo 46.8%), with the most common AEs being COPD worsening (indacaterol 8.5%, placebo 12.2%) and cough (indacaterol 6.2%, placebo 7.3%). One patient died in the placebo group. Serum potassium and blood glucose levels did not differ significantly between the two groups, and no patient had QTc >500 ms.\n\nABSTRACT.CONCLUSIONS: Indacaterol 150 μg o.d. 
provided clinically significant and sustained bronchodilation, reduced rescue medication use, and had a safety and tolerability profile similar to placebo.\n\nABSTRACT.TRIAL REGISTRATION: NCT00624286\n\nExtracted Data:", "role": "user" }, { "content": "[[\"indacaterol 150 \\u03bcg o.d\", \"placebo\", \"FEV1 after one dose\", \"significantly increased\", \"Trough FEV1 after one dose was significantly higher with indacaterol than placebo (p \\u003c 0.001).\"], [\"indacaterol 150 \\u03bcg o.d\", \"placebo\", \"Standardised AUC measurements for FEV1 at Week 12\", \"significantly increased\", \"Standardised AUC measurements for FEV1 (between 5 min and 4 h, 5 min and 1 h, and 1 and 4 h post-dose) at Week 12 were all significantly greater with indacaterol than placebo (p \\u003c 0.001), with LSM (\\u00b1 SEM) differences of 170 \\u00b1 24, 180 \\u00b1 24, and 170 \\u00b1 24 mL, respectively.\"], [\"indacaterol 150 \\u03bcg o.d\", \"placebo\", \"forced expiratory volume in 1 s (FEV1) at Week 12\", \"significantly increased\", \"Trough FEV1 (LSM \\u00b1 SEM) at Week 12 was 1.48 \\u00b1 0.018 L for indacaterol and 1.35 \\u00b1 0.019 L for placebo, a clinically relevant difference of 130 \\u00b1 24 mL (p \\u003c 0.001).\"], [\"indacaterol 150 \\u03bcg o.d\", \"placebo\", \"The overall rate of adverse events (ASs)\", \"no significant difference\", \"The overall rates of AEs were comparable between the groups (indacaterol 49.3%, placebo 46.8%)\"]]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND:: The aim of this study was to evaluate whether an eight-week group mindfulness-based stress reduction program would be an acceptable and effective treatment for patients suffering from GAD. ABSTRACT.METHODS:: Eligible subjects with generalized anxiety disorder were recruited to Parsa Hospital and Shahid Rajaee Counseling and Social Work Center from June 2009 to October 2011. An experienced psychiatrist diagnosed these patients with a structured clinical interview for axis I disorders and a clinical psychologist conducted a clinical interview for in order to confirm the diagnosis. Subjects were randomized to an eight week course of group mindfulness based stress reduction program (16 subjects) or a control group (15 subjects). Subjects completed measures of anxiety, depressive symptoms and worry at baseline and end of treatment. To investigate the relationship between anxiety, depression and worry symptoms and mindfulness based stress reduction as well as controlling pretest effect, data were analyzed with analysis of covariance. ABSTRACT.RESULTS:: There were significant reductions in anxiety (P<0/001), depressive (P<0/001) and worry symptoms (P<0/001) from baseline to end of treatment. ABSTRACT.CONCLUSION:: Mindfulness- based stress reduction may be an acceptable intervention in the improvement of generalized anxiety disorder symptoms. Future trials include development of randomized clinical trials of MBSR for GAD. Extracted Data:
[["mindfulness based stress reduction program", "control", "depression, anxiety, and worry symptoms", "significantly decreased", "There were significant reductions in anxiety (P\u003c0/001), depressive (P\u003c0/001) and worry symptoms (P\u003c0/001) from baseline to end of treatment."]]
science.evidence_inference.239
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND:: The aim of this study was to evaluate whether an eight-week group mindfulness-based stress reduction program would be an acceptable and effective treatment for patients suffering from GAD.\n\nABSTRACT.METHODS:: Eligible subjects with generalized anxiety disorder were recruited to Parsa Hospital and Shahid Rajaee Counseling and Social Work Center from June 2009 to October 2011. An experienced psychiatrist diagnosed these patients with a structured clinical interview for axis I disorders and a clinical psychologist conducted a clinical interview for in order to confirm the diagnosis. Subjects were randomized to an eight week course of group mindfulness based stress reduction program (16 subjects) or a control group (15 subjects). Subjects completed measures of anxiety, depressive symptoms and worry at baseline and end of treatment. To investigate the relationship between anxiety, depression and worry symptoms and mindfulness based stress reduction as well as controlling pretest effect, data were analyzed with analysis of covariance.\n\nABSTRACT.RESULTS:: There were significant reductions in anxiety (P<0/001), depressive (P<0/001) and worry symptoms (P<0/001) from baseline to end of treatment.\n\nABSTRACT.CONCLUSION:: Mindfulness- based stress reduction may be an acceptable intervention in the improvement of generalized anxiety disorder symptoms. Future trials include development of randomized clinical trials of MBSR for GAD.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"mindfulness based stress reduction program\", \"control\", \"depression, anxiety, and worry symptoms\", \"significantly decreased\", \"There were significant reductions in anxiety (P\\u003c0/001), depressive (P\\u003c0/001) and worry symptoms (P\\u003c0/001) from baseline to end of treatment.\"]]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Reduced adiponectin expression after high-fat diet is associated with selective up-regulation of ALDH1A1 and further retinoic acid receptor signaling in adipose tissue. Adiponectin is an adipocyte-derived adipokine with potent antidiabetic, anti-inflammatory, and antiatherogenic activity. Long-term, high-fat diet results in gain of body weight, adiposity, further inflammatory-based cardiovascular diseases, and reduced adiponectin secretion. Vitamin A derivatives/retinoids are involved in several of these processes, which mainly take place in white adipose tissue (WAT). In this study, we examined adiponectin expression as a function of dietary high-fat and high-vitamin A conditions in mice. A decrease of adiponectin expression in addition to an up-regulation of aldehyde dehydrogenase A1 (ALDH1A1), retinoid signaling, and retinoic acid response element signaling was selectively observed in WAT of mice fed a normal-vitamin A, high-fat diet. Reduced adiponectin expression in WAT was also observed in mice fed a high-vitamin A diet. Adipocyte cell culture revealed that endogenous and synthetic retinoic acid receptor (RAR)a- and RARg-selective agonists, as well as a synthetic retinoid X receptor agonist, efficiently reduced adiponectin expression, whereas ALDH1A1 expression only increased with RAR agonists. We conclude that reduced adiponectin expression under high-fat dietary conditions is dependent on 1) increased ALDH1A1 expression in adipocytes, which does not increase all-trans-retinoic acid levels; 2) further RAR ligand-induced, WAT-selective, increased retinoic acid response element-mediated signaling; and 3) RAR ligand-dependent reduction of adiponectin expression.-Landrier, J.-F., Kasiri, E., Karkeni, E., Mihaly, J., Beke, G., Weiss, K., Lucas, R., Aydemir, G., Salles, J., Walrand, S., de Lera, A. R., Ruhl, R. Reduced adiponectin expression after high-fat diet is associated with selective up-regulation of ALDH1A1 and further retinoic acid receptor signaling in adipose tissue.
{"CellLine": [], "Chemical": ["Vitamin A", "vitamin A", "retinoids", "retinoic acid"], "Disease": ["inflammatory", "adiposity", "cardiovascular diseases"], "Gene": ["retinoid X receptor", "retinoic acid receptor (RAR)a", "retinoic acid receptor", "Adiponectin", "RARg", "aldehyde dehydrogenase A1", "adipokine", "adiponectin", "RAR", "ALDH1A1"], "Species": ["mice"], "Variant": []}
science.biored_ner.68
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nReduced adiponectin expression after high-fat diet is associated with selective up-regulation of ALDH1A1 and further retinoic acid receptor signaling in adipose tissue. Adiponectin is an adipocyte-derived adipokine with potent antidiabetic, anti-inflammatory, and antiatherogenic activity. Long-term, high-fat diet results in gain of body weight, adiposity, further inflammatory-based cardiovascular diseases, and reduced adiponectin secretion. Vitamin A derivatives/retinoids are involved in several of these processes, which mainly take place in white adipose tissue (WAT). In this study, we examined adiponectin expression as a function of dietary high-fat and high-vitamin A conditions in mice. A decrease of adiponectin expression in addition to an up-regulation of aldehyde dehydrogenase A1 (ALDH1A1), retinoid signaling, and retinoic acid response element signaling was selectively observed in WAT of mice fed a normal-vitamin A, high-fat diet. Reduced adiponectin expression in WAT was also observed in mice fed a high-vitamin A diet. Adipocyte cell culture revealed that endogenous and synthetic retinoic acid receptor (RAR)a- and RARg-selective agonists, as well as a synthetic retinoid X receptor agonist, efficiently reduced adiponectin expression, whereas ALDH1A1 expression only increased with RAR agonists. We conclude that reduced adiponectin expression under high-fat dietary conditions is dependent on 1) increased ALDH1A1 expression in adipocytes, which does not increase all-trans-retinoic acid levels; 2) further RAR ligand-induced, WAT-selective, increased retinoic acid response element-mediated signaling; and 3) RAR ligand-dependent reduction of adiponectin expression.-Landrier, J.-F., Kasiri, E., Karkeni, E., Mihaly, J., Beke, G., Weiss, K., Lucas, R., Aydemir, G., Salles, J., Walrand, S., de Lera, A. R., Ruhl, R. Reduced adiponectin expression after high-fat diet is associated with selective up-regulation of ALDH1A1 and further retinoic acid receptor signaling in adipose tissue.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"Vitamin A\", \"vitamin A\", \"retinoids\", \"retinoic acid\"], \"Disease\": [\"inflammatory\", \"adiposity\", \"cardiovascular diseases\"], \"Gene\": [\"retinoid X receptor\", \"retinoic acid receptor (RAR)a\", \"retinoic acid receptor\", \"Adiponectin\", \"RARg\", \"aldehyde dehydrogenase A1\", \"adipokine\", \"adiponectin\", \"RAR\", \"ALDH1A1\"], \"Species\": [\"mice\"], \"Variant\": []}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning Main Content: 1 Introduction . The availability of historically collected data is a common scenario in many real-world decisionmaking problems , including medical treatments [ e.g. , 17 , 67 ] , recommendation systems [ e.g. , 33 , 16 ] , personalized advertising [ e.g. , 3 , 60 ] , finance [ e.g. , 43 ] , and industrial robot control [ e.g. , 27 , 26 ] . Historical data can be leveraged to address two classes of problems . First , given data collected with a behavioral policy , we want to estimate the performance of a different target policy . This problem is known as off-policy evaluation [ Off-PE , 21 ] . Second , we want to employ the available data to improve the performance of a baseline policy . This latter problem is named off-policy learning [ OffPL 14 ] . Off-policy methods are studied by both the reinforcement learning [ RL , 58 ] and contextual multi-armed bandit [ CMAB , 30 ] communities . Given its intrinsic simplicity compared to RL , offpolicy methods are nowadays well understood in the CMAB framework [ e.g. , 44 , 1 , 14 , 64 ] . Among them , the doubly robust estimator [ DR , 14 ] is one of the most promising off-policy methods for CMABs . DR combines a direct method ( DM ) , in which the reward is estimated from historical data via regression , with an importance sampling [ IS , 46 ] control variate . IS plays a crucial role in the off-policy methods and counterfactual reasoning . However , IS tends to exhibit problematic behavior for general distributions . This is formalized by its heavy-tailed properties [ 40 ] , which prevent the application of exponential concentration bounds [ 4 ] . To cope with this issue , typically , corrections are performed on the importance weight including truncation [ 23 ] and self-normalization [ SN , 46 ] , among the most popular . Significant results have recently been derived for both techniques [ 47 , 29 , 39 ] . Nevertheless , we believe that the widespread use of IS calls for a better theoretical understanding of its properties and for the design of principled weight corrections . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . Defining the desirable properties of an off-policy estimator is a non-trivial task . Some works employed the mean squared error ( MSE ) as an index of the estimator quality [ 34 , 64 ] . However , controlling the MSE , while effectively capturing the bias-variance trade-off , does not provide any guarantee on the concentration properties of the estimator , which might still display a heavy-tailed behavior [ 37 ] . For this reason , we believe that a more suitable approach is to require that the estimator deviations concentrate at a subgaussian rate [ 12 ] . Subgaussianity implicitly controls the tail behavior and leads to tight exponential concentration inequalities . Unlike MSE , the probabilistic requirements are nonasymptotic ( finite-sample ) , from which guarantees on the MSE can be derived . While subgaussianity can be considered a satisfactory requirement for Off-PE , additional properties are advisable when switching to Off-PL . In particular , the differentiability w.r.t . the target policy parameters is desirable whenever Off-PL is carried out via gradient optimization . 
For instance, weight truncation, as presented in [47], allows achieving subgaussianity but leads to a non-differentiable objective. Consequently, the optimization phase requires additional care, which sometimes leads to computationally heavy discretizations [47]. On the contrary, the SN estimator is differentiable in the target policy, but fails to achieve subgaussian concentration for general distributions. In this paper, we take a step towards a better understanding of IS. After having introduced the necessary background (Section 2), we derive an anticoncentration bound for mean estimation with vanilla IS. We show that polynomial concentration (Chebyshev's inequality) is tight in this setting (Section 3). This result formalizes the intuition behind the undesired behavior of these estimators for general distributions. Hence, we propose a class of importance weight corrections, based on the notion of power mean (Section 4). The rationale behind these corrections is to "smoothly shrink" the weights towards the mean, with different intensities. In this way, we mitigate the heavy-tailed behavior and, in the meantime, we exert control over the induced bias. Then, we derive bounds on the bias and variance that allow obtaining an exponential concentration inequality and, under certain conditions, subgaussian concentration (Section 5). Furthermore, the smooth transformation allows preserving the differentiability in the target policy, unlike some existing transformations, like weight truncation. To the best of our knowledge, this is the first IS correction that preserves differentiability and is proved to be subgaussian. This correction, however, requires knowledge of a distributional divergence between the target and behavioral policies, which may be unknown or difficult to compute. To this end, we introduce an approach to empirically estimate the correction parameter, preserving the desirable concentration properties (Section 6). After providing a comparative review of the literature (Section 7), we present an experimental study comparing our approach with traditional and modern off-policy baselines on synthetic domains and in the CMAB framework (Section 8). The proofs of the results presented in the main paper can be found in Appendix A. A preliminary version of this work was presented at the "Workshop on Reinforcement Learning Theory" of ICML 2021 [42] (https://lyang36.github.io/icml2021_rltheory/camera_ready/7.pdf). 2 Preliminaries. We start by introducing the background on probability, importance sampling, and contextual bandits. Probability. We denote with $\mathcal{P}(Y)$ the set of probability measures over a measurable space $(Y, \mathcal{F}_Y)$. Let $P \in \mathcal{P}(Y)$, let $f: Y \rightarrow \mathbb{R}$ be a function, and let $\mu_n$ be an estimator for the mean $\mu = \mathbb{E}_{y \sim P}[f(y)]$ obtained with $n$ i.i.d. samples. Suppose that with probability $1-\delta$ it holds that $|\mu_n - \mu| \le \sqrt{g(n, \delta)}$. For $\beta > 0$, we say that $\mu_n$ admits: (i) polynomial concentration if $g(n, \delta) = O\big(1/(n\delta)^{\beta}\big)$; (ii) exponential concentration if $g(n, \delta) = O\big((\log(1/\delta)/n)^{\beta}\big)$; (iii) subgaussian concentration if (ii) holds with $\beta = 1$ [37]. These cases correspond to Chebyshev's, Bernstein's, and Hoeffding's inequalities respectively [4]. Importance Sampling. Let $P, Q \in \mathcal{P}(Y)$ admit $p$ and $q$ as density functions; if $P \ll Q$, i.e., $P$ is absolutely continuous w.r.t. $Q$, for any $\alpha \in (1, 2]$ we introduce the integral $I_{\alpha}(P \| Q) = \int_{Y} p(y)^{\alpha} q(y)^{1-\alpha} \, \mathrm{d}y$. If $P = Q$ a.s. (almost surely) then $I_{\alpha}(P \| Q) = 1$.
$I_{\alpha}(P \| Q)$ allows defining several divergences, like the Rényi divergence [51]: $(\alpha - 1)^{-1} \log I_{\alpha}(P \| Q)$. Let $f: Y \rightarrow \mathbb{R}$ be a function; (vanilla) importance sampling [IS, 46] allows estimating the expectation of $f$ under the target distribution $P$, i.e., $\mu = \mathbb{E}_{y \sim P}[f(y)]$, using i.i.d. samples $\{y_i\}_{i \in [n]}$ collected with the behavioral distribution $Q$: $\hat{\mu}_n = \frac{1}{n} \sum_{i \in [n]} \omega(y_i) f(y_i)$, where $\omega(y) = p(y)/q(y)$ for all $y \in Y$. It is well known that $\hat{\mu}_n$ is unbiased, i.e., $\mathbb{E}_{y_i \sim Q}[\hat{\mu}_n] = \mu$ [46]. If $f$ is bounded, the variance of the estimator can be upper-bounded as $\mathrm{Var}_{y_i \sim Q}[\hat{\mu}_n] \le \frac{1}{n} \|f\|_{\infty}^{2} I_2(P \| Q)$ [40]. More generally, the integral $I_{\alpha}(P \| Q)$ represents the $\alpha$-moment of the importance weight $\omega(y)$ under $Q$. Contextual Bandits. A contextual multi-armed bandit [CMAB, 30] is represented by the tuple $\mathcal{C} = (X, A, \rho, p)$, where $X$ is the set of contexts, $A$ is the finite set of actions (or arms), $\rho \in \mathcal{P}(X)$ is the context distribution, and $p: X \times A \rightarrow \mathcal{P}(\mathbb{R})$ is the reward distribution. The agent's behavior is encoded by a policy $\pi: X \rightarrow \mathcal{P}(A)$. At each round $t \in \mathbb{N}$, the agent observes a context $x_t \sim \rho$, plays an action $a_t \sim \pi(\cdot|x_t)$, gets the reward $r_t \sim p(\cdot|x_t, a_t)$, and the system moves on to the next round. The value of a policy $\pi$ is given by $v(\pi) = \int_X \rho(x) \sum_{a \in A} \pi(a|x) \int_{\mathbb{R}} p(r|x,a)\, r \, \mathrm{d}r \, \mathrm{d}x$. We denote with $r(x,a) = \int_{\mathbb{R}} p(r|x,a)\, r \, \mathrm{d}r$ the reward function. A policy $\pi^{*}$ is optimal if it maximizes the value function, i.e., $\pi^{*} \in \operatorname{argmax}_{\pi \in \Pi} v(\pi)$, where $\Pi = \{\pi: X \rightarrow \mathcal{P}(A)\}$ is the set of all policies. Let $D = \{(x_t, a_t, r_t)\}_{t \in [n]}$ be a dataset of samples collected in a CMAB with a behavioral policy $\pi_b \in \Pi$. The off-policy evaluation [Off-PE, 21] problem consists in estimating the value function $v(\pi_e)$ of a target policy $\pi_e \in \Pi$ using the samples in $D$. The off-policy learning [Off-PL, 14] problem consists in estimating an optimal policy $\pi^{*} \in \Pi$ using the samples in $D$. The simplest approach to address the Off-PE/Off-PL problem is to learn from $D$ an estimate $\hat{r}(x,a)$ of the reward function $r(x,a)$ via regression. This approach is known as the direct method (DM) and its properties heavily depend on the quality of the estimate $\hat{r}$. Another approach is to simply apply IS to reweight the samples of $D$, leading to the inverse propensity scoring [IS, 17] estimator. The two approaches are combined in the doubly-robust [DR, 14] estimator, in which the DM estimate is corrected with an IS control variate to reduce the variance using the estimated reward $\hat{r}$ (see also Table 12 in Appendix D). 3 Anticoncentration of Vanilla Importance Sampling. In this section, we analyze the intrinsic limitations of vanilla IS. It is well known that, under the assumption that for some $\alpha \in (1,2]$ the divergence $I_{\alpha}(P \| Q)$ is finite and $f$ is bounded, the vanilla IS estimator $\hat{\mu}_n$ admits polynomial concentration, i.e., with probability at least $1-\delta$ (see footnote 2): $|\hat{\mu}_n - \mu| \le \|f\|_{\infty} \left( \frac{2^{2-\alpha} I_{\alpha}(P \| Q)}{\delta n^{\alpha-1}} \right)^{\frac{1}{\alpha}}$. (1) We now show that the concentration in Equation (1) is tight, by deriving an anticoncentration bound for $|\hat{\mu}_n - \mu|$; then, we discuss its implications and compare it with previous works. Theorem 3.1. There exist two distributions $P, Q \in \mathcal{P}(Y)$ with $P \ll Q$ and a bounded measurable function $f: Y \rightarrow \mathbb{R}$ such that for every $\alpha \in (1,2]$ and $\delta \in (0, e^{-1})$, if $n \ge \delta e \max\{1, (I_{\alpha}(P \| Q) - 1)^{\frac{1}{\alpha-1}}\}$, then with probability at least $\delta$ it holds that: $|\hat{\mu}_n - \mu| \ge \|f\|_{\infty} \left( \frac{I_{\alpha}(P \| Q) - 1}{\delta n^{\alpha-1}} \right)^{\frac{1}{\alpha}} \left( 1 - \frac{e\delta}{n} \right)^{\frac{n-1}{\alpha}}$. First of all, we note the polynomial dependence on the confidence level $\delta$.
The bound is vacuous when $I_{\alpha}(P \| Q) = 1$, i.e., when $P = Q$ a.s., since we are then in an on-policy setting and, the function $f$ being bounded, subgaussian concentration bounds (like Hoeffding's inequality) hold. In particular, for $\alpha = 2$ and $n$ and $I_2(P \| Q)$ sufficiently large, the bound has order $O\big(\sqrt{I_2(P \| Q)/(\delta n)}\big)$, matching Chebyshev's inequality and the existing concentration inequalities for vanilla importance sampling [40, 41]. Our result is of independent interest and applies to general distributions. Previous works considered the MAB [34] and CMAB [64] settings, proving minimax lower bounds on the mean squared error (MSE) $\mathbb{E}_{y \sim Q}[(\hat{\mu}_n - \mu)^2]$. These results differ from ours in several respects. First, we focus on a specific estimator, the vanilla one, while those results are minimax. Second, they provide lower bounds on the MSE, while we focus on the deviations in probability (see footnote 3). From our probabilistic result, it is immediate to derive an MSE guarantee (Corollary A.1 of Appendix A.1). Finally, they assume that the second moment of the importance weight $I_2(P \| Q)$ is finite, whereas our result allows considering scenarios in which only moments of order $\alpha < 2$ are finite. Footnote 2: The original result [41, Theorem 2] was limited to $\alpha = 2$ and based on Cantelli's inequality, which approaches Chebyshev's when $\delta \rightarrow 0$. See Theorem A.1 in Appendix A.1 for a proof of Equation (1). Footnote 3: As noted in [36], when the estimator is not well-concentrated around its mean (e.g., in the presence of heavy tails), the MSE is not adequate to capture the error and high-probability bounds are more advisable.
The paper addresses off-policy evaluation. It proposes to transform importance weights using the function $\frac{\omega(y)}{1-\lambda + \lambda \omega(y)}$, where $\omega(y) = p(y)/q(y)$ is the standard importance weight. The estimator which uses these transformed weights is analyzed both theoretically and empirically. The obtained estimator is unique in having both subgaussian concentration rate and being differentiable. The estimator depends on the hyperparameter $\lambda$, for which the authors give an estimation procedure that only depends on observable quantities.
science.mup_single_document_summarization.12098
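The gold summary above refers to the paper's corrected importance weight $\frac{\omega(y)}{1-\lambda + \lambda \omega(y)}$. As a rough illustration of why such a shrinkage tames heavy-tailed weights, the following minimal Python sketch compares the vanilla IS estimator defined in the record's preliminaries with the $\lambda$-corrected variant on a toy Gaussian shift; the choice of distributions, the bounded test function, and the fixed $\lambda$ are illustrative assumptions, not taken from the paper or its code.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(y, mean):
    # Unit-variance Gaussian density (toy choice for behavioral/target).
    return np.exp(-0.5 * (y - mean) ** 2) / np.sqrt(2.0 * np.pi)

def f(y):
    # A bounded test function, as assumed throughout the record's Section 2.
    return np.clip(y, -2.0, 2.0)

def importance_weights(y):
    # omega(y) = p(y) / q(y) with target P = N(1, 1), behavioral Q = N(0, 1).
    return gauss_pdf(y, 1.0) / gauss_pdf(y, 0.0)

def vanilla_is(y):
    # mu_hat = (1/n) * sum_i omega(y_i) * f(y_i): unbiased but heavy-tailed.
    return np.mean(importance_weights(y) * f(y))

def corrected_is(y, lam=0.1):
    # Shrunk weight omega / (1 - lam + lam * omega) from the summary above.
    # lam is hand-picked here; the paper estimates it from observable quantities.
    w = importance_weights(y)
    return np.mean(w / (1.0 - lam + lam * w) * f(y))

y = rng.normal(0.0, 1.0, size=5000)  # i.i.d. draws from the behavioral Q
print(f"vanilla IS:   {vanilla_is(y):.3f}")
print(f"corrected IS: {corrected_is(y):.3f}")
```

Because the corrected weight is a smooth function of $\omega$, it remains differentiable in the target-policy parameters, which is the property the summary highlights for off-policy learning.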
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: Background/Objectives. Balance problems are well-established modifiable risk factors for falls, which are common in older adults. The objective of this study was to establish the efficacy of a Wii-Fit interactive video-game-led physical exercise program to improve balance in older Veterans. Methods. A prospective randomized controlled parallel-group trial was conducted at Veterans Affairs Medical Center. Thirty community dwelling Veterans aged 68 (±6.7) years were randomized to either the exercise or control groups. The exercise group performed Wii-Fit program while the control group performed a computer-based cognitive program for 45 minutes, three days per week for 8-weeks. The primary (Berg Balance Scale (BBS)) and secondary outcomes (fear of falling, physical activity enjoyment, and quality of life) were measured at baseline, 4 weeks, and 8 weeks. Results. Of 30 randomized subjects, 27 completed all aspects of the study protocol. There were no study-related adverse events. Intent-to-treat analysis showed a significantly greater improvement in BBS in the exercise group (6.0; 95% CI, 5.1–6.9) compared to the control group (0.5; 95% CI, −0.3–1.3) at 8 weeks (average intergroup difference (95% CI), 5.5 (4.3–6.7), p < 0.001) after adjusting for baseline. Conclusion. This study establishes that the Wii-Fit exercise program is efficacious in improving balance in community dwelling older Veterans. This trial is registered with ClinicalTrials.gov Identifier NCT02190045. Extracted Data:
[["Wi-Fit exercise program", "Control", "Berg Balance Scale score improvement 8 weeks", "significantly increased", "Intent-to-treat analysis showed a significantly greater improvement in BBS in the exercise group (6.0; 95% CI, 5.1\u20136.9) compared to the control group (0.5; 95% CI, \u22120.3\u20131.3) at 8 weeks (average intergroup difference (95% CI), 5.5 (4.3\u20136.7), p \u003c 0.001)"], ["Wi-Fit exercise program", "Control", "Adverse events", "no significant difference", "There were no study-related adverse events."]]
science.evidence_inference.86
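The prompt in this record fixes a rigid output schema: a JSON array of [treatment, control, result, impact, justification] tuples, with impact restricted to "increased", "decreased", or "no change". A small, hypothetical validator such as the sketch below can flag malformed extractions before scoring; the function name and error messages are illustrative assumptions, not part of the dataset. Notably, the gold answer above uses labels like "significantly increased", which a strict check against the stated vocabulary would reject.

```python
import json

ALLOWED_IMPACTS = {"increased", "decreased", "no change"}

def validate_extraction(raw: str, strict: bool = False):
    """Parse a model's output and check the 5-field tuple schema."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("top level must be a JSON array")
    for i, row in enumerate(rows):
        if not (isinstance(row, list) and len(row) == 5):
            raise ValueError(f"row {i}: expected 5 fields, got {row!r}")
        treatment, control, result, impact, justification = row
        if control is not None and not isinstance(control, str):
            raise ValueError(f"row {i}: control must be a string or null")
        if strict and impact not in ALLOWED_IMPACTS:
            raise ValueError(f"row {i}: impact {impact!r} not in {ALLOWED_IMPACTS}")
    return rows

# Example against (an abbreviated form of) the gold answer shown above:
# non-strict mode passes; strict mode would reject "significantly increased".
gold = '[["Wi-Fit exercise program", "Control", "BBS improvement 8 weeks", "significantly increased", "..."]]'
print(len(validate_extraction(gold)))
```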
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Predicting Annotation Difficulty to Improve Task Routing and Model Performance for Biomedical Information Extraction Modern NLP systems require high-quality annotated data. In specialized domains, expert annotations may be prohibitively expensive. An alternative is to rely on crowdsourcing to reduce costs at the risk of introducing noise. In this paper we demonstrate that directly modeling instance difficulty can be used to improve model performance, and to route instances to appropriate annotators. Our difficulty prediction model combines two learned representations: a `universal' encoder trained on out-of-domain data, and a task-specific encoder. Experiments on a complex biomedical information extraction task using expert and lay annotators show that: (i) simply excluding from the training data instances predicted to be difficult yields a small boost in performance; (ii) using difficulty scores to weight instances during training provides further, consistent gains; (iii) assigning instances predicted to be difficult to domain experts is an effective strategy for task routing. Our experiments confirm the expectation that for specialized tasks expert annotations are higher quality than crowd labels, and hence preferable to obtain if practical. Moreover, augmenting small amounts of expert data with a larger set of lay annotations leads to further improvements in model performance. Introduction Assembling training corpora of annotated natural language examples in specialized domains such as biomedicine poses considerable challenges. Experts with the requisite domain knowledge to perform high-quality annotation tend to be expensive, while lay annotators may not have the necessary knowledge to provide high-quality annotations. A practical approach for collecting a sufficiently large corpus would be to use crowdsourcing platforms like Amazon Mechanical Turk (MTurk). However, crowd workers in general are likely to provide noisy annotations BIBREF0 , BIBREF1 , BIBREF2 , an issue exacerbated by the technical nature of specialized content. Some of this noise may reflect worker quality and can be modeled BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 , but for some instances lay people may simply lack the domain knowledge to provide useful annotation. In this paper we report experiments on the EBM-NLP corpus comprising crowdsourced annotations of medical literature BIBREF5 . We operationalize the concept of annotation difficulty and show how it can be exploited during training to improve information extraction models. We then obtain expert annotations for the abstracts predicted to be most difficult, as well as for a similar number of randomly selected abstracts. The annotation of highly specialized data and the use of lay and expert annotators allow us to examine the following key questions related to lay and expert annotations in specialized domains: Can we predict item difficulty? We define a training instance as difficult if a lay annotator or an automated model disagree on its labeling. We show that difficulty can be predicted, and that it is distinct from inter-annotator agreement. Further, such predictions can be used during training to improve information extraction models. Are there systematic differences between expert and lay annotations? We observe decidedly lower agreement between lay workers as compared to domain experts. 
Lay annotations have high precision but low recall with respect to expert annotations in the new data that we collected. More generally, we expect lay annotations to be lower quality, which may translate to lower precision, recall, or both, compared to expert annotations. Can one rely solely on lay annotations? Reasonable models can be trained using lay annotations alone, but similar performance can be achieved using markedly less expert data. This suggests that the optimal ratio of expert to crowd annotations for specialized tasks will depend on the cost and availability of domain experts. Expert annotations are preferable whenever its collection is practical. But in real-world settings, a combination of expert and lay annotations is better than using lay data alone. Does it matter what data is annotated by experts? We demonstrate that a system trained on combined data achieves better predictive performance when experts annotate difficult examples rather than instances selected at i.i.d. random. Our contributions in this work are summarized as follows. We define a task difficulty prediction task and show how this is related to, but distinct from, inter-worker agreement. We introduce a new model for difficulty prediction combining learned representations induced via a pre-trained `universal' sentence encoder BIBREF6 , and a sentence encoder learned from scratch for this task. We show that predicting annotation difficulty can be used to improve the task routing and model performance for a biomedical information extraction task. Our results open up a new direction for ensuring corpus quality. We believe that item difficulty prediction will likely be useful in other, non-specialized tasks as well, and that the most effective data collection in specialized domains requires research addressing the fundamental questions we examine here. Related Work Crowdsourcing annotation is now a well-studied problem BIBREF7 , BIBREF0 , BIBREF1 , BIBREF2 . Due to the noise inherent in such annotations, there have also been considerable efforts to develop aggregation models that minimize noise BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 . There are also several surveys of crowdsourcing in biomedicine specifically BIBREF8 , BIBREF9 , BIBREF10 . Some work in this space has contrasted model performance achieved using expert vs. crowd annotated training data BIBREF11 , BIBREF12 , BIBREF13 . Dumitrache et al. Dumitrache:2018:CGT:3232718.3152889 concluded that performance is similar under these supervision types, finding no clear advantage from using expert annotators. This differs from our findings, perhaps owing to differences in design. The experts we used already hold advanced medical degrees, for instance, while those in prior work were medical students. Furthermore, the task considered here would appear to be of greater difficulty: even a system trained on $\sim $ 5k instances performs reasonably, but far from perfect. By contrast, in some of the prior work where experts and crowd annotations were deemed equivalent, a classifier trained on 300 examples can achieve very high accuracy BIBREF12 . More relevant to this paper, prior work has investigated methods for `task routing' in active learning scenarios in which supervision is provided by heterogeneous labelers with varying levels of expertise BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF14 . 
The related question of whether effort is better spent collecting additional annotations for already labeled (but potentially noisily so) examples or novel instances has also been addressed BIBREF18 . What distinguishes the work here is our focus on providing an operational definition of instance difficulty, showing that this can be predicted, and then using this to inform task routing. Application Domain Our specific application concerns annotating abstracts of articles that describe the conduct and results of randomized controlled trials (RCTs). Experimentation in this domain has become easy with the recent release of the EBM-NLP BIBREF5 corpus, which includes a reasonably large training dataset annotated via crowdsourcing, and a modest test set labeled by individuals with advanced medical training. More specifically, the training set comprises 4,741 medical article abstracts with crowdsourced annotations indicating snippets (sequences) that describe the Participants (p), Interventions (i), and Outcome (o) elements of the respective RCT, and the test set is composed of 191 abstracts with p, i, o sequence annotations from three medical experts. Table 1 shows an example of difficult and easy examples according to our definition of difficulty. The underlined text demarcates the (consensus) reference label provided by domain experts. In the difficult examples, crowd workers marked text distinct from these reference annotations; whereas in the easy cases they reproduced them with reasonable fidelity. The difficult sentences usually exhibit complicated structure and feature jargon. An abstract may contain some `easy' and some `difficult' sentences. We thus perform our analysis at the sentence level. We split abstracts into sentences using spaCy. We excluded sentences that comprise fewer than two tokens, as these are likely an artifact of errors in sentence splitting. In total, this resulted in 57,505 and 2,428 sentences in the train and test set abstracts, respectively. Predicting Annotation Difficulty We treat difficulty prediction as a regression problem, and propose and evaluate neural model variants for the task. We first train RNN BIBREF19 and CNN BIBREF20 models. We also use the universal sentence encoder (USE) BIBREF6 to induce sentence representations, and train a model using these as features. Following BIBREF6 , we then experiment with an ensemble model that combines the `universal' and task-specific representations to predict annotation difficulty. We expect these universal embeddings to capture general, high-level semantics, and the task specific representations to capture more granular information. Figure 2 depicts the model architecture. Sentences are fed into both the universal sentence encoder and, separately, a task specific neural encoder, yielding two representations. We concatenate these and pass the combined vector to the regression layer. Re-weighting by Difficulty We showed above that removing a small number of the most difficult sentences does not harm, and in fact modestly improves, medical IE model performance. However, using the available data we are unable to test if this will be useful in practice, as we would need additional data to determine how many difficult sentences should be dropped. We instead explore an alternative, practical means of exploiting difficulty predictions: we re-weight sentences during training inversely to their predicted difficulty. 
Formally, we weight sentence $i$ with difficulty scores above $\tau $ according to: $1-a\cdot (d_i-\tau )/(1-\tau )$ , where $d_i$ is the difficulty score for sentence $i$ , and $a$ is a parameter codifying the minimum weight value. We set $\tau $ to 0.8 so as to only re-weight sentences with difficulty in the top 20th percentile, and we set $a$ to 0.5. The re-weighting is equivalent to down-sampling the difficult sentences. LSTM-CRF-Pattern is our base model. Table 4 reports the precision, recall and F1 achieved both with and without sentence re-weighting. Re-weighting improves all metrics modestly but consistently. All F1 differences are statistically significant under a sign test ( $p<0.01$ ). The model with best precision is different for Patient, Intervention and Outcome labels. However re-weighting by difficulty does consistently yield the best recall for all three extraction types, with the most notable improvement for i and o, where recall improved by 10 percentage points. This performance increase translated to improvements in F1 across all types, as compared to the base model and to re-weighting by agreement. Expert annotations of Random and Difficult Instances We re-annotate by experts a subset of most difficult instances and the same number of random instances. As collecting annotations from experts is slow and expensive, we only re-annotate the difficult instances for the interventions extraction task. We re-annotate the abstracts which cover the sentences with predicted difficulty scores in the top 5 percentile. We rank the abstracts from the training set by the count of difficult sentences, and re-annotate the abstracts that contain the most difficult sentences. Constrained by time and budget, we select only 2000 abstracts for re-annotation; 1000 of these are top-ranked, and 1000 are randomly sampled. This re-annotation cost $3,000. We have released the new annotation data at: https://github.com/bepnye/EBM-NLP. Following BIBREF5 , we recruited five medical experts via Up-work with advanced medical training and strong technical reading/writing skills. The expert annotator were asked to read the entire abstract and highlight, using the BRAT toolkit BIBREF26 , all spans describing medical Interventions. Each abstract is only annotated by one expert. We examined 30 re-annotated abstracts to ensure the annotation quality before hiring the annotator. Table 5 presents the results of LSTM-CRF-Pattern model trained on the reannotated difficult subset and the random subset. The first two rows show the results for models trained with expert annotations. The model trained on random data has a slightly better F1 than that trained on the same amount of difficult data. The model trained on random data has higher precision but lower recall. Rows 3 and 4 list the results for models trained on the same data but with crowd annotation. Models trained with expert-annotated data are clearly superior to those trained with crowd labels with respect to F1, indicating that the experts produced higher quality annotations. For crowdsourced annotations, training the model with data sampled at i.i.d. random achieves 2% higher F1 than when difficult instances are used. When expert annotations are used, this difference is less than 1%. This trend in performance may be explained by differences in annotation quality: the randomly sampled set was more consistently annotated by both experts and crowd because the difficult set is harder. 
However, in both cases expert annotations are better, with a bigger difference between the expert and crowd models on the difficult set. The last row is the model trained on all 5k abstracts with crowd annotations. Its F1 score is lower than either expert model trained on only 20% of data, suggesting that expert annotations should be collected whenever possible. Again the crowd model on complete data has higher precision than expert models but its recall is much lower. Table 2: Average inter-worker agreement. Figure 1: Distributions of difficulty scores over all sentences that contain any span annotations Table 4: Medical IE performance by re-weighting sentences according to predicted agreement or difficulty scores. Table 5: Interventions IE model performance trained crowd or expert. The first four models are trained with a subset of 1k abstracts and the base model is trained with all 5k abstracts. Query: How much data is needed to train the task-specific encoder? JSON Response:
{"answer": "57,505 sentences", "evidence": ["Our contributions in this work are summarized as follows. We define a task difficulty prediction task and show how this is related to, but distinct from, inter-worker agreement. We introduce a new model for difficulty prediction combining learned representations induced via a pre-trained `universal\u0027 sentence encoder BIBREF6 , and a sentence encoder learned from scratch for this task. We show that predicting annotation difficulty can be used to improve the task routing and model performance for a biomedical information extraction task. Our results open up a new direction for ensuring corpus quality. We believe that item difficulty prediction will likely be useful in other, non-specialized tasks as well, and that the most effective data collection in specialized domains requires research addressing the fundamental questions we examine here.", "An abstract may contain some `easy\u0027 and some `difficult\u0027 sentences. We thus perform our analysis at the sentence level. We split abstracts into sentences using spaCy. We excluded sentences that comprise fewer than two tokens, as these are likely an artifact of errors in sentence splitting. In total, this resulted in 57,505 and 2,428 sentences in the train and test set abstracts, respectively."]}
science.qasper_abstractive_qa.273.1
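The paper text in this record gives a concrete rule for down-weighting difficult training sentences: weight $1-a\cdot (d_i-\tau )/(1-\tau )$ for sentences whose predicted difficulty $d_i$ exceeds $\tau$, with $\tau = 0.8$ and $a = 0.5$. A minimal sketch of that rule follows; keeping weight 1.0 at or below the threshold is the natural reading of the text rather than an explicit statement, and the helper name is ours.

```python
def sentence_weight(d: float, tau: float = 0.8, a: float = 0.5) -> float:
    """Training weight for a sentence with predicted difficulty d in [0, 1].

    Sentences at or below the threshold tau keep full weight; above it, the
    weight decays linearly from 1.0 down to 1 - a at d = 1.0, which is
    equivalent to down-sampling the most difficult sentences.
    """
    if d <= tau:
        return 1.0
    return 1.0 - a * (d - tau) / (1.0 - tau)

# Spot checks of the linear decay described in the paper text.
assert sentence_weight(0.5) == 1.0
assert abs(sentence_weight(0.9) - 0.75) < 1e-12
assert abs(sentence_weight(1.0) - 0.5) < 1e-12
```

These weights would then scale each sentence's contribution during training of the LSTM-CRF-Pattern tagger described in the record.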
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: The LSTM architecture used in this experiment is similar to that of<cite> Bowman et al. (2015b)</cite> .
["Similarities"]
science.multicite_intent_classification.503
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe LSTM architecture used in this experiment is similar to that of<cite> Bowman et al. (2015b)</cite> .", "role": "user" }, { "content": "[\"Similarities\"]", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: Evidence for somatic mutation and affinity maturation of diabetes associated human autoantibodies to glutamate decarboxylase. The circulating autoantibodies to multiple islet autoantigens including GAD, insulin, and IA-2 are the important immunological features of type 1 diabetes. 100% (39/39) selected type 1 diabetes mellitus (DM) patients, Autoantibodies to the GM2-1 islet ganglioside and to GAD-65 at type 1 diabetes onset. Autoantibodies to the smaller isoform of glutamate decarboxylase (GAD) can be found in patients with type 1 diabetes and a number of neurological disorders, including stiff-person syndrome, cerebellar ataxia and limbic encephalitis. Anti-glutamic acid decarboxylase antibody (GAD-ab)-associated cerebellar ataxia is a rare neurological disorder characterized by cerebellar symptoms concomitant with high GAD-ab levels in serum and cerebrospinal fluid (CSF). The unique association of autoantibody-mediated cerebellar ataxia and late-onset T1DM in 2 siblings with similar clinical and paraclinical phenotypes strengthens the concept that hereditary factors might play a relevant role also in autoimmune diseases so far considered to be sporadic. Question: List autoimmune disorders associated with GAD65 autoantibodies.
["type 1 diabetes", "stiff-person syndrome", "cerebellar ataxia", "limbic encephalitis"]
science.bioasq_list_qa.273
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Evidence for somatic mutation and affinity maturation of diabetes associated human autoantibodies to glutamate decarboxylase.\nThe circulating autoantibodies to multiple islet autoantigens including GAD, insulin, and IA-2 are the important immunological features of type 1 diabetes.\n100% (39/39) selected type 1 diabetes mellitus (DM) patients,\nAutoantibodies to the GM2-1 islet ganglioside and to GAD-65 at type 1 diabetes onset.\nAutoantibodies to the smaller isoform of glutamate decarboxylase (GAD) can be found in patients with type 1 diabetes and a number of neurological disorders, including stiff-person syndrome, cerebellar ataxia and limbic encephalitis.\nAnti-glutamic acid decarboxylase antibody (GAD-ab)-associated cerebellar ataxia is a rare neurological disorder characterized by cerebellar symptoms concomitant with high GAD-ab levels in serum and cerebrospinal fluid (CSF).\nThe unique association of autoantibody-mediated cerebellar ataxia and late-onset T1DM in 2 siblings with similar clinical and paraclinical phenotypes strengthens the concept that hereditary factors might play a relevant role also in autoimmune diseases so far considered to be sporadic.\n\nQuestion: List autoimmune disorders associated with GAD65 autoantibodies.", "role": "user" }, { "content": "[\"type 1 diabetes\", \"stiff-person syndrome\", \"cerebellar ataxia\", \"limbic encephalitis\"]", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Hyperparameters We use the same setup as<cite> Plank and Agić (2018)</cite> , i.e., 10 epochs, word dropout rate (p=.25) and l=40-dimensional lexicon embeddings for DSDS, except for downscaling the hidden dimensionality of the character representations from 100 to 32 dimensions.
["Uses"]
science.multicite_intent_classification.1239
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nHyperparameters We use the same setup as<cite> Plank and Agić (2018)</cite> , i.e., 10 epochs, word dropout rate (p=.25) and l=40-dimensional lexicon embeddings for DSDS, except for downscaling the hidden dimensionality of the character representations from 100 to 32 dimensions.", "role": "user" }, { "content": "[\"Uses\"]", "role": "assistant" } ]
science.discomat_te
| Sample | Strong OH% | Weak OH% | Free OH% | Total area (a.u.) | R 2 | | T2.14 air | 12.86 | 71.36 | 15.78 | 190 | 0.99621 | | T2.14 glove box | 4.87 | 63.4 | 31.72 | 24.9 | 0.99623 | Caption: Relative area of the bands assigned to free, weak and strong OH groups. Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "12.86", "type": "Other"} {"value": "71.36", "type": "Other"} {"value": "15.78", "type": "Other"} {"value": "190", "type": "Other"} {"value": "0.99621", "type": "Other"} {"value": "4.87", "type": "Other"} {"value": "63.4", "type": "Other"} {"value": "31.72", "type": "Other"} {"value": "24.9", "type": "Other"} {"value": "0.99623", "type": "Other"}
science.discomat_te.184
[ { "content": "| Sample | Strong OH% | Weak OH% | Free OH% | Total area (a.u.) | R 2 |\n| T2.14 air | 12.86 | 71.36 | 15.78 | 190 | 0.99621 |\n| T2.14 glove box | 4.87 | 63.4 | 31.72 | 24.9 | 0.99623 |\n\nCaption: Relative area of the bands assigned to free, weak and strong OH groups.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"12.86\", \"type\": \"Other\"}\n{\"value\": \"71.36\", \"type\": \"Other\"}\n{\"value\": \"15.78\", \"type\": \"Other\"}\n{\"value\": \"190\", \"type\": \"Other\"}\n{\"value\": \"0.99621\", \"type\": \"Other\"}\n{\"value\": \"4.87\", \"type\": \"Other\"}\n{\"value\": \"63.4\", \"type\": \"Other\"}\n{\"value\": \"31.72\", \"type\": \"Other\"}\n{\"value\": \"24.9\", \"type\": \"Other\"}\n{\"value\": \"0.99623\", \"type\": \"Other\"}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps Concept maps can be used to concisely represent important information and bring structure into large document collections. Therefore, we study a variant of multi-document summarization that produces summaries in the form of concept maps. However, suitable evaluation datasets for this task are currently missing. To close this gap, we present a newly created corpus of concept maps that summarize heterogeneous collections of web documents on educational topics. It was created using a novel crowdsourcing approach that allows us to efficiently determine important elements in large document collections. We release the corpus along with a baseline system and proposed evaluation protocol to enable further research on this variant of summarization. Task Concept-map-based MDS is defined as follows: Given a set of related documents, create a concept map that represents its most important content, satisfies a specified size limit and is connected. We define a concept map as a labeled graph showing concepts as nodes and relationships between them as edges. Labels are arbitrary sequences of tokens taken from the documents, making the summarization task extractive. A concept can be an entity, abstract idea, event or activity, designated by its unique label. Good maps should be propositionally coherent, meaning that every relation together with the two connected concepts form a meaningful proposition. The task is complex, consisting of several interdependent subtasks. One has to extract appropriate labels for concepts and relations and recognize different expressions that refer to the same concept across multiple documents. Further, one has to select the most important concepts and relations for the summary and finally organize them in a graph satisfying the connectedness and size constraints. Related Work Some attempts have been made to automatically construct concept maps from text, working with either single documents BIBREF14 , BIBREF9 , BIBREF15 , BIBREF16 or document clusters BIBREF17 , BIBREF18 , BIBREF19 . These approaches extract concept and relation labels from syntactic structures and connect them to build a concept map. However, common task definitions and comparable evaluations are missing. In addition, only a few of them, namely Villalon.2012 and Valerio.2006, define summarization as their goal and try to compress the input to a substantially smaller size. Our newly proposed task and the created large-cluster dataset fill these gaps as they emphasize the summarization aspect of the task. For the subtask of selecting summary-worthy concepts and relations, techniques developed for traditional summarization BIBREF20 and keyphrase extraction BIBREF21 are related and applicable. Approaches that build graphs of propositions to create a summary BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 seem to be particularly related, however, there is one important difference: While they use graphs as an intermediate representation from which a textual summary is then generated, the goal of the proposed task is to create a graph that is directly interpretable and useful for a user. In contrast, these intermediate graphs, e.g. AMR, are hardly useful for a typical, non-linguist user. For traditional summarization, the most well-known datasets emerged out of the DUC and TAC competitions. 
They provide clusters of news articles with gold-standard summaries. Extending these efforts, several more specialized corpora have been created: With regard to size, Nakano.2010 present a corpus of summaries for large-scale collections of web pages. Recently, corpora with more heterogeneous documents have been suggested, e.g. BIBREF26 and BIBREF27 . The corpus we present combines these aspects, as it has large clusters of heterogeneous documents, and provides a necessary benchmark to evaluate the proposed task. For concept map generation, one corpus with human-created summary concept maps for student essays has been created BIBREF28 . In contrast to our corpus, it only deals with single documents, requires a two orders of magnitude smaller amount of compression of the input and is not publicly available . Other types of information representation that also model concepts and their relationships are knowledge bases, such as Freebase BIBREF29 , and ontologies. However, they both differ in important aspects: Whereas concept maps follow an open label paradigm and are meant to be interpretable by humans, knowledge bases and ontologies are usually more strictly typed and made to be machine-readable. Moreover, approaches to automatically construct them from text typically try to extract as much information as possible, while we want to summarize a document. Task Design We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 ). In preliminary tests, we found that this design, despite the minimal context, works reasonably on our focused clusters on common educational topics. For instance, consider Figure FIGREF4 : One can easily say that P1 is more important than P2 without reading the documents. We distinguish two task variants: Instead of enforcing binary importance decisions, we use a 5-point Likert-scale to allow more fine-grained annotations. The obtained labels are translated into scores (5..1) and the average of all scores for a proposition is used as an estimate for its importance. This follows the idea that while single workers might find the task subjective, the consensus of multiple workers, represented in the average score, tends to be less subjective due to the “wisdom of the crowd”. We randomly group five propositions into a task. As an alternative, we use a second task design based on pairwise comparisons. Comparisons are known to be easier to make and more consistent BIBREF32 , but also more expensive, as the number of pairs grows quadratically with the number of objects. To reduce the cost, we group five propositions into a task and ask workers to order them by importance per drag-and-drop. From the results, we derive pairwise comparisons and use TrueSkill BIBREF35 , a powerful Bayesian rank induction model BIBREF34 , to obtain importance estimates for each proposition. Pilot Study To verify the proposed approach, we conducted a pilot study on Amazon Mechanical Turk using data from TAC2008 BIBREF36 . 
We collected importance estimates for 474 propositions extracted from the first three clusters using both task designs. Each Likert-scale task was assigned to 5 different workers and awarded $0.06. For comparison tasks, we also collected 5 labels each, paid $0.05 and sampled around 7% of all possible pairs. We submitted them in batches of 100 pairs and selected pairs for subsequent batches based on the confidence of the TrueSkill model. Following the observations of Lloret.2013, we established several measures for quality control. First, we restricted our tasks to workers from the US with an approval rate of at least 95%. Second, we identified low quality workers by measuring the correlation of each worker's Likert-scores with the average of the other four scores. The worst workers (at most 5% of all labels) were removed. In addition, we included trap sentences, similar as in BIBREF13 , in around 80 of the tasks. In contrast to Lloret et al.'s findings, both an obvious trap sentence (This sentence is not important) and a less obvious but unimportant one (Barack Obama graduated from Harvard Law) were consistently labeled as unimportant (1.08 and 1.14), indicating that the workers did the task properly. For Likert-scale tasks, we follow Snow.2008 and calculate agreement as the average Pearson correlation of a worker's Likert-score with the average score of the remaining workers. This measure is less strict than exact label agreement and can account for close labels and high- or low-scoring workers. We observe a correlation of 0.81, indicating substantial agreement. For comparisons, the majority agreement is 0.73. To further examine the reliability of the collected data, we followed the approach of Kiritchenko.2016 and simply repeated the crowdsourcing for one of the three topics. Between the importance estimates calculated from the first and second run, we found a Pearson correlation of 0.82 (Spearman 0.78) for Likert-scale tasks and 0.69 (Spearman 0.66) for comparison tasks. This shows that the approach, despite the subjectiveness of the task, allows us to collect reliable annotations. In addition to the reliability studies, we extrinsically evaluated the annotations in the task of summary evaluation. For each of the 58 peer summaries in TAC2008, we calculated a score as the sum of the importance estimates of the propositions it contains. Table TABREF13 shows how these peer scores, averaged over the three topics, correlate with the manual responsiveness scores assigned during TAC in comparison to ROUGE-2 and Pyramid scores. The results demonstrate that with both task designs, we obtain importance annotations that are similarly useful for summary evaluation as pyramid annotations or gold-standard summaries (used for ROUGE). Based on the pilot study, we conclude that the proposed crowdsourcing scheme allows us to obtain proper importance annotations for propositions. As workers are not required to read all documents, the annotation is much more efficient and scalable as with traditional methods. Corpus Creation This section presents the corpus construction process, as outlined in Figure FIGREF16 , combining automatic preprocessing, scalable crowdsourcing and high-quality expert annotations to be able to scale to the size of our document clusters. For every topic, we spent about $150 on crowdsourcing and 1.5h of expert annotations, while just a single annotator would already need over 8 hours (at 200 words per minute) to read all documents of a topic. 
Source Data As a starting point, we used the DIP corpus BIBREF37 , a collection of 49 clusters of 100 web pages on educational topics (e.g. bullying, homeschooling, drugs) with a short description of each topic. It was created from a large web crawl using state-of-the-art information retrieval. We selected 30 of the topics for which we created the necessary concept map annotations. Proposition Filtering Despite the similarity of the Open IE paradigm, not every extracted tuple is a suitable proposition for a concept map. To reduce the effort in the subsequent steps, we therefore want to filter out unsuitable ones. A tuple is suitable if it (1) is a correct extraction, (2) is meaningful without any context and (3) has arguments that represent proper concepts. We created a guideline explaining when to label a tuple as suitable for a concept map and performed a small annotation study. Three annotators independently labeled 500 randomly sampled tuples. The agreement was 82% ( INLINEFORM0 ). We found tuples to be unsuitable mostly because they had unresolvable pronouns, conflicting with (2), or arguments that were full clauses or propositions, conflicting with (3), while (1) was mostly taken care of by the confidence filtering in § SECREF21 . Due to the high number of tuples we decided to automate the filtering step. We trained a linear SVM on the majority voted annotations. As features, we used the extraction confidence, length of arguments and relations as well as part-of-speech tags, among others. To ensure that the automatic classification does not remove suitable propositions, we tuned the classifier to avoid false negatives. In particular, we introduced class weights, improving precision on the negative class at the cost of a higher fraction of positive classifications. Additionally, we manually verified a certain number of the most uncertain negative classifications to further improve performance. When 20% of the classifications are manually verified and corrected, we found that our model trained on 350 labeled instances achieves 93% precision on negative classifications on the unseen 150 instances. We found this to be a reasonable trade-off of automation and data quality and applied the model to the full dataset. The classifier filtered out 43% of the propositions, leaving 1622 per topic. We manually examined the 17k least confident negative classifications and corrected 955 of them. We also corrected positive classifications for certain types of tuples for which we knew the classifier to be imprecise. Finally, each topic was left with an average of 1554 propositions (47k in total). Importance Annotation Given the propositions identified in the previous step, we now applied our crowdsourcing scheme as described in § SECREF4 to determine their importance. To cope with the large number of propositions, we combine the two task designs: First, we collect Likert-scores from 5 workers for each proposition, clean the data and calculate average scores. Then, using only the top 100 propositions according to these scores, we crowdsource 10% of all possible pairwise comparisons among them. Using TrueSkill, we obtain a fine-grained ranking of the 100 most important propositions. For Likert-scores, the average agreement over all topics is 0.80, while the majority agreement for comparisons is 0.78. We repeated the data collection for three randomly selected topics and found the Pearson correlation between both runs to be 0.73 (Spearman 0.73) for Likert-scores and 0.72 (Spearman 0.71) for comparisons. 
These figures show that the crowdsourcing approach works on this dataset as reliably as on the TAC documents. In total, we uploaded 53k scoring and 12k comparison tasks to Mechanical Turk, spending $4425.45 including fees. From the fine-grained ranking of the 100 most important propositions, we select the top 50 per topic to construct a summary concept map in the subsequent steps. Proposition Revision Having a manageable number of propositions, an annotator then applied a few straightforward transformations that correct common errors of the Open IE system. First, we break down propositions with conjunctions in either of the arguments into separate propositions per conjunct, which the Open IE system sometimes fails to do. And second, we correct span errors that might occur in the argument or relation phrases, especially when sentences were not properly segmented. As a result, we have a set of high quality propositions for our concept map, consisting of, due to the first transformation, 56.1 propositions per topic on average. Corpus Analysis In this section, we describe our newly created corpus, which, in addition to having summaries in the form of concept maps, differs from traditional summarization corpora in several aspects. Baseline Experiments In this section, we briefly describe a baseline and evaluation scripts that we release, with a detailed documentation, along with the corpus. Conclusion In this work, we presented low-context importance annotation, a novel crowdsourcing scheme that we used to create a new benchmark corpus for concept-map-based MDS. The corpus has large-scale document clusters of heterogeneous web documents, posing a challenging summarization task. Together with the corpus, we provide implementations of a baseline method and evaluation scripts and hope that our efforts facilitate future research on this variant of summarization. Acknowledgments We would like to thank Teresa Botschen, Andreas Hanselowski and Markus Zopf for their help with the annotation work and Christian Meyer for his valuable feedback. This work has been supported by the German Research Foundation as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. GRK 1994/1. Figure 1: Elements of a concept map. Figure 2: Excerpt from a summary concept map on the topic “students loans without credit history”. Figure 3: Likert-scale crowdsourcing task with topic description and two example propositions. Table 1: Correlation of peer scores with manual responsiveness scores on TAC2008 topics 01-03. Figure 4: Steps of the corpus creation (with references to the corresponding sections). Table 3: Size of concept maps (mean with std). Table 4: Baseline performance on test set. Query: How were crowd workers instructed to identify important elements in large document collections? JSON Response:
{"answer": "They break down the task of importance annotation to the level of single propositions and obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary.", "evidence": ["We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster\u0027s topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 )."]}
science.qasper_abstractive_qa.238.2
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nBringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps\n\nConcept maps can be used to concisely represent important information and bring structure into large document collections. Therefore, we study a variant of multi-document summarization that produces summaries in the form of concept maps. However, suitable evaluation datasets for this task are currently missing. To close this gap, we present a newly created corpus of concept maps that summarize heterogeneous collections of web documents on educational topics. It was created using a novel crowdsourcing approach that allows us to efficiently determine important elements in large document collections. We release the corpus along with a baseline system and proposed evaluation protocol to enable further research on this variant of summarization.\n\nTask\nConcept-map-based MDS is defined as follows: Given a set of related documents, create a concept map that represents its most important content, satisfies a specified size limit and is connected.\nWe define a concept map as a labeled graph showing concepts as nodes and relationships between them as edges. Labels are arbitrary sequences of tokens taken from the documents, making the summarization task extractive. A concept can be an entity, abstract idea, event or activity, designated by its unique label. Good maps should be propositionally coherent, meaning that every relation together with the two connected concepts form a meaningful proposition.\nThe task is complex, consisting of several interdependent subtasks. One has to extract appropriate labels for concepts and relations and recognize different expressions that refer to the same concept across multiple documents. Further, one has to select the most important concepts and relations for the summary and finally organize them in a graph satisfying the connectedness and size constraints.\n\nRelated Work\nSome attempts have been made to automatically construct concept maps from text, working with either single documents BIBREF14 , BIBREF9 , BIBREF15 , BIBREF16 or document clusters BIBREF17 , BIBREF18 , BIBREF19 . These approaches extract concept and relation labels from syntactic structures and connect them to build a concept map. However, common task definitions and comparable evaluations are missing. In addition, only a few of them, namely Villalon.2012 and Valerio.2006, define summarization as their goal and try to compress the input to a substantially smaller size. Our newly proposed task and the created large-cluster dataset fill these gaps as they emphasize the summarization aspect of the task.\nFor the subtask of selecting summary-worthy concepts and relations, techniques developed for traditional summarization BIBREF20 and keyphrase extraction BIBREF21 are related and applicable. Approaches that build graphs of propositions to create a summary BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 seem to be particularly related, however, there is one important difference: While they use graphs as an intermediate representation from which a textual summary is then generated, the goal of the proposed task is to create a graph that is directly interpretable and useful for a user. In contrast, these intermediate graphs, e.g. 
AMR, are hardly useful for a typical, non-linguist user.\nFor traditional summarization, the most well-known datasets emerged out of the DUC and TAC competitions. They provide clusters of news articles with gold-standard summaries. Extending these efforts, several more specialized corpora have been created: With regard to size, Nakano.2010 present a corpus of summaries for large-scale collections of web pages. Recently, corpora with more heterogeneous documents have been suggested, e.g. BIBREF26 and BIBREF27 . The corpus we present combines these aspects, as it has large clusters of heterogeneous documents, and provides a necessary benchmark to evaluate the proposed task.\nFor concept map generation, one corpus with human-created summary concept maps for student essays has been created BIBREF28 . In contrast to our corpus, it only deals with single documents, requires a two orders of magnitude smaller amount of compression of the input and is not publicly available .\nOther types of information representation that also model concepts and their relationships are knowledge bases, such as Freebase BIBREF29 , and ontologies. However, they both differ in important aspects: Whereas concept maps follow an open label paradigm and are meant to be interpretable by humans, knowledge bases and ontologies are usually more strictly typed and made to be machine-readable. Moreover, approaches to automatically construct them from text typically try to extract as much information as possible, while we want to summarize a document.\n\nTask Design\nWe break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 ).\nIn preliminary tests, we found that this design, despite the minimal context, works reasonably on our focused clusters on common educational topics. For instance, consider Figure FIGREF4 : One can easily say that P1 is more important than P2 without reading the documents.\nWe distinguish two task variants:\nInstead of enforcing binary importance decisions, we use a 5-point Likert-scale to allow more fine-grained annotations. The obtained labels are translated into scores (5..1) and the average of all scores for a proposition is used as an estimate for its importance. This follows the idea that while single workers might find the task subjective, the consensus of multiple workers, represented in the average score, tends to be less subjective due to the “wisdom of the crowd”. We randomly group five propositions into a task.\nAs an alternative, we use a second task design based on pairwise comparisons. Comparisons are known to be easier to make and more consistent BIBREF32 , but also more expensive, as the number of pairs grows quadratically with the number of objects. To reduce the cost, we group five propositions into a task and ask workers to order them by importance per drag-and-drop. 
From the results, we derive pairwise comparisons and use TrueSkill BIBREF35 , a powerful Bayesian rank induction model BIBREF34 , to obtain importance estimates for each proposition.\n\nPilot Study\nTo verify the proposed approach, we conducted a pilot study on Amazon Mechanical Turk using data from TAC2008 BIBREF36 . We collected importance estimates for 474 propositions extracted from the first three clusters using both task designs. Each Likert-scale task was assigned to 5 different workers and awarded $0.06. For comparison tasks, we also collected 5 labels each, paid $0.05 and sampled around 7% of all possible pairs. We submitted them in batches of 100 pairs and selected pairs for subsequent batches based on the confidence of the TrueSkill model.\nFollowing the observations of Lloret.2013, we established several measures for quality control. First, we restricted our tasks to workers from the US with an approval rate of at least 95%. Second, we identified low quality workers by measuring the correlation of each worker's Likert-scores with the average of the other four scores. The worst workers (at most 5% of all labels) were removed.\nIn addition, we included trap sentences, similar as in BIBREF13 , in around 80 of the tasks. In contrast to Lloret et al.'s findings, both an obvious trap sentence (This sentence is not important) and a less obvious but unimportant one (Barack Obama graduated from Harvard Law) were consistently labeled as unimportant (1.08 and 1.14), indicating that the workers did the task properly.\nFor Likert-scale tasks, we follow Snow.2008 and calculate agreement as the average Pearson correlation of a worker's Likert-score with the average score of the remaining workers. This measure is less strict than exact label agreement and can account for close labels and high- or low-scoring workers. We observe a correlation of 0.81, indicating substantial agreement. For comparisons, the majority agreement is 0.73. To further examine the reliability of the collected data, we followed the approach of Kiritchenko.2016 and simply repeated the crowdsourcing for one of the three topics. Between the importance estimates calculated from the first and second run, we found a Pearson correlation of 0.82 (Spearman 0.78) for Likert-scale tasks and 0.69 (Spearman 0.66) for comparison tasks. This shows that the approach, despite the subjectiveness of the task, allows us to collect reliable annotations.\nIn addition to the reliability studies, we extrinsically evaluated the annotations in the task of summary evaluation. For each of the 58 peer summaries in TAC2008, we calculated a score as the sum of the importance estimates of the propositions it contains. Table TABREF13 shows how these peer scores, averaged over the three topics, correlate with the manual responsiveness scores assigned during TAC in comparison to ROUGE-2 and Pyramid scores. The results demonstrate that with both task designs, we obtain importance annotations that are similarly useful for summary evaluation as pyramid annotations or gold-standard summaries (used for ROUGE).\nBased on the pilot study, we conclude that the proposed crowdsourcing scheme allows us to obtain proper importance annotations for propositions. 
As workers are not required to read all documents, the annotation is much more efficient and scalable as with traditional methods.\n\nCorpus Creation\nThis section presents the corpus construction process, as outlined in Figure FIGREF16 , combining automatic preprocessing, scalable crowdsourcing and high-quality expert annotations to be able to scale to the size of our document clusters. For every topic, we spent about $150 on crowdsourcing and 1.5h of expert annotations, while just a single annotator would already need over 8 hours (at 200 words per minute) to read all documents of a topic.\n\nSource Data\nAs a starting point, we used the DIP corpus BIBREF37 , a collection of 49 clusters of 100 web pages on educational topics (e.g. bullying, homeschooling, drugs) with a short description of each topic. It was created from a large web crawl using state-of-the-art information retrieval. We selected 30 of the topics for which we created the necessary concept map annotations.\n\nProposition Filtering\nDespite the similarity of the Open IE paradigm, not every extracted tuple is a suitable proposition for a concept map. To reduce the effort in the subsequent steps, we therefore want to filter out unsuitable ones. A tuple is suitable if it (1) is a correct extraction, (2) is meaningful without any context and (3) has arguments that represent proper concepts. We created a guideline explaining when to label a tuple as suitable for a concept map and performed a small annotation study. Three annotators independently labeled 500 randomly sampled tuples. The agreement was 82% ( INLINEFORM0 ). We found tuples to be unsuitable mostly because they had unresolvable pronouns, conflicting with (2), or arguments that were full clauses or propositions, conflicting with (3), while (1) was mostly taken care of by the confidence filtering in § SECREF21 .\nDue to the high number of tuples we decided to automate the filtering step. We trained a linear SVM on the majority voted annotations. As features, we used the extraction confidence, length of arguments and relations as well as part-of-speech tags, among others. To ensure that the automatic classification does not remove suitable propositions, we tuned the classifier to avoid false negatives. In particular, we introduced class weights, improving precision on the negative class at the cost of a higher fraction of positive classifications. Additionally, we manually verified a certain number of the most uncertain negative classifications to further improve performance. When 20% of the classifications are manually verified and corrected, we found that our model trained on 350 labeled instances achieves 93% precision on negative classifications on the unseen 150 instances. We found this to be a reasonable trade-off of automation and data quality and applied the model to the full dataset.\nThe classifier filtered out 43% of the propositions, leaving 1622 per topic. We manually examined the 17k least confident negative classifications and corrected 955 of them. We also corrected positive classifications for certain types of tuples for which we knew the classifier to be imprecise. Finally, each topic was left with an average of 1554 propositions (47k in total).\n\nImportance Annotation\nGiven the propositions identified in the previous step, we now applied our crowdsourcing scheme as described in § SECREF4 to determine their importance. 
To cope with the large number of propositions, we combine the two task designs: First, we collect Likert-scores from 5 workers for each proposition, clean the data and calculate average scores. Then, using only the top 100 propositions according to these scores, we crowdsource 10% of all possible pairwise comparisons among them. Using TrueSkill, we obtain a fine-grained ranking of the 100 most important propositions.\nFor Likert-scores, the average agreement over all topics is 0.80, while the majority agreement for comparisons is 0.78. We repeated the data collection for three randomly selected topics and found the Pearson correlation between both runs to be 0.73 (Spearman 0.73) for Likert-scores and 0.72 (Spearman 0.71) for comparisons. These figures show that the crowdsourcing approach works on this dataset as reliably as on the TAC documents.\nIn total, we uploaded 53k scoring and 12k comparison tasks to Mechanical Turk, spending $4425.45 including fees. From the fine-grained ranking of the 100 most important propositions, we select the top 50 per topic to construct a summary concept map in the subsequent steps.\n\nProposition Revision\nHaving a manageable number of propositions, an annotator then applied a few straightforward transformations that correct common errors of the Open IE system. First, we break down propositions with conjunctions in either of the arguments into separate propositions per conjunct, which the Open IE system sometimes fails to do. And second, we correct span errors that might occur in the argument or relation phrases, especially when sentences were not properly segmented. As a result, we have a set of high quality propositions for our concept map, consisting of, due to the first transformation, 56.1 propositions per topic on average.\n\nCorpus Analysis\nIn this section, we describe our newly created corpus, which, in addition to having summaries in the form of concept maps, differs from traditional summarization corpora in several aspects.\n\nBaseline Experiments\nIn this section, we briefly describe a baseline and evaluation scripts that we release, with a detailed documentation, along with the corpus.\n\nConclusion\nIn this work, we presented low-context importance annotation, a novel crowdsourcing scheme that we used to create a new benchmark corpus for concept-map-based MDS. The corpus has large-scale document clusters of heterogeneous web documents, posing a challenging summarization task. Together with the corpus, we provide implementations of a baseline method and evaluation scripts and hope that our efforts facilitate future research on this variant of summarization.\n\nAcknowledgments\nWe would like to thank Teresa Botschen, Andreas Hanselowski and Markus Zopf for their help with the annotation work and Christian Meyer for his valuable feedback. This work has been supported by the German Research Foundation as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. 
GRK 1994/1.\n\nFigure 1: Elements of a concept map.\n\nFigure 2: Excerpt from a summary concept map on the topic “students loans without credit history”.\n\nFigure 3: Likert-scale crowdsourcing task with topic description and two example propositions.\n\nTable 1: Correlation of peer scores with manual responsiveness scores on TAC2008 topics 01-03.\n\nFigure 4: Steps of the corpus creation (with references to the corresponding sections).\n\nTable 3: Size of concept maps (mean with std).\n\nTable 4: Baseline performance on test set.\n\nQuery: How were crowd workers instructed to identify important elements in large document collections?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"They break down the task of importance annotation to the level of single propositions and obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary.\", \"evidence\": [\"We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster\\u0027s topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 ).\"]}", "role": "assistant" } ]
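The scoring scheme in the example above (Likert labels translated to 5..1 and averaged per proposition) is straightforward to sketch. The worker labels below are invented for illustration; only the averaging and ranking step comes from the text, and the subsequent TrueSkill refinement over pairwise comparisons is omitted.

```python
from statistics import mean

# Toy Likert labels (1-5) from five workers for three propositions.
labels = {
    "P1": [5, 4, 5, 4, 5],
    "P2": [2, 1, 2, 3, 1],
    "P3": [4, 4, 3, 5, 4],
}

# Importance estimate used in the record: the plain average of the
# translated Likert scores, so the ranking reflects the crowd consensus.
importance = {p: mean(scores) for p, scores in labels.items()}
ranking = sorted(importance, key=importance.get, reverse=True)
print(importance)  # {'P1': 4.6, 'P2': 1.8, 'P3': 4.0}
print(ranking)     # ['P1', 'P3', 'P2']
```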
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.AIM: To assess the effects of a combination therapy with natriuretic and aquaretic drugs in cirrhotic ascites patients. ABSTRACT.METHODS: A two-center, randomized, open-label, prospective study was conducted. Japanese patients who met the criteria were randomized to trial group and the combination diuretic group (received 7.5 mg of tolvaptan) or the conventional diuretic group (received 40 mg of furosemide) for 7 d in addition to the natriuretic drug which was used prior to enrolment in this study. The primary endpoint was the change in body weight from the baseline. Vital signs, fluid intake, and laboratory and urinary data were assessed to determine the pharmacological effects after administration of aquaretic and natriuretic drugs. ABSTRACT.RESULTS: A total of 56 patients were randomized to receive either tolvaptan (n = 28) or furosemide (n = 28). In the combination and conventional diuretic groups, the average decrease in body weight from the baseline was 3.21 ± 3.17 kg (P < 0.0001) and 1.75 ± 2.36 kg (P = 0.0006), respectively, when measured on the final dosing day. Following 1 wk of treatment, a significantly greater reduction in body weight was observed in the combination diuretic group compared to that in the conventional diuretic group (P = 0.0412). ABSTRACT.CONCLUSION: Compared to a conventional diuretic therapy with only a natriuretic drug, a combination diuretic therapy with natriuretic and aquaretic drugs is more effective for patients with cirrhotic ascites. Extracted Data:
[["combination diuretic group (received 7.5 mg of tolvaptan)", "conventional diuretic group (received 40 mg of furosemide)", "body weight", "significantly decreased", "Following 1 wk of treatment, a significantly greater reduction in body weight was observed in the combination diuretic group compared to that in the conventional diuretic group (P = 0.0412)."]]
science.evidence_inference.217
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.AIM: To assess the effects of a combination therapy with natriuretic and aquaretic drugs in cirrhotic ascites patients.\n\nABSTRACT.METHODS: A two-center, randomized, open-label, prospective study was conducted. Japanese patients who met the criteria were randomized to trial group and the combination diuretic group (received 7.5 mg of tolvaptan) or the conventional diuretic group (received 40 mg of furosemide) for 7 d in addition to the natriuretic drug which was used prior to enrolment in this study. The primary endpoint was the change in body weight from the baseline. Vital signs, fluid intake, and laboratory and urinary data were assessed to determine the pharmacological effects after administration of aquaretic and natriuretic drugs.\n\nABSTRACT.RESULTS: A total of 56 patients were randomized to receive either tolvaptan (n = 28) or furosemide (n = 28). In the combination and conventional diuretic groups, the average decrease in body weight from the baseline was 3.21 ± 3.17 kg (P < 0.0001) and 1.75 ± 2.36 kg (P = 0.0006), respectively, when measured on the final dosing day. Following 1 wk of treatment, a significantly greater reduction in body weight was observed in the combination diuretic group compared to that in the conventional diuretic group (P = 0.0412).\n\nABSTRACT.CONCLUSION: Compared to a conventional diuretic therapy with only a natriuretic drug, a combination diuretic therapy with natriuretic and aquaretic drugs is more effective for patients with cirrhotic ascites.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"combination diuretic group (received 7.5 mg of tolvaptan)\", \"conventional diuretic group (received 40 mg of furosemide)\", \"body weight\", \"significantly decreased\", \"Following 1 wk of treatment, a significantly greater reduction in body weight was observed in the combination diuretic group compared to that in the conventional diuretic group (P = 0.0412).\"]]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: The minor G allele of FOXO3 up-regulates IL-10. Research Abstract: The clinical course and eventual outcome, or prognosis, of complex diseases varies enormously between affected individuals. This variability critically determines the impact a disease has on a patient's life but is very poorly understood. Here, we exploit existing genome-wide association study data to gain insight into the role of genetics in prognosis. We identify a noncoding polymorphism in FOXO3A (rs12212067: T > G) at which the minor (G) allele, despite not being associated with disease susceptibility, is associated with a milder course of Crohn's disease and rheumatoid arthritis and with increased risk of severe malaria. Minor allele carriage is shown to limit inflammatory responses in monocytes via a FOXO3-driven pathway, which through TGFβ1 reduces production of proinflammatory cytokines, including TNFα, and increases production of anti-inflammatory cytokines, including IL-10. Thus, we uncover a shared genetic contribution to prognosis in distinct diseases that operates via a FOXO3-driven pathway modulating inflammatory responses.
{"verdict": "SUPPORT", "evidence": ["Minor allele carriage is shown to limit inflammatory responses in monocytes via a FOXO3-driven pathway, which through TGF\u03b21 reduces production of proinflammatory cytokines, including TNF\u03b1, and increases production of anti-inflammatory cytokines, including IL-10."]}
science.scifact_entailment.804
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nThe minor G allele of FOXO3 up-regulates IL-10.\n\nResearch Abstract:\nThe clinical course and eventual outcome, or prognosis, of complex diseases varies enormously between affected individuals.\nThis variability critically determines the impact a disease has on a patient's life but is very poorly understood.\nHere, we exploit existing genome-wide association study data to gain insight into the role of genetics in prognosis.\nWe identify a noncoding polymorphism in FOXO3A (rs12212067: T > G) at which the minor (G) allele, despite not being associated with disease susceptibility, is associated with a milder course of Crohn's disease and rheumatoid arthritis and with increased risk of severe malaria.\nMinor allele carriage is shown to limit inflammatory responses in monocytes via a FOXO3-driven pathway, which through TGFβ1 reduces production of proinflammatory cytokines, including TNFα, and increases production of anti-inflammatory cytokines, including IL-10.\nThus, we uncover a shared genetic contribution to prognosis in distinct diseases that operates via a FOXO3-driven pathway modulating inflammatory responses.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Minor allele carriage is shown to limit inflammatory responses in monocytes via a FOXO3-driven pathway, which through TGF\\u03b21 reduces production of proinflammatory cytokines, including TNF\\u03b1, and increases production of anti-inflammatory cytokines, including IL-10.\"]}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Adaptive Adversarial Imitation Learning Main Content: 1 INTRODUCTION . Humans and animals can learn complex behaviors via imitation . Inspired by these learning mechanisms , Imitation Learning ( IL ) has long been a popular method for training autonomous agents from human-provided demonstrations . However , human and animal imitation differs markedly from commonly used approaches in machine learning . Firstly , humans and animals tend to imitate the goal of the task rather than the particular motions of the demonstrator ( Baker et al. , 2007 ) . Secondly , humans and animals can easily handle imitation scenarios where there is a shift in embodiment and dynamics between themselves and a demonstrator . The first feature of human IL can be represented within the framework of Inverse Reinforcement Learning ( IRL ) ( Ng et al. , 2000 ; Abbeel & Ng , 2004 ; Ziebart et al. , 2008 ) , which at a high level casts the problem of imitation as one of matching outcomes rather than actions . Recent work in adversarial imitation learning ( Ho & Ermon , 2016 ; Finn et al. , 2016 ) has accomplished this by using a discriminator to judge whether a given behavior is from an expert or imitator , and then a policy is trained using the discriminator expert likelihood as a reward . While successful in multiple problem domains , this approach makes it difficult to accommodate the second feature of human learning : imitation across shifts in embodiment and dynamics . This is because in the presence of such shifts , the discriminator may either simply use the embodiment or dynamics to infer whether it is evaluating expert behavior , and as a consequence fails to provide a meaningful reward signal . In this paper we are concerned with the problem of learning adaptive policies that can be transferred to environments with varying dynamics , by imitating a small number of expert demonstrations collected from a single source domain . This problem is important in robotic learning because it is better aligned with real world constraints : 1 ) reward functions are hard to obtain , 2 ) learned policies from one domain are hard to deploy to different domains due to varying source to target domain statistics , and 3 ) the target domain dynamics oftentimes changes while executing the learned policy . As such , this work assumes ground truth rewards are not available , and furthermore we assume that expert demonstrations come from only a single domain ( i.e . an instance of an environment where dynamics can not be exactly replicated by the policy at training time ) . To the best of our knowledge , this is the first work to tackle this challenging problem formulation . Our proposed method solves the above problem by building upon the GAIL ( Ho & Ermon , 2016 ; Finn et al. , 2016 ) framework , by firstly conditioning the policy on a learned dynamics embedding ( “ context variable ” in policy search literature ( Deisenroth et al. , 2013 ) ) . We propose two embedding approaches on which the policy is conditioned , namely , a direct supervised learning approach and a variational autoencoder ( VAE ) ( Kingma & Welling , 2013 ) based unsupervised approach . 
Secondly , to prevent the discriminator from inferring whether it is evaluating the expert behavior or imitator behavior purely through the dynamics , we propose using a Gradient Reversal Layer ( GRL ) to learn a dynamics-invariant discriminator . We demonstrate the effectiveness of the proposed algorithm on benchmark Mujoco simulated control tasks . The main contributions of our work include : 1 ) present a general and novel problem formulation that is well aligned with real world scenarios in comparison to recent literature 2 ) devise a conceptually simple architecture that is capable of learning an adaptive policy from a small number of expert demonstrations ( order of 10s ) collected from only one source environment , 3 ) design an adversarial loss for addressing the covariate shift issue in discriminator learning . 2 RELATED WORK . Historically , two main avenues have been heavily studied for imitation learning : 1 ) Behavioral Cloning ( BC ) and 2 ) Inverse Reinforcement Learning ( IRL ) . Though conceptually simple , BC suffers from compound errors caused by covariate shift , and subsequently , often requires a large quantity of demonstrations ( Pomerleau , 1989 ) , or access to the expert policy ( Ross et al. , 2011 ) in order to recover a stable policy . Recent advancements in imitation learning ( Ho & Ermon , 2016 ; Finn et al. , 2016 ) have adopted an adversarial formation that interleaves between 1 ) discriminating the generated policy against the expert demonstrations and 2 ) a policy improvement step where the policy aims to fool the learned discriminator . Dynamics randomization ( Tobin et al. , 2017 ; Sadeghi & Levine , 2016 ; Mandlekar et al. , 2017 ; Tan et al. , 2018 ; Pinto et al. , 2017 ; Peng et al. , 2018 ; Chebotar et al. , 2018 ; Rajeswaran et al. , 2016 ) has been one of the prevailing vehicles for addressing varying simulation to real-world domain statistics . This avenue of methods typically involves perturbing the environment dynamics ( often times adversarially ) in simulation in order to learn an adaptive policy that is robust enough to bridge the “ Reality Gap ” . While dynamics randomization has been explored for learning robust policies in an RL setting , it has a critical limitation in the imitation learning context : large domain shifts might result in directional differences in dynamics , therefore , the demonstrated actions might no longer be admissible for solving the task in the target domain . Our method ( Figure 2 ) also involves training in a variety of environments with different dynamics . However , we propose conditioning the policy on an explicitly learned dynamics embedding to enable adaptive policies based on online system ID . Yu et al . ( 2017 ) adopted a similar approach towards building adaptive policies . They learn an online system identification model and condition the policy on the predicted model parameters in an RL setting . In comparison to their work , we do not assume access to the ground truth reward signals or the ground truth physics parameters at evaluation time , which makes this work ’ s problem formulation a harder learning problem , but with greater potential for real-world applications . We will compare our method with Yu et al . ( 2017 ) in the experimental section . Third person imitation learning ( Stadie et al. , 2017 ) also employs a GRL ( Ganin & Lempitsky , 2014 ) under a GAIL-like formulation with the goal of learning expert behaviors in a new domain . 
In comparison , our method also enables learning adaptive policies by employing an online dynamics identification component , so that the policies can be transferred to a class of domains , as opposed to one domain . In addition , learned policies using our proposed method can handle online dynamics perturbations . Meta learning ( Finn et al. , 2017 ) has also been applied to address varying source to target domain dynamics ( Duan et al. , 2017 ; Nagabandi et al. , 2018 ) . The idea behind meta learning in the context of robotic learning is to learn a meta policy that is “ initialized ” for a variety of tasks in simulation , and then fine-tune the policy in the real-world setting given a specific goal . After the meta-learning phase , the agent requires significantly fewer environment interactions to obtain a policy that solves the task . In comparison to meta learning based approaches , fine-tuning on the test environment is not required in our method , with the caveat being that this is true only within the target domain where the dynamics posterior is effective . 2.1 BACKGROUND . In this section we will briefly review GAIL ( Ho & Ermon , 2016 ) . Inspired by GANs , the GAIL objective is defined as : min θ max ω EπE [ logDω ( s , a ) ] +Eπθ [ log ( 1−Dω ( s , a ) ) ] ( 1 ) Where πE denotes the expert policy that generated the demonstrations ; πθ is the policy to imitate the expert ; D is a discriminator that learns to distinguish between πθ and πE with generated stateaction pairs . In comparison to GAN optimization , the GAIL objective is rarely differentiable since differentiation through the environment step is often intractable . Optimization is instead achieved via RL-based policy gradient algorithms , e.g. , PPO ( Schulman et al. , 2017 ) or off policy methods , e.g. , TD3 ( Kostrikov et al. , 2018 ) . Without an explicit reward function , GAIL relies on reward signals provided by the learned discriminator , where a common reward formulation is rω ( s , a ) = − log ( 1−Dω ( s , a ) ) . 3 ADAPTIVE ADVERSARIAL IMITATION LEARNING ( ADAIL ) . 3.1 PROBLEM DEFINITION . Suppose we are given a classE of environments with different dynamics but similar goals , a domain generator g ( c ) which takes in a code c and generates an environment ec ∈ E , and a set of expert demonstrations { τexp } collected from one source environment eexp ∈ E. In adaptive imitation learning , one attempts to learn an adaptive policy πθ that can generalize across environments within E. We assume that the ground truth dynamics parameters c , which are used to generate the simulated environments , are given ( or manually sampled ) during the training phase . 3.2 ALGORITHM OVERVIEW . We allow the agent to interact with a class of similar simulated environments with varying dynamics parameters , which we call “ adaptive training ” . To be able to capture high-level goals from a small set of demonstrations , we adopt a approach similar to GAIL . To provide consistent feedback signals during training across environments with different dynamics , the discriminator should be dynamicsinvariant . We enable this desirable feature by learning a dynamics-invariant feature layer for the discriminator by 1 ) adding another head DR ( c|s , a ) to the discriminator to predict the dynamics parameters , and 2 ) inserting a GRL in-between DR and the dynamics-invariant feature layer . The discriminator design is illustrated in Figure 1 . 
In addition , to enable adaptive policies , we introduced a dynamics posterior that takes a roll-out trajectory and outputs an embedding , on which the policy is conditioned . Intuitively , explicit dynamics latent variable learning endows the agent with the ability to identify the system and act differently against changes in dynamics . Note that a policy can learn to infer dynamics implicitly , without the need for an external dynamics embedding . However , we find experimentally that policies conditioned explicitly on the environment parameters outperform those that do not . The overall architecture is illustrated in Figure 2 . We call the algorithm Adaptive Adversarial Imitation Learning ( ADAIL ) , with the following objective ( note that for brevity , we for now omit the GRL term discussed in Section 3.4 ) : min θ max ω , φ EπE [ logDω ( s , a ) ] +Eπθ ( ·|c ) [ log ( 1−Dω ( s , a ) ) ] +Eτ∼πθ ( ·|c ) [ logQφ ( c|τ ) ] ( 2 ) Where c is a learned latent dynamics representation that is associated with the rollout environment in each gradient step ; τ is a roll-out trajectory using πθ ( ·|c ) in the corresponding environment ; Q ( c|τ ) is a “ dynamics posterior ” for inferring the dynamics during test time ; The last term in the objective , Eτ∼πθ ( ·|c ) [ logQφ ( c|τ ) ] , is a general form of the expected log likelihood of c given τ . Note that , the posterior training is on-policy , meaning that the rollouts are collected online using the current Algorithm 1 ADAIL 1 : Inputs : 2 : An environment class E. 3 : Initial parameters of policy θ , discriminator ω , and posterior φ . 4 : A set of expert demonstrations { τexp } on one of the environment eexp ∈ E. An environment generator g ( c ) that takes a code and generates an environment ec ∈ E. A prior distribution of p ( c ) . 5 : for i = 1 , 2 , .. do 6 : Sample c ∼ p ( c ) and generate environment ec ∼ g ( c ) 7 : Sample trajectories τi ∼ πθ ( ·|c ) in ec and τei ∼ { τexp } 8 : Update the discriminator parameters ω with the gradients : Ê ( s , a ) ∼τi [ ∇w log ( Dw ( s , a ) ) ] + Ê ( s , a ) ∼τei [ ∇w log ( 1−Dw ( s , a ) ] 9 : Update the discriminator parameters ω again with the following loss , such that the gradients are reversed when back-prop through the dynamics-invariant layer : −Ê ( s , a ) ∼τi [ log ( D R ( c|s , a ) ) ] 10 : Update the posterior parameters φ with gradients Êτi [ ∇φ logQφ ( c|τi ) ) ] 11 : Update policy πθ ( ·|c ) using policy optimization method ( PPO ) with : Ê ( s , a ) ∼τi [ − log ( 1−Dω ( s , a ) ) ] 12 : Output : Learned policy πθ , and posterior Qφ . policy , thereby the last term of the objective is dependent on θ . One can employ various supervised and unsupervised methods towards optimizing this term . We will explore a few methods in the following subsections . The algorithm is outlined in Algorithm 1 .
The submission considers the problem of imitation learning when the dynamics of the expert are not known to the agent and the agent's own dynamics may change frequently. It is, however, assumed that the agent has access to a parameterized simulator that can simulate the expert dynamics. The simulator's parameters are not known, but they are assumed to be drawn from a known distribution.
science.mup_single_document_summarization.17912
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAdaptive Adversarial Imitation Learning\n\nMain Content:\n1 INTRODUCTION . Humans and animals can learn complex behaviors via imitation . Inspired by these learning mechanisms , Imitation Learning ( IL ) has long been a popular method for training autonomous agents from human-provided demonstrations . However , human and animal imitation differs markedly from commonly used approaches in machine learning . Firstly , humans and animals tend to imitate the goal of the task rather than the particular motions of the demonstrator ( Baker et al. , 2007 ) . Secondly , humans and animals can easily handle imitation scenarios where there is a shift in embodiment and dynamics between themselves and a demonstrator . The first feature of human IL can be represented within the framework of Inverse Reinforcement Learning ( IRL ) ( Ng et al. , 2000 ; Abbeel & Ng , 2004 ; Ziebart et al. , 2008 ) , which at a high level casts the problem of imitation as one of matching outcomes rather than actions . Recent work in adversarial imitation learning ( Ho & Ermon , 2016 ; Finn et al. , 2016 ) has accomplished this by using a discriminator to judge whether a given behavior is from an expert or imitator , and then a policy is trained using the discriminator expert likelihood as a reward . While successful in multiple problem domains , this approach makes it difficult to accommodate the second feature of human learning : imitation across shifts in embodiment and dynamics . This is because in the presence of such shifts , the discriminator may either simply use the embodiment or dynamics to infer whether it is evaluating expert behavior , and as a consequence fails to provide a meaningful reward signal . In this paper we are concerned with the problem of learning adaptive policies that can be transferred to environments with varying dynamics , by imitating a small number of expert demonstrations collected from a single source domain . This problem is important in robotic learning because it is better aligned with real world constraints : 1 ) reward functions are hard to obtain , 2 ) learned policies from one domain are hard to deploy to different domains due to varying source to target domain statistics , and 3 ) the target domain dynamics oftentimes changes while executing the learned policy . As such , this work assumes ground truth rewards are not available , and furthermore we assume that expert demonstrations come from only a single domain ( i.e . an instance of an environment where dynamics can not be exactly replicated by the policy at training time ) . To the best of our knowledge , this is the first work to tackle this challenging problem formulation . Our proposed method solves the above problem by building upon the GAIL ( Ho & Ermon , 2016 ; Finn et al. , 2016 ) framework , by firstly conditioning the policy on a learned dynamics embedding ( “ context variable ” in policy search literature ( Deisenroth et al. , 2013 ) ) . We propose two embedding approaches on which the policy is conditioned , namely , a direct supervised learning approach and a variational autoencoder ( VAE ) ( Kingma & Welling , 2013 ) based unsupervised approach . 
Secondly , to prevent the discriminator from inferring whether it is evaluating the expert behavior or imitator behavior purely through the dynamics , we propose using a Gradient Reversal Layer ( GRL ) to learn a dynamics-invariant discriminator . We demonstrate the effectiveness of the proposed algorithm on benchmark Mujoco simulated control tasks . The main contributions of our work include : 1 ) present a general and novel problem formulation that is well aligned with real world scenarios in comparison to recent literature 2 ) devise a conceptually simple architecture that is capable of learning an adaptive policy from a small number of expert demonstrations ( order of 10s ) collected from only one source environment , 3 ) design an adversarial loss for addressing the covariate shift issue in discriminator learning . 2 RELATED WORK . Historically , two main avenues have been heavily studied for imitation learning : 1 ) Behavioral Cloning ( BC ) and 2 ) Inverse Reinforcement Learning ( IRL ) . Though conceptually simple , BC suffers from compound errors caused by covariate shift , and subsequently , often requires a large quantity of demonstrations ( Pomerleau , 1989 ) , or access to the expert policy ( Ross et al. , 2011 ) in order to recover a stable policy . Recent advancements in imitation learning ( Ho & Ermon , 2016 ; Finn et al. , 2016 ) have adopted an adversarial formation that interleaves between 1 ) discriminating the generated policy against the expert demonstrations and 2 ) a policy improvement step where the policy aims to fool the learned discriminator . Dynamics randomization ( Tobin et al. , 2017 ; Sadeghi & Levine , 2016 ; Mandlekar et al. , 2017 ; Tan et al. , 2018 ; Pinto et al. , 2017 ; Peng et al. , 2018 ; Chebotar et al. , 2018 ; Rajeswaran et al. , 2016 ) has been one of the prevailing vehicles for addressing varying simulation to real-world domain statistics . This avenue of methods typically involves perturbing the environment dynamics ( often times adversarially ) in simulation in order to learn an adaptive policy that is robust enough to bridge the “ Reality Gap ” . While dynamics randomization has been explored for learning robust policies in an RL setting , it has a critical limitation in the imitation learning context : large domain shifts might result in directional differences in dynamics , therefore , the demonstrated actions might no longer be admissible for solving the task in the target domain . Our method ( Figure 2 ) also involves training in a variety of environments with different dynamics . However , we propose conditioning the policy on an explicitly learned dynamics embedding to enable adaptive policies based on online system ID . Yu et al . ( 2017 ) adopted a similar approach towards building adaptive policies . They learn an online system identification model and condition the policy on the predicted model parameters in an RL setting . In comparison to their work , we do not assume access to the ground truth reward signals or the ground truth physics parameters at evaluation time , which makes this work ’ s problem formulation a harder learning problem , but with greater potential for real-world applications . We will compare our method with Yu et al . ( 2017 ) in the experimental section . Third person imitation learning ( Stadie et al. , 2017 ) also employs a GRL ( Ganin & Lempitsky , 2014 ) under a GAIL-like formulation with the goal of learning expert behaviors in a new domain . 
In comparison , our method also enables learning adaptive policies by employing an online dynamics identification component , so that the policies can be transferred to a class of domains , as opposed to one domain . In addition , learned policies using our proposed method can handle online dynamics perturbations . Meta learning ( Finn et al. , 2017 ) has also been applied to address varying source to target domain dynamics ( Duan et al. , 2017 ; Nagabandi et al. , 2018 ) . The idea behind meta learning in the context of robotic learning is to learn a meta policy that is “ initialized ” for a variety of tasks in simulation , and then fine-tune the policy in the real-world setting given a specific goal . After the meta-learning phase , the agent requires significantly fewer environment interactions to obtain a policy that solves the task . In comparison to meta learning based approaches , fine-tuning on the test environment is not required in our method , with the caveat being that this is true only within the target domain where the dynamics posterior is effective . 2.1 BACKGROUND . In this section we will briefly review GAIL ( Ho & Ermon , 2016 ) . Inspired by GANs , the GAIL objective is defined as : min θ max ω EπE [ logDω ( s , a ) ] +Eπθ [ log ( 1−Dω ( s , a ) ) ] ( 1 ) Where πE denotes the expert policy that generated the demonstrations ; πθ is the policy to imitate the expert ; D is a discriminator that learns to distinguish between πθ and πE with generated stateaction pairs . In comparison to GAN optimization , the GAIL objective is rarely differentiable since differentiation through the environment step is often intractable . Optimization is instead achieved via RL-based policy gradient algorithms , e.g. , PPO ( Schulman et al. , 2017 ) or off policy methods , e.g. , TD3 ( Kostrikov et al. , 2018 ) . Without an explicit reward function , GAIL relies on reward signals provided by the learned discriminator , where a common reward formulation is rω ( s , a ) = − log ( 1−Dω ( s , a ) ) . 3 ADAPTIVE ADVERSARIAL IMITATION LEARNING ( ADAIL ) . 3.1 PROBLEM DEFINITION . Suppose we are given a classE of environments with different dynamics but similar goals , a domain generator g ( c ) which takes in a code c and generates an environment ec ∈ E , and a set of expert demonstrations { τexp } collected from one source environment eexp ∈ E. In adaptive imitation learning , one attempts to learn an adaptive policy πθ that can generalize across environments within E. We assume that the ground truth dynamics parameters c , which are used to generate the simulated environments , are given ( or manually sampled ) during the training phase . 3.2 ALGORITHM OVERVIEW . We allow the agent to interact with a class of similar simulated environments with varying dynamics parameters , which we call “ adaptive training ” . To be able to capture high-level goals from a small set of demonstrations , we adopt a approach similar to GAIL . To provide consistent feedback signals during training across environments with different dynamics , the discriminator should be dynamicsinvariant . We enable this desirable feature by learning a dynamics-invariant feature layer for the discriminator by 1 ) adding another head DR ( c|s , a ) to the discriminator to predict the dynamics parameters , and 2 ) inserting a GRL in-between DR and the dynamics-invariant feature layer . The discriminator design is illustrated in Figure 1 . 
In addition , to enable adaptive policies , we introduced a dynamics posterior that takes a roll-out trajectory and outputs an embedding , on which the policy is conditioned . Intuitively , explicit dynamics latent variable learning endows the agent with the ability to identify the system and act differently against changes in dynamics . Note that a policy can learn to infer dynamics implicitly , without the need for an external dynamics embedding . However , we find experimentally that policies conditioned explicitly on the environment parameters outperform those that do not . The overall architecture is illustrated in Figure 2 . We call the algorithm Adaptive Adversarial Imitation Learning ( ADAIL ) , with the following objective ( note that for brevity , we for now omit the GRL term discussed in Section 3.4 ) : min θ max ω , φ EπE [ logDω ( s , a ) ] +Eπθ ( ·|c ) [ log ( 1−Dω ( s , a ) ) ] +Eτ∼πθ ( ·|c ) [ logQφ ( c|τ ) ] ( 2 ) Where c is a learned latent dynamics representation that is associated with the rollout environment in each gradient step ; τ is a roll-out trajectory using πθ ( ·|c ) in the corresponding environment ; Q ( c|τ ) is a “ dynamics posterior ” for inferring the dynamics during test time ; The last term in the objective , Eτ∼πθ ( ·|c ) [ logQφ ( c|τ ) ] , is a general form of the expected log likelihood of c given τ . Note that , the posterior training is on-policy , meaning that the rollouts are collected online using the current Algorithm 1 ADAIL 1 : Inputs : 2 : An environment class E. 3 : Initial parameters of policy θ , discriminator ω , and posterior φ . 4 : A set of expert demonstrations { τexp } on one of the environment eexp ∈ E. An environment generator g ( c ) that takes a code and generates an environment ec ∈ E. A prior distribution of p ( c ) . 5 : for i = 1 , 2 , .. do 6 : Sample c ∼ p ( c ) and generate environment ec ∼ g ( c ) 7 : Sample trajectories τi ∼ πθ ( ·|c ) in ec and τei ∼ { τexp } 8 : Update the discriminator parameters ω with the gradients : Ê ( s , a ) ∼τi [ ∇w log ( Dw ( s , a ) ) ] + Ê ( s , a ) ∼τei [ ∇w log ( 1−Dw ( s , a ) ] 9 : Update the discriminator parameters ω again with the following loss , such that the gradients are reversed when back-prop through the dynamics-invariant layer : −Ê ( s , a ) ∼τi [ log ( D R ( c|s , a ) ) ] 10 : Update the posterior parameters φ with gradients Êτi [ ∇φ logQφ ( c|τi ) ) ] 11 : Update policy πθ ( ·|c ) using policy optimization method ( PPO ) with : Ê ( s , a ) ∼τi [ − log ( 1−Dω ( s , a ) ) ] 12 : Output : Learned policy πθ , and posterior Qφ . policy , thereby the last term of the objective is dependent on θ . One can employ various supervised and unsupervised methods towards optimizing this term . We will explore a few methods in the following subsections . The algorithm is outlined in Algorithm 1 .", "role": "user" }, { "content": "The submission considers the problem of imitation learning when the dynamics of the expert are not known to the agent and the dynamics of the agent may change frequently. It is however assumed that the agent has access to a parameterized simulator that can simulate the expert dynamics. The parameters for the simulator are not known but are assumed to be drawn from a known distribution.", "role": "assistant" } ]
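As an editorial aside on the ADAIL excerpt above (the gradient reversal layer and the two-headed, dynamics-invariant discriminator), the following minimal PyTorch sketch shows one common way such a component can be wired up. It is not code from the summarized paper; the class names, layer sizes, and the `lambda_` scaling factor are assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed (negated, scaled) gradient for x; no gradient for lambda_.
        return -ctx.lambda_ * grad_output, None


class TwoHeadDiscriminator(nn.Module):
    """GAIL-style discriminator D(s, a) with an extra dynamics head D_R(c | s, a)
    placed behind a gradient reversal layer, so the shared trunk is pushed toward
    dynamics-invariant features."""

    def __init__(self, obs_dim, act_dim, dyn_dim, hidden=64, lambda_=1.0):
        super().__init__()
        self.lambda_ = lambda_
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.expert_head = nn.Linear(hidden, 1)          # expert vs. imitator logit
        self.dynamics_head = nn.Linear(hidden, dyn_dim)  # regresses the dynamics code c

    def forward(self, obs, act):
        feat = self.trunk(torch.cat([obs, act], dim=-1))
        expert_logit = self.expert_head(feat)
        dyn_pred = self.dynamics_head(GradientReversal.apply(feat, self.lambda_))
        return expert_logit, dyn_pred
```

In a loop of the kind outlined in Algorithm 1, the expert head would receive the usual adversarial logistic loss, while the reversed gradients flowing back from the dynamics head discourage the trunk from encoding the environment parameters c.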
science.discomat_te
| Sample No. | PA50 | PB50 | PC50 | PC55 | PA55 | PB55 | Al15 | Na10 | PA62.5 |
| [NaO0.5]ex/[BO1.5] | 0 | 0 | 0 | 0.13 | 0.17 | 0.33 | 0.33 | 0.33 | 0.42 |
| Above Gd solubility | P | P | P | P | P | P | P | P | P |
| | | | | | | | | | |
| Sample No. | PC70 | PA70 | Al10 | Na15 | B20 | B15 | Al2 | PC100 | PA90 |
| [NaO0.5]ex/[BO1.5] | 0.50 | 0.67 | 0.67 | 0.67 | 0.75 | 1.00 | 1.20 | 1.25 | 1.33 |
| Above Gd solubility | P | C | C | C | C | C | C | C | C |
| | | | | | | | | | |
| Sample No. | PB70 | B10 | PA100 | Na30 | B5 | PB100 | B0 | | |
| [NaO0.5]ex/[BO1.5] | 1.33 | 1.50 | 1.67 | 1.67 | 3.00 | 3.33 | - | | |
| Above Gd solubility | C | C | C | C | C | C | C | | |

Caption: The [NaO0.5]ex/[BO1.5] a molar ratio in each baseline glass

Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}

Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.

Output:
{"value": "0", "type": "Other"} {"value": "0", "type": "Other"} {"value": "0", "type": "Other"} {"value": "0.13", "type": "Other"} {"value": "0.17", "type": "Other"} {"value": "0.33", "type": "Other"} {"value": "0.33", "type": "Other"} {"value": "0.33", "type": "Other"} {"value": "0.42", "type": "Other"} {"value": "0.50", "type": "Other"} {"value": "0.67", "type": "Other"} {"value": "0.67", "type": "Other"} {"value": "0.67", "type": "Other"} {"value": "0.75", "type": "Other"} {"value": "1.00", "type": "Other"} {"value": "1.20", "type": "Other"} {"value": "1.25", "type": "Other"} {"value": "1.33", "type": "Other"} {"value": "1.33", "type": "Other"} {"value": "1.50", "type": "Other"} {"value": "1.67", "type": "Other"} {"value": "1.67", "type": "Other"} {"value": "3.00", "type": "Other"} {"value": "3.33", "type": "Other"}
science.discomat_te.82
[ { "content": "| Sample No. | PA50 | PB50 | PC50 | PC55 | PA55 | PB55 | Al15 | Na10 | PA62.5 |\n| [NaO0.5]ex/[BO1.5] | 0 | 0 | 0 | 0.13 | 0.17 | 0.33 | 0.33 | 0.33 | 0.42 |\n| Above Gd solubility | P | P | P | P | P | P | P | P | P |\n| | | | | | | | | | |\n| Sample No. | PC70 | PA70 | Al10 | Na15 | B20 | B15 | Al2 | PC100 | PA90 |\n| [NaO0.5]ex/[BO1.5] | 0.50 | 0.67 | 0.67 | 0.67 | 0.75 | 1.00 | 1.20 | 1.25 | 1.33 |\n| Above Gd solubility | P | C | C | C | C | C | C | C | C |\n| | | | | | | | | | |\n| Sample No. | PB70 | B10 | PA100 | Na30 | B5 | PB100 | B0 | | |\n| [NaO0.5]ex/[BO1.5] | 1.33 | 1.50 | 1.67 | 1.67 | 3.00 | 3.33 | - | | |\n| Above Gd solubility | C | C | C | C | C | C | C | | |\n\nCaption: The [NaO0.5]ex/[BO1.5] a molar ratio in each baseline glass\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"0.13\", \"type\": \"Other\"}\n{\"value\": \"0.17\", \"type\": \"Other\"}\n{\"value\": \"0.33\", \"type\": \"Other\"}\n{\"value\": \"0.33\", \"type\": \"Other\"}\n{\"value\": \"0.33\", \"type\": \"Other\"}\n{\"value\": \"0.42\", \"type\": \"Other\"}\n{\"value\": \"0.50\", \"type\": \"Other\"}\n{\"value\": \"0.67\", \"type\": \"Other\"}\n{\"value\": \"0.67\", \"type\": \"Other\"}\n{\"value\": \"0.67\", \"type\": \"Other\"}\n{\"value\": \"0.75\", \"type\": \"Other\"}\n{\"value\": \"1.00\", \"type\": \"Other\"}\n{\"value\": \"1.20\", \"type\": \"Other\"}\n{\"value\": \"1.25\", \"type\": \"Other\"}\n{\"value\": \"1.33\", \"type\": \"Other\"}\n{\"value\": \"1.33\", \"type\": \"Other\"}\n{\"value\": \"1.50\", \"type\": \"Other\"}\n{\"value\": \"1.67\", \"type\": \"Other\"}\n{\"value\": \"1.67\", \"type\": \"Other\"}\n{\"value\": \"3.00\", \"type\": \"Other\"}\n{\"value\": \"3.33\", \"type\": \"Other\"}", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].

Ensure the output is strictly the JSON object.

Source Text:

Clusters of multidrug-resistant Mycobacterium tuberculosis cases, Europe. Molecular surveillance of multidrug-resistant tuberculosis (MDR TB) was implemented in Europe as case reporting in 2005. For all new MDR TB cases detected from January 2003 through June 2007, countries reported case-based epidemiologic data and DNA fingerprint patterns of MDR TB strains when available. International clusters were detected and analyzed. From 2003 through mid-2007 in Europe, 2,494 cases of MDR TB were reported from 24 European countries. Epidemiologic and molecular data were linked for 593 (39%) cases, and 672 insertion sequence 6110 DNA fingerprint patterns were reported from 19 countries. Of these patterns, 288 (43%) belonged to 18 European clusters; 7 clusters (242/288 cases, 84%) were characterized by strains of the Beijing genotype family, including the largest cluster (175/288 cases, 61%). Both clustering and the Beijing genotype were associated with strains originating in eastern European countries. Molecular cluster detection contributes to identification of transmission profile, risk factors, and control measures.
{"CellLine": [], "Chemical": [], "Disease": ["multidrug-resistant tuberculosis", "MDR TB"], "Gene": [], "Species": ["Mycobacterium tuberculosis"], "Variant": []}
science.biored_ner.31
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nClusters of multidrug-resistant Mycobacterium tuberculosis cases, Europe. Molecular surveillance of multidrug-resistant tuberculosis (MDR TB) was implemented in Europe as case reporting in 2005. For all new MDR TB cases detected from January 2003 through June 2007, countries reported case-based epidemiologic data and DNA fingerprint patterns of MDR TB strains when available. International clusters were detected and analyzed. From 2003 through mid-2007 in Europe, 2,494 cases of MDR TB were reported from 24 European countries. Epidemiologic and molecular data were linked for 593 (39%) cases, and 672 insertion sequence 6110 DNA fingerprint patterns were reported from 19 countries. Of these patterns, 288 (43%) belonged to 18 European clusters; 7 clusters (242/288 cases, 84%) were characterized by strains of the Beijing genotype family, including the largest cluster (175/288 cases, 61%). Both clustering and the Beijing genotype were associated with strains originating in eastern European countries. Molecular cluster detection contributes to identification of transmission profile, risk factors, and control measures.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [], \"Disease\": [\"multidrug-resistant tuberculosis\", \"MDR TB\"], \"Gene\": [], \"Species\": [\"Mycobacterium tuberculosis\"], \"Variant\": []}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: UNCC Biomedical Semantic Question Answering Systems. BioASQ: Task-7B, Phase-B In this paper, we detail our submission to the 2019, 7th year, BioASQ competition. We present our approach for Task-7b, Phase B, Exact Answering Task. These Question Answering (QA) tasks include Factoid, Yes/No, List Type Question answering. Our system is based on a contextual word embedding model. We have used a Bidirectional Encoder Representations from Transformers(BERT) based system, fined tuned for biomedical question answering task using BioBERT. In the third test batch set, our system achieved the highest MRR score for Factoid Question Answering task. Also, for List type question answering task our system achieved the highest recall score in the fourth test batch set. Along with our detailed approach, we present the results for our submissions, and also highlight identified downsides for our current approach and ways to improve them in our future experiments. Introduction BioASQ is a biomedical document classification, document retrieval, and question answering competition, currently in its seventh year. We provide an overview of our submissions to semantic question answering task (7b, Phase B) of BioASQ 7 (except for 'ideal answer' test, in which we did not participate this year). In this task systems are provided with biomedical questions and are required to submit ideal and exact answers to those questions. We have used BioBERT BIBREF0 based system , see also Bidirectional Encoder Representations from Transformers(BERT) BIBREF1, and we fine tuned it for the biomedical question answering task. Our system scored near the top for factoid questions for all the batches of the challenge. More specifially, in the third test batch set, our system achieved highest ‘MRR’ score for Factoid Question Answering task. Also, for List-type question answering task our system achieved highest recall score in the fourth test batch set. Along with our detailed approach, we present the results for our submissions and also highlight identified downsides for our current approach and ways to improve them in our future experiments. In last test batch results we placed 4th for List-type questions and 3rd for Factoid-type questions.) The QA task is organized in two phases. Phase A deals with retrieval of the relevant document, snippets, concepts, and RDF triples, and phase B deals with exact and ideal answer generations (which is a paragraph size summary of snippets). Exact answer generation is required for factoid, list, and yes/no type question. BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank) which takes into account the ranks of returned answers. Answers for the list type question are evaluated using precision, recall, and F-measure. 
Related Work ::: A minimum background on BERT BERT stands for "Bidirectional Encoder Representations from Transformers" BIBREF1 is a contextual word embedding model. Given a sentence as an input, contextual embedding for the words are returned. The BERT model was designed so it can be fine tuned for 11 different tasks BIBREF1, including question answering tasks. For a question answering task, question and paragraph (context) are given as an input. A BERT standard is that question text and paragraph text are separated by a separator [Sep]. BERT question-answering fine tuning involves adding softmax layer. Softmax layer takes contextual word embeddings from BERT as input and learns to identity answer span present in the paragraph (context). This process is represented in Figure FIGREF4. BERT was originally trained to perform tasks such as language model creation using masked words and next-sentence-prediction. In other words BERT weights are learned such that context is used in building the representation of the word, not just as a loss function to help learn a context-independent representation. For detailed understanding of BERT Architecture, please refer to the original BERT paper BIBREF1. Related Work ::: A minimum background on BERT ::: Comparison of Word Embeddings and Contextual Word Embeddings A ‘word embedding’ is a learned representation. It is represented in the form of vector where words that have the same meaning have a similar vector representation. Consider a word embedding model 'word2vec' BIBREF6 trained on a corpus. Word embeddings generated from the model are context independent that is, word embeddings are returned regardless of where the words appear in a sentence and regardless of e.g. the sentiment of the sentence. However, contextual word embedding models like BERT also takes context of the word into consideration. Our Systems and Their Performance on Factoid Questions We have experimented with several systems and their variations, e.g. created by training with specific additional features (see next subsection). Here is their list and short descriptions. Unfortunately we did not pay attention to naming, and the systems evolved between test batches, so the overall picture can only be understood by looking at the details. When we started the experiments our objective was to see whether BioBERT and entailment-based techniques can provide value for in the context of biomedical question answering. The answer to both questions was a yes, qualified by many examples clearly showing the limitations of both methods. Therefore we tried to address some of these limitations using feature engineering with mixed results: some clear errors got corrected and new errors got introduced, without overall improvement but convincing us that in future experiments it might be worth trying feature engineering again especially if more training data were available. Overall we experimented with several approaches with the following aspects of the systems changing between batches, that is being absent or present: training on BioAsq data vs. training on SQuAD using the BioAsq snippets for context vs. using the documents from the provided URLs for context adding or not the LAT, i.e. lexical answer type, feature (see BIBREF9, BIBREF10 and an explanation in the subsection just below). For Yes/No questions (only) we experimented with the entailment methods. We will discuss the performance of these models below and in Section 6. 
But before we do that, let us discuss a feature engineering experiment which eventually produced mixed results, but where we feel it is potentially useful in future experiments. Our Systems and Their Performance on Factoid Questions ::: LAT Feature considered and its impact (slightly negative) ::: Assumptions and rules for deriving lexical answer type. There are different question types: ‘Which’, ‘What’, ‘When’, ‘How’ etc. Each type of question is being handled differently and there are commonalities among the rules written for different question types. Question words are identified through parts of speech tags: 'WDT', 'WRB' ,'WP'. We assumed that LAT is a ‘Noun’ and follows the question word. Often it was also a subject (nsubj). This process is illustrated in Fig.FIGREF15. LAT computation was governed by a few simple rules, e.g. when a question has multiple words that are 'Subjects’ (and ‘Noun’), a word that is in proximity to the question word is considered as ‘LAT’. These rules are different for each "Wh" word. Namely, when the word immediately following the question word is a Noun, window size is set to ‘3’. The window size ‘3’ means we iterate through the next ‘3’ words to check if any of the word is both Noun and Subject, If so, such word is considered the ‘LAT’; else the word that is present very next to the question word is considered as the ‘LAT’. For questions with words ‘Which’ , ‘What’, ‘When’; a Noun immediately following the question word is very often the LAT, e.g. 'enzyme' in Which enzyme is targeted by Evolocumab?. When the word immediately following the question word is not a Noun, e.g. in What is the function of the protein Magt1? the window size is set to ‘5’, and we iterate through the next ‘5’ words (if present) and search for the word that is both Noun and Subject. If present, the word is considered as the ‘LAT’; else, the Noun in close proximity to the question word and following it is returned as the ‘LAT’. For questions with question words: ‘When’, ‘Who’, ‘Why’, the ’LAT’ is a question word itself. For the word ‘How', e.g. in How many selenoproteins are encoded in the human genome?, we look at the adjective and if we find one, we take it to be the LAT, otherwise the word 'How' is considered as the ‘LAT’. Perhaps because of using only very simple rules, the accuracy for ‘LAT’ derivation is 75%; that is, in the remaining 25% of the cases the LAT word is identified incorrectly. Worth noting is that the overall performance the system that used LATs was slightly inferior to the system without LATs, but the types of errors changed. The training used BioBERT with the LAT feature as part of the input string. The errors it introduces usually involve finding the wrong element of the correct type e.g. wrong enzyme when two similar enzymes are described in the text, or 'neuron' when asked about a type of cell with a certain function, when the answer calls for a different cell category, adipocytes, and both are mentioned in the text. We feel with more data and additional tuning or perhaps using an ensemble model, we might be able to keep the correct answers, and improve the results on the confusing examples like the one mentioned above. Therefore if we improve our ‘LAT’ derivation logic, or have larger datasets, then perhaps the neural network techniques they will yield better results. 
Our Systems and Their Performance on Factoid Questions ::: Impact of Using Context from URLs (negative) In some experiments, for context in testing, we used documents for which URL pointers are provided in BioAsq. However, our system UNCC_QA3 underperformed our other system tested only on the provided snippets. In Batch 5 the underperformance was about 6% of MRR, compared to our best system UNCC_QA1, and by 9% to the top performer. Performance on Yes/No and List questions Our work focused on Factoid questions. But we also have done experiments on List-type and Yes/No questions. Performance on Yes/No and List questions ::: Entailment improves Yes/No accuracy We started by answering always YES (in batch 2 and 3) to get the baseline performance. For batch 4 we used entailment. Our algorithm was very simple: Given a question we iterate through the candidate sentences and try to find any candidate sentence is contradicting the question (with confidence over 50%), if so 'No' is returned as answer, else 'Yes' is returned. In batch 4 this strategy produced better than the BioAsq baseline performance, and compared to our other systems, the use of entailment increased the performance by about 13% (macro F1 score). We used 'AllenNlp' BIBREF13 entailment library to find entailment of the candidate sentences with question. Summary of our results ::: Factoid questions ::: Systems used in Batch 5 experiments System description for ‘UNCC_QA1’: The system was finetuned on the SQuAD 2.0. For data preprocessing Context / paragraph was generated from relevant snippets provided in the test data. System description for ‘QA1’ : ‘LAT’ feature was added and finetuned with SQuAD 2.0. For data preprocessing Context / paragraph was generated from relevant snippets provided in the test data. System description for ‘UNCC_QA3’ : Fine tuning process is same as it is done for the system ‘UNCC_QA1’ in test batch-5. Difference is during data preprocessing, Context/paragraph is generated from the relevant documents for which URLS are included in the test data. Summary of our results ::: List Questions For List-type questions, although post processing helped in the later batches, we never managed to obtain competitive precision, although our recall was good. Summary of our results ::: Yes/No questions The only thing worth remembering from our performance is that using entailment can have a measurable impact (at least with respect to a weak baseline). The results (weak) are in Table 3. APPENDIX In this appendix we provide additional details about the implementations. APPENDIX ::: Systems and their descriptions: ::: System description for QA1: We generated Context/paragraph by aggregating relevant snippets available in the test data and mapped it against the question text and question id. We ignored the content present in the documents (document URLS were provided in the original test data). The model is finetuned with BioASQ data. data preprocessing is done in the same way as it is done for test batch-1. Model fine tuned on BioASQ data. ‘LAT’/ Focus word feature added and fine tuned with SQuAD 2.0 [reference]. For data preprocessing Context / paragraph is generated from relevant snippets provided in the test data. APPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA3: System is finetuned on the SQuAD 2.0 [reference] and BioASQ dataset[].For data preprocessing Context / paragraph is generated from relevant snippets provided in the test data. 
The fine-tuning process is the same as for the system ‘UNCC_QA_1’ in test batch 5. The difference is that during data preprocessing, the context/paragraph is generated from the relevant documents for which URLs are included in the test data.

Figure 2: A simple way of finding the lexical answer types, LATs, of factoid questions: using POS tags to find the question word (e.g. ’which’), and a dependency parse to find the LAT within the window of 3 words. If a noun is not found near the ”Wh-” word, we iterate looking for it, as in the second panel.

Table 2: List Questions

Figure 6: Dependency parse visualization for the example ’how’ question.

Query: What dataset did they use?

JSON Response:
{"answer": "BioASQ dataset", "evidence": ["BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank) which takes into account the ranks of returned answers. Answers for the list type question are evaluated using precision, recall, and F-measure."]}
science.qasper_abstractive_qa.146.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nUNCC Biomedical Semantic Question Answering Systems. BioASQ: Task-7B, Phase-B\n\nIn this paper, we detail our submission to the 2019, 7th year, BioASQ competition. We present our approach for Task-7b, Phase B, Exact Answering Task. These Question Answering (QA) tasks include Factoid, Yes/No, List Type Question answering. Our system is based on a contextual word embedding model. We have used a Bidirectional Encoder Representations from Transformers(BERT) based system, fined tuned for biomedical question answering task using BioBERT. In the third test batch set, our system achieved the highest MRR score for Factoid Question Answering task. Also, for List type question answering task our system achieved the highest recall score in the fourth test batch set. Along with our detailed approach, we present the results for our submissions, and also highlight identified downsides for our current approach and ways to improve them in our future experiments.\n\nIntroduction\nBioASQ is a biomedical document classification, document retrieval, and question answering competition, currently in its seventh year. We provide an overview of our submissions to semantic question answering task (7b, Phase B) of BioASQ 7 (except for 'ideal answer' test, in which we did not participate this year). In this task systems are provided with biomedical questions and are required to submit ideal and exact answers to those questions. We have used BioBERT BIBREF0 based system , see also Bidirectional Encoder Representations from Transformers(BERT) BIBREF1, and we fine tuned it for the biomedical question answering task. Our system scored near the top for factoid questions for all the batches of the challenge. More specifially, in the third test batch set, our system achieved highest ‘MRR’ score for Factoid Question Answering task. Also, for List-type question answering task our system achieved highest recall score in the fourth test batch set. Along with our detailed approach, we present the results for our submissions and also highlight identified downsides for our current approach and ways to improve them in our future experiments. In last test batch results we placed 4th for List-type questions and 3rd for Factoid-type questions.)\nThe QA task is organized in two phases. Phase A deals with retrieval of the relevant document, snippets, concepts, and RDF triples, and phase B deals with exact and ideal answer generations (which is a paragraph size summary of snippets). Exact answer generation is required for factoid, list, and yes/no type question.\nBioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank) which takes into account the ranks of returned answers. 
Answers for the list type question are evaluated using precision, recall, and F-measure.\n\nRelated Work ::: A minimum background on BERT\nBERT stands for \"Bidirectional Encoder Representations from Transformers\" BIBREF1 is a contextual word embedding model. Given a sentence as an input, contextual embedding for the words are returned. The BERT model was designed so it can be fine tuned for 11 different tasks BIBREF1, including question answering tasks. For a question answering task, question and paragraph (context) are given as an input. A BERT standard is that question text and paragraph text are separated by a separator [Sep]. BERT question-answering fine tuning involves adding softmax layer. Softmax layer takes contextual word embeddings from BERT as input and learns to identity answer span present in the paragraph (context). This process is represented in Figure FIGREF4.\nBERT was originally trained to perform tasks such as language model creation using masked words and next-sentence-prediction. In other words BERT weights are learned such that context is used in building the representation of the word, not just as a loss function to help learn a context-independent representation. For detailed understanding of BERT Architecture, please refer to the original BERT paper BIBREF1.\n\nRelated Work ::: A minimum background on BERT ::: Comparison of Word Embeddings and Contextual Word Embeddings\nA ‘word embedding’ is a learned representation. It is represented in the form of vector where words that have the same meaning have a similar vector representation. Consider a word embedding model 'word2vec' BIBREF6 trained on a corpus. Word embeddings generated from the model are context independent that is, word embeddings are returned regardless of where the words appear in a sentence and regardless of e.g. the sentiment of the sentence. However, contextual word embedding models like BERT also takes context of the word into consideration.\n\nOur Systems and Their Performance on Factoid Questions\nWe have experimented with several systems and their variations, e.g. created by training with specific additional features (see next subsection). Here is their list and short descriptions. Unfortunately we did not pay attention to naming, and the systems evolved between test batches, so the overall picture can only be understood by looking at the details.\nWhen we started the experiments our objective was to see whether BioBERT and entailment-based techniques can provide value for in the context of biomedical question answering. The answer to both questions was a yes, qualified by many examples clearly showing the limitations of both methods. Therefore we tried to address some of these limitations using feature engineering with mixed results: some clear errors got corrected and new errors got introduced, without overall improvement but convincing us that in future experiments it might be worth trying feature engineering again especially if more training data were available.\nOverall we experimented with several approaches with the following aspects of the systems changing between batches, that is being absent or present:\ntraining on BioAsq data vs. training on SQuAD\nusing the BioAsq snippets for context vs. using the documents from the provided URLs for context\nadding or not the LAT, i.e. 
lexical answer type, feature (see BIBREF9, BIBREF10 and an explanation in the subsection just below).\nFor Yes/No questions (only) we experimented with the entailment methods.\nWe will discuss the performance of these models below and in Section 6. But before we do that, let us discuss a feature engineering experiment which eventually produced mixed results, but where we feel it is potentially useful in future experiments.\n\nOur Systems and Their Performance on Factoid Questions ::: LAT Feature considered and its impact (slightly negative) ::: Assumptions and rules for deriving lexical answer type.\nThere are different question types: ‘Which’, ‘What’, ‘When’, ‘How’ etc. Each type of question is being handled differently and there are commonalities among the rules written for different question types. Question words are identified through parts of speech tags: 'WDT', 'WRB' ,'WP'. We assumed that LAT is a ‘Noun’ and follows the question word. Often it was also a subject (nsubj). This process is illustrated in Fig.FIGREF15.\nLAT computation was governed by a few simple rules, e.g. when a question has multiple words that are 'Subjects’ (and ‘Noun’), a word that is in proximity to the question word is considered as ‘LAT’. These rules are different for each \"Wh\" word.\nNamely, when the word immediately following the question word is a Noun, window size is set to ‘3’. The window size ‘3’ means we iterate through the next ‘3’ words to check if any of the word is both Noun and Subject, If so, such word is considered the ‘LAT’; else the word that is present very next to the question word is considered as the ‘LAT’.\nFor questions with words ‘Which’ , ‘What’, ‘When’; a Noun immediately following the question word is very often the LAT, e.g. 'enzyme' in Which enzyme is targeted by Evolocumab?. When the word immediately following the question word is not a Noun, e.g. in What is the function of the protein Magt1? the window size is set to ‘5’, and we iterate through the next ‘5’ words (if present) and search for the word that is both Noun and Subject. If present, the word is considered as the ‘LAT’; else, the Noun in close proximity to the question word and following it is returned as the ‘LAT’.\nFor questions with question words: ‘When’, ‘Who’, ‘Why’, the ’LAT’ is a question word itself. For the word ‘How', e.g. in How many selenoproteins are encoded in the human genome?, we look at the adjective and if we find one, we take it to be the LAT, otherwise the word 'How' is considered as the ‘LAT’.\nPerhaps because of using only very simple rules, the accuracy for ‘LAT’ derivation is 75%; that is, in the remaining 25% of the cases the LAT word is identified incorrectly. Worth noting is that the overall performance the system that used LATs was slightly inferior to the system without LATs, but the types of errors changed. The training used BioBERT with the LAT feature as part of the input string. The errors it introduces usually involve finding the wrong element of the correct type e.g. wrong enzyme when two similar enzymes are described in the text, or 'neuron' when asked about a type of cell with a certain function, when the answer calls for a different cell category, adipocytes, and both are mentioned in the text. We feel with more data and additional tuning or perhaps using an ensemble model, we might be able to keep the correct answers, and improve the results on the confusing examples like the one mentioned above. 
Therefore if we improve our ‘LAT’ derivation logic, or have larger datasets, then perhaps the neural network techniques they will yield better results.\n\nOur Systems and Their Performance on Factoid Questions ::: Impact of Using Context from URLs (negative)\nIn some experiments, for context in testing, we used documents for which URL pointers are provided in BioAsq. However, our system UNCC_QA3 underperformed our other system tested only on the provided snippets.\nIn Batch 5 the underperformance was about 6% of MRR, compared to our best system UNCC_QA1, and by 9% to the top performer.\n\nPerformance on Yes/No and List questions\nOur work focused on Factoid questions. But we also have done experiments on List-type and Yes/No questions.\n\nPerformance on Yes/No and List questions ::: Entailment improves Yes/No accuracy\nWe started by answering always YES (in batch 2 and 3) to get the baseline performance. For batch 4 we used entailment. Our algorithm was very simple: Given a question we iterate through the candidate sentences and try to find any candidate sentence is contradicting the question (with confidence over 50%), if so 'No' is returned as answer, else 'Yes' is returned. In batch 4 this strategy produced better than the BioAsq baseline performance, and compared to our other systems, the use of entailment increased the performance by about 13% (macro F1 score). We used 'AllenNlp' BIBREF13 entailment library to find entailment of the candidate sentences with question.\n\nSummary of our results ::: Factoid questions ::: Systems used in Batch 5 experiments\nSystem description for ‘UNCC_QA1’: The system was finetuned on the SQuAD 2.0. For data preprocessing Context / paragraph was generated from relevant snippets provided in the test data.\nSystem description for ‘QA1’ : ‘LAT’ feature was added and finetuned with SQuAD 2.0. For data preprocessing Context / paragraph was generated from relevant snippets provided in the test data.\nSystem description for ‘UNCC_QA3’ : Fine tuning process is same as it is done for the system ‘UNCC_QA1’ in test batch-5. Difference is during data preprocessing, Context/paragraph is generated from the relevant documents for which URLS are included in the test data.\n\nSummary of our results ::: List Questions\nFor List-type questions, although post processing helped in the later batches, we never managed to obtain competitive precision, although our recall was good.\n\nSummary of our results ::: Yes/No questions\nThe only thing worth remembering from our performance is that using entailment can have a measurable impact (at least with respect to a weak baseline). The results (weak) are in Table 3.\n\nAPPENDIX\nIn this appendix we provide additional details about the implementations.\n\nAPPENDIX ::: Systems and their descriptions: ::: System description for QA1:\nWe generated Context/paragraph by aggregating relevant snippets available in the test data and mapped it against the question text and question id. We ignored the content present in the documents (document URLS were provided in the original test data). The model is finetuned with BioASQ data.\ndata preprocessing is done in the same way as it is done for test batch-1. Model fine tuned on BioASQ data.\n‘LAT’/ Focus word feature added and fine tuned with SQuAD 2.0 [reference]. 
For data preprocessing Context / paragraph is generated from relevant snippets provided in the test data.\n\nAPPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA3:\nSystem is finetuned on the SQuAD 2.0 [reference] and BioASQ dataset[].For data preprocessing Context / paragraph is generated from relevant snippets provided in the test data.\nFine tuning process is same as it is done for the system ‘UNCC_QA_1’ in test batch-5. Difference is during data preprocessing, Context/paragraph is generated form from the relevant documents for which URLS are included in the test data.\n\nFigure 2: A simple way of finding the lexical answer types, LATs, of factoid questions: using POS tags to find the question word (e.g. ’which’), and a dependency parse to find the LAT within the window of 3 words. If a noun is not found near the ”Wh-” word, we iterate looking for it, as in the second panel.\n\nTable 2: List Questions\n\nFigure 6: Dependency parse visualization for the example ’how’ question.\n\nQuery: What dataset did they use?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"BioASQ dataset\", \"evidence\": [\"BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank) which takes into account the ranks of returned answers. Answers for the list type question are evaluated using precision, recall, and F-measure.\"]}", "role": "assistant" } ]
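The BioASQ excerpt above describes the core mechanic of the UNCC systems: a BERT-family model takes a question plus a snippet-built context, and a span-prediction head picks the answer tokens. The sketch below shows that step with the Hugging Face transformers API. It is illustrative only: the checkpoint name is a stand-in (a BioBERT model fine-tuned on SQuAD would be substituted in practice), and the context sentence is invented for the example.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: placeholder for a BioBERT QA checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)

question = "Which enzyme is targeted by Evolocumab?"
context = "Evolocumab is a monoclonal antibody that inhibits the enzyme PCSK9."  # invented context

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The span head produces one start logit and one end logit per token;
# the answer is taken as the highest-scoring start/end pair (greedy argmax here).
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```

Without task-specific fine-tuning the span head is randomly initialized, so the printed answer only becomes meaningful once the model has been fine-tuned on SQuAD and/or BioASQ data, as the excerpt describes.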
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory). The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame. The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers, relatively free word order, and frequent omissions of arguments). (ii) High quality translation via word sense disambiguation and accurate word order generation of the target language. (iii) Rapid system development and porting to new domains via knowledge-based automated acquisition of grammars. Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document.
Output Entities: {"Generic": ["core modules"], "Material": ["verb final language", "Korean newspaper articles", "Korean", "missiles and chemical biological warfare"], "Method": ["knowledge-based automated acquisition of grammars", "language neutral meaning representation", "word sense disambiguation", "CCLINC (Common Coalition Language System at Lincoln Laboratory)", "Korean-to-English machine translation system"], "Metric": [], "OtherScientificTerm": ["semantic frame", "overt case markers"], "Task": ["word order generation", "translation", "parsing of Korean"]}
science.scierc_ner.20
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nAt MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory). The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame. The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers, relatively free word order, and frequent omissions of arguments). (ii) High quality translation via word sense disambiguation and accurate word order generation of the target language. (iii) Rapid system development and porting to new domains via knowledge-based automated acquisition of grammars. Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [\"core modules\"], \"Material\": [\"verb final language\", \"Korean newspaper articles\", \"Korean\", \"missiles and chemical biological warfare\"], \"Method\": [\"knowledge-based automated acquisition of grammars\", \"language neutral meaning representation\", \"word sense disambiguation\", \"CCLINC (Common Coalition Language System at Lincoln Laboratory)\", \"Korean-to-English machine translation system\"], \"Metric\": [], \"OtherScientificTerm\": [\"semantic frame\", \"overt case markers\"], \"Task\": [\"word order generation\", \"translation\", \"parsing of Korean\"]}", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Quinine-induced arrhythmia in a patient with severe malaria. It was reported that there was a case of severe malaria patient with jaundice who presented with arrhythmia (premature ventricular contraction) while getting quinine infusion was reported. A man, 25 years old, was admitted to hospital with high fever, chill, vomiting, jaundice. The patient was fully conscious, blood pressure 120/80 mmHg, pulse rate 100 x/minute, regular. On admission, laboratory examination showed Plasmodium falciparum (++++), total bilirubin 8.25 mg/dL, conjugated bilirubin 4.36 mg/dL, unconjugated bilirubin 3.89 mg/dL, potassium 3.52 meq/L Patient was diagnosed as severe malaria with jaundice and got quinine infusion in dextrose 5% 500 mg/8 hour. On the second day the patient had vomitus, diarrhea, tinnitus, loss of hearing. After 30 hours of quinine infusion the patient felt palpitation and electrocardiography (ECG) recording showed premature ventricular contraction (PVC) > 5 x/minute, trigemini, constant type--sinoatrial block, positive U wave. He was treated with lidocaine 50 mg intravenously followed by infusion 1500 mg in dextrose 5%/24 hour and potassium aspartate tablet. Quinine infusion was discontinued and changed with sulfate quinine tablets. Three hours later the patient felt better, the frequency of PVC reduced to 4 - 5 x/minute and on the third day ECG was normal, potassium level was 3.34 meq/L. He was discharged on 7th day in good condition. Quinine, like quinidine, is a chincona alkaloid that has anti-arrhythmic property, although it also pro-arrhythmic that can cause various arrhythmias, including severe arrhythmia such as multiple PVC. Administration of parenteral quinine must be done carefully and with good observation because of its pro-arrhythmic effect, especially in older patients who have heart diseases or patients with electrolyte disorder (hypokalemia) which frequently occurs due to vomiting and or diarrhea in malaria cases.
{"CellLine": [], "Chemical": ["lidocaine", "bilirubin", "potassium", "potassium aspartate", "quinine", "dextrose", "quinidine", "Quinine", "chincona alkaloid"], "Disease": ["severe malaria", "heart diseases", "premature ventricular contraction", "sinoatrial block", "electrolyte disorder", "arrhythmia", "tinnitus", "PVC", "arrhythmic", "jaundice", "vomitus", "loss of hearing", "arrhythmias", "fever", "palpitation", "malaria", "chill", "diarrhea", "vomiting", "hypokalemia"], "Gene": [], "Species": ["patient", "Patient", "patients", "Plasmodium falciparum", "man"], "Variant": []}
science.biored_ner.86
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nQuinine-induced arrhythmia in a patient with severe malaria. It was reported that there was a case of severe malaria patient with jaundice who presented with arrhythmia (premature ventricular contraction) while getting quinine infusion was reported. A man, 25 years old, was admitted to hospital with high fever, chill, vomiting, jaundice. The patient was fully conscious, blood pressure 120/80 mmHg, pulse rate 100 x/minute, regular. On admission, laboratory examination showed Plasmodium falciparum (++++), total bilirubin 8.25 mg/dL, conjugated bilirubin 4.36 mg/dL, unconjugated bilirubin 3.89 mg/dL, potassium 3.52 meq/L Patient was diagnosed as severe malaria with jaundice and got quinine infusion in dextrose 5% 500 mg/8 hour. On the second day the patient had vomitus, diarrhea, tinnitus, loss of hearing. After 30 hours of quinine infusion the patient felt palpitation and electrocardiography (ECG) recording showed premature ventricular contraction (PVC) > 5 x/minute, trigemini, constant type--sinoatrial block, positive U wave. He was treated with lidocaine 50 mg intravenously followed by infusion 1500 mg in dextrose 5%/24 hour and potassium aspartate tablet. Quinine infusion was discontinued and changed with sulfate quinine tablets. Three hours later the patient felt better, the frequency of PVC reduced to 4 - 5 x/minute and on the third day ECG was normal, potassium level was 3.34 meq/L. He was discharged on 7th day in good condition. Quinine, like quinidine, is a chincona alkaloid that has anti-arrhythmic property, although it also pro-arrhythmic that can cause various arrhythmias, including severe arrhythmia such as multiple PVC. Administration of parenteral quinine must be done carefully and with good observation because of its pro-arrhythmic effect, especially in older patients who have heart diseases or patients with electrolyte disorder (hypokalemia) which frequently occurs due to vomiting and or diarrhea in malaria cases.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"lidocaine\", \"bilirubin\", \"potassium\", \"potassium aspartate\", \"quinine\", \"dextrose\", \"quinidine\", \"Quinine\", \"chincona alkaloid\"], \"Disease\": [\"severe malaria\", \"heart diseases\", \"premature ventricular contraction\", \"sinoatrial block\", \"electrolyte disorder\", \"arrhythmia\", \"tinnitus\", \"PVC\", \"arrhythmic\", \"jaundice\", \"vomitus\", \"loss of hearing\", \"arrhythmias\", \"fever\", \"palpitation\", \"malaria\", \"chill\", \"diarrhea\", \"vomiting\", \"hypokalemia\"], \"Gene\": [], \"Species\": [\"patient\", \"Patient\", \"patients\", \"Plasmodium falciparum\", \"man\"], \"Variant\": []}", "role": "assistant" } ]
science.discomat_te
| Sample No. | T g (degC) | T c (degC) | DT (degC) | DH (J/g) | | 1 | 221 | - | - | - | | 2 | 223 | 316 | 93 | 8.0 | | 3 | 220 | 295 | 75 | 9.7 | | 4 | 216 | 289 | 73 | 13.7 | | 5 | 211 | 280 | 69 | 17.1 | | 6 | 191 | 268 | 77 | 20.4 | | 7 | 173 | 265 | 92 | 55.8 | | 8 | 185 | 273 | 88 | 4.2 | | 9 | 220 | 295 | 73 | 13.7 | | 10 | 226 | 325 | 99 | 28.1 | Caption: DSC results of glasses in As2Se3-GeTe-CuI system Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "1", "type": "Other"} {"value": "221", "type": "Other"} {"value": "2", "type": "Other"} {"value": "223", "type": "Other"} {"value": "316", "type": "Other"} {"value": "93", "type": "Other"} {"value": "8.0", "type": "Other"} {"value": "3", "type": "Other"} {"value": "220", "type": "Other"} {"value": "295", "type": "Other"} {"value": "75", "type": "Other"} {"value": "9.7", "type": "Other"} {"value": "4", "type": "Other"} {"value": "216", "type": "Other"} {"value": "289", "type": "Other"} {"value": "73", "type": "Other"} {"value": "13.7", "type": "Other"} {"value": "5", "type": "Other"} {"value": "211", "type": "Other"} {"value": "280", "type": "Other"} {"value": "69", "type": "Other"} {"value": "17.1", "type": "Other"} {"value": "6", "type": "Other"} {"value": "191", "type": "Other"} {"value": "268", "type": "Other"} {"value": "77", "type": "Other"} {"value": "20.4", "type": "Other"} {"value": "7", "type": "Other"} {"value": "173", "type": "Other"} {"value": "265", "type": "Other"} {"value": "92", "type": "Other"} {"value": "55.8", "type": "Other"} {"value": "8", "type": "Other"} {"value": "185", "type": "Other"} {"value": "273", "type": "Other"} {"value": "88", "type": "Other"} {"value": "4.2", "type": "Other"} {"value": "9", "type": "Other"} {"value": "220", "type": "Other"} {"value": "295", "type": "Other"} {"value": "73", "type": "Other"} {"value": "13.7", "type": "Other"} {"value": "10", "type": "Other"} {"value": "226", "type": "Other"} {"value": "325", "type": "Other"} {"value": "99", "type": "Other"} {"value": "28.1", "type": "Other"}
science.discomat_te.428
[ { "content": "| Sample No. | T g (degC) | T c (degC) | DT (degC) | DH (J/g) |\n| 1 | 221 | - | - | - |\n| 2 | 223 | 316 | 93 | 8.0 |\n| 3 | 220 | 295 | 75 | 9.7 |\n| 4 | 216 | 289 | 73 | 13.7 |\n| 5 | 211 | 280 | 69 | 17.1 |\n| 6 | 191 | 268 | 77 | 20.4 |\n| 7 | 173 | 265 | 92 | 55.8 |\n| 8 | 185 | 273 | 88 | 4.2 |\n| 9 | 220 | 295 | 73 | 13.7 |\n| 10 | 226 | 325 | 99 | 28.1 |\n\nCaption: DSC results of glasses in As2Se3-GeTe-CuI system\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"221\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"223\", \"type\": \"Other\"}\n{\"value\": \"316\", \"type\": \"Other\"}\n{\"value\": \"93\", \"type\": \"Other\"}\n{\"value\": \"8.0\", \"type\": \"Other\"}\n{\"value\": \"3\", \"type\": \"Other\"}\n{\"value\": \"220\", \"type\": \"Other\"}\n{\"value\": \"295\", \"type\": \"Other\"}\n{\"value\": \"75\", \"type\": \"Other\"}\n{\"value\": \"9.7\", \"type\": \"Other\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"216\", \"type\": \"Other\"}\n{\"value\": \"289\", \"type\": \"Other\"}\n{\"value\": \"73\", \"type\": \"Other\"}\n{\"value\": \"13.7\", \"type\": \"Other\"}\n{\"value\": \"5\", \"type\": \"Other\"}\n{\"value\": \"211\", \"type\": \"Other\"}\n{\"value\": \"280\", \"type\": \"Other\"}\n{\"value\": \"69\", \"type\": \"Other\"}\n{\"value\": \"17.1\", \"type\": \"Other\"}\n{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"191\", \"type\": \"Other\"}\n{\"value\": \"268\", \"type\": \"Other\"}\n{\"value\": \"77\", \"type\": \"Other\"}\n{\"value\": \"20.4\", \"type\": \"Other\"}\n{\"value\": \"7\", \"type\": \"Other\"}\n{\"value\": \"173\", \"type\": \"Other\"}\n{\"value\": \"265\", \"type\": \"Other\"}\n{\"value\": \"92\", \"type\": \"Other\"}\n{\"value\": \"55.8\", \"type\": \"Other\"}\n{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"185\", \"type\": \"Other\"}\n{\"value\": \"273\", \"type\": \"Other\"}\n{\"value\": \"88\", \"type\": \"Other\"}\n{\"value\": \"4.2\", \"type\": \"Other\"}\n{\"value\": \"9\", \"type\": \"Other\"}\n{\"value\": \"220\", \"type\": \"Other\"}\n{\"value\": \"295\", \"type\": \"Other\"}\n{\"value\": \"73\", \"type\": \"Other\"}\n{\"value\": \"13.7\", \"type\": \"Other\"}\n{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"226\", \"type\": \"Other\"}\n{\"value\": \"325\", \"type\": \"Other\"}\n{\"value\": \"99\", \"type\": \"Other\"}\n{\"value\": \"28.1\", \"type\": \"Other\"}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND:: Aloe vera is a medicinal herb used as an anti-inflammatory and sedative agent. ABSTRACT.OBJECTIVES:: The current study aimed to evaluate the effect of Aloe vera aqueous extract on morphine withdrawal symptoms in morphine-dependent female rats. ABSTRACT.PATIENTS AND METHODS:: The current research was performed on 40 female Wista-Albino rats which were made dependent on morphine using Houshyar protocol and were randomly divided into five groups (A, B, C, D, and E). Group A did not receive any agent in the period of handling but other groups (B, C, D and E) received 5, 10, 20 and 40 mg/kg of Aloe vera aqueous extract by gavage, three times daily for a week, respectively. Withdrawal symptoms, stool form, agitation, disparity, floppy eyelids, and body mass variations were checked for 10 days. The obtained data were analyzed using SPSS v.11 software, and Friedman, Kruskal-Wallis, and Mann-Whitney statistical tests. Statistical difference was considered significant (P < 0.05). ABSTRACT.RESULTS:: The results of the present study showed that agitation, disparity, and floppy eyelids in group E were significantly higher than those of others groups; however, these symptoms in group C were significantly lower than those of the other groups. ABSTRACT.CONCLUSIONS:: The results of the present study revealed that the Aloe vera aqueous extract had various effects on morphine withdrawal syndrome in morphine-dependent female rats . Extracted Data:
[["Aloe vera aqueous extract 40 mg/kg ", "Control", "Agitation", "significantly increased", "The results of the present study showed that agitation, disparity, and floppy eyelids in group E were significantly higher than those of others groups;"], ["Aloe vera aqueous extract 40 mg/kg ", "Control", "Floppy eyelids", "significantly increased", "The results of the present study showed that agitation, disparity, and floppy eyelids in group E were significantly higher than those of others groups"]]
science.evidence_inference.173
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND:: Aloe vera is a medicinal herb used as an anti-inflammatory and sedative agent.\n\nABSTRACT.OBJECTIVES:: The current study aimed to evaluate the effect of Aloe vera aqueous extract on morphine withdrawal symptoms in morphine-dependent female rats.\n\nABSTRACT.PATIENTS AND METHODS:: The current research was performed on 40 female Wista-Albino rats which were made dependent on morphine using Houshyar protocol and were randomly divided into five groups (A, B, C, D, and E). Group A did not receive any agent in the period of handling but other groups (B, C, D and E) received 5, 10, 20 and 40 mg/kg of Aloe vera aqueous extract by gavage, three times daily for a week, respectively. Withdrawal symptoms, stool form, agitation, disparity, floppy eyelids, and body mass variations were checked for 10 days. The obtained data were analyzed using SPSS v.11 software, and Friedman, Kruskal-Wallis, and Mann-Whitney statistical tests. Statistical difference was considered significant (P < 0.05).\n\nABSTRACT.RESULTS:: The results of the present study showed that agitation, disparity, and floppy eyelids in group E were significantly higher than those of others groups; however, these symptoms in group C were significantly lower than those of the other groups.\n\nABSTRACT.CONCLUSIONS:: The results of the present study revealed that the Aloe vera aqueous extract had various effects on morphine withdrawal syndrome in morphine-dependent female rats .\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Aloe vera aqueous extract 40 mg/kg \", \"Control\", \"Agitation\", \"significantly increased\", \"The results of the present study showed that agitation, disparity, and floppy eyelids in group E were significantly higher than those of others groups;\"], [\"Aloe vera aqueous extract 40 mg/kg \", \"Control\", \"Floppy eyelids\", \"significantly increased\", \"The results of the present study showed that agitation, disparity, and floppy eyelids in group E were significantly higher than those of others groups\"]]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: On the pivotal role of PPARa in adaptation of the heart to hypoxia and why fat in the diet increases hypoxic injury. The role of peroxisome proliferator-activated receptor a (PPARa)-mediated metabolic remodeling in cardiac adaptation to hypoxia has yet to be defined. Here, mice were housed in hypoxia for 3 wk before in vivo contractile function was measured using cine MRI. In isolated, perfused hearts, energetics were measured using (31)P magnetic resonance spectroscopy (MRS), and glycolysis and fatty acid oxidation were measured using [(3)H] labeling. Compared with a normoxic, chow-fed control mouse heart, hypoxia decreased PPARa expression, fatty acid oxidation, and mitochondrial uncoupling protein 3 (UCP3) levels, while increasing glycolysis, all of which served to maintain normal ATP concentrations ([ATP]) and thereby, ejection fractions. A high-fat diet increased cardiac PPARa expression, fatty acid oxidation, and UCP3 levels with decreased glycolysis. Hypoxia was unable to alter the high PPARa expression or reverse the metabolic changes caused by the high-fat diet, with the result that [ATP] and contractile function decreased significantly. The adaptive metabolic changes caused by hypoxia in control mouse hearts were found to have occurred already in PPARa-deficient (PPARa(-/-)) mouse hearts and sustained function in hypoxia despite an inability for further metabolic remodeling. We conclude that decreased cardiac PPARa expression is essential for adaptive metabolic remodeling in hypoxia, but is prevented by dietary fat.-Cole, M. A., Abd Jamil, A. H., Heather, L. C., Murray, A. J., Sutton, E. R., Slingo, M., Sebag-Montefiore, L., Tan, S. C., Aksentijevic, D., Gildea, O. S., Stuckey, D. J., Yeoh, K. K., Carr, C. A., Evans, R. D., Aasum, E., Schofield, C. J., Ratcliffe, P. J., Neubauer, S., Robbins, P. A., Clarke, K. On the pivotal role of PPARa in adaptation of the heart to hypoxia and why fat in the diet increases hypoxic injury.
{"CellLine": [], "Chemical": ["(3)H", "fatty acid", "ATP"], "Disease": ["hypoxia", "Hypoxia", "hypoxic"], "Gene": ["peroxisome proliferator-activated receptor a", "UCP3", "PPARa", "mitochondrial uncoupling protein 3"], "Species": ["mice", "mouse"], "Variant": []}
science.biored_ner.44
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nOn the pivotal role of PPARa in adaptation of the heart to hypoxia and why fat in the diet increases hypoxic injury. The role of peroxisome proliferator-activated receptor a (PPARa)-mediated metabolic remodeling in cardiac adaptation to hypoxia has yet to be defined. Here, mice were housed in hypoxia for 3 wk before in vivo contractile function was measured using cine MRI. In isolated, perfused hearts, energetics were measured using (31)P magnetic resonance spectroscopy (MRS), and glycolysis and fatty acid oxidation were measured using [(3)H] labeling. Compared with a normoxic, chow-fed control mouse heart, hypoxia decreased PPARa expression, fatty acid oxidation, and mitochondrial uncoupling protein 3 (UCP3) levels, while increasing glycolysis, all of which served to maintain normal ATP concentrations ([ATP]) and thereby, ejection fractions. A high-fat diet increased cardiac PPARa expression, fatty acid oxidation, and UCP3 levels with decreased glycolysis. Hypoxia was unable to alter the high PPARa expression or reverse the metabolic changes caused by the high-fat diet, with the result that [ATP] and contractile function decreased significantly. The adaptive metabolic changes caused by hypoxia in control mouse hearts were found to have occurred already in PPARa-deficient (PPARa(-/-)) mouse hearts and sustained function in hypoxia despite an inability for further metabolic remodeling. We conclude that decreased cardiac PPARa expression is essential for adaptive metabolic remodeling in hypoxia, but is prevented by dietary fat.-Cole, M. A., Abd Jamil, A. H., Heather, L. C., Murray, A. J., Sutton, E. R., Slingo, M., Sebag-Montefiore, L., Tan, S. C., Aksentijevic, D., Gildea, O. S., Stuckey, D. J., Yeoh, K. K., Carr, C. A., Evans, R. D., Aasum, E., Schofield, C. J., Ratcliffe, P. J., Neubauer, S., Robbins, P. A., Clarke, K. On the pivotal role of PPARa in adaptation of the heart to hypoxia and why fat in the diet increases hypoxic injury.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"(3)H\", \"fatty acid\", \"ATP\"], \"Disease\": [\"hypoxia\", \"Hypoxia\", \"hypoxic\"], \"Gene\": [\"peroxisome proliferator-activated receptor a\", \"UCP3\", \"PPARa\", \"mitochondrial uncoupling protein 3\"], \"Species\": [\"mice\", \"mouse\"], \"Variant\": []}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: This approach is similar to Nothman et al (2008) <cite>[4]</cite> to generate the NER data from wikilinks.
["Similarities"]
science.multicite_intent_classification.1016
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThis approach is similar to Nothman et al (2008) <cite>[4]</cite> to generate the NER data from wikilinks.", "role": "user" }, { "content": "[\"Similarities\"]", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: We follow the same evaluation procedure as Kiros et al. (2015) and<cite> Hill et al. (2016)</cite> , using 10-fold cross-validation when a train and test split is not pre-defined for a given data set.
["Similarities", "Uses"]
science.multicite_intent_classification.460
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWe follow the same evaluation procedure as Kiros et al. (2015) and<cite> Hill et al. (2016)</cite> , using 10-fold cross-validation when a train and test split is not pre-defined for a given data set.", "role": "user" }, { "content": "[\"Similarities\", \"Uses\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: In research with long-term follow-up and repeated measurements, quick and complete response to questionnaires helps ensure a study's validity, precision and efficiency. Evidence on the effect of non-monetary incentives on response rates in observational longitudinal research is scarce. ABSTRACT.OBJECTIVES: To study the impact of two strategies to enhance completeness and efficiency in observational cohort studies with follow-up durations of around 2 years. ABSTRACT.METHOD AND INTERVENTION: In a factorial design, 771 children between 2 and 5 years old and their parents participating in a prospective cohort study were randomized to three intervention groups and a control group. Three types of lotteries were run: (i) daytrip tickets for the whole family to a popular amusement park if they returned all postal questionnaires, (ii) €12.50-worth gift vouchers for sending back the questionnaire on time after each questionnaire round and (iii) a combination of (i) and (ii). ABSTRACT.MAIN OUTCOME MEASURES: Primary outcome was the proportion of participants who returned all questionnaires without any reminder. Secondary outcomes were '100% returned with or without reminder', 'probability of 100% non-response', 'probability of withdrawal', 'proportion of returned questionnaires' and 'overall number of reminders sent'. ABSTRACT.STATISTICAL ANALYSIS: After testing for interaction between the two lottery interventions, the two trials were analysed separately. We calculated risk differences (RD) and numbers needed to "treat" and their 95% confidence intervals. ABSTRACT.RESULTS: Daytrip nor voucher intervention had an effect on the proportion of participants who returned all questionnaires (RD −0.01; 95% CI-0.07 – 0.06) and (RD 0.02; 95% CI-0.50 – 0.08), respectively. No effects were found on the secondary outcomes. ABSTRACT.CONCLUSION: Our findings do not support the idea that lottery-style incentives lead to more complete response to postal questionnaires in observational cohort studies with repeated data collection and follow-up durations of around 2 years. Extracted Data:
[["(i) daytrip tickets for the whole family to a popular amusement park if they returned all postal questionnaires, (ii) \u20ac12.50-worth gift vouchers for sending back the questionnaire on time after each questionnaire round and (iii) a combination of (i) and (ii)", "contol", "the proportion of participants who returned all questionnaires", "no significant difference", "Daytrip nor voucher intervention had an effect on the proportion of participants who returned all questionnaires (RD \u22120.01; 95% CI-0.07 \u2013 0.06) and (RD 0.02; 95% CI-0.50 \u2013 0.08), respectively."]]
science.evidence_inference.196
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: In research with long-term follow-up and repeated measurements, quick and complete response to questionnaires helps ensure a study's validity, precision and efficiency. Evidence on the effect of non-monetary incentives on response rates in observational longitudinal research is scarce.\n\nABSTRACT.OBJECTIVES: To study the impact of two strategies to enhance completeness and efficiency in observational cohort studies with follow-up durations of around 2 years.\n\nABSTRACT.METHOD AND INTERVENTION: In a factorial design, 771 children between 2 and 5 years old and their parents participating in a prospective cohort study were randomized to three intervention groups and a control group. Three types of lotteries were run: (i) daytrip tickets for the whole family to a popular amusement park if they returned all postal questionnaires, (ii) €12.50-worth gift vouchers for sending back the questionnaire on time after each questionnaire round and (iii) a combination of (i) and (ii).\n\nABSTRACT.MAIN OUTCOME MEASURES: Primary outcome was the proportion of participants who returned all questionnaires without any reminder. Secondary outcomes were '100% returned with or without reminder', 'probability of 100% non-response', 'probability of withdrawal', 'proportion of returned questionnaires' and 'overall number of reminders sent'.\n\nABSTRACT.STATISTICAL ANALYSIS: After testing for interaction between the two lottery interventions, the two trials were analysed separately. We calculated risk differences (RD) and numbers needed to \"treat\" and their 95% confidence intervals.\n\nABSTRACT.RESULTS: Daytrip nor voucher intervention had an effect on the proportion of participants who returned all questionnaires (RD −0.01; 95% CI-0.07 – 0.06) and (RD 0.02; 95% CI-0.50 – 0.08), respectively. 
No effects were found on the secondary outcomes.\n\nABSTRACT.CONCLUSION: Our findings do not support the idea that lottery-style incentives lead to more complete response to postal questionnaires in observational cohort studies with repeated data collection and follow-up durations of around 2 years.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"(i) daytrip tickets for the whole family to a popular amusement park if they returned all postal questionnaires, (ii) \\u20ac12.50-worth gift vouchers for sending back the questionnaire on time after each questionnaire round and (iii) a combination of (i) and (ii)\", \"contol\", \"the proportion of participants who returned all questionnaires\", \"no significant difference\", \"Daytrip nor voucher intervention had an effect on the proportion of participants who returned all questionnaires (RD \\u22120.01; 95% CI-0.07 \\u2013 0.06) and (RD 0.02; 95% CI-0.50 \\u2013 0.08), respectively.\"]]", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: We found the statistical difference between <cite>ABS</cite>(re-run) and <cite>ABS</cite>+AMR on ROUGE-1 and ROUGE-2.
["Differences"]
science.multicite_intent_classification.2269
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWe found the statistical difference between <cite>ABS</cite>(re-run) and <cite>ABS</cite>+AMR on ROUGE-1 and ROUGE-2.", "role": "user" }, { "content": "[\"Differences\"]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Robust Robotic Control from Pixels using Contrastive Recurrent State-Space Models Main Content: Modeling the world can benefit robot learning by providing a rich training signal for shaping an agent ’ s latent state space . However , learning world models in unconstrained environments over high-dimensional observation spaces such as images is challenging . One source of difficulty is the presence of irrelevant but hard-to-model background distractions , and unimportant visual details of taskrelevant entities . We address this issue by learning a recurrent latent dynamics model which contrastively predicts the next observation . This simple model leads to surprisingly robust robotic control even with simultaneous camera , background , and color distractions . We outperform alternatives such as bisimulation methods which impose state-similarity measures derived from divergence in future reward or future optimal actions . We obtain state-of-the-art results on the Distracting Control Suite , a challenging benchmark for pixel-based robotic control . 1 INTRODUCTION . For a robot , predicting the future conditioned on its actions is a rich source of information about itself and the world that it lives in . The gradient-rich training signal from future prediction can be used to shape a robot ’ s internal latent representation of the world . These models can be used to generate imagined interactions , facilitate model-predictive control , and dynamically attend to mispredicted events . Video prediction models ( Finn et al. , 2016 ) have been shown to be effective for planning robot actions . Model-based Reinforcement Learning ( MBRL ) methods such as Dreamer ( Hafner et al. , 2020 ) and SLAC ( Lee et al. , 2020 ) have been shown to reduce the sample complexity of learning control tasks from pixel-based observations . These methods learn an action-conditioned model of the observations received by the robot . This works well when observations are clean , backgrounds are stationary , and only task-relevant information is contained in the observations . However , in a real-world unconstrained environment , observations are likely to include extraneous information that is irrelevant to the task ( see Figure 1a ) . Extraneous information can include independently moving objects , changes in lighting , colors , as well as changes due to camera motion . High-frequency details , even in task-relevant entities , such as the texture of a door or the shininess of a door handle that a robot is trying to open are irrelevant . Modeling the observations at a pixel-level requires spending capacity to explain all of these attributes , which is inefficient . The prevalence of extraneous information poses an immediate challenge to reconstruction based methods , e.g . Dreamer . One promising approach is to impose a metric on the latent state space which groups together states that are indistinguishable with respect to future reward sequences ( DBC , Zhang et al . ( 2021 ) ) or future action sequences ( PSE , Agarwal et al . ( 2021 ) ) when running the optimal policy . However , using rewards alone for grounding is sample inefficient , especially in tasks with sparse rewards . Similarly , actions may not provide enough training signal , especially if they are low-dimensional . 
On the other hand , observations are usually high-dimensional and carry more bits of information . One way to make use of this signal while avoiding pixel-level reconstruction is to maximize the mutual information between the observation and its learned representation ( Hjelm et al. , 2019 ) . This objective can be approximated using contrastive learning models where the task is to predict a learned representation which matches the encoding of the true future observation ( positive pair ) , but does not match the encoding of other observations ( negative pairs ) . However , the Under review as a conference paper at ICLR 2022CoRe | Visualization Binder Networks | Progress 06/07 performance of contrastive variants of Dreamer and DBC has been shown to be inferior , for example , Fig . 8 in Hafner et al . ( 2020 ) and Fig . 3 in Zhang et al . ( 2021 ) . In this work , we show that contrastive learning can in fact lead to surprisingly strong robustness to severe distractions , provided that a recurrent state-space model is used . We call our model CoRe : Contrastive Recurrent State-Space Model . The key intuition is that recurrent models such as GRUs and LSTMs maintain temporal smoothness in their hidden state because they propagate their state using gating and incremental modifications . When used along with contrastive learning , this smoothness ensures the presence of informative hard negatives in the same mini-batch of training . Using CoRe , we get state-of-the-art results on the challenging Distracting Control Suite benchmark ( Stone et al. , 2021 ) which includes background , camera , and color distractions applied simultaneously . Video results from our model can be seen at supplementary/index.html . 2 MODEL DESCRIPTION . We formulate the robotics control problem from visual observations as a discrete-time partially observable Markov decision process ( POMDP ) . At any time step t , the robot agent has an internal state denoted by zt . It receives an observation ot , takes action at , and moves to state zt+1 , receiving a reward rt+1 . We present our model with continuous-valued states , actions , and observations . A straightforward extension can be made to discrete states . The objective is to find a policy that maximizes the expected sum of future rewards ∑∞ t=0 γ trt , where γ is a discount factor . 2.1 MODEL COMPONENTS . As shown in Figure 1b , our model consists of three components : a recurrent state space model ( RSSM ) ( Hafner et al. , 2019 ) which encodes the learned dynamics , a behavior model based on Soft-Actor-Critic ( SAC ) ( Haarnoja et al. , 2018 ) which is used to control the robot , and an observation encoder . Observation encoder : This component is a convolutional network f which takes an observation as input and outputs its encoding xt = f ( ot ) . LayerNorm ( Ba et al. , 2016 ) is applied in the last layer of the encoder . Recurrent State Space Model ( RSSM ) : This component consists of the following modules , Future predictor p ( ẑt|zt−1 , at−1 ) , Observation representation decoder yt = g ( ẑt ) , Correction Model q ( zt|ẑt , xt ) , Inverse Dynamics model ât−1 = a ( zt , zt−1 ) , Reward predictor r̂t = r ( zt ) . Each module is parameterized as a fully-connected MLP , except the future predictor which uses GRU cells ( Cho et al. , 2014 ) . The future predictor outputs a prior distribution ẑt over the next state , given the previous state and action . 
This is decoded into yt which corresponds to an encoding of the observation that the agent expects to see next . The correction model q is given access to the current observation encoding xt , along with the prior ẑt and outputs the posterior distribution zt , i.e. , it corrects the prior belief given the new observation . The posterior state zt is used to predict the reward , and also feeds into the actor and critic models that control the agent ’ s behavior . The inverse dynamics model predicts the action that took the agent from state zt−1 to zt . The RSSM is similar to the one used in Dreamer and SLAC except that it includes an inverse dynamics model and that the observation representation is predicted from the prior latent state instead of the posterior latent state . These differences lead to slightly better results and a different probabilistic interpretation of the underlying model which we discuss in Section 2.2 . Similar to Dreamer , the latent state zt consists of a deterministic and stochastic component ( More details in Appendix A ) . Behavior Model : This component consists of an actor πφ ( at|zt ) and two critic networks Qθi ( zt , a ) , i ∈ { 1 , 2 } which are standard for training using the SAC algorithm ( Haarnoja et al. , 2018 ) . 2.2 REPRESENTATION LEARNING WITH CORE . Our goal is to represent the agent ’ s latent state by extracting task-relevant information from the observations , while ignoring the distractions . We formulate this representation learning problem as dynamics-regularized mutual information maximization between the observations ot and the model ’ s prediction of the observation ’ s encoding yt . Mutual information maximization ( Hjelm et al. , 2019 ) has been used extensively for representation learning . In our case , it can be approximated using a contrastive learning objective defined over a mini-batch of sequences , in a way similar to Contrastive Predictive Coding ( CPC ) ( van den Oord et al. , 2018 ) . At time-step t in example i in the mini-batch , let xi , t and yi , t denote the real observation ’ s encoding and the predicted observation encoding , respectively . The loss function is Lc = − ∑ i , t log ( exp ( λx > i , tyi , t ) ∑ i′ , t′ exp ( λx > i , tyi′ , t′ ) ) − ∑ i , t log ( exp ( λx > i , tyi , t ) ∑ i′ , t′ exp ( λx > i′ , t′yi , t ) ) , ( 1 ) where λ is a learned inverse temperature parameter and i′ and t′ index over all sequences and timesteps in the mini-batch , respectively . Intuitively , the loss function makes the predicted observation encoding match the corresponding real observation ’ s encoding , but not match other real observation encodings , and vice-versa . In addition , the predicted and corrected latent state distributions should match , which can be formulated as minimizing a KL-divergence term LKL = KL ( q ( zt|ẑt , xt ) ||p ( ẑt|zt−1 , at−1 ) ) . ( 2 ) An additional source of training signal is the reward prediction loss , ( rt−r̂t ) 2 . Furthermore , modeling the inverse dynamics , i.e . predicting at from zt and zt+1 is a different way of representing the transition dynamics . This does not provide any additional information because we already model forward dynamics . However , we found that it helps in making faster training progress . We incorporate this by adding an action prediction loss . 
The combined loss is JM ( Θ ) = Lc + βLKL + αr ( rt − r̂t ) 2 + αa||at − ât||2 , where β , αr , αa are tunable hyperparameters and Θ is the combined set of model parameters which includes parameters of the observation encoder , the RSSM , and the inverse temperature λ . Relationship with Action-Conditioned Sequential VAEs The above objective function bears resemblance to the ELBO from a Conditional Sequential VAE which is used as the underlying probabilistic model in Dreamer and SLAC . The objective function there is the sum of an observation reconstruction term ||ot − ôt||2 and LKL scaled by β , corresponding to a β-VAE formulation ( Higgins et al. , 2017 ) . This formulation relies on decoding the posterior latent state to compute ôt . Since the posterior latent state distribution q ( zt|ẑt , xt ) is conditioned on the true observation xt , observation reconstruction is an auto-encoding task . Only the KL-term corresponds to the future prediction task . On the other hand , in our case the prior latent state is decoded to a representation of the observation , which already constitutes a future prediction task . The KL-term only provides additional regularization . This makes the model less reliant on careful tuning of β ( empirically validated in Appendix C.3 ) . In fact , setting it to zero also leads to reasonable performance in our experiments , though not the best .
The paper proposes a recurrent state-space model that learns robust representations for robotic control. The proposed method builds on top of prior works on world-models which learn a latent dynamics model of the agent which can be used for planning and action selection. Different from prior work such as Dreamer and SLAC which rely on pixel-based observation reconstruction, this paper highlights that a simpler contrastive loss for the next-observation prediction achieves better results **if** a recurrent state-space model is used for the latent space. Results are presented on the Distracting Control Suite benchmark and show strong improvements over prior approaches.
science.mup_single_document_summarization.5370
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nRobust Robotic Control from Pixels using Contrastive Recurrent State-Space Models\n\nMain Content:\nModeling the world can benefit robot learning by providing a rich training signal for shaping an agent ’ s latent state space . However , learning world models in unconstrained environments over high-dimensional observation spaces such as images is challenging . One source of difficulty is the presence of irrelevant but hard-to-model background distractions , and unimportant visual details of taskrelevant entities . We address this issue by learning a recurrent latent dynamics model which contrastively predicts the next observation . This simple model leads to surprisingly robust robotic control even with simultaneous camera , background , and color distractions . We outperform alternatives such as bisimulation methods which impose state-similarity measures derived from divergence in future reward or future optimal actions . We obtain state-of-the-art results on the Distracting Control Suite , a challenging benchmark for pixel-based robotic control . 1 INTRODUCTION . For a robot , predicting the future conditioned on its actions is a rich source of information about itself and the world that it lives in . The gradient-rich training signal from future prediction can be used to shape a robot ’ s internal latent representation of the world . These models can be used to generate imagined interactions , facilitate model-predictive control , and dynamically attend to mispredicted events . Video prediction models ( Finn et al. , 2016 ) have been shown to be effective for planning robot actions . Model-based Reinforcement Learning ( MBRL ) methods such as Dreamer ( Hafner et al. , 2020 ) and SLAC ( Lee et al. , 2020 ) have been shown to reduce the sample complexity of learning control tasks from pixel-based observations . These methods learn an action-conditioned model of the observations received by the robot . This works well when observations are clean , backgrounds are stationary , and only task-relevant information is contained in the observations . However , in a real-world unconstrained environment , observations are likely to include extraneous information that is irrelevant to the task ( see Figure 1a ) . Extraneous information can include independently moving objects , changes in lighting , colors , as well as changes due to camera motion . High-frequency details , even in task-relevant entities , such as the texture of a door or the shininess of a door handle that a robot is trying to open are irrelevant . Modeling the observations at a pixel-level requires spending capacity to explain all of these attributes , which is inefficient . The prevalence of extraneous information poses an immediate challenge to reconstruction based methods , e.g . Dreamer . One promising approach is to impose a metric on the latent state space which groups together states that are indistinguishable with respect to future reward sequences ( DBC , Zhang et al . ( 2021 ) ) or future action sequences ( PSE , Agarwal et al . ( 2021 ) ) when running the optimal policy . However , using rewards alone for grounding is sample inefficient , especially in tasks with sparse rewards . Similarly , actions may not provide enough training signal , especially if they are low-dimensional . 
On the other hand , observations are usually high-dimensional and carry more bits of information . One way to make use of this signal while avoiding pixel-level reconstruction is to maximize the mutual information between the observation and its learned representation ( Hjelm et al. , 2019 ) . This objective can be approximated using contrastive learning models where the task is to predict a learned representation which matches the encoding of the true future observation ( positive pair ) , but does not match the encoding of other observations ( negative pairs ) . However , the Under review as a conference paper at ICLR 2022CoRe | Visualization Binder Networks | Progress 06/07 performance of contrastive variants of Dreamer and DBC has been shown to be inferior , for example , Fig . 8 in Hafner et al . ( 2020 ) and Fig . 3 in Zhang et al . ( 2021 ) . In this work , we show that contrastive learning can in fact lead to surprisingly strong robustness to severe distractions , provided that a recurrent state-space model is used . We call our model CoRe : Contrastive Recurrent State-Space Model . The key intuition is that recurrent models such as GRUs and LSTMs maintain temporal smoothness in their hidden state because they propagate their state using gating and incremental modifications . When used along with contrastive learning , this smoothness ensures the presence of informative hard negatives in the same mini-batch of training . Using CoRe , we get state-of-the-art results on the challenging Distracting Control Suite benchmark ( Stone et al. , 2021 ) which includes background , camera , and color distractions applied simultaneously . Video results from our model can be seen at supplementary/index.html . 2 MODEL DESCRIPTION . We formulate the robotics control problem from visual observations as a discrete-time partially observable Markov decision process ( POMDP ) . At any time step t , the robot agent has an internal state denoted by zt . It receives an observation ot , takes action at , and moves to state zt+1 , receiving a reward rt+1 . We present our model with continuous-valued states , actions , and observations . A straightforward extension can be made to discrete states . The objective is to find a policy that maximizes the expected sum of future rewards ∑∞ t=0 γ trt , where γ is a discount factor . 2.1 MODEL COMPONENTS . As shown in Figure 1b , our model consists of three components : a recurrent state space model ( RSSM ) ( Hafner et al. , 2019 ) which encodes the learned dynamics , a behavior model based on Soft-Actor-Critic ( SAC ) ( Haarnoja et al. , 2018 ) which is used to control the robot , and an observation encoder . Observation encoder : This component is a convolutional network f which takes an observation as input and outputs its encoding xt = f ( ot ) . LayerNorm ( Ba et al. , 2016 ) is applied in the last layer of the encoder . Recurrent State Space Model ( RSSM ) : This component consists of the following modules , Future predictor p ( ẑt|zt−1 , at−1 ) , Observation representation decoder yt = g ( ẑt ) , Correction Model q ( zt|ẑt , xt ) , Inverse Dynamics model ât−1 = a ( zt , zt−1 ) , Reward predictor r̂t = r ( zt ) . Each module is parameterized as a fully-connected MLP , except the future predictor which uses GRU cells ( Cho et al. , 2014 ) . The future predictor outputs a prior distribution ẑt over the next state , given the previous state and action . 
This is decoded into yt which corresponds to an encoding of the observation that the agent expects to see next . The correction model q is given access to the current observation encoding xt , along with the prior ẑt and outputs the posterior distribution zt , i.e. , it corrects the prior belief given the new observation . The posterior state zt is used to predict the reward , and also feeds into the actor and critic models that control the agent ’ s behavior . The inverse dynamics model predicts the action that took the agent from state zt−1 to zt . The RSSM is similar to the one used in Dreamer and SLAC except that it includes an inverse dynamics model and that the observation representation is predicted from the prior latent state instead of the posterior latent state . These differences lead to slightly better results and a different probabilistic interpretation of the underlying model which we discuss in Section 2.2 . Similar to Dreamer , the latent state zt consists of a deterministic and stochastic component ( More details in Appendix A ) . Behavior Model : This component consists of an actor πφ ( at|zt ) and two critic networks Qθi ( zt , a ) , i ∈ { 1 , 2 } which are standard for training using the SAC algorithm ( Haarnoja et al. , 2018 ) . 2.2 REPRESENTATION LEARNING WITH CORE . Our goal is to represent the agent ’ s latent state by extracting task-relevant information from the observations , while ignoring the distractions . We formulate this representation learning problem as dynamics-regularized mutual information maximization between the observations ot and the model ’ s prediction of the observation ’ s encoding yt . Mutual information maximization ( Hjelm et al. , 2019 ) has been used extensively for representation learning . In our case , it can be approximated using a contrastive learning objective defined over a mini-batch of sequences , in a way similar to Contrastive Predictive Coding ( CPC ) ( van den Oord et al. , 2018 ) . At time-step t in example i in the mini-batch , let xi , t and yi , t denote the real observation ’ s encoding and the predicted observation encoding , respectively . The loss function is Lc = − ∑ i , t log ( exp ( λx > i , tyi , t ) ∑ i′ , t′ exp ( λx > i , tyi′ , t′ ) ) − ∑ i , t log ( exp ( λx > i , tyi , t ) ∑ i′ , t′ exp ( λx > i′ , t′yi , t ) ) , ( 1 ) where λ is a learned inverse temperature parameter and i′ and t′ index over all sequences and timesteps in the mini-batch , respectively . Intuitively , the loss function makes the predicted observation encoding match the corresponding real observation ’ s encoding , but not match other real observation encodings , and vice-versa . In addition , the predicted and corrected latent state distributions should match , which can be formulated as minimizing a KL-divergence term LKL = KL ( q ( zt|ẑt , xt ) ||p ( ẑt|zt−1 , at−1 ) ) . ( 2 ) An additional source of training signal is the reward prediction loss , ( rt−r̂t ) 2 . Furthermore , modeling the inverse dynamics , i.e . predicting at from zt and zt+1 is a different way of representing the transition dynamics . This does not provide any additional information because we already model forward dynamics . However , we found that it helps in making faster training progress . We incorporate this by adding an action prediction loss . 
The combined loss is J_M(\Theta) = L_c + \beta L_{KL} + \alpha_r ( r_t - \hat{r}_t )^2 + \alpha_a \Vert a_t - \hat{a}_t \Vert^2 , where β , αr , αa are tunable hyperparameters and Θ is the combined set of model parameters which includes parameters of the observation encoder , the RSSM , and the inverse temperature λ . Relationship with Action-Conditioned Sequential VAEs The above objective function bears resemblance to the ELBO from a Conditional Sequential VAE which is used as the underlying probabilistic model in Dreamer and SLAC . The objective function there is the sum of an observation reconstruction term \Vert o_t - \hat{o}_t \Vert^2 and L_{KL} scaled by β , corresponding to a β-VAE formulation ( Higgins et al. , 2017 ) . This formulation relies on decoding the posterior latent state to compute ôt . Since the posterior latent state distribution q ( zt|ẑt , xt ) is conditioned on the true observation xt , observation reconstruction is an auto-encoding task . Only the KL-term corresponds to the future prediction task . On the other hand , in our case the prior latent state is decoded to a representation of the observation , which already constitutes a future prediction task . The KL-term only provides additional regularization . This makes the model less reliant on careful tuning of β ( empirically validated in Appendix C.3 ) . In fact , setting it to zero also leads to reasonable performance in our experiments , though not the best .", "role": "user" }, { "content": "The paper proposes a recurrent state-space model that learns robust representations for robotic control. The proposed method builds on top of prior works on world-models which learn a latent dynamics model of the agent which can be used for planning and action selection. Different from prior work such as Dreamer and SLAC which rely on pixel-based observation reconstruction, this paper highlights that a simpler contrastive loss for the next-observation prediction achieves better results **if** a recurrent state-space model is used for the latent space. Results are presented on the Distracting Control Suite benchmark and show strong improvements over prior approaches.", "role": "assistant" } ]
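The contrastive objective in Eq. ( 1 ) above is a bidirectional InfoNCE computed over every sequence and time step in a mini-batch. The following is a minimal sketch of that computation, assuming PyTorch; the function name, tensor shapes, and the handling of the learned inverse temperature are illustrative assumptions on my part, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def core_contrastive_loss(x, y, log_inv_temperature):
    """Bidirectional InfoNCE-style sketch of Eq. (1), under assumed shapes.

    x: (B, T, D) encodings of the real observations x_{i,t}
    y: (B, T, D) predicted observation encodings y_{i,t} decoded from prior states
    log_inv_temperature: scalar learnable tensor; its exp() plays the role of lambda
    """
    B, T, D = x.shape
    x = x.reshape(B * T, D)                              # flatten sequences and time steps
    y = y.reshape(B * T, D)
    logits = log_inv_temperature.exp() * (x @ y.t())     # lambda * x^T y for all pairs
    targets = torch.arange(B * T, device=x.device)
    # First term: each real encoding must identify its own prediction among all
    # predictions in the batch; second term: each prediction must identify its own
    # real encoding. cross_entropy averages over the batch where the paper sums.
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)
```

In a full training step this term would be combined with the KL, reward-prediction, and action-prediction terms weighted by β, α_r, and α_a, as in the combined loss above.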
science.discomat_te
| Materials | System | s em(l ext) (x10-20 cm2) | l ext (nm) | t f (ms) | I min (kW/cm2) | s abs(l p) (x10-20 cm2) | | | QX | Phosphate | 0.70 | 1018 | 2.00 | 1.8 | 0.50 | [24] | | ADY | Aluminate | 1.03 | 1020 | 1.58 | 1.1 | 0.70 | [25] | | LY | Silicate | 0.80 | 1028 | 1.68 | 2.0 | 0.55 | [25] | | PNK | Phosphate | 1.08 | 1016 | 2.00 | 1.3 | 0.68 | [25] | | FP | Fluorophosphate | 0.50 | 1020 | 1.20 | 0.8 | 0.43 | [26] | | SNTL | Niobosilicate | 2.07 | 1026 | 0.90 | 1.0 | 1.54 | [27] | | GPB1 | Lead bismuth germante | 0.8+-0.1 | 1021 | 0.54+-0.03 | 2.0+-0.2 | 1.50+-0.09 | [14] | | GP | Lead germanate | 0.6+-0.1 | 1009 | 0.80+-0.04 | 2.8+-0.3 | 1.20+-0.06 | This work | Caption: Spectroscopic properties and the I min parameter of Yb3+-doped glasses and GP glass Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "0.70", "type": "Other"} {"value": "1018", "type": "Other"} {"value": "2.00", "type": "Other"} {"value": "1.8", "type": "Other"} {"value": "0.50", "type": "Other"} {"value": "1.03", "type": "Other"} {"value": "1020", "type": "Other"} {"value": "1.58", "type": "Other"} {"value": "1.1", "type": "Other"} {"value": "0.70", "type": "Other"} {"value": "0.80", "type": "Other"} {"value": "1028", "type": "Other"} {"value": "1.68", "type": "Other"} {"value": "2.0", "type": "Other"} {"value": "0.55", "type": "Other"} {"value": "1.08", "type": "Other"} {"value": "1016", "type": "Other"} {"value": "2.00", "type": "Other"} {"value": "1.3", "type": "Other"} {"value": "0.68", "type": "Other"} {"value": "0.50", "type": "Other"} {"value": "1020", "type": "Other"} {"value": "1.20", "type": "Other"} {"value": "0.8", "type": "Other"} {"value": "0.43", "type": "Other"} {"value": "2.07", "type": "Other"} {"value": "1026", "type": "Other"} {"value": "0.90", "type": "Other"} {"value": "1.0", "type": "Other"} {"value": "1.54", "type": "Other"} {"value": "0.8", "type": "Other"} {"value": "1021", "type": "Other"} {"value": "0.54", "type": "Other"} {"value": "2.0", "type": "Other"} {"value": "1.50", "type": "Other"} {"value": "0.6", "type": "Other"} {"value": "1009", "type": "Other"} {"value": "0.80", "type": "Other"} {"value": "2.8", "type": "Other"} {"value": "1.20", "type": "Other"}
science.discomat_te.207
[ { "content": "| Materials | System | s em(l ext) (x10-20 cm2) | l ext (nm) | t f (ms) | I min (kW/cm2) | s abs(l p) (x10-20 cm2) | |\n| QX | Phosphate | 0.70 | 1018 | 2.00 | 1.8 | 0.50 | [24] |\n| ADY | Aluminate | 1.03 | 1020 | 1.58 | 1.1 | 0.70 | [25] |\n| LY | Silicate | 0.80 | 1028 | 1.68 | 2.0 | 0.55 | [25] |\n| PNK | Phosphate | 1.08 | 1016 | 2.00 | 1.3 | 0.68 | [25] |\n| FP | Fluorophosphate | 0.50 | 1020 | 1.20 | 0.8 | 0.43 | [26] |\n| SNTL | Niobosilicate | 2.07 | 1026 | 0.90 | 1.0 | 1.54 | [27] |\n| GPB1 | Lead bismuth germante | 0.8+-0.1 | 1021 | 0.54+-0.03 | 2.0+-0.2 | 1.50+-0.09 | [14] |\n| GP | Lead germanate | 0.6+-0.1 | 1009 | 0.80+-0.04 | 2.8+-0.3 | 1.20+-0.06 | This work |\n\nCaption: Spectroscopic properties and the I min parameter of Yb3+-doped glasses and GP glass\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"0.70\", \"type\": \"Other\"}\n{\"value\": \"1018\", \"type\": \"Other\"}\n{\"value\": \"2.00\", \"type\": \"Other\"}\n{\"value\": \"1.8\", \"type\": \"Other\"}\n{\"value\": \"0.50\", \"type\": \"Other\"}\n{\"value\": \"1.03\", \"type\": \"Other\"}\n{\"value\": \"1020\", \"type\": \"Other\"}\n{\"value\": \"1.58\", \"type\": \"Other\"}\n{\"value\": \"1.1\", \"type\": \"Other\"}\n{\"value\": \"0.70\", \"type\": \"Other\"}\n{\"value\": \"0.80\", \"type\": \"Other\"}\n{\"value\": \"1028\", \"type\": \"Other\"}\n{\"value\": \"1.68\", \"type\": \"Other\"}\n{\"value\": \"2.0\", \"type\": \"Other\"}\n{\"value\": \"0.55\", \"type\": \"Other\"}\n{\"value\": \"1.08\", \"type\": \"Other\"}\n{\"value\": \"1016\", \"type\": \"Other\"}\n{\"value\": \"2.00\", \"type\": \"Other\"}\n{\"value\": \"1.3\", \"type\": \"Other\"}\n{\"value\": \"0.68\", \"type\": \"Other\"}\n{\"value\": \"0.50\", \"type\": \"Other\"}\n{\"value\": \"1020\", \"type\": \"Other\"}\n{\"value\": \"1.20\", \"type\": \"Other\"}\n{\"value\": \"0.8\", \"type\": \"Other\"}\n{\"value\": \"0.43\", \"type\": \"Other\"}\n{\"value\": \"2.07\", \"type\": \"Other\"}\n{\"value\": \"1026\", \"type\": \"Other\"}\n{\"value\": \"0.90\", \"type\": \"Other\"}\n{\"value\": \"1.0\", \"type\": \"Other\"}\n{\"value\": \"1.54\", \"type\": \"Other\"}\n{\"value\": \"0.8\", \"type\": \"Other\"}\n{\"value\": \"1021\", \"type\": \"Other\"}\n{\"value\": \"0.54\", \"type\": \"Other\"}\n{\"value\": \"2.0\", \"type\": \"Other\"}\n{\"value\": \"1.50\", \"type\": \"Other\"}\n{\"value\": \"0.6\", \"type\": \"Other\"}\n{\"value\": \"1009\", \"type\": \"Other\"}\n{\"value\": \"0.80\", \"type\": \"Other\"}\n{\"value\": \"2.8\", \"type\": \"Other\"}\n{\"value\": \"1.20\", \"type\": \"Other\"}", "role": "assistant" } ]
science.discomat_te
| Sample No. As2Se3-GeTe-CuI | As2Se3 | GeTe | CuI | | 1 | 40 | 60 | 0 | | 2 | 35 | 55 | 10 | | 3 | 30 | 50 | 20 | | 4 | 25 | 45 | 30 | | 5 | 20 | 40 | 40 | | 6 | 15 | 35 | 50 | | 7 | 60 | 0 | 40 | | 8 | 45 | 25 | 30 | | 9 | 30 | 50 | 20 | | 10 | 15 | 75 | 10 | Caption: Glass compositions (mol%) of the present test samples Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "1", "type": "Other"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "As2Se3", "unit": "mol", "material": "1"} {"value": "60", "type": "Glass_Compound_Amount", "constituent": "GeTe", "unit": "mol", "material": "1"} {"value": "0", "type": "Other"} {"value": "2", "type": "Other"} {"value": "35", "type": "Glass_Compound_Amount", "constituent": "As2Se3", "unit": "mol", "material": "2"} {"value": "55", "type": "Glass_Compound_Amount", "constituent": "GeTe", "unit": "mol", "material": "2"} {"value": "10", "type": "Glass_Compound_Amount", "constituent": "CuI", "unit": "mol", "material": "2"} {"value": "3", "type": "Other"} {"value": "30", "type": "Glass_Compound_Amount", "constituent": "As2Se3", "unit": "mol", "material": "3"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "GeTe", "unit": "mol", "material": "3"} {"value": "20", "type": "Glass_Compound_Amount", "constituent": "CuI", "unit": "mol", "material": "3"} {"value": "4", "type": "Other"} {"value": "25", "type": "Glass_Compound_Amount", "constituent": "As2Se3", "unit": "mol", "material": "4"} {"value": "45", "type": "Glass_Compound_Amount", "constituent": "GeTe", "unit": "mol", "material": "4"} {"value": "30", "type": "Glass_Compound_Amount", "constituent": "CuI", "unit": "mol", "material": "4"} {"value": "5", "type": "Other"} {"value": "20", "type": "Glass_Compound_Amount", "constituent": "As2Se3", "unit": "mol", "material": "5"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "GeTe", "unit": "mol", "material": "5"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "CuI", "unit": "mol", "material": "5"} {"value": "6", "type": "Other"} {"value": "15", "type": "Glass_Compound_Amount", "constituent": "As2Se3", "unit": "mol", "material": "6"} {"value": "35", "type": "Glass_Compound_Amount", "constituent": "GeTe", "unit": "mol", "material": "6"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "CuI", "unit": "mol", "material": "6"} {"value": "7", "type": "Other"} {"value": "60", "type": "Glass_Compound_Amount", "constituent": "As2Se3", "unit": "mol", "material": "7"} {"value": "0", "type": "Other"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "CuI", "unit": "mol", "material": "7"} {"value": "8", "type": "Other"} {"value": "45", "type": "Glass_Compound_Amount", "constituent": "As2Se3", "unit": "mol", "material": "8"} {"value": "25", "type": "Glass_Compound_Amount", "constituent": "GeTe", "unit": "mol", "material": "8"} {"value": "30", "type": "Glass_Compound_Amount", "constituent": "CuI", "unit": "mol", "material": "8"} {"value": "9", "type": "Other"} {"value": "30", "type": "Glass_Compound_Amount", "constituent": "As2Se3", "unit": "mol", "material": "9"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "GeTe", "unit": "mol", "material": "9"} {"value": "20", "type": "Glass_Compound_Amount", "constituent": "CuI", "unit": "mol", "material": "9"} {"value": "10", "type": "Other"} {"value": "15", "type": "Glass_Compound_Amount", "constituent": "As2Se3", "unit": "mol", "material": "10"} {"value": "75", "type": "Glass_Compound_Amount", "constituent": "GeTe", "unit": "mol", "material": "10"} {"value": "10", "type": "Glass_Compound_Amount", "constituent": "CuI", "unit": "mol", "material": "10"}
science.discomat_te.427
[ { "content": "| Sample No. As2Se3-GeTe-CuI | As2Se3 | GeTe | CuI |\n| 1 | 40 | 60 | 0 |\n| 2 | 35 | 55 | 10 |\n| 3 | 30 | 50 | 20 |\n| 4 | 25 | 45 | 30 |\n| 5 | 20 | 40 | 40 |\n| 6 | 15 | 35 | 50 |\n| 7 | 60 | 0 | 40 |\n| 8 | 45 | 25 | 30 |\n| 9 | 30 | 50 | 20 |\n| 10 | 15 | 75 | 10 |\n\nCaption: Glass compositions (mol%) of the present test samples\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"As2Se3\", \"unit\": \"mol\", \"material\": \"1\"}\n{\"value\": \"60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeTe\", \"unit\": \"mol\", \"material\": \"1\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"35\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"As2Se3\", \"unit\": \"mol\", \"material\": \"2\"}\n{\"value\": \"55\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeTe\", \"unit\": \"mol\", \"material\": \"2\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuI\", \"unit\": \"mol\", \"material\": \"2\"}\n{\"value\": \"3\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"As2Se3\", \"unit\": \"mol\", \"material\": \"3\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeTe\", \"unit\": \"mol\", \"material\": \"3\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuI\", \"unit\": \"mol\", \"material\": \"3\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"As2Se3\", \"unit\": \"mol\", \"material\": \"4\"}\n{\"value\": \"45\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeTe\", \"unit\": \"mol\", \"material\": \"4\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuI\", \"unit\": \"mol\", \"material\": \"4\"}\n{\"value\": \"5\", \"type\": \"Other\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"As2Se3\", \"unit\": \"mol\", \"material\": \"5\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeTe\", \"unit\": \"mol\", \"material\": \"5\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuI\", \"unit\": \"mol\", \"material\": \"5\"}\n{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"As2Se3\", \"unit\": \"mol\", \"material\": \"6\"}\n{\"value\": \"35\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeTe\", \"unit\": \"mol\", \"material\": \"6\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuI\", \"unit\": \"mol\", \"material\": \"6\"}\n{\"value\": \"7\", \"type\": \"Other\"}\n{\"value\": \"60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"As2Se3\", \"unit\": \"mol\", \"material\": \"7\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", 
\"constituent\": \"CuI\", \"unit\": \"mol\", \"material\": \"7\"}\n{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"45\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"As2Se3\", \"unit\": \"mol\", \"material\": \"8\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeTe\", \"unit\": \"mol\", \"material\": \"8\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuI\", \"unit\": \"mol\", \"material\": \"8\"}\n{\"value\": \"9\", \"type\": \"Other\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"As2Se3\", \"unit\": \"mol\", \"material\": \"9\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeTe\", \"unit\": \"mol\", \"material\": \"9\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuI\", \"unit\": \"mol\", \"material\": \"9\"}\n{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"As2Se3\", \"unit\": \"mol\", \"material\": \"10\"}\n{\"value\": \"75\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeTe\", \"unit\": \"mol\", \"material\": \"10\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CuI\", \"unit\": \"mol\", \"material\": \"10\"}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Characterizing Political Fake News in Twitter by its Meta-Data This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users. Unnamed section 10pt 1.10pt [ Characterizing Political Fake News in Twitter by its Meta-DataJulio Amador Díaz LópezAxel Oehmichen Miguel Molina-Solana( j.amador, axelfrancois.oehmichen11, [email protected] ) Imperial College London This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users. ] Introduction While fake news, understood as deliberately misleading pieces of information, have existed since long ago (e.g. it is not unusual to receive news falsely claiming the death of a celebrity), the term reached the mainstream, particularly so in politics, during the 2016 presidential election in the United States BIBREF0 . Since then, governments and corporations alike (e.g. Google BIBREF1 and Facebook BIBREF2 ) have begun efforts to tackle fake news as they can affect political decisions BIBREF3 . Yet, the ability to define, identify and stop fake news from spreading is limited. Since the Obama campaign in 2008, social media has been pervasive in the political arena in the United States. Studies report that up to 62% of American adults receive their news from social media BIBREF4 . The wide use of platforms such as Twitter and Facebook has facilitated the diffusion of fake news by simplifying the process of receiving content with no significant third party filtering, fact-checking or editorial judgement. Such characteristics make these platforms suitable means for sharing news that, disguised as legit ones, try to confuse readers. Such use and their prominent rise has been confirmed by Craig Silverman, a Canadian journalist who is a prominent figure on fake news BIBREF5 : “In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlet”. 
Our current research hence departs from the assumption that social media is a conduit for fake news and asks the question of whether fake news (as spam was some years ago) can be identified, modelled and eventually blocked. In order to do so, we use a sample of more that 1.5M tweets collected on November 8th 2016 —election day in the United States— with the goal of identifying features that tweets containing fake news are likely to have. As such, our paper aims to provide a preliminary characterization of fake news in Twitter by looking into meta-data embedded in tweets. Considering meta-data as a relevant factor of analysis is in line with findings reported by Morris et al. BIBREF6 . We argue that understanding differences between tweets containing fake news and regular tweets will allow researchers to design mechanisms to block fake news in Twitter. Specifically, our goals are: 1) compare the characteristics of tweets labelled as containing fake news to tweets labelled as not containing them, 2) characterize, through their meta-data, viral tweets containing fake news and the accounts from which they originated, and 3) determine the extent to which tweets containing fake news expressed polarized political views. For our study, we used the number of retweets to single-out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. BIBREF7 (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts and stories where the truth is contentious. The dataset BIBREF8 , manually labelled by an expert, has been publicly released and is available to researchers and interested parties. From our results, the following main observations can be made: Our findings resonate with similar work done on fake news such as the one from Allcot and Gentzkow BIBREF9 . Therefore, even if our study is a preliminary attempt at characterizing fake news on Twitter using only their meta-data, our results provide external validity to previous research. Moreover, our work not only stresses the importance of using meta-data, but also underscores which parameters may be useful to identify fake news on Twitter. The rest of the paper is organized as follows. The next section briefly discusses where this work is located within the literature on fake news and contextualizes the type of fake news we are studying. Then, we present our hypotheses, the data, and the methodology we follow. Finally, we present our findings, conclusions of this study, and future lines of work. Research Hypotheses Previous works on the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses builds on them and identifies three important dimensions that may help distinguishing fake news from legit information: Taking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to identify tweets containing fake news from those not containing them. They will be later tested over our collected dataset. Exposure. Characterization. Polarization. Data and Methodology For this study, we collected publicly available tweets using Twitter's public API. 
Given the nature of the data, it is important to emphasize that such tweets are subject to Twitter's terms and conditions which indicate that users consent to the collection, transfer, manipulation, storage, and disclosure of data. Therefore, we do not expect ethical, legal, or social implications from the usage of the tweets. Our data was collected using search terms related to the presidential election held in the United States on November 8th 2016. Particularly, we queried Twitter's streaming API, more precisely the filter endpoint of the streaming API, using the following hashtags and user handles: #MyVote2016, #ElectionDay, #electionnight, @realDonaldTrump and @HillaryClinton. The data collection ran for just one day (Nov 8th 2016). One straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share a exact copy of a tweet with his followers. Among the reasons for retweeting, Body et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) to show one’s role as a listener, and 3) to agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, as they reach a wider audience), and understand how it differs from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times. Once we have the dataset of viral tweets, we eliminated duplicates (some of the tweets were collected several times because they had several handles) and an expert manually inspected the text field within the tweets to label them as containing fake news, or not containing them (according to the characterization presented before). This annotated dataset BIBREF8 is publicly available and can be freely reused. Finally, we use the following fields within tweets (from the ones returned by Twitter's API) to compare their distributions and look for differences between viral tweets containing fake news and viral tweets not containing fake news: In the following section, we provide graphical descriptions of the distribution of each of the identified attributes for the two sets of tweets (those labelled as containing fake news and those labelled as not containing them). Where appropriate, we normalized and/or took logarithms of the data for better representation. To gain a better understanding of the significance of those differences, we use the Kolmogorov-Smirnov test with the null hypothesis that both distributions are equal. Results The sample collected consisted on 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016) produced by 643 users. Such small subset of viral tweets were retweeted on 290 841 occasions in the observed time-window. The 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `non containing fake news'. Note that the categorization is far from being perfect given the ambiguity of fake news themselves and human judgement involved in the process of categorization. 
Because of this, we do not claim that this dataset can be considered a ground truth. The following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered. Exposure Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time. However, in terms of retweets, Figure FIGREF25 shows no apparent difference between containing fake news or not containing them. That is confirmed by the Kolmogorov-Smirnoff test, which does not discard the hypothesis that the associated distributions are equal. In relation to the number of favourites, users that generated at least a viral tweet containing fake news appear to have, on average, less favourites than users that do not generate them. Figure FIGREF26 shows the distribution of favourites. Despite the apparent visual differences, the difference are not statistically significant. Finally, the number of hashtags used in viral fake news appears to be larger than those in other viral tweets. Figure FIGREF27 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news. Polarization Finally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 of such tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for any of the candidates. Conclusions With the election of Donald Trump as President of the United States, the concept of fake news has become a broadly-known phenomenon that is getting tremendous attention from governments and media companies. We have presented a preliminary study on the meta-data of a publicly available dataset of tweets that became viral during the day of the 2016 US presidential election. Our aim is to advance the understanding of which features might be characteristic of viral tweets containing fake news in comparison with viral tweets without fake news. We believe that the only way to automatically identify those deceitful tweets (i.e. containing fake news) is by actually understanding and modelling them. Only then, the automation of the processes of tagging and blocking these tweets can be successfully performed. In the same way that spam was fought, we anticipate fake news will suffer a similar evolution, with social platforms implementing tools to deal with them. With most works so far focusing on the actual content of the tweets, ours is a novel attempt from a different, but also complementary, angle. Within the used dataset, we found there are differences around exposure, characteristics of accounts spreading fake news and the tone of the content. Those findings suggest that it is indeed possible to model and automatically detect fake news. We plan to replicate and validate our experiments in an extended sample of tweets (until 4 months after the US election), and tests the predictive power of the features we found relevant within our sample. 
Author Disclosure Statement No competing financial interest exist. Figure 2: Density distributions of achieved retweets for tweets in our dataset 1)containing fake news and 2)not containing them. No differences are apparent. Figure 4: Distribution of the number of hashtags used in tweets labelled as containing fake news and those labelled as not containing them. Figure 5: Tweets labelled as containing fake news mostly come from non-verified users. This contrasts with the opposite pattern for tweets non containing them (which mostly originate from verified accounts). Figure 9: Density distribution of friends/followers ratio. Note that they do not follow a normal distribution. A higher friends/followers ratio exists for accounts that has at least produced a tweet labelled as containing fake news. Figure 11: Number of media elements embedded within viral tweets (labelled as containing fake news vs. labelled as not containing them) Figure 12: Number of URLs embedded within viral tweets (with fake news vs. without them). Differences are statistically significant with α = 0.05 Table 2: Summary of our conclusions, and tested hypothesis Query: What is their definition of tweets going viral? JSON Response:
{"answer": "Viral tweets are the ones that are retweeted more than 1000 times", "evidence": ["One straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share a exact copy of a tweet with his followers. Among the reasons for retweeting, Body et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) to show one\u2019s role as a listener, and 3) to agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, as they reach a wider audience), and understand how it differs from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times."]}
science.qasper_abstractive_qa.252.2
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nCharacterizing Political Fake News in Twitter by its Meta-Data\n\nThis article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users.\n\nUnnamed section\n10pt\n1.10pt\n[ Characterizing Political Fake News in Twitter by its Meta-DataJulio Amador Díaz LópezAxel Oehmichen Miguel Molina-Solana( j.amador, axelfrancois.oehmichen11, [email protected] ) Imperial College London This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users.\n]\n\nIntroduction\nWhile fake news, understood as deliberately misleading pieces of information, have existed since long ago (e.g. it is not unusual to receive news falsely claiming the death of a celebrity), the term reached the mainstream, particularly so in politics, during the 2016 presidential election in the United States BIBREF0 . Since then, governments and corporations alike (e.g. Google BIBREF1 and Facebook BIBREF2 ) have begun efforts to tackle fake news as they can affect political decisions BIBREF3 . Yet, the ability to define, identify and stop fake news from spreading is limited.\nSince the Obama campaign in 2008, social media has been pervasive in the political arena in the United States. Studies report that up to 62% of American adults receive their news from social media BIBREF4 . The wide use of platforms such as Twitter and Facebook has facilitated the diffusion of fake news by simplifying the process of receiving content with no significant third party filtering, fact-checking or editorial judgement. 
Such characteristics make these platforms suitable means for sharing news that, disguised as legit ones, try to confuse readers.\nSuch use and their prominent rise has been confirmed by Craig Silverman, a Canadian journalist who is a prominent figure on fake news BIBREF5 : “In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlet”.\nOur current research hence departs from the assumption that social media is a conduit for fake news and asks the question of whether fake news (as spam was some years ago) can be identified, modelled and eventually blocked. In order to do so, we use a sample of more that 1.5M tweets collected on November 8th 2016 —election day in the United States— with the goal of identifying features that tweets containing fake news are likely to have. As such, our paper aims to provide a preliminary characterization of fake news in Twitter by looking into meta-data embedded in tweets. Considering meta-data as a relevant factor of analysis is in line with findings reported by Morris et al. BIBREF6 . We argue that understanding differences between tweets containing fake news and regular tweets will allow researchers to design mechanisms to block fake news in Twitter.\nSpecifically, our goals are: 1) compare the characteristics of tweets labelled as containing fake news to tweets labelled as not containing them, 2) characterize, through their meta-data, viral tweets containing fake news and the accounts from which they originated, and 3) determine the extent to which tweets containing fake news expressed polarized political views.\nFor our study, we used the number of retweets to single-out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. BIBREF7 (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts and stories where the truth is contentious. The dataset BIBREF8 , manually labelled by an expert, has been publicly released and is available to researchers and interested parties.\nFrom our results, the following main observations can be made:\nOur findings resonate with similar work done on fake news such as the one from Allcot and Gentzkow BIBREF9 . Therefore, even if our study is a preliminary attempt at characterizing fake news on Twitter using only their meta-data, our results provide external validity to previous research. Moreover, our work not only stresses the importance of using meta-data, but also underscores which parameters may be useful to identify fake news on Twitter.\nThe rest of the paper is organized as follows. The next section briefly discusses where this work is located within the literature on fake news and contextualizes the type of fake news we are studying. Then, we present our hypotheses, the data, and the methodology we follow. Finally, we present our findings, conclusions of this study, and future lines of work.\n\nResearch Hypotheses\nPrevious works on the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. 
Our hypotheses builds on them and identifies three important dimensions that may help distinguishing fake news from legit information:\nTaking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to identify tweets containing fake news from those not containing them. They will be later tested over our collected dataset.\nExposure.\nCharacterization.\nPolarization.\n\nData and Methodology\nFor this study, we collected publicly available tweets using Twitter's public API. Given the nature of the data, it is important to emphasize that such tweets are subject to Twitter's terms and conditions which indicate that users consent to the collection, transfer, manipulation, storage, and disclosure of data. Therefore, we do not expect ethical, legal, or social implications from the usage of the tweets. Our data was collected using search terms related to the presidential election held in the United States on November 8th 2016. Particularly, we queried Twitter's streaming API, more precisely the filter endpoint of the streaming API, using the following hashtags and user handles: #MyVote2016, #ElectionDay, #electionnight, @realDonaldTrump and @HillaryClinton. The data collection ran for just one day (Nov 8th 2016).\nOne straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share a exact copy of a tweet with his followers. Among the reasons for retweeting, Body et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) to show one’s role as a listener, and 3) to agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, as they reach a wider audience), and understand how it differs from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times.\nOnce we have the dataset of viral tweets, we eliminated duplicates (some of the tweets were collected several times because they had several handles) and an expert manually inspected the text field within the tweets to label them as containing fake news, or not containing them (according to the characterization presented before). This annotated dataset BIBREF8 is publicly available and can be freely reused.\nFinally, we use the following fields within tweets (from the ones returned by Twitter's API) to compare their distributions and look for differences between viral tweets containing fake news and viral tweets not containing fake news:\nIn the following section, we provide graphical descriptions of the distribution of each of the identified attributes for the two sets of tweets (those labelled as containing fake news and those labelled as not containing them). Where appropriate, we normalized and/or took logarithms of the data for better representation. To gain a better understanding of the significance of those differences, we use the Kolmogorov-Smirnov test with the null hypothesis that both distributions are equal.\n\nResults\nThe sample collected consisted on 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016) produced by 643 users. 
Such small subset of viral tweets were retweeted on 290 841 occasions in the observed time-window.\nThe 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `non containing fake news'. Note that the categorization is far from being perfect given the ambiguity of fake news themselves and human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth.\nThe following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered.\n\nExposure\nFigure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time.\nHowever, in terms of retweets, Figure FIGREF25 shows no apparent difference between containing fake news or not containing them. That is confirmed by the Kolmogorov-Smirnoff test, which does not discard the hypothesis that the associated distributions are equal.\nIn relation to the number of favourites, users that generated at least a viral tweet containing fake news appear to have, on average, less favourites than users that do not generate them. Figure FIGREF26 shows the distribution of favourites. Despite the apparent visual differences, the difference are not statistically significant.\nFinally, the number of hashtags used in viral fake news appears to be larger than those in other viral tweets. Figure FIGREF27 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news.\n\nPolarization\nFinally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 of such tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for any of the candidates.\n\nConclusions\nWith the election of Donald Trump as President of the United States, the concept of fake news has become a broadly-known phenomenon that is getting tremendous attention from governments and media companies. We have presented a preliminary study on the meta-data of a publicly available dataset of tweets that became viral during the day of the 2016 US presidential election. Our aim is to advance the understanding of which features might be characteristic of viral tweets containing fake news in comparison with viral tweets without fake news.\nWe believe that the only way to automatically identify those deceitful tweets (i.e. containing fake news) is by actually understanding and modelling them. Only then, the automation of the processes of tagging and blocking these tweets can be successfully performed. 
In the same way that spam was fought, we anticipate fake news will suffer a similar evolution, with social platforms implementing tools to deal with them. With most works so far focusing on the actual content of the tweets, ours is a novel attempt from a different, but also complementary, angle.\nWithin the used dataset, we found there are differences around exposure, characteristics of accounts spreading fake news and the tone of the content. Those findings suggest that it is indeed possible to model and automatically detect fake news. We plan to replicate and validate our experiments in an extended sample of tweets (until 4 months after the US election), and tests the predictive power of the features we found relevant within our sample.\n\nAuthor Disclosure Statement\nNo competing financial interest exist.\n\nFigure 2: Density distributions of achieved retweets for tweets in our dataset 1)containing fake news and 2)not containing them. No differences are apparent.\n\nFigure 4: Distribution of the number of hashtags used in tweets labelled as containing fake news and those labelled as not containing them.\n\nFigure 5: Tweets labelled as containing fake news mostly come from non-verified users. This contrasts with the opposite pattern for tweets non containing them (which mostly originate from verified accounts).\n\nFigure 9: Density distribution of friends/followers ratio. Note that they do not follow a normal distribution. A higher friends/followers ratio exists for accounts that has at least produced a tweet labelled as containing fake news.\n\nFigure 11: Number of media elements embedded within viral tweets (labelled as containing fake news vs. labelled as not containing them)\n\nFigure 12: Number of URLs embedded within viral tweets (with fake news vs. without them). Differences are statistically significant with α = 0.05\n\nTable 2: Summary of our conclusions, and tested hypothesis\n\nQuery: What is their definition of tweets going viral?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"Viral tweets are the ones that are retweeted more than 1000 times\", \"evidence\": [\"One straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share a exact copy of a tweet with his followers. Among the reasons for retweeting, Body et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) to show one\\u2019s role as a listener, and 3) to agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, as they reach a wider audience), and understand how it differs from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times.\"]}", "role": "assistant" } ]
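The methodology of the study above compares meta-data distributions of viral tweets with and without fake news using a two-sample Kolmogorov–Smirnov test. A minimal sketch of one such comparison follows, assuming scipy is available; the variable names and the 0.05 threshold are illustrative, not taken from the paper's code.

```python
from scipy.stats import ks_2samp

def compare_feature(fake_news_values, other_values, alpha=0.05):
    """Two-sample KS test for one meta-data feature (e.g. follower counts).

    The null hypothesis is that both groups of viral tweets are drawn from the
    same distribution; a small p-value suggests the feature differs between them.
    """
    statistic, p_value = ks_2samp(fake_news_values, other_values)
    return {"ks_statistic": statistic,
            "p_value": p_value,
            "significant_at_alpha": p_value < alpha}
```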
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: Recently some have called for randomized controlled trials comparing RFA to hepatic resection, particularly for patients with only a few small metastases. The objectives were to compare local recurrence and survival following RFA and hepatic resection for colorectal liver metastases. This was a retrospective review of open RFA and hepatic resection for colorectal liver metastases between January 1998 and May 2007. All patients who had RFA were considered to have unresectable disease. 58 patients had hepatic resection and 43 had RFA. A 5-year survival after resection was 43% compared to 23% after RFA. For patients with solitary lesions, a 5-year survival was 48% after resection and 15% after RFA. Sixty percent of patients suffered local recurrences after RFA compared to 7% after hepatic resection. RFA is inferior to resection. The results observed in this study support the consensus that RFA cannot be considered an equivalent procedure to hepatic resection. Extracted Data:
[["Radiofrequency ablation", "Hepatic resection", "Local recurrence rate through the study", "significantly increased", "Sixty percent\r\r\r\r\nof patients suffered local recurrences after RFA\r\r\r\r\ncompared to 7% after hepatic resection. "], ["Radiofrequency ablation", "Hepatic resection", "Need for ICU ", "no significant difference", "Rates of ICU admission (6.5% of patients overall), and length of stay (median of 7 days for all patients; range 1 to 48) were not significantly different between patients having hepatic resection and RFA."], ["Radiofrequency ablation", "Hepatic resection", "Duration of surgery", "significantly decreased", "The operative time was significantly longer for the resection group compared to the RFA group (median of 269 minutes (range 118 to 452) versus 204 minutes (range 113 to 316); P \u003c .0005)."], ["Radiofrequency ablation", "Hepatic resection", "Transfusion requirements", "significantly decreased", "Operative blood loss (median 1400 mL (range 100 to 9000) versus 150 mL (range 50 to 2300); P \u003c .0005) and transfusion requirements (44% of patients versus 5%; P \u003c .0005) were higher for resection than for RFA."]]
science.evidence_inference.1
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Recently some have\ncalled for randomized controlled trials\ncomparing RFA to hepatic resection, particularly\nfor patients with only a few small metastases.\nThe objectives were to compare local\nrecurrence and survival following RFA and\nhepatic resection for colorectal liver\nmetastases. This was a retrospective review of\nopen RFA and hepatic resection for colorectal\nliver metastases between January 1998 and May\n2007. All patients who had RFA were considered\nto have unresectable disease. 58 patients had\nhepatic resection and 43 had RFA. A 5-year\nsurvival after resection was 43% compared to\n23% after RFA. For patients with solitary\nlesions, a 5-year survival was 48% after\nresection and 15% after RFA. Sixty percent\nof patients suffered local recurrences after RFA\ncompared to 7% after hepatic resection. RFA\nis inferior to resection. The results observed\nin this study support the consensus that RFA\ncannot be considered an equivalent procedure to\nhepatic resection.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Radiofrequency ablation\", \"Hepatic resection\", \"Local recurrence rate through the study\", \"significantly increased\", \"Sixty percent\\r\\r\\r\\r\\nof patients suffered local recurrences after RFA\\r\\r\\r\\r\\ncompared to 7% after hepatic resection. \"], [\"Radiofrequency ablation\", \"Hepatic resection\", \"Need for ICU \", \"no significant difference\", \"Rates of ICU admission (6.5% of patients overall), and length of stay (median of 7 days for all patients; range 1 to 48) were not significantly different between patients having hepatic resection and RFA.\"], [\"Radiofrequency ablation\", \"Hepatic resection\", \"Duration of surgery\", \"significantly decreased\", \"The operative time was significantly longer for the resection group compared to the RFA group (median of 269 minutes (range 118 to 452) versus 204 minutes (range 113 to 316); P \\u003c .0005).\"], [\"Radiofrequency ablation\", \"Hepatic resection\", \"Transfusion requirements\", \"significantly decreased\", \"Operative blood loss (median 1400 mL (range 100 to 9000) versus 150 mL (range 50 to 2300); P \\u003c .0005) and transfusion requirements (44% of patients versus 5%; P \\u003c .0005) were higher for resection than for RFA.\"]]", "role": "assistant" } ]
science.discomat_te
| | Sample A (T f=900 degC) | Sample B (T f=1400 degC) | | Ph 1(F2) b | 6x10-5 | 6x10-5 | | Ph 2 A(F2) d | 8x10-12 | 1x10-11 | | Ph 2 A(ArF) e | 3x10-14 | | | Ph 2 A(KrF) f | (2+-1)x10-15 | | Caption: Quantum yields of the E' center formed by the irradiation with F2, ArF, or KrF laser Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "2", "type": "Other"}
science.discomat_te.498
[ { "content": "| | Sample A (T f=900 degC) | Sample B (T f=1400 degC) |\n| Ph 1(F2) b | 6x10-5 | 6x10-5 |\n| Ph 2 A(F2) d | 8x10-12 | 1x10-11 |\n| Ph 2 A(ArF) e | 3x10-14 | |\n| Ph 2 A(KrF) f | (2+-1)x10-15 | |\n\nCaption: Quantum yields of the E' center formed by the irradiation with F2, ArF, or KrF laser\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"2\", \"type\": \"Other\"}", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: HMMpTM: improving transmembrane protein topology prediction using phosphorylation and glycosylation site prediction We report the development of a Hidden Markov Model based method, capable of predicting the topology of transmembrane proteins and the existence of kinase specific phosphorylation and N/O-linked glycosylation sites along the protein sequence. Our method integrates a novel feature in transmembrane protein topology prediction, which results in improved performance for topology prediction and reliable prediction of phosphorylation and glycosylation sites. The method is freely available at http://bioinformatics.biol.uoa.gr/HMMpTM MetaTM - a consensus method for transmembrane protein topology prediction. A novel TM consensus method, named MetaTM, is proposed in this work. MetaTM is based on support vector machine models and combines the results of six TM topology predictors and two signal peptide predictors. On a large data set comprising 1460 sequences of TM proteins with known topologies and 2362 globular protein sequences it correctly predicts 86.7% of all topologies. CONCLUSION: Combining several TM predictors in a consensus prediction framework improves overall accuracy compared to any of the individual methods. Our proposed SVM-based system also has higher accuracy than a previous consensus predictor. MetaTM is made available both as downloadable source code and as DAS server at http://MetaTM.sbc.su.se We present a support vector machine-based (SVM) TM protein topology predictor that integrates both signal peptide and re-entrant helix prediction, benchmarked with full cross-validation on a novel data set of 131 sequences with known crystal structures. The method achieves topology prediction accuracy of 89%, while signal peptides and re-entrant helices are predicted with 93% and 44% accuracy respectively. An additional SVM trained to discriminate between globular and TM proteins detected zero false positives, with a low false negative rate of 0.4%. We present the results of applying these tools to a number of complete genomes. Source code, data sets and a web server are freely available from http://bioinf.cs.ucl.ac.uk/psipred/ Hidden Markov models (HMMs) have been successfully applied to the tasks of transmembrane protein topology prediction and signal peptide prediction. In this paper we expand upon this work by making use of the more powerful class of dynamic Bayesian networks (DBNs). Our model, Philius, is inspired by a previously published HMM, Phobius, and combines a signal peptide submodel with a transmembrane submodel. We introduce a two-stage DBN decoder that combines the power of posterior decoding with the grammar constraints of Viterbi-style decoding. Philius also provides protein type, segment, and topology confidence metrics to aid in the interpretation of the predictions We report a relative improvement of 13% over Phobius in full-topology prediction accuracy on transmembrane proteins, and a sensitivity and specificity of 0.96 in detecting signal peptides HMM_RA: an improved method for alpha-helical transmembrane protein topology prediction This paper presents a Hidden Markov Model (referred to as HMM_RA) that can predict the topology of alpha-helical transmembrane proteins with improved performance. 
HMM_RA adopts the same structure as the HMMTOP method, which has five modules: inside loop, inside helix tail, membrane helix, outside helix tail and outside loop. Each module consists of one or multiple states A new method (MEMSAT3) for predicting transmembrane protein topology from sequence profiles is described and benchmarked with full cross-validation on a standard data set of 184 transmembrane proteins The algorithms presented here, are easily implemented in any kind of a Hidden Markov Model, whereas the prediction method (HMM-TM) is freely available for academic users TMHMM, the current state-of-the-art method, has less than 52% accuracy in topology prediction on one set of transmembrane proteins of known topology. Based on the observation that there are functional domains that occur preferentially internal or external to the membrane, we have extended the model of TMHMM to incorporate functional domains, using a probabilistic approach originally developed for computational gene finding. Our extension is better than TMHMM in predicting the topology of transmembrane proteins. As prediction of functional domain improves, our system's prediction accuracy will likely improve as well Here, we present Phobius, a combined transmembrane protein topology and signal peptide predictor. The predictor is based on a hidden Markov model (HMM) that models the different sequence regions of a signal peptide and the different regions of a transmembrane protein in a series of interconnected states. Training was done on a newly assembled and curated dataset. Compared to TMHMM and SignalP, errors coming from cross-prediction between transmembrane segments and signal peptides were reduced substantially by Phobius. False classifications of signal peptides were reduced from 26.1% to 3.9% and false classifications of transmembrane helices were reduced from 19.0% to 7.7%. Phobius was applied to the proteomes of Homo sapiens and Escherichia coli. Here we also noted a drastic reduction of false classifications compared to TMHMM/SignalP, suggesting that Phobius is well suited for whole-genome annotation of signal peptides and transmembrane regions HMMpTM: improving transmembrane protein topology prediction using phosphorylation and glycosylation site prediction. The HMMTOP transmembrane topology prediction server. Question: List available methods for transmembrane protein topology prediction.
["HMMpTM", "MetaTM", "Philius", "HMM_RA", "HMMTOP", "MEMSAT3", "HMM-TM", "TMHMM", "Phobius", "SignalP"]
science.bioasq_list_qa.168
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: HMMpTM: improving transmembrane protein topology prediction using phosphorylation and glycosylation site prediction\nWe report the development of a Hidden Markov Model based method, capable of predicting the topology of transmembrane proteins and the existence of kinase specific phosphorylation and N/O-linked glycosylation sites along the protein sequence. Our method integrates a novel feature in transmembrane protein topology prediction, which results in improved performance for topology prediction and reliable prediction of phosphorylation and glycosylation sites. The method is freely available at http://bioinformatics.biol.uoa.gr/HMMpTM\nMetaTM - a consensus method for transmembrane protein topology prediction.\nA novel TM consensus method, named MetaTM, is proposed in this work. MetaTM is based on support vector machine models and combines the results of six TM topology predictors and two signal peptide predictors. On a large data set comprising 1460 sequences of TM proteins with known topologies and 2362 globular protein sequences it correctly predicts 86.7% of all topologies. CONCLUSION: Combining several TM predictors in a consensus prediction framework improves overall accuracy compared to any of the individual methods. Our proposed SVM-based system also has higher accuracy than a previous consensus predictor. MetaTM is made available both as downloadable source code and as DAS server at http://MetaTM.sbc.su.se\nWe present a support vector machine-based (SVM) TM protein topology predictor that integrates both signal peptide and re-entrant helix prediction, benchmarked with full cross-validation on a novel data set of 131 sequences with known crystal structures. The method achieves topology prediction accuracy of 89%, while signal peptides and re-entrant helices are predicted with 93% and 44% accuracy respectively. An additional SVM trained to discriminate between globular and TM proteins detected zero false positives, with a low false negative rate of 0.4%. We present the results of applying these tools to a number of complete genomes. Source code, data sets and a web server are freely available from http://bioinf.cs.ucl.ac.uk/psipred/\nHidden Markov models (HMMs) have been successfully applied to the tasks of transmembrane protein topology prediction and signal peptide prediction. In this paper we expand upon this work by making use of the more powerful class of dynamic Bayesian networks (DBNs). Our model, Philius, is inspired by a previously published HMM, Phobius, and combines a signal peptide submodel with a transmembrane submodel. We introduce a two-stage DBN decoder that combines the power of posterior decoding with the grammar constraints of Viterbi-style decoding. Philius also provides protein type, segment, and topology confidence metrics to aid in the interpretation of the predictions\nWe report a relative improvement of 13% over Phobius in full-topology prediction accuracy on transmembrane proteins, and a sensitivity and specificity of 0.96 in detecting signal peptides\nHMM_RA: an improved method for alpha-helical transmembrane protein topology prediction\nThis paper presents a Hidden Markov Model (referred to as HMM_RA) that can predict the topology of alpha-helical transmembrane proteins with improved performance. 
HMM_RA adopts the same structure as the HMMTOP method, which has five modules: inside loop, inside helix tail, membrane helix, outside helix tail and outside loop. Each module consists of one or multiple states\nA new method (MEMSAT3) for predicting transmembrane protein topology from sequence profiles is described and benchmarked with full cross-validation on a standard data set of 184 transmembrane proteins\nThe algorithms presented here, are easily implemented in any kind of a Hidden Markov Model, whereas the prediction method (HMM-TM) is freely available for academic users\nTMHMM, the current state-of-the-art method, has less than 52% accuracy in topology prediction on one set of transmembrane proteins of known topology. Based on the observation that there are functional domains that occur preferentially internal or external to the membrane, we have extended the model of TMHMM to incorporate functional domains, using a probabilistic approach originally developed for computational gene finding. Our extension is better than TMHMM in predicting the topology of transmembrane proteins. As prediction of functional domain improves, our system's prediction accuracy will likely improve as well\nHere, we present Phobius, a combined transmembrane protein topology and signal peptide predictor. The predictor is based on a hidden Markov model (HMM) that models the different sequence regions of a signal peptide and the different regions of a transmembrane protein in a series of interconnected states. Training was done on a newly assembled and curated dataset. Compared to TMHMM and SignalP, errors coming from cross-prediction between transmembrane segments and signal peptides were reduced substantially by Phobius. False classifications of signal peptides were reduced from 26.1% to 3.9% and false classifications of transmembrane helices were reduced from 19.0% to 7.7%. Phobius was applied to the proteomes of Homo sapiens and Escherichia coli. Here we also noted a drastic reduction of false classifications compared to TMHMM/SignalP, suggesting that Phobius is well suited for whole-genome annotation of signal peptides and transmembrane regions\nHMMpTM: improving transmembrane protein topology prediction using phosphorylation and glycosylation site prediction.\nThe HMMTOP transmembrane topology prediction server.\n\nQuestion: List available methods for transmembrane protein topology prediction.", "role": "user" }, { "content": "[\"HMMpTM\", \"MetaTM\", \"Philius\", \"HMM_RA\", \"HMMTOP\", \"MEMSAT3\", \"HMM-TM\", \"TMHMM\", \"Phobius\", \"SignalP\"]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Circulating Fatty Acid Synthase in pregnant women: Relationship to blood pressure, maternal metabolism and newborn parameters. The enzyme FASN (fatty acid synthase) is potentially related with hypertension and metabolic dysfunction. FASN is highly expressed in the human placenta. We aimed to investigate the relationship circulating FASN has with blood pressure, maternal metabolism and newborn parameters in healthy pregnant women. Circulating FASN was assessed in 115 asymptomatic pregnant women in the second trimester of gestation along with C-peptide, fasting glucose and insulin, post-load glucose lipids, HMW-adiponectin and blood pressure (the latter was assessed in each trimester of gestation). At birth, newborns and placentas were weighed. FASN expression was also able to be assessed in 80 placentas. Higher circulating FASN was associated with lower systolic blood pressure (SBP), with a more favourable metabolic phenotype (lower fasting glucose and insulin, post load glucose, HbAc1, HOMA-IR and C-peptide), and with lower placental and birth weight (all p < 0.05 to p < 0.001). Placental FASN expression related positively to circulating FASN (p < 0.005) and negatively to placental weight (p < 0.05). Our observations suggest a physiological role of placental FASN in human pregnancy. Future studies will clarify whether circulating FASN of placental origin does actually regulate placental and fetal growth, and (thereby) has a favourable influence on the pregnant mother's insulin sensitivity and blood pressure.
{"CellLine": [], "Chemical": ["lipids", "C-peptide", "glucose"], "Disease": ["hypertension", "metabolic dysfunction"], "Gene": ["fatty acid synthase", "insulin", "FASN", "HbAc1", "Fatty Acid Synthase", "adiponectin", "HMW"], "Species": ["women", "human"], "Variant": []}
science.biored_ner.21
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nCirculating Fatty Acid Synthase in pregnant women: Relationship to blood pressure, maternal metabolism and newborn parameters. The enzyme FASN (fatty acid synthase) is potentially related with hypertension and metabolic dysfunction. FASN is highly expressed in the human placenta. We aimed to investigate the relationship circulating FASN has with blood pressure, maternal metabolism and newborn parameters in healthy pregnant women. Circulating FASN was assessed in 115 asymptomatic pregnant women in the second trimester of gestation along with C-peptide, fasting glucose and insulin, post-load glucose lipids, HMW-adiponectin and blood pressure (the latter was assessed in each trimester of gestation). At birth, newborns and placentas were weighed. FASN expression was also able to be assessed in 80 placentas. Higher circulating FASN was associated with lower systolic blood pressure (SBP), with a more favourable metabolic phenotype (lower fasting glucose and insulin, post load glucose, HbAc1, HOMA-IR and C-peptide), and with lower placental and birth weight (all p < 0.05 to p < 0.001). Placental FASN expression related positively to circulating FASN (p < 0.005) and negatively to placental weight (p < 0.05). Our observations suggest a physiological role of placental FASN in human pregnancy. Future studies will clarify whether circulating FASN of placental origin does actually regulate placental and fetal growth, and (thereby) has a favourable influence on the pregnant mother's insulin sensitivity and blood pressure.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"lipids\", \"C-peptide\", \"glucose\"], \"Disease\": [\"hypertension\", \"metabolic dysfunction\"], \"Gene\": [\"fatty acid synthase\", \"insulin\", \"FASN\", \"HbAc1\", \"Fatty Acid Synthase\", \"adiponectin\", \"HMW\"], \"Species\": [\"women\", \"human\"], \"Variant\": []}", "role": "assistant" } ]
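The biored_ner prompt above fixes six entity categories, and its answer is a JSON object keyed by exactly those categories, each holding a list of surface strings. The check below is an illustrative sketch of that structure; the function and constant names are assumptions, not dataset tooling.

```python
import json

# Sketch of a structural check for a biored_ner answer: a JSON object whose
# keys are the six categories named in the prompt and whose values are lists
# of strings. Purely illustrative; not part of the dataset.
EXPECTED_KEYS = {"CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"}

def is_well_formed(answer: str) -> bool:
    entities = json.loads(answer)
    return (
        isinstance(entities, dict)
        and set(entities) == EXPECTED_KEYS
        and all(
            isinstance(values, list) and all(isinstance(v, str) for v in values)
            for values in entities.values()
        )
    )
```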
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: Aim: Evaluate efficacy/safety of bremelanotide (BMT), a melanocortin-receptor-4 agonist, to treat female sexual dysfunctions in premenopausal women. Methods: Patients randomized to receive placebo or BMT 0.75, 1.25 or 1.75 mg self-administered subcutaneously, as desired, over 12 weeks. Primary end point was change in satisfying sexual events/month. Secondary end points included total score changes on female sexual function index and female sexual distress scale-desire/arousal/orgasm. Results: Efficacy data, n = 327. For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), −11.1 versus −6.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014). Adverse events: nausea, flushing, headache. Conclusion: In premenopausal women with female sexual dysfunctions, self-administered, as desired, subcutaneous BMT was safe, effective, and well tolerated (NCT01382719). Extracted Data:
[["Bremelanotide 1.75", "Placebo", "Satisfying sexual events/month", "significantly increased", "For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \u221211.1 versus \u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014)."], ["Bremelanotide 1.25", "Placebo", "Sexual function index total score", "significantly increased", "For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \u221211.1 versus \u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014)."], ["Bremelanotide 1.25", "Placebo", "Sexual distress improvement ", "significantly increased", "For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \u221211.1 versus \u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014)."], ["Bremelanotide 1.25", "Placebo", "Satisfying sexual events/month", "significantly increased", "For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \u221211.1 versus \u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014)."], ["Bremelanotide 1.25", "Placebo", "Satisfying sexual events/month", "significantly increased", ". For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180),"], ["Bremelanotide 1.75", "Placebo", "Sexual function index total score", "significantly increased", "For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \u221211.1 versus \u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014)."], ["Bremelanotide 1.75", "Placebo", "Sexual distress improvement ", "significantly increased", "For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \u221211.1 versus \u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014)."]]
science.evidence_inference.111
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Aim: Evaluate efficacy/safety of bremelanotide (BMT), a melanocortin-receptor-4 agonist, to treat female sexual dysfunctions in premenopausal women. Methods: Patients randomized to receive placebo or BMT 0.75, 1.25 or 1.75 mg self-administered subcutaneously, as desired, over 12 weeks. Primary end point was change in satisfying sexual events/month. Secondary end points included total score changes on female sexual function index and female sexual distress scale-desire/arousal/orgasm. Results: Efficacy data, n = 327. For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), −11.1 versus −6.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014). Adverse events: nausea, flushing, headache. Conclusion: In premenopausal women with female sexual dysfunctions, self-administered, as desired, subcutaneous BMT was safe, effective, and well tolerated (NCT01382719).\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Bremelanotide 1.75\", \"Placebo\", \"Satisfying sexual events/month\", \"significantly increased\", \"For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \\u221211.1 versus \\u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014).\"], [\"Bremelanotide 1.25\", \"Placebo\", \"Sexual function index total score\", \"significantly increased\", \"For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \\u221211.1 versus \\u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014).\"], [\"Bremelanotide 1.25\", \"Placebo\", \"Sexual distress improvement \", \"significantly increased\", \"For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \\u221211.1 versus \\u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014).\"], [\"Bremelanotide 1.25\", \"Placebo\", \"Satisfying sexual events/month\", \"significantly increased\", \"For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \\u221211.1 versus \\u22126.8 female 
sexual distress scale-desire/arousal/orgasm total score (p = 0.0014).\"], [\"Bremelanotide 1.25\", \"Placebo\", \"Satisfying sexual events/month\", \"significantly increased\", \". For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180),\"], [\"Bremelanotide 1.75\", \"Placebo\", \"Sexual function index total score\", \"significantly increased\", \"For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \\u221211.1 versus \\u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014).\"], [\"Bremelanotide 1.75\", \"Placebo\", \"Sexual distress improvement \", \"significantly increased\", \"For 1.25/1.75-mg pooled versus placebo, mean changes from baseline to study end were +0.7 versus +0.2 satisfying sexual events/month (p = 0.0180), +3.6 versus +1.9 female sexual function index total score (p = 0.0017), \\u221211.1 versus \\u22126.8 female sexual distress scale-desire/arousal/orgasm total score (p = 0.0014).\"]]", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: In subsequent work, we showed that we could augment the E2E training data with synthetically generated stylistic variants and train a neural generator to reproduce these variants, however the models can still only generate what they have seen in training<cite> [5]</cite> .
["Background"]
science.multicite_intent_classification.1274
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nIn subsequent work, we showed that we could augment the E2E training data with synthetically generated stylistic variants and train a neural generator to reproduce these variants, however the models can still only generate what they have seen in training<cite> [5]</cite> .", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Tennis elbow is an overuse injury affecting people performing repetitive forearm movements. It is a soft tissue disorder that causes significant disability and pain. <p> The aim of the study was to establish that an intramuscular steroid injection is effective in the short-term pain relief and functional improvement of tennis elbow. The severity of pain at the injection site was monitored to determine whether the intramuscular injection is better tolerated than the intralesional injection. ABSTRACT.METHODS AND RESULTS: 19 patients, who had no treatment for tennis elbow in the preceding 3 months, were recruited from Whipps Cross University Hospital, London, and were randomised to receive either 80 mg of intramuscular Depo-Medrone or 40 mg of intralesional Depo-Medrone injection. Blinding proved difficult as the injection sites differed and placebo arms were not included in the study. A Patient-Rated Tennis Elbow Evaluation (PRTEE) Questionnaire and a 10-point Likert scale were used to assess primary outcome. Six weeks after the treatment, there was a reduction in pain, improvement in function and total PRTEE scores in both intramuscular and intralesional groups (p=0.008) using a 95% CI for mean treatment difference of −26 to +16 points. A statistically significant result (p=0.001) in favour of intramuscular causing less pain at the injection site was noted. ABSTRACT.CONCLUSION: Non-inferiority of intramuscular to intralesional injections was not confirmed; however, the intramuscular injection proved to be effective in reducing tennis elbow-related symptoms and was found less painful at the site of injection at the time of administration. ABSTRACT.TRIAL REGISTRATION NUMBER: EUDRACT Number: 2010-022131-11. <p> REC Number: 10/H0718/76 (NRES, Central London REC 1). Extracted Data:
[["80\u2009mg of intramuscular Depo-Medrone", "40\u2009mg of intralesional Depo-Medrone", "pain at the injection site", "significantly decreased", "A statistically significant result (p=0.001) in favour of intramuscular causing less pain at the injection site was noted."]]
science.evidence_inference.200
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Tennis elbow is an overuse injury affecting people performing repetitive forearm movements. It is a soft tissue disorder that causes significant disability and pain. <p> The aim of the study was to establish that an intramuscular steroid injection is effective in the short-term pain relief and functional improvement of tennis elbow. The severity of pain at the injection site was monitored to determine whether the intramuscular injection is better tolerated than the intralesional injection.\n\nABSTRACT.METHODS AND RESULTS: 19 patients, who had no treatment for tennis elbow in the preceding 3 months, were recruited from Whipps Cross University Hospital, London, and were randomised to receive either 80 mg of intramuscular Depo-Medrone or 40 mg of intralesional Depo-Medrone injection. Blinding proved difficult as the injection sites differed and placebo arms were not included in the study. A Patient-Rated Tennis Elbow Evaluation (PRTEE) Questionnaire and a 10-point Likert scale were used to assess primary outcome. Six weeks after the treatment, there was a reduction in pain, improvement in function and total PRTEE scores in both intramuscular and intralesional groups (p=0.008) using a 95% CI for mean treatment difference of −26 to +16 points. A statistically significant result (p=0.001) in favour of intramuscular causing less pain at the injection site was noted.\n\nABSTRACT.CONCLUSION: Non-inferiority of intramuscular to intralesional injections was not confirmed; however, the intramuscular injection proved to be effective in reducing tennis elbow-related symptoms and was found less painful at the site of injection at the time of administration.\n\nABSTRACT.TRIAL REGISTRATION NUMBER: EUDRACT Number: 2010-022131-11. <p> REC Number: 10/H0718/76 (NRES, Central London REC 1).\n\nExtracted Data:", "role": "user" }, { "content": "[[\"80\\u2009mg of intramuscular Depo-Medrone\", \"40\\u2009mg of intralesional Depo-Medrone\", \"pain at the injection site\", \"significantly decreased\", \"A statistically significant result (p=0.001) in favour of intramuscular causing less pain at the injection site was noted.\"]]", "role": "assistant" } ]
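The evidence_inference records above expect a JSON array of five-element rows in the order [treatment, control, result, impact, justification], with null allowed for a missing control. The sketch below checks only that shape; it is a hypothetical helper, not part of the dataset, and the impact values observed in these outputs are phrases such as "significantly increased" or "no significant difference" rather than the bare labels listed in the prompt.

```python
import json

# Sketch (illustrative only): verify that an evidence_inference answer parses
# as a JSON array of [treatment, control, result, impact, justification] rows.
# The control slot may be null, as the prompt allows.
def rows_are_well_shaped(answer: str) -> bool:
    rows = json.loads(answer)
    return isinstance(rows, list) and all(
        isinstance(row, list)
        and len(row) == 5
        and all(isinstance(field, (str, type(None))) for field in row)
        for row in rows
    )
```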
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: A parent report of comorbid diagnosis of attention-deficit/hyperactivity disorder, bipolar disorder, obsessive-compulsive disorder, depression, or anxiety was associated with a high rate of use, with 80% receiving ≥ 1 psychotropic medication Mindfulness-based therapy (MBT) has been found effective in reducing anxiety and depression symptoms, however research in autism is limited. Therefore, we examined the effects of a modified MBT protocol (MBT-AS) in high-functioning adults with ASD. the present study is the first controlled trial to demonstrate that adults with ASD can benefit from MBT-AS. This pilot, randomized controlled trial evaluated the feasibility and preliminary outcomes of the Multimodal Anxiety and Social Skills Intervention (MASSI) program in a sample of 30 adolescents with ASD and anxiety symptoms of moderate or greater severity. These findings suggest MASSI is a feasible treatment program and further evaluation is warranted. The purpose of this pilot study was to evaluate whether a modified version of the Coping Cat program could be effective in reducing anxiety in children with autism spectrum disorder (ASD). the Coping Cat program (cognitive-behavioral therapy; CBT) Results provide preliminary evidence that a modified version of the Coping Cat program may be a feasible and effective program for reducing clinically significant levels of anxiety in children with high-functioning ASD. The intent of this article is to explore the efficacy of both the literal and concrete externalization aspects within narrative therapy, and the implementation of interactive metaphors as a combined psychotherapeutic approach for decreasing anxiety with people who present with high-functioning autism. This paper reports a case series of children and adolescents with ASD and an anxiety disorder who were treated with a standard cognitive behaviour therapy (CBT) rationale adapted to take account of the neuropsychological features of ASD. children with moderate autistic symptomology (per SRS-P) were significantly more likely to improve from family CBT (FCBT) than individual CBT (ICBT; OR = 8.67) Though both treatments reduced anxiety, FCBT outperformed ICBT for children with moderate ASD symptoms, a benefit potentially linked to more at-home exposures and greater child involvement in FCBT. A structural model-building approach was used to test the extent to which family and peer variables directly or indirectly affected ASD via child anxiety/depression. The key findings were that anxiety/depression and ASD symptomatology were significantly related, and family conflict was more predictive of ASD symptomatology than positive family/peer influences. The results point to the utility of expanding interventions to include conflict management for couples, even when conflict and family distress is low. A family-based, cognitive behavioural treatment for anxiety in 47 children with comorbid anxiety disorders and High Functioning Autism Spectrum Disorder (HFA) was evaluated. Following treatment, 71.4% of the treated participants no longer fulfilled diagnostic criteria for an anxiety disorder. Comparisons between the two conditions indicated significant reductions in anxiety symptoms as measured by self-report, parent report and teacher report Question: List all reported treatment options for anxiety in autism spectrum disorder.
["Mindfulness Based Therapy (MBT)", "Multimodal Anxiety and Social Skills Intervention (MASSI) program", "modified version of the Coping Cat program", "Family cognitive-behavioral therapy", "Individual cognitive-behavioral therapy", "Conflict management for couples", "Sertraline"]
science.bioasq_list_qa.145
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: A parent report of comorbid diagnosis of attention-deficit/hyperactivity disorder, bipolar disorder, obsessive-compulsive disorder, depression, or anxiety was associated with a high rate of use, with 80% receiving ≥ 1 psychotropic medication\nMindfulness-based therapy (MBT) has been found effective in reducing anxiety and depression symptoms, however research in autism is limited.\nTherefore, we examined the effects of a modified MBT protocol (MBT-AS) in high-functioning adults with ASD.\nthe present study is the first controlled trial to demonstrate that adults with ASD can benefit from MBT-AS.\nThis pilot, randomized controlled trial evaluated the feasibility and preliminary outcomes of the Multimodal Anxiety and Social Skills Intervention (MASSI) program in a sample of 30 adolescents with ASD and anxiety symptoms of moderate or greater severity.\nThese findings suggest MASSI is a feasible treatment program and further evaluation is warranted.\nThe purpose of this pilot study was to evaluate whether a modified version of the Coping Cat program could be effective in reducing anxiety in children with autism spectrum disorder (ASD).\nthe Coping Cat program (cognitive-behavioral therapy; CBT)\nResults provide preliminary evidence that a modified version of the Coping Cat program may be a feasible and effective program for reducing clinically significant levels of anxiety in children with high-functioning ASD.\nThe intent of this article is to explore the efficacy of both the literal and concrete externalization aspects within narrative therapy, and the implementation of interactive metaphors as a combined psychotherapeutic approach for decreasing anxiety with people who present with high-functioning autism.\nThis paper reports a case series of children and adolescents with ASD and an anxiety disorder who were treated with a standard cognitive behaviour therapy (CBT) rationale adapted to take account of the neuropsychological features of ASD.\nchildren with moderate autistic symptomology (per SRS-P) were significantly more likely to improve from family CBT (FCBT) than individual CBT (ICBT; OR = 8.67)\nThough both treatments reduced anxiety, FCBT outperformed ICBT for children with moderate ASD symptoms, a benefit potentially linked to more at-home exposures and greater child involvement in FCBT.\nA structural model-building approach was used to test the extent to which family and peer variables directly or indirectly affected ASD via child anxiety/depression.\nThe key findings were that anxiety/depression and ASD symptomatology were significantly related, and family conflict was more predictive of ASD symptomatology than positive family/peer influences. The results point to the utility of expanding interventions to include conflict management for couples, even when conflict and family distress is low.\nA family-based, cognitive behavioural treatment for anxiety in 47 children with comorbid anxiety disorders and High Functioning Autism Spectrum Disorder (HFA) was evaluated.\nFollowing treatment, 71.4% of the treated participants no longer fulfilled diagnostic criteria for an anxiety disorder. 
Comparisons between the two conditions indicated significant reductions in anxiety symptoms as measured by self-report, parent report and teacher report\n\nQuestion: List all reported treatment options for anxiety in autism spectrum disorder.", "role": "user" }, { "content": "[\"Mindfulness Based Therapy (MBT)\", \"Multimodal Anxiety and Social Skills Intervention (MASSI) program\", \"modified version of the Coping Cat program\", \"Family cognitive-behavioral therapy\", \"Individual cognitive-behavioral therapy\", \"Conflict management for couples\", \"Sertraline\"]", "role": "assistant" } ]
science.discomat_te
| | Gb | Gc | Gd | | %V4+ (+-0.01) | 3.75 | 4.13 | 38.41 | | E a s e (eV) | 0.79 | 0.80 | 0.62 | Caption: Determined electronic conductivity E a and V4+ amount for Li/V=1 glasses Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "3.75", "type": "Other"} {"value": "4.13", "type": "Other"} {"value": "38.41", "type": "Other"} {"value": "0.79", "type": "Other"} {"value": "0.80", "type": "Other"} {"value": "0.62", "type": "Other"}
science.discomat_te.396
[ { "content": "| | Gb | Gc | Gd |\n| %V4+ (+-0.01) | 3.75 | 4.13 | 38.41 |\n| E a s e (eV) | 0.79 | 0.80 | 0.62 |\n\nCaption: Determined electronic conductivity E a and V4+ amount for Li/V=1 glasses\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"3.75\", \"type\": \"Other\"}\n{\"value\": \"4.13\", \"type\": \"Other\"}\n{\"value\": \"38.41\", \"type\": \"Other\"}\n{\"value\": \"0.79\", \"type\": \"Other\"}\n{\"value\": \"0.80\", \"type\": \"Other\"}\n{\"value\": \"0.62\", \"type\": \"Other\"}", "role": "assistant" } ]
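For the discomat_te records, the requested output is one JSON object per numeric table cell, printed one per line. The script below is an illustrative sketch (not dataset code) that walks the pipe-delimited Li/V table from this record and reproduces the six "Other" objects shown in its output field.

```python
import json
import re

# Sketch (illustrative, not part of the dataset): emit one JSON object per
# numeric cell of the pipe-delimited table above, skipping the row-label
# column, in the JSON-lines style used by the discomat_te outputs.
table = """\
| | Gb | Gc | Gd |
| %V4+ (+-0.01) | 3.75 | 4.13 | 38.41 |
| E a s e (eV) | 0.79 | 0.80 | 0.62 |"""

NUMERIC = re.compile(r"^-?\d+(?:\.\d+)?$")

for line in table.splitlines():
    cells = [cell.strip() for cell in line.strip().strip("|").split("|")]
    for cell in cells[1:]:  # first cell is the row label
        if NUMERIC.match(cell):
            print(json.dumps({"value": cell, "type": "Other"}))
```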
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Translation Artifacts in Cross-lingual Transfer Learning Both human and machine translation play a central role in cross-lingual transfer learning: many multilingual datasets have been created through professional translation services, and using machine translation to translate either the test set or the training set is a widely used transfer technique. In this paper, we show that such translation process can introduce subtle artifacts that have a notable impact in existing cross-lingual models. For instance, in natural language inference, translating the premise and the hypothesis independently can reduce the lexical overlap between them, which current models are highly sensitive to. We show that some previous findings in cross-lingual transfer learning need to be reconsidered in the light of this phenomenon. Based on the gained insights, we also improve the state-of-the-art in XNLI for the translate-test and zero-shot approaches by 4.3 and 2.8 points, respectively. Introduction While most NLP resources are English-specific, there have been several recent efforts to build multilingual benchmarks. One possibility is to collect and annotate data in multiple languages separately BIBREF0, but most existing datasets have been created through translation BIBREF1, BIBREF2. This approach has two desirable properties: it relies on existing professional translation services rather than requiring expertise in multiple languages, and it results in parallel evaluation sets that offer a meaningful measure of the cross-lingual transfer gap of different models. The resulting multilingual datasets are generally used for evaluation only, relying on existing English datasets for training. Closely related to that, cross-lingual transfer learning aims to leverage large datasets available in one language—typically English—to build multilingual models that can generalize to other languages. Previous work has explored 3 main approaches to that end: machine translating the test set into English and using a monolingual English model (Translate-Test), machine translating the training set into each target language and training the models on their respective languages (Translate-Train), or using English data to fine-tune a multilingual model that is then transferred to the rest of languages (Zero-Shot). The dataset creation and transfer procedures described above result in a mixture of original, human translated and machine translated data when dealing with cross-lingual models. In fact, the type of text a system is trained on does not typically match the type of text it is exposed to at test time: Translate-Test systems are trained on original data and evaluated on machine translated test sets, Zero-Shot systems are trained on original data and evaluated on human translated test sets, and Translate-Train systems are trained on machine translated data and evaluated on human translated test sets. Despite overlooked to date, we show that such mismatch has a notable impact in the performance of existing cross-lingual models. By using back-translation BIBREF3 to paraphrase each training instance, we obtain another English version of the training set that better resembles the test set, obtaining substantial improvements for the Translate-Test and Zero-Shot approaches in cross-lingual Natural Language Inference (NLI). 
While improvements brought by machine translation have previously been attributed to data augmentation BIBREF4, we reject this hypothesis and show that the phenomenon is only present in translated test sets, but not in original ones. Instead, our analysis reveals that this behavior is caused by subtle artifacts arising from the translation process itself. In particular, we show that translating different parts of each instance separately (e.g. the premise and the hypothesis in NLI) can alter superficial patterns in the data (e.g. the degree of lexical overlap between them), which severely affects the generalization ability of current models. Based on the gained insights, we improve the state-of-the-art in XNLI, and show that some previous findings need to be reconsidered in the light of this phenomenon. Related work ::: Cross-lingual transfer learning. Current cross-lingual models work by pre-training multilingual representations using some form of language modeling, which are then fine-tuned on the relevant task and transferred to different languages. Some authors leverage parallel data to that end BIBREF5, BIBREF6, but training a model akin to BERT BIBREF7 on the combination of monolingual corpora in multiple languages is also effective BIBREF8. Closely related to our work, BIBREF4 showed that replacing segments of the training data with their translation during fine-tuning is helpful. However, they attribute this behavior to a data augmentation effect, which we believe should be reconsidered given the new evidence we provide. Related work ::: Annotation artifacts. Several studies have shown that NLI datasets like SNLI BIBREF14 and MultiNLI BIBREF15 contain spurious patterns that can be exploited to obtain strong results without making real inferential decisions. For instance, BIBREF16 and BIBREF17 showed that a hypothesis-only baseline performs better than chance due to cues on their lexical choice and sentence length. Similarly, BIBREF18 showed that NLI models tend to predict entailment for sentence pairs with a high lexical overlap. Several authors have worked on adversarial datasets to diagnose these issues and provide a more challenging benchmark BIBREF19, BIBREF20, BIBREF21. Besides NLI, other tasks like QA have also been found to be susceptible to annotation artifacts BIBREF22, BIBREF23. While previous work has focused on the monolingual scenario, we show that translation can interfere with these artifacts in multilingual settings. Related work ::: Translationese. Translated texts are known to have unique features like simplification, explicitation, normalization and interference, which are refer to as translationese BIBREF24. This phenomenon has been reported to have a notable impact in machine translation evaluation BIBREF25, BIBREF26. For instance, back-translation brings large BLEU gains for reversed test sets (i.e. when translationese is on the source side and original text is used as reference), but its effect diminishes in the natural direction BIBREF27. While connected, the phenomenon we analyze is different in that it arises from translation inconsistencies due to the lack of context, and affects cross-lingual transfer learning rather than machine translation. Experimental design Our goal is to analyze the effect of both human and machine translation in cross-lingual models. 
For that purpose, the core idea of our work is to (i) use machine translation to either translate the training set into other languages, or generate English paraphrases of it through back-translation, and (ii) evaluate the resulting systems on original, human translated and machine translated test sets in comparison with systems trained on original data. We next describe the models used in our experiments (§SECREF6), the specific training variants explored (§SECREF8), and the evaluation procedure followed (§SECREF10). Experimental design ::: Tasks and evaluation procedure ::: Natural Language Inference (NLI). Given a premise and a hypothesis, the task is to determine whether there is an entailment, neutral or contradiction relation between them. We fine-tune our models on MultiNLI BIBREF15 for 10 epochs using the same settings as BIBREF28. In most of our experiments, we evaluate on XNLI BIBREF1, which comprises 2490 development and 5010 test instances in 15 languages. These were originally annotated in English, and the resulting premises and hypotheses were independently translated into the rest of the languages by professional translators. For the Translate-Test approach, we use the machine translated versions from the authors. Following BIBREF8, we select the best epoch checkpoint according to the average accuracy in the development set. NLI experiments ::: Translate-Test results We start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. Quite remarkably, MT-ES and MT-FI also outperform Orig by a substantial margin, and are only 0.8 points below their BT-ES and BT-FI counterparts. Recall that, for these two systems, training is done in machine translated Spanish or Finnish, while inference is done in machine translated English. This shows that the loss of performance when generalizing from original data to machine translated data is substantially larger than the loss of performance when generalizing from one language to another. NLI experiments ::: Zero-Shot results We next analyze the results for the Zero-Shot approach. In this case, inference is done in the test set in each target language which, in the case of XNLI, was human translated from English. As such, different from the Translate-Test approach, neither training on original data (Orig) nor training on machine translated data (BT-XX and MT-XX) makes use of the exact same type of text that the system is exposed to at test time. However, as shown in Table TABREF9, both BT-XX and MT-XX outperform Orig by approximately 2 points, which suggests that our (back-)translated versions of the training set are more similar to the human translated test sets than the original one. This also provides a new perspective on the Translate-Train approach, which was reported to outperform Orig in previous work BIBREF5: while the original motivation was to train the model on the same language that it is tested on, our results show that machine translating the training set is beneficial even when the target language is different. 
NLI experiments ::: Comparison with the state-of-the-art So as to put our results into perspective, we compare our best variant to previous work on the XNLI test set. As shown in Table TABREF31, our method improves the state-of-the-art for both the Translate-Test and the Zero-Shot approaches by 4.3 and 2.8 points, respectively. It also obtains the best overall results published to date, with the additional advantage that the previous state-of-the-art required a machine translation system between English and each of the 14 target languages, whereas our method uses a single machine translation system between English and Finnish (which is not one of the target languages). While the main goal of our work is not to design better cross-lingual models, but to analyze their behavior in connection to translation, this shows that the phenomenon under study is highly relevant, to the extent that it can be exploited to improve the state-of-the-art. QA experiments So as to understand whether our previous findings apply to other tasks besides NLI, we run additional experiments on QA. As shown in Table TABREF32, BT-FI and BT-ES do indeed outperform Orig for the Translate-Test approach on MLQA. The improvement is modest, but very consistent across different languages, models and runs. The results for MT-ES and MT-FI are less conclusive, presumably because mapping the answer spans across languages might introduce some noise. In contrast, we do not observe any clear improvement for the Zero-Shot approach on this dataset. Our XQuAD results in Table TABREF33 are more positive, but still inconclusive. These results can partly be explained by the translation procedure used to create the different benchmarks: the premises and hypotheses of XNLI were translated independently, whereas the questions and context paragraphs of XQuAD were translated together. Similarly, MLQA made use of parallel contexts, and translators were shown the sentence containing each answer when translating the corresponding question. As a result, one can expect both QA benchmarks to have more consistent translations than XNLI, which would in turn diminish this phenomenon. In contrast, the questions and context paragraphs are independently translated when using machine translation, which explains why BT-ES and BT-FI outperform Orig for the Translate-Test approach. We conclude that the translation artifacts revealed by our analysis are not exclusive to NLI, as they also show up on QA for the Translate-Test approach, but their actual impact can be highly dependent on the translation procedure used and the nature of the task. Discussion ::: Future evaluation should better account for translation artifacts. The evaluation issues raised by our analysis do not have a simple solution. In fact, while we use the term translation artifacts to highlight that they are an unintended effect of translation that impacts final evaluation, one could also argue that it is the original datasets that contain the artifacts, which translation simply alters or even mitigates. In any case, this is a more general issue that falls beyond the scope of cross-lingual transfer learning, so we argue that it should be carefully controlled when evaluating cross-lingual models. In the absence of more robust datasets, we recommend that future multilingual benchmarks should at least provide consistent test sets for English and the rest of languages. 
This can be achieved by (i) using original annotations in all languages, (ii) using original annotations in a non-English language and translating them into English and other languages, or (iii) if translating from English, doing so at the document level to minimize translation inconsistencies. Conclusions In this paper, we have shown that both human and machine translation can alter superficial patterns in data, which requires reconsidering previous findings in cross-lingual transfer learning. Based on the gained insights, we have improved the state-of-the-art in XNLI for the Translate-Test and Zero-Shot approaches by a substantial margin. Finally, we have shown that the phenomenon is not specific to NLI but also affects QA, although it is less pronounced there thanks to the translation procedure used in the corresponding benchmarks. So as to facilitate similar studies in the future, we release our NLI dataset, which, unlike previous benchmarks, was annotated in a non-English language and human translated into English. Table 1: XNLI dev results (acc). BT-XX and MT-XX consistently outperform ORIG in all cases. Table 2: NLI results on original (OR), human translated (HT) and machine translated (MT) sets (acc). BT-XX and MT-XX outperform ORIG in translated sets, but do not get any clear improvement in original ones. Table 5: XNLI dev results with class distribution unbiasing (average acc across all languages). Adjusting the bias term of the classifier to match the true class distribution brings large improvements for ORIG, but is less effective for BT-FI and MT-FI. Table 6: XNLI test results (acc). Results for other methods are taken from their respective papers or, if not provided, from Conneau et al. (2019). For those with multiple variants, we select the one with the best results. Table 7: MLQA test results (F1 / exact match). Query: What are the languages they use in their experiment? JSON Response:
{"answer": "English\nFrench\nSpanish\nGerman\nGreek\nBulgarian\nRussian\nTurkish\nArabic\nVietnamese\nThai\nChinese\nHindi\nSwahili\nUrdu\nFinnish", "evidence": ["We start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. Quite remarkably, MT-ES and MT-FI also outperform Orig by a substantial margin, and are only 0.8 points below their BT-ES and BT-FI counterparts. Recall that, for these two systems, training is done in machine translated Spanish or Finnish, while inference is done in machine translated English. This shows that the loss of performance when generalizing from original data to machine translated data is substantially larger than the loss of performance when generalizing from one language to another."]}
science.qasper_abstractive_qa.272.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nTranslation Artifacts in Cross-lingual Transfer Learning\n\nBoth human and machine translation play a central role in cross-lingual transfer learning: many multilingual datasets have been created through professional translation services, and using machine translation to translate either the test set or the training set is a widely used transfer technique. In this paper, we show that such translation process can introduce subtle artifacts that have a notable impact in existing cross-lingual models. For instance, in natural language inference, translating the premise and the hypothesis independently can reduce the lexical overlap between them, which current models are highly sensitive to. We show that some previous findings in cross-lingual transfer learning need to be reconsidered in the light of this phenomenon. Based on the gained insights, we also improve the state-of-the-art in XNLI for the translate-test and zero-shot approaches by 4.3 and 2.8 points, respectively.\n\nIntroduction\nWhile most NLP resources are English-specific, there have been several recent efforts to build multilingual benchmarks. One possibility is to collect and annotate data in multiple languages separately BIBREF0, but most existing datasets have been created through translation BIBREF1, BIBREF2. This approach has two desirable properties: it relies on existing professional translation services rather than requiring expertise in multiple languages, and it results in parallel evaluation sets that offer a meaningful measure of the cross-lingual transfer gap of different models. The resulting multilingual datasets are generally used for evaluation only, relying on existing English datasets for training.\nClosely related to that, cross-lingual transfer learning aims to leverage large datasets available in one language—typically English—to build multilingual models that can generalize to other languages. Previous work has explored 3 main approaches to that end: machine translating the test set into English and using a monolingual English model (Translate-Test), machine translating the training set into each target language and training the models on their respective languages (Translate-Train), or using English data to fine-tune a multilingual model that is then transferred to the rest of languages (Zero-Shot).\nThe dataset creation and transfer procedures described above result in a mixture of original, human translated and machine translated data when dealing with cross-lingual models. In fact, the type of text a system is trained on does not typically match the type of text it is exposed to at test time: Translate-Test systems are trained on original data and evaluated on machine translated test sets, Zero-Shot systems are trained on original data and evaluated on human translated test sets, and Translate-Train systems are trained on machine translated data and evaluated on human translated test sets.\nDespite overlooked to date, we show that such mismatch has a notable impact in the performance of existing cross-lingual models. By using back-translation BIBREF3 to paraphrase each training instance, we obtain another English version of the training set that better resembles the test set, obtaining substantial improvements for the Translate-Test and Zero-Shot approaches in cross-lingual Natural Language Inference (NLI). 
While improvements brought by machine translation have previously been attributed to data augmentation BIBREF4, we reject this hypothesis and show that the phenomenon is only present in translated test sets, but not in original ones. Instead, our analysis reveals that this behavior is caused by subtle artifacts arising from the translation process itself. In particular, we show that translating different parts of each instance separately (e.g. the premise and the hypothesis in NLI) can alter superficial patterns in the data (e.g. the degree of lexical overlap between them), which severely affects the generalization ability of current models. Based on the gained insights, we improve the state-of-the-art in XNLI, and show that some previous findings need to be reconsidered in the light of this phenomenon.\n\nRelated work ::: Cross-lingual transfer learning.\nCurrent cross-lingual models work by pre-training multilingual representations using some form of language modeling, which are then fine-tuned on the relevant task and transferred to different languages. Some authors leverage parallel data to that end BIBREF5, BIBREF6, but training a model akin to BERT BIBREF7 on the combination of monolingual corpora in multiple languages is also effective BIBREF8. Closely related to our work, BIBREF4 showed that replacing segments of the training data with their translation during fine-tuning is helpful. However, they attribute this behavior to a data augmentation effect, which we believe should be reconsidered given the new evidence we provide.\n\nRelated work ::: Annotation artifacts.\nSeveral studies have shown that NLI datasets like SNLI BIBREF14 and MultiNLI BIBREF15 contain spurious patterns that can be exploited to obtain strong results without making real inferential decisions. For instance, BIBREF16 and BIBREF17 showed that a hypothesis-only baseline performs better than chance due to cues on their lexical choice and sentence length. Similarly, BIBREF18 showed that NLI models tend to predict entailment for sentence pairs with a high lexical overlap. Several authors have worked on adversarial datasets to diagnose these issues and provide a more challenging benchmark BIBREF19, BIBREF20, BIBREF21. Besides NLI, other tasks like QA have also been found to be susceptible to annotation artifacts BIBREF22, BIBREF23. While previous work has focused on the monolingual scenario, we show that translation can interfere with these artifacts in multilingual settings.\n\nRelated work ::: Translationese.\nTranslated texts are known to have unique features like simplification, explicitation, normalization and interference, which are refer to as translationese BIBREF24. This phenomenon has been reported to have a notable impact in machine translation evaluation BIBREF25, BIBREF26. For instance, back-translation brings large BLEU gains for reversed test sets (i.e. when translationese is on the source side and original text is used as reference), but its effect diminishes in the natural direction BIBREF27. While connected, the phenomenon we analyze is different in that it arises from translation inconsistencies due to the lack of context, and affects cross-lingual transfer learning rather than machine translation.\n\nExperimental design\nOur goal is to analyze the effect of both human and machine translation in cross-lingual models. 
For that purpose, the core idea of our work is to (i) use machine translation to either translate the training set into other languages, or generate English paraphrases of it through back-translation, and (ii) evaluate the resulting systems on original, human translated and machine translated test sets in comparison with systems trained on original data. We next describe the models used in our experiments (§SECREF6), the specific training variants explored (§SECREF8), and the evaluation procedure followed (§SECREF10).\n\nExperimental design ::: Tasks and evaluation procedure ::: Natural Language Inference (NLI).\nGiven a premise and a hypothesis, the task is to determine whether there is an entailment, neutral or contradiction relation between them. We fine-tune our models on MultiNLI BIBREF15 for 10 epochs using the same settings as BIBREF28. In most of our experiments, we evaluate on XNLI BIBREF1, which comprises 2490 development and 5010 test instances in 15 languages. These were originally annotated in English, and the resulting premises and hypotheses were independently translated into the rest of the languages by professional translators. For the Translate-Test approach, we use the machine translated versions from the authors. Following BIBREF8, we select the best epoch checkpoint according to the average accuracy in the development set.\n\nNLI experiments ::: Translate-Test results\nWe start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. Quite remarkably, MT-ES and MT-FI also outperform Orig by a substantial margin, and are only 0.8 points below their BT-ES and BT-FI counterparts. Recall that, for these two systems, training is done in machine translated Spanish or Finnish, while inference is done in machine translated English. This shows that the loss of performance when generalizing from original data to machine translated data is substantially larger than the loss of performance when generalizing from one language to another.\n\nNLI experiments ::: Zero-Shot results\nWe next analyze the results for the Zero-Shot approach. In this case, inference is done in the test set in each target language which, in the case of XNLI, was human translated from English. As such, different from the Translate-Test approach, neither training on original data (Orig) nor training on machine translated data (BT-XX and MT-XX) makes use of the exact same type of text that the system is exposed to at test time. However, as shown in Table TABREF9, both BT-XX and MT-XX outperform Orig by approximately 2 points, which suggests that our (back-)translated versions of the training set are more similar to the human translated test sets than the original one. 
This also provides a new perspective on the Translate-Train approach, which was reported to outperform Orig in previous work BIBREF5: while the original motivation was to train the model on the same language that it is tested on, our results show that machine translating the training set is beneficial even when the target language is different.\n\nNLI experiments ::: Comparison with the state-of-the-art\nSo as to put our results into perspective, we compare our best variant to previous work on the XNLI test set. As shown in Table TABREF31, our method improves the state-of-the-art for both the Translate-Test and the Zero-Shot approaches by 4.3 and 2.8 points, respectively. It also obtains the best overall results published to date, with the additional advantage that the previous state-of-the-art required a machine translation system between English and each of the 14 target languages, whereas our method uses a single machine translation system between English and Finnish (which is not one of the target languages). While the main goal of our work is not to design better cross-lingual models, but to analyze their behavior in connection to translation, this shows that the phenomenon under study is highly relevant, to the extent that it can be exploited to improve the state-of-the-art.\n\nQA experiments\nSo as to understand whether our previous findings apply to other tasks besides NLI, we run additional experiments on QA. As shown in Table TABREF32, BT-FI and BT-ES do indeed outperform Orig for the Translate-Test approach on MLQA. The improvement is modest, but very consistent across different languages, models and runs. The results for MT-ES and MT-FI are less conclusive, presumably because mapping the answer spans across languages might introduce some noise. In contrast, we do not observe any clear improvement for the Zero-Shot approach on this dataset. Our XQuAD results in Table TABREF33 are more positive, but still inconclusive.\nThese results can partly be explained by the translation procedure used to create the different benchmarks: the premises and hypotheses of XNLI were translated independently, whereas the questions and context paragraphs of XQuAD were translated together. Similarly, MLQA made use of parallel contexts, and translators were shown the sentence containing each answer when translating the corresponding question. As a result, one can expect both QA benchmarks to have more consistent translations than XNLI, which would in turn diminish this phenomenon. In contrast, the questions and context paragraphs are independently translated when using machine translation, which explains why BT-ES and BT-FI outperform Orig for the Translate-Test approach. We conclude that the translation artifacts revealed by our analysis are not exclusive to NLI, as they also show up on QA for the Translate-Test approach, but their actual impact can be highly dependent on the translation procedure used and the nature of the task.\n\nDiscussion ::: Future evaluation should better account for translation artifacts.\nThe evaluation issues raised by our analysis do not have a simple solution. In fact, while we use the term translation artifacts to highlight that they are an unintended effect of translation that impacts final evaluation, one could also argue that it is the original datasets that contain the artifacts, which translation simply alters or even mitigates. 
In any case, this is a more general issue that falls beyond the scope of cross-lingual transfer learning, so we argue that it should be carefully controlled when evaluating cross-lingual models. In the absence of more robust datasets, we recommend that future multilingual benchmarks should at least provide consistent test sets for English and the rest of languages. This can be achieved by (i) using original annotations in all languages, (ii) using original annotations in a non-English language and translating them into English and other languages, or (iii) if translating from English, doing so at the document level to minimize translation inconsistencies.\n\nConclusions\nIn this paper, we have shown that both human and machine translation can alter superficial patterns in data, which requires reconsidering previous findings in cross-lingual transfer learning. Based on the gained insights, we have improved the state-of-the-art in XNLI for the Translate-Test and Zero-Shot approaches by a substantial margin. Finally, we have shown that the phenomenon is not specific to NLI but also affects QA, although it is less pronounced there thanks to the translation procedure used in the corresponding benchmarks. So as to facilitate similar studies in the future, we release our NLI dataset, which, unlike previous benchmarks, was annotated in a non-English language and human translated into English.\n\nTable 1: XNLI dev results (acc). BT-XX and MT-XX consistently outperform ORIG in all cases.\n\nTable 2: NLI results on original (OR), human translated (HT) and machine translated (MT) sets (acc). BT-XX and MT-XX outperform ORIG in translated sets, but do not get any clear improvement in original ones.\n\nTable 5: XNLI dev results with class distribution unbiasing (average acc across all languages). Adjusting the bias term of the classifier to match the true class distribution brings large improvements for ORIG, but is less effective for BT-FI and MT-FI.\n\nTable 6: XNLI test results (acc). Results for other methods are taken from their respective papers or, if not provided, from Conneau et al. (2019). For those with multiple variants, we select the one with the best results.\n\nTable 7: MLQA test results (F1 / exact match).\n\nQuery: What are the languages they use in their experiment?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"English\\nFrench\\nSpanish\\nGerman\\nGreek\\nBulgarian\\nRussian\\nTurkish\\nArabic\\nVietnamese\\nThai\\nChinese\\nHindi\\nSwahili\\nUrdu\\nFinnish\", \"evidence\": [\"We start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. Quite remarkably, MT-ES and MT-FI also outperform Orig by a substantial margin, and are only 0.8 points below their BT-ES and BT-FI counterparts. Recall that, for these two systems, training is done in machine translated Spanish or Finnish, while inference is done in machine translated English. 
This shows that the loss of performance when generalizing from original data to machine translated data is substantially larger than the loss of performance when generalizing from one language to another.\"]}", "role": "assistant" } ]
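The row above repeatedly refers to BT-ES/BT-FI variants, i.e., paraphrasing the English training set by machine-translating it into a pivot language such as Spanish or Finnish and back. As an illustrative aside rather than part of any dataset row, the sketch below shows one way that round-trip wiring could look; the translation callables are deliberately left as parameters (identity stubs in the demo), since the MT systems used in the paper are not specified here and any concrete model choice would be an assumption.

```python
from typing import Callable, Iterable, List

def back_translate(
    sentences: Iterable[str],
    translate_en_xx: Callable[[List[str]], List[str]],
    translate_xx_en: Callable[[List[str]], List[str]],
    batch_size: int = 32,
) -> List[str]:
    """Round-trip each English sentence through a pivot language (EN -> XX -> EN).

    The two callables are assumed to map a batch of source strings to a batch of
    translated strings; any MT system could be plugged in here.
    """
    sentences = list(sentences)
    paraphrases: List[str] = []
    for start in range(0, len(sentences), batch_size):
        batch = sentences[start:start + batch_size]
        pivot = translate_en_xx(batch)        # EN -> pivot language
        round_trip = translate_xx_en(pivot)   # pivot language -> EN
        paraphrases.extend(round_trip)
    return paraphrases

if __name__ == "__main__":
    # Identity stubs keep the example runnable without any MT system installed.
    demo = back_translate(
        ["The cat sat on the mat.", "She did not attend the meeting."],
        translate_en_xx=lambda xs: xs,
        translate_xx_en=lambda xs: xs,
    )
    print(demo)
```

Feeding the returned paraphrases back in place of the original premises and hypotheses is what closes the train/test mismatch the row describes: the training text then resembles translated text, like the test sets do.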
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Female carriers of the Apolipoprotein E4 (APOE4) allele have shorter lifetime exposure to estrogen due to a decreased reproductive period. Research Abstract: CONTEXT Exogenous estrogen use may lower risk of dementia in postmenopausal women. A relationship between long-term exposure to endogenous estrogens and incident dementia has been hypothesized but not studied. OBJECTIVE To determine whether a longer reproductive period, as an indicator of longer exposure to endogenous estrogens, is associated with lower risk of dementia and Alzheimer disease (AD) in women who have natural menopause. DESIGN AND SETTING The Rotterdam Study, a population-based prospective cohort study conducted in the Netherlands. PARTICIPANTS A total of 3601 women aged 55 years or older who did not have dementia at baseline (1990-1993) and had information on age at menarche, age at menopause, and type of menopause. Participants were reexamined in 1993-1994 and 1997-1999 and were continuously monitored for development of dementia. MAIN OUTCOME MEASURES Incidence of dementia, based on Diagnostic and Statistical Manual of Mental Disorders, Revised Third Edition criteria, and AD, based on National Institute of Neurological Disorders and Stroke/Alzheimer's Disease and Related Disorders Association criteria, compared by quartiles of reproductive period among women with natural menopause. RESULTS During 21 046 person-years of follow-up (median follow-up, 6.3 years), 199 women developed dementia, including 159 who developed AD. After adjusting for age, dementia was not clearly associated with length of reproductive period. However, after adjusting for multiple covariates, women with natural menopause and more reproductive years had an increased risk of dementia (adjusted rate ratio [RR] for women with >39 reproductive years [highest quartile] compared with <34 reproductive years [lowest quartile], 1.78; 95% confidence interval [CI], 1.12-2.84). The adjusted RR per year of increase was 1.04 (95% CI, 1.01-1.08). For risk of AD, the adjusted RRs were 1.51 (95% CI, 0.91-2.50) and 1.03 (95% CI, 1.00-1.07), respectively. Risk of dementia associated with a longer reproductive period was most pronounced in APOE epsilon4 carriers (adjusted RR for >39 reproductive years compared with <34 reproductive years, 4.20 [95% CI, 1.97-8.92] for dementia and 3.42 [95% CI, 1.51-7.75] for AD), whereas in noncarriers, no clear association with dementia or AD was observed. CONCLUSION Our findings do not support the hypothesis that a longer reproductive period reduces risk of dementia in women who have natural menopause.
{"verdict": "CONTRADICT", "evidence": ["Risk of dementia associated with a longer reproductive period was most pronounced in APOE epsilon4 carriers (adjusted RR for >39 reproductive years compared with <34 reproductive years, 4.20 [95% CI, 1.97-8.92] for dementia and 3.42 [95% CI, 1.51-7.75] for AD), whereas in noncarriers, no clear association with dementia or AD was observed."]}
science.scifact_entailment.276
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nFemale carriers of the Apolipoprotein E4 (APOE4) allele have shorter lifetime exposure to estrogen due to a decreased reproductive period.\n\nResearch Abstract:\nCONTEXT Exogenous estrogen use may lower risk of dementia in postmenopausal women.\nA relationship between long-term exposure to endogenous estrogens and incident dementia has been hypothesized but not studied.\nOBJECTIVE To determine whether a longer reproductive period, as an indicator of longer exposure to endogenous estrogens, is associated with lower risk of dementia and Alzheimer disease (AD) in women who have natural menopause.\nDESIGN AND SETTING The Rotterdam Study, a population-based prospective cohort study conducted in the Netherlands.\nPARTICIPANTS A total of 3601 women aged 55 years or older who did not have dementia at baseline (1990-1993) and had information on age at menarche, age at menopause, and type of menopause.\nParticipants were reexamined in 1993-1994 and 1997-1999 and were continuously monitored for development of dementia.\nMAIN OUTCOME MEASURES Incidence of dementia, based on Diagnostic and Statistical Manual of Mental Disorders, Revised Third Edition criteria, and AD, based on National Institute of Neurological Disorders and Stroke/Alzheimer's Disease and Related Disorders Association criteria, compared by quartiles of reproductive period among women with natural menopause.\nRESULTS During 21 046 person-years of follow-up (median follow-up, 6.3 years), 199 women developed dementia, including 159 who developed AD.\nAfter adjusting for age, dementia was not clearly associated with length of reproductive period.\nHowever, after adjusting for multiple covariates, women with natural menopause and more reproductive years had an increased risk of dementia (adjusted rate ratio [RR] for women with >39 reproductive years [highest quartile] compared with <34 reproductive years [lowest quartile], 1.78; 95% confidence interval [CI], 1.12-2.84).\nThe adjusted RR per year of increase was 1.04 (95% CI, 1.01-1.08).\nFor risk of AD, the adjusted RRs were 1.51 (95% CI, 0.91-2.50) and 1.03 (95% CI, 1.00-1.07), respectively.\nRisk of dementia associated with a longer reproductive period was most pronounced in APOE epsilon4 carriers (adjusted RR for >39 reproductive years compared with <34 reproductive years, 4.20 [95% CI, 1.97-8.92] for dementia and 3.42 [95% CI, 1.51-7.75] for AD), whereas in noncarriers, no clear association with dementia or AD was observed.\nCONCLUSION Our findings do not support the hypothesis that a longer reproductive period reduces risk of dementia in women who have natural menopause.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Risk of dementia associated with a longer reproductive period was most pronounced in APOE epsilon4 carriers (adjusted RR for >39 reproductive years compared with <34 reproductive years, 4.20 [95% CI, 1.97-8.92] for dementia and 3.42 [95% CI, 1.51-7.75] for AD), whereas in noncarriers, no clear association with dementia or AD was observed.\"]}", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: Recently, a novel CPVT RyR2 mutation, G230C, was found to increase the cytosolic, but not the luminal, Ca2+ sensitivity of single RyR2 channels in lipid bilayers. These data suggest that the G230C mutation enhances the propensity for SOICR by sensitizing the channel to luminal and cytosolic Ca2+ activation, and that G230C has an intrinsic structural impact on the N-terminal domains of RyR2. The novel RYR2-S4153R mutation has been implicated as a cause of CPVT and atrial fibrillation. The mutation has been functionally characterized via store-overload-induced Ca(2+) release (SOICR) and tritium-labelled ryanodine ([(3)H]ryanodine) binding assays. The S4153R mutation enhanced propensity for spontaneous Ca(2+) release and reduced SOICR threshold but did not alter Ca(2+) activation of [(3)H]ryanodine binding, a common feature of other CPVT gain-of-function RYR2 mutations. We conclude that the S4153R mutation is a gain-of-function RYR2 mutation associated with a clinical phenotype characterized by both CPVT and atrial fibrillation. To determine the molecular and cellular mechanisms by which a novel RyR2-V2475F mutation associated with CPVT in humans triggers Ca(2+)-dependent arrhythmias in whole hearts and intact mice. The mutation RyR2-V2475F is phenotypically strong among other CPVT mutations and produces heterogeneous mechanisms of RyR2 dysfunction. We generated 3 knock-in mice with mutations introduced into RyR2 that result in leaky channels and cause exercise induced polymorphic ventricular tachycardia in humans [catecholaminergic polymorphic ventricular tachycardia (CPVT)]. We examined AF susceptibility in these three CPVT mouse models harboring RyR2 mutations to explore the role of diastolic SR Ca2+ leak in AF. AF was stimulated with an intra-esophageal burst pacing protocol in the 3 CPVT mouse models (RyR2-R2474S+/-, 70%; RyR2-N2386I+/-, 60%; RyR2-L433P+/-, 35.71%) but not in wild-type (WT) mice (P<0.05). The RyR2-G230C mutant exhibits similar biophysical defects compared with previously characterized CPVT mutations: decreased binding of the stabilizing subunit calstabin2 and a leftward shift in the Ca(2+) dependence for activation under conditions that simulate exercise, consistent with a "leaky" channel. Both RyR2-G230C and RyR2-P2328S channels exhibit normal luminal Ca(2+) activation. Thus, diastolic sarcoplasmic reticulum Ca(2+) leak caused by reduced calstabin2 binding and a leftward shift in the Ca(2+) dependence for activation by diastolic levels of cytosolic Ca(2+) is a common mechanism underlying CPVT. A novel CPVT mutation, E189D, was identified. The E189D mutation does not alter the affinity of the channel for FKBP12.6, but it increases the propensity for store-overload-induced Ca(2+) release (SOICR). Furthermore, the E189D mutation enhances the basal channel activity of RyR2 and its sensitivity to activation by caffeine. The E189D RyR2 mutation is causative for CPVT and functionally increases the propensity for SOICR without altering the affinity for FKBP12.6. We found that mice heterozygous for the R2474S mutation in Ryr2 (Ryr2-R2474S mice) exhibited spontaneous generalized tonic-clonic seizures (which occurred in the absence of cardiac arrhythmias), exercise-induced ventricular arrhythmias, and sudden cardiac death. 
Thus, CPVT-associated mutant leaky Ryr2-R2474S channels in the brain can cause seizures in mice, independent of cardiac arrhythmias. We screened 12 Finnish CPVT probands for mutations in these genes and identified three novel RYR2 mutations (V2306I, P4902L, R4959Q), which were absent in unaffected and control individuals. RyR2 mutations suggested to cause defective Ca2+ channel function have recently been identified in catecholaminergic polymorphic ventricular tachycardia (CPVT) and arrhythmogenic right ventricular dysplasia (ARVD) affected individuals. We report expression of three CPVT-linked human RyR2 (hRyR2) mutations (S2246L, N4104K, and R4497C) in HL-1 cardiomyocytes displaying correct targeting to the endoplasmic reticulum. We developed a conditional knock-in mouse model carrier of the R4496C mutation, the mouse equivalent to the R4497C mutations identified in CPVT families, to evaluate if the animals would develop a CPVT phenotype and if beta blockers would prevent arrhythmias. These data provide the first experimental demonstration that the R4496C RyR2 mutation predisposes the murine heart to VT and VF in response caffeine and/or adrenergic stimulation. We developed a conditional knock-in mouse model carrier of the R4496C mutation, the mouse equivalent to the R4497C mutations identified in CPVT families, to evaluate if the animals would develop a CPVT phenotype and if beta blockers would prevent arrhythmias Question: Which mutations in the cardiac isoform of the ryanodine receptor (RyR2) have been found to be related to CPVT?
["RyR2-G230C", "RyR2-S4153R", "RyR2-V2475F", "RyR2-R2474S", "RyR2-N2386I", "RyR2-L433P", "RyR2-E189D", "RyR2-R4497C (RyR2-R4496C in the mouse)", "RyR2-V2306I,", "RyR2-P4902L", "RyR2-R4959Q", "RyR2-S2246L", "RyR2-N4104K"]
science.bioasq_list_qa.380
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Recently, a novel CPVT RyR2 mutation, G230C, was found to increase the cytosolic, but not the luminal, Ca2+ sensitivity of single RyR2 channels in lipid bilayers.\nThese data suggest that the G230C mutation enhances the propensity for SOICR by sensitizing the channel to luminal and cytosolic Ca2+ activation, and that G230C has an intrinsic structural impact on the N-terminal domains of RyR2.\nThe novel RYR2-S4153R mutation has been implicated as a cause of CPVT and atrial fibrillation. The mutation has been functionally characterized via store-overload-induced Ca(2+) release (SOICR) and tritium-labelled ryanodine ([(3)H]ryanodine) binding assays. The S4153R mutation enhanced propensity for spontaneous Ca(2+) release and reduced SOICR threshold but did not alter Ca(2+) activation of [(3)H]ryanodine binding, a common feature of other CPVT gain-of-function RYR2 mutations. We conclude that the S4153R mutation is a gain-of-function RYR2 mutation associated with a clinical phenotype characterized by both CPVT and atrial fibrillation.\nTo determine the molecular and cellular mechanisms by which a novel RyR2-V2475F mutation associated with CPVT in humans triggers Ca(2+)-dependent arrhythmias in whole hearts and intact mice.\nThe mutation RyR2-V2475F is phenotypically strong among other CPVT mutations and produces heterogeneous mechanisms of RyR2 dysfunction.\nWe generated 3 knock-in mice with mutations introduced into RyR2 that result in leaky channels and cause exercise induced polymorphic ventricular tachycardia in humans [catecholaminergic polymorphic ventricular tachycardia (CPVT)]. We examined AF susceptibility in these three CPVT mouse models harboring RyR2 mutations to explore the role of diastolic SR Ca2+ leak in AF. AF was stimulated with an intra-esophageal burst pacing protocol in the 3 CPVT mouse models (RyR2-R2474S+/-, 70%; RyR2-N2386I+/-, 60%; RyR2-L433P+/-, 35.71%) but not in wild-type (WT) mice (P<0.05).\nThe RyR2-G230C mutant exhibits similar biophysical defects compared with previously characterized CPVT mutations: decreased binding of the stabilizing subunit calstabin2 and a leftward shift in the Ca(2+) dependence for activation under conditions that simulate exercise, consistent with a \"leaky\" channel. Both RyR2-G230C and RyR2-P2328S channels exhibit normal luminal Ca(2+) activation. Thus, diastolic sarcoplasmic reticulum Ca(2+) leak caused by reduced calstabin2 binding and a leftward shift in the Ca(2+) dependence for activation by diastolic levels of cytosolic Ca(2+) is a common mechanism underlying CPVT.\nA novel CPVT mutation, E189D, was identified. The E189D mutation does not alter the affinity of the channel for FKBP12.6, but it increases the propensity for store-overload-induced Ca(2+) release (SOICR). Furthermore, the E189D mutation enhances the basal channel activity of RyR2 and its sensitivity to activation by caffeine. 
The E189D RyR2 mutation is causative for CPVT and functionally increases the propensity for SOICR without altering the affinity for FKBP12.6.\nWe found that mice heterozygous for the R2474S mutation in Ryr2 (Ryr2-R2474S mice) exhibited spontaneous generalized tonic-clonic seizures (which occurred in the absence of cardiac arrhythmias), exercise-induced ventricular arrhythmias, and sudden cardiac death.\nThus, CPVT-associated mutant leaky Ryr2-R2474S channels in the brain can cause seizures in mice, independent of cardiac arrhythmias.\nWe screened 12 Finnish CPVT probands for mutations in these genes and identified three novel RYR2 mutations (V2306I, P4902L, R4959Q), which were absent in unaffected and control individuals.\nRyR2 mutations suggested to cause defective Ca2+ channel function have recently been identified in catecholaminergic polymorphic ventricular tachycardia (CPVT) and arrhythmogenic right ventricular dysplasia (ARVD) affected individuals. We report expression of three CPVT-linked human RyR2 (hRyR2) mutations (S2246L, N4104K, and R4497C) in HL-1 cardiomyocytes displaying correct targeting to the endoplasmic reticulum.\nWe developed a conditional knock-in mouse model carrier of the R4496C mutation, the mouse equivalent to the R4497C mutations identified in CPVT families, to evaluate if the animals would develop a CPVT phenotype and if beta blockers would prevent arrhythmias.\nThese data provide the first experimental demonstration that the R4496C RyR2 mutation predisposes the murine heart to VT and VF in response caffeine and/or adrenergic stimulation.\nWe developed a conditional knock-in mouse model carrier of the R4496C mutation, the mouse equivalent to the R4497C mutations identified in CPVT families, to evaluate if the animals would develop a CPVT phenotype and if beta blockers would prevent arrhythmias\n\nQuestion: Which mutations in the cardiac isoform of the ryanodine receptor (RyR2) have been found to be related to CPVT?", "role": "user" }, { "content": "[\"RyR2-G230C\", \"RyR2-S4153R\", \"RyR2-V2475F\", \"RyR2-R2474S\", \"RyR2-N2386I\", \"RyR2-L433P\", \"RyR2-E189D\", \"RyR2-R4497C (RyR2-R4496C in the mouse)\", \"RyR2-V2306I,\", \"RyR2-P4902L\", \"RyR2-R4959Q\", \"RyR2-S2246L\", \"RyR2-N4104K\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: We previously showed an association between low vitamin D levels and high opioid doses to alleviate pain in palliative cancer patients. The aim of this case-controlled study was to investigate if vitamin D supplementation could improve pain management, quality of life (QoL) and decrease infections in palliative cancer patients. ABSTRACT.METHODS: Thirty-nine palliative cancer patients with levels of 25-hydroxyvitamin D < 75 nmol/L were supplemented with vitamin D 4000 IE/day, and were compared to 39 untreated, matched "control"-patients from a previous study at the same ward. Opioid doses, antibiotic consumption and QoL-scores measured with the Edmonton Symptom Assessment Scale (ESAS) were monitored. The primary endpoint was the change from baseline after 1 and 3 months compared between the groups using linear regression with adjustment for a potential cofounding factor. ABSTRACT.RESULTS: After 1 month the vitamin D treated group had a significantly decreased fentanyl dose compared to the untreated group with a difference of 46 μg/h; 95% CI 24–78, which increased further at 3 months to 91 μg/h; 95% CI 56–140 μg/h. The ESAS QoL-score improved in the Vitamin D group the first month; -1.4; 95% CI -2.6 - (-0.21). The vitamin D-treated group had significantly lower consumption of antibiotics after 3 months compared to the untreated group, the difference was -26%; 95%CI -0.41%–(-0.12%). Vitamin D was well tolerated by all patients and no adverse events were reported. ABSTRACT.CONCLUSION: Vitamin D supplementation to palliative cancer patients is safe and improvement in pain management is noted as early as 1 month after treatment. Decreased infections are noted 3 months after vitamin D treatment. The results from this pilot-study have been used for the power-calculation of a future randomized, placebo-controlled, double-blind study called "Palliative-D" that will start in Nov 2017 and will include 254 palliative cancer patients. Extracted Data:
[["vitamin D 4000 IE/day", "untreated (control)", "fentanyl dose after 1 month and 3 months", "significantly decreased", "After 1 month the vitamin D treated group had a significantly decreased fentanyl dose compared to the untreated group with a difference of 46 \u03bcg/h; 95% CI 24\u201378, which increased further at 3 months to 91 \u03bcg/h; 95% CI 56\u2013140 \u03bcg/h."]]
science.evidence_inference.20
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: We previously showed an association between low vitamin D levels and high opioid doses to alleviate pain in palliative cancer patients. The aim of this case-controlled study was to investigate if vitamin D supplementation could improve pain management, quality of life (QoL) and decrease infections in palliative cancer patients.\n\nABSTRACT.METHODS: Thirty-nine palliative cancer patients with levels of 25-hydroxyvitamin D < 75 nmol/L were supplemented with vitamin D 4000 IE/day, and were compared to 39 untreated, matched \"control\"-patients from a previous study at the same ward. Opioid doses, antibiotic consumption and QoL-scores measured with the Edmonton Symptom Assessment Scale (ESAS) were monitored. The primary endpoint was the change from baseline after 1 and 3 months compared between the groups using linear regression with adjustment for a potential cofounding factor.\n\nABSTRACT.RESULTS: After 1 month the vitamin D treated group had a significantly decreased fentanyl dose compared to the untreated group with a difference of 46 μg/h; 95% CI 24–78, which increased further at 3 months to 91 μg/h; 95% CI 56–140 μg/h. The ESAS QoL-score improved in the Vitamin D group the first month; -1.4; 95% CI -2.6 - (-0.21). The vitamin D-treated group had significantly lower consumption of antibiotics after 3 months compared to the untreated group, the difference was -26%; 95%CI -0.41%–(-0.12%). Vitamin D was well tolerated by all patients and no adverse events were reported.\n\nABSTRACT.CONCLUSION: Vitamin D supplementation to palliative cancer patients is safe and improvement in pain management is noted as early as 1 month after treatment. Decreased infections are noted 3 months after vitamin D treatment. The results from this pilot-study have been used for the power-calculation of a future randomized, placebo-controlled, double-blind study called \"Palliative-D\" that will start in Nov 2017 and will include 254 palliative cancer patients.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"vitamin D 4000 IE/day\", \"untreated (control)\", \"fentanyl dose after 1 month and 3 months\", \"significantly decreased\", \"After 1 month the vitamin D treated group had a significantly decreased fentanyl dose compared to the untreated group with a difference of 46 \\u03bcg/h; 95% CI 24\\u201378, which increased further at 3 months to 91 \\u03bcg/h; 95% CI 56\\u2013140 \\u03bcg/h.\"]]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: The efficacy and safety of olanzapine monotherapy are evaluated in Japanese patients from a large, global study of bipolar depression. ABSTRACT.METHODS: This is an analysis of Japanese patients from a 6-week, global (Japan, China, Korea, Taiwan, and the United States), randomized, double-blind, placebo-controlled, Phase 3 study of patients with a depressive episode of bipolar I disorder. The primary outcome was baseline-to-endpoint change in the Montgomery-Åsberg Depression Rating Scale (MADRS) total score. Secondary outcome measures included the Clinical Global Impressions-Bipolar Version Severity of Illness Scale (CGI-BP), the 17-item Hamilton Depression Rating Scale (HAMD-17) total score, the Young Mania Rating Scale (YMRS) total score, and rates of response (≥50% baseline-to-endpoint reduction in MADRS total score), recovery, and remission. ABSTRACT.RESULTS: Of the 156 Japanese patients, 104 had been allocated to olanzapine and 52 to placebo. All results are baseline-to-endpoint change. Compared to placebo, patients in the olanzapine group experienced greater improvement in the primary outcome measure, MADRS total score (−14.9 vs. −10.7; p = .01). They also had greater reductions in the following secondary measures: CGI- BP Depression (−1.41 vs. -0.89; p = .008), CGI-BP Bipolar (−1.31 vs. −0.83; p = .01), HAMD-17 (−11.7 vs. −7.9; p < .01), and YMRS (-0.32 vs. 0.34; p = .03). Differences in rates of response, recovery, and remission were not statistically significant. A greater proportion of olanzapine-treated patients reported treatment- emergent adverse events (87.5% vs. 59.6%; p < .001). Patients treated with olanzapine had greater increases in weight (p < .001) and fasting total cholesterol (p = .008); fasting triglycerides (p = .02), and fasting low-density lipoprotein (p = .01). There was a greater reduction in fasting high-density lipoprotein in olanzapine-treated patients (p = .01). Compared with placebo-group patients, more olanzapine-group patients shifted from borderline to high cholesterol (25.0% vs. 0.0%; p = .007) and had clinically significant weight gain (≥7% body weight) (20.2% vs. 1.9%; p = .001). ABSTRACT.CONCLUSIONS: Results of this analysis support the efficacy and tolerability of olanzapine for the treatment of bipolar depression in Japanese patients. Results in this population were consistent with those seen in the more ethnically diverse parent study. In making treatment decisions for individual patients, clinicians should carefully consider the risks and benefits of olanzapine treatment. ABSTRACT.TRIAL REGISTRATION: Clinicatrials.gov ID NCT00510146 Olanzapine Treatment of Patients with Bipolar I Disorder Extracted Data:
[["olanzapine", "placebo", "improvement in the primary outcome measure, MADRS total score", "significantly increased", "Compared to placebo, patients in the olanzapine group experienced greater improvement in the primary outcome measure, MADRS total score (\u221214.9 vs. \u221210.7; p\u2009=\u2009.01)."], ["olanzapine", "placebo", "reductions in the following secondary measures: CGI- BP Depression", "significantly increased", "They also had greater reductions in the following secondary measures: CGI- BP Depression (\u22121.41 vs. -0.89; p\u2009=\u2009.008),"], ["olanzapine", "placebo", "rates of response, recovery, and remission", "no significant difference", "Differences in rates of response, recovery, and remission were not statistically significant."], ["olanzapine", "placebo", "treatment- emergent adverse events", "significantly increased", "A greater proportion of olanzapine-treated patients reported treatment- emergent adverse events (87.5% vs. 59.6%; p\u2009\u0026lt;\u2009.001)."], ["olanzapine", "placebo", "treatment- emergent adverse events", "significantly increased", "A greater proportion of olanzapine-treated patients reported treatment- emergent adverse events (87.5% vs. 59.6%; p\u2009\u003c\u2009.001). Patients treated with olanzapine had greater increases in weight (p\u2009\u003c\u2009.001) and fasting total cholesterol (p\u2009=\u2009.008); fasting triglycerides (p\u2009=\u2009.02), and fasting low-density lipoprotein (p\u2009=\u2009.01)."], ["olanzapine", "placebo", " number of patients shifted from borderline to high cholesterol", "significantly increased", "Compared with placebo-group patients, more olanzapine-group patients shifted from borderline to high cholesterol (25.0% vs. 0.0%; p\u2009=\u2009.007)"], ["olanzapine", "placebo", "weight gain", "significantly increased", "Compared with placebo-group patients, more olanzapine-group patients shifted from borderline to high cholesterol (25.0% vs. 0.0%; p\u2009=\u2009.007) and had clinically significant weight gain (\u22657% body weight) (20.2% vs. 1.9%; p\u2009=\u2009.001)."]]
science.evidence_inference.199
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: The efficacy and safety of olanzapine monotherapy are evaluated in Japanese patients from a large, global study of bipolar depression.\n\nABSTRACT.METHODS: This is an analysis of Japanese patients from a 6-week, global (Japan, China, Korea, Taiwan, and the United States), randomized, double-blind, placebo-controlled, Phase 3 study of patients with a depressive episode of bipolar I disorder. The primary outcome was baseline-to-endpoint change in the Montgomery-Åsberg Depression Rating Scale (MADRS) total score. Secondary outcome measures included the Clinical Global Impressions-Bipolar Version Severity of Illness Scale (CGI-BP), the 17-item Hamilton Depression Rating Scale (HAMD-17) total score, the Young Mania Rating Scale (YMRS) total score, and rates of response (≥50% baseline-to-endpoint reduction in MADRS total score), recovery, and remission.\n\nABSTRACT.RESULTS: Of the 156 Japanese patients, 104 had been allocated to olanzapine and 52 to placebo. All results are baseline-to-endpoint change. Compared to placebo, patients in the olanzapine group experienced greater improvement in the primary outcome measure, MADRS total score (−14.9 vs. −10.7; p = .01). They also had greater reductions in the following secondary measures: CGI- BP Depression (−1.41 vs. -0.89; p = .008), CGI-BP Bipolar (−1.31 vs. −0.83; p = .01), HAMD-17 (−11.7 vs. −7.9; p < .01), and YMRS (-0.32 vs. 0.34; p = .03). Differences in rates of response, recovery, and remission were not statistically significant. A greater proportion of olanzapine-treated patients reported treatment- emergent adverse events (87.5% vs. 59.6%; p < .001). Patients treated with olanzapine had greater increases in weight (p < .001) and fasting total cholesterol (p = .008); fasting triglycerides (p = .02), and fasting low-density lipoprotein (p = .01). There was a greater reduction in fasting high-density lipoprotein in olanzapine-treated patients (p = .01). Compared with placebo-group patients, more olanzapine-group patients shifted from borderline to high cholesterol (25.0% vs. 0.0%; p = .007) and had clinically significant weight gain (≥7% body weight) (20.2% vs. 1.9%; p = .001).\n\nABSTRACT.CONCLUSIONS: Results of this analysis support the efficacy and tolerability of olanzapine for the treatment of bipolar depression in Japanese patients. Results in this population were consistent with those seen in the more ethnically diverse parent study. 
In making treatment decisions for individual patients, clinicians should carefully consider the risks and benefits of olanzapine treatment.\n\nABSTRACT.TRIAL REGISTRATION: Clinicatrials.gov ID NCT00510146 Olanzapine Treatment of Patients with Bipolar I Disorder\n\nExtracted Data:", "role": "user" }, { "content": "[[\"olanzapine\", \"placebo\", \"improvement in the primary outcome measure, MADRS total score\", \"significantly increased\", \"Compared to placebo, patients in the olanzapine group experienced greater improvement in the primary outcome measure, MADRS total score (\\u221214.9 vs. \\u221210.7; p\\u2009=\\u2009.01).\"], [\"olanzapine\", \"placebo\", \"reductions in the following secondary measures: CGI- BP Depression\", \"significantly increased\", \"They also had greater reductions in the following secondary measures: CGI- BP Depression (\\u22121.41 vs. -0.89; p\\u2009=\\u2009.008),\"], [\"olanzapine\", \"placebo\", \"rates of response, recovery, and remission\", \"no significant difference\", \"Differences in rates of response, recovery, and remission were not statistically significant.\"], [\"olanzapine\", \"placebo\", \"treatment- emergent adverse events\", \"significantly increased\", \"A greater proportion of olanzapine-treated patients reported treatment- emergent adverse events (87.5% vs. 59.6%; p\\u2009\\u0026lt;\\u2009.001).\"], [\"olanzapine\", \"placebo\", \"treatment- emergent adverse events\", \"significantly increased\", \"A greater proportion of olanzapine-treated patients reported treatment- emergent adverse events (87.5% vs. 59.6%; p\\u2009\\u003c\\u2009.001). Patients treated with olanzapine had greater increases in weight (p\\u2009\\u003c\\u2009.001) and fasting total cholesterol (p\\u2009=\\u2009.008); fasting triglycerides (p\\u2009=\\u2009.02), and fasting low-density lipoprotein (p\\u2009=\\u2009.01).\"], [\"olanzapine\", \"placebo\", \" number of patients shifted from borderline to high cholesterol\", \"significantly increased\", \"Compared with placebo-group patients, more olanzapine-group patients shifted from borderline to high cholesterol (25.0% vs. 0.0%; p\\u2009=\\u2009.007)\"], [\"olanzapine\", \"placebo\", \"weight gain\", \"significantly increased\", \"Compared with placebo-group patients, more olanzapine-group patients shifted from borderline to high cholesterol (25.0% vs. 0.0%; p\\u2009=\\u2009.007) and had clinically significant weight gain (\\u22657% body weight) (20.2% vs. 1.9%; p\\u2009=\\u2009.001).\"]]", "role": "assistant" } ]
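The two evidence-inference rows above ask for five-element tuples of the form [treatment, control, result, impact, justification]. A small, hypothetical validator such as the one below can catch malformed extractions before scoring; note that the prompt names the impact labels "increased", "decreased", and "no change", while the gold outputs in these rows use variants such as "significantly increased" and "no significant difference", so the accepted label set is left configurable rather than assumed.

```python
import json
from typing import List, Sequence

# Labels seen in the gold outputs of this dump; the prompt itself names only
# "increased", "decreased", and "no change", so treat the extended set as an assumption.
DEFAULT_IMPACT_LABELS: Sequence[str] = (
    "increased", "decreased", "no change",
    "significantly increased", "significantly decreased", "no significant difference",
)

def validate_extractions(raw: str, impact_labels: Sequence[str] = DEFAULT_IMPACT_LABELS) -> List[str]:
    """Return a list of problems found in an extracted-evidence JSON string (empty list = OK)."""
    problems: List[str] = []
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(rows, list):
        return ["top-level value must be a JSON array"]
    for i, row in enumerate(rows):
        if not (isinstance(row, list) and len(row) == 5):
            problems.append(f"row {i}: expected 5 elements [treatment, control, result, impact, justification]")
            continue
        treatment, control, result, impact, justification = row
        if control is not None and not isinstance(control, str):
            problems.append(f"row {i}: control must be a string or null")
        for name, value in (("treatment", treatment), ("result", result), ("justification", justification)):
            if not isinstance(value, str) or not value.strip():
                problems.append(f"row {i}: {name} must be a non-empty string")
        if not isinstance(impact, str) or impact.strip().lower() not in impact_labels:
            problems.append(f"row {i}: unrecognized impact label {impact!r}")
    return problems

if __name__ == "__main__":
    sample = '[["olanzapine", "placebo", "MADRS total score", "significantly increased", "p = .01"]]'
    print(validate_extractions(sample))  # -> []
```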
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.

Title of the Paper: OVD-Explorer: A General Information-theoretic Exploration Approach for Reinforcement Learning

Main Content:
1 INTRODUCTION.
The exploration and exploitation trade-off is critical in reinforcement learning (Thompson, 1933; Sutton & Barto, 2018). Many exploration strategies have been proposed in the literature (Osband et al., 2016; Lillicrap et al., 2016; Martin et al., 2017; Pathak et al., 2017; Ciosek et al., 2019). Among these strategies, those following the Optimism in the Face of Uncertainty (OFU) principle (Auer et al., 2002) provide efficient guidance for exploration (Chen et al., 2017; Ciosek et al., 2019). Generally, OFU-based methods regard uncertainty as the result of insufficient exploration of a state-action pair, and refer to such uncertainty as epistemic uncertainty. Besides, there is another uncertainty, called aleatoric uncertainty, which captures environmental stochasticity:
• Epistemic uncertainty (a.k.a. parametric uncertainty) represents the ambiguity of the model arising from insufficient knowledge, and is high at state-action pairs that are seldom visited (Dearden et al., 1998; Osband et al., 2016; Moerland et al., 2017).
• Aleatoric uncertainty (a.k.a. intrinsic uncertainty) is the variation arising from environment randomness, caused by the stochasticity of the policy, reward and/or transition probability, and is characterized by the return distribution (Bellemare et al., 2017; Moerland et al., 2017).
In heteroscedastic stochastic tasks, where the environment randomness differs across state-action pairs, OFU-based methods may be inefficient without properly tackling aleatoric uncertainty (Nikolov et al., 2019). As shown in Figure 1, if the epistemic uncertainty estimate is disturbed by the volatility caused by aleatoric uncertainty, an exploration strategy (i.e., policy A) that is optimistic about such estimated uncertainty may explore areas with high aleatoric uncertainty. Visiting such areas results in unstable and risky transitions. In the real world, aleatoric uncertainty arises easily; for example, unpredictable wind can shift the trajectory of a robot's action. If such aleatoric uncertainty is not modelled, the RL agent may become trapped, because state transitions in such an area can be wrongly considered novel and worth exploring due to the high uncertainty. This issue of over-exploring state-action pairs that are visited frequently but have high aleatoric uncertainty is referred to as the over-exploration issue. Intuitively, this issue could be solved by avoiding exploration that is optimistic about aleatoric uncertainty (i.e., policy B in Figure 1). A similar concern has been raised in discrete RL, where Nikolov et al. (2019) propose to use Information-Directed Sampling (IDS) to avoid the over-exploration issue in environments with heteroscedastic noise. However, IDS needs to calculate an information-regret ratio for each action, so applying IDS to continuous RL with an explosive or even infinite action space can be non-trivial and ineffective. Meanwhile, many advanced continuous RL approaches suffer from the over-exploration issue.
Soft Actor-Critic (SAC) is a well-performing continuous RL algorithm, but it does not account for efficient exploration beyond maximizing policy entropy (Haarnoja et al., 2018). OAC improves exploration following the OFU principle, but it does not avoid exploration towards higher aleatoric uncertainty (Ciosek et al., 2019). Therefore, an effective exploration method that addresses the over-exploration issue for continuous RL is urgently needed.

To address the over-exploration issue, we propose that a characterization of aleatoric uncertainty and a new exploration principle are necessary. The new principle needs to properly trade off the epistemic and aleatoric uncertainties, so as to explore seldom-visited state-action pairs while avoiding becoming trapped in areas with high aleatoric uncertainty. In this paper, we propose a general information-theoretic exploration approach, OVD-Explorer, which enriches the OFU exploration principle with a complement: "avoid areas with high aleatoric uncertainty". Furthermore, to guide exploration following the new principle, OVD-Explorer maximizes the mutual information between the policy and the corresponding upper bounds. Notably, the upper bounds represent the best return that the policy can reach, formulated using Optimistic Value Distributions (OVD), and the value distribution characterises the aleatoric uncertainty. From the theoretical derivation, we show that maximizing this mutual information urges OVD-Explorer to guide exploration towards areas with higher epistemic uncertainty and to alleviate the over-exploration issue by avoiding areas with lower information gain, i.e., areas that have higher aleatoric uncertainty but low epistemic uncertainty. To make OVD-Explorer tractable for continuous action spaces, we derive its closed form and propose a scheme to incorporate OVD-Explorer into any policy-based RL algorithm. Practically, we demonstrate the exploration benefits based on SAC. Evaluations on many challenging continuous RL tasks, including a toy task GridChaos and tasks in Mujoco as well as their stochastic variants, are conducted, and the results verify our analysis. To the best of our knowledge, OVD-Explorer is the first to address the negative impact of aleatoric uncertainty on exploration in continuous RL.

2 PRELIMINARIES.
2.1 DISTRIBUTIONAL RL.
To model the randomness in the observed long-term return, which characterises aleatoric uncertainty, distributional RL methods (Bellemare et al., 2017; Dabney et al., 2018b;a) are used to estimate the Q-value distribution rather than the expected Q-value (Mnih et al., 2015). In our paper, we focus on the quantile regression used in QR-DQN (Dabney et al., 2018b), where the distribution of the Q-value is represented by the quantile random variable Z. Z maps a state-action pair to a uniform probability distribution supported on the values at all corresponding quantile fractions. Based on a value distribution estimator parameterized as θ, we denote the value at quantile fraction $\tau_i$ as $\hat{Z}_i(s, a, \theta)$ given a state-action pair (s, a). Similar to the Bellman operator in traditional Q-Learning (Watkins & Dayan, 1992), the distributional Bellman operator (Bellemare et al., 2017) $\mathcal{T}^{\pi}_{D}$ under policy π is given as:

$$\mathcal{T}^{\pi}_{D} Z(s_t, a_t) \overset{D}{=} R(s_t, a_t) + \gamma Z(s_{t+1}, a_{t+1}), \quad a_{t+1} \sim \pi(\cdot \mid s_{t+1}). \quad (1)$$

Notice that $\mathcal{T}^{\pi}_{D}$ operates on random variables, and $\overset{D}{=}$ denotes that the distributions on both sides have equal probability laws.
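As a quick, hedged illustration of the quantile representation and the distributional Bellman operator of Equation (1) above: the sketch below shifts and scales the N quantile values of Z(s_{t+1}, a_{t+1}) to form a one-step target, and averages quantiles to recover an expected Q-value. The reward, discount, and array sizes are arbitrary illustrative choices, not values taken from the paper.

```python
import numpy as np

def distributional_bellman_target(reward: float, next_quantiles: np.ndarray, gamma: float = 0.99) -> np.ndarray:
    """One-step target of Eq. (1): every quantile of Z(s_{t+1}, a_{t+1}) is shifted by r and scaled by gamma."""
    return reward + gamma * next_quantiles

def expected_q(quantiles: np.ndarray) -> float:
    """Expected Q(s, a) as the uniform average of the N quantile values."""
    return float(quantiles.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    next_q = np.sort(rng.normal(loc=2.0, scale=1.0, size=8))  # quantile values of Z(s_{t+1}, a_{t+1})
    target_q = distributional_bellman_target(reward=0.5, next_quantiles=next_q)
    print("target quantiles:", np.round(target_q, 3))
    print("expected Q of target:", round(expected_q(target_q), 3))
```

The same array-of-quantiles representation is what the quantile regression loss introduced next is fitted against.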
Based on operator T πD , Dabney et al . ( 2018b ) propose QR-DQN to train quantile estimations via the quantile regression loss ( Koenker & Hallock , 2001 ) , which is denoted as : LQR ( θ ) = 1 N N∑ i=1 N∑ j=1 [ ρτ̂i ( δi , j ) ] , ( 2 ) where TD error δi , j = R ( st , at ) + γẐi ( st+1 , at+1 ; θ̄ ) − Ẑj ( st , at ; θ ) , the quantile Huber loss ρτ ( u ) = u ∗ ( τ − 1u < 0 ) , and τ̂i means the quantile midpoints , which is defined as τ̂i = τi+1+τi2 . 2.2 DISTRIBUTIONAL SOFT ACTOR-CRITIC METHOD . Distributional Soft Actor-Critic ( DSAC ) ( Ma et al. , 2020 ) seamlessly integrates distributional RL with Soft Actor-Critic ( SAC ) ( Haarnoja et al. , 2018 ) . Basically , based on the Equation 1 , the distributional soft Bellman operator T πDS is defined considering the maximum entropy RL as follows : T πDSZ ( st , at ) D = R ( st , at ) + γ [ Z ( st+1 , at+1 ) − αlogπ ( at+1|st+1 ) ] , ( 3 ) where at+1 ∼ π ( ·|st+1 ) , st+1 ∼ P ( ·|st , at ) . Then , the quantile regression loss differs from Equation 2 on δi , j , by extending clipped double Q-Learning ( Fujimoto et al. , 2018 ) based on the maximum entropy RL framework , to overcome overestimation of Q value estimation : δki , j = R ( st , at ) + γ [ min k=1,2 Ẑi ( st+1 , at+1 ; θ̄k ) − α log π ( at+1|st+1 ; φ̄ ) ] − Ẑj ( st , at ; θk ) , ( 4 ) where θ̄ and φ̄ represents their target networks respectively . The actor is optimized by minimizing the following actor loss , the same as SAC , Jπ ( φ ) = E st∼D , t∼N [ log π ( f ( st , t ; φ ) |st ) −Q ( st , f ( st , t ; φ ) ; θ ) ] , ( 5 ) where D is the replay buffer , f ( s , ; φ ) means sampling action with re-parameterized policy and is a noise vector sampled from any fixed distribution , like standard spherical Gaussian , and Q value is the minimum value of the expectation on certain distributions , as Q ( st , at ; θ ) = min k=1,2 Q ( st , at ; θk ) = 1 N min k=1,2 N−1∑ i=0 Ẑi ( st , at ; θk ) . ( 6 ) 3 METHODOLOGY . We introduce a general exploration method , named OVD-Explorer , to achieve efficient and robust exploration for continuous RL . Overall , OVD-Explorer estimates the upper bounds of the policy ’ s return , and uses the bounds as criteria to find a behavior policy for online exploration from the information-theoretic perspective . Innovatively , to avoid over-exploration , OVD-Explorer particularly takes the aleatoric uncertainty into account , leveraging the optimistic value distribution ( OVD ) to formulate the upper bounds of policy ’ s return . Then , OVD-Explorer derives the exploration policy via maximizing the mutual information between policy and corresponding upper bounds for an optimistic exploration . Meanwhile , the derived behavior policy avoids being optimistic to aleatoric uncertainty , thus avoiding over-exploration issue . In the following , we first give the intuition together with the theoretical derivation of OVD-Explorer . Then we practically describe how we formulate OVD based on uncertainty estimation . Lastly , analysis are given to further illustrate why OVD-Explorer can explore effectively and robustly . 3.1 OPTIMISTIC VALUE DISTRIBUTION GUIDED EXPLORATION . Most OFU-based algorithms follow the exploration principle : “ select the action that leads to areas with high uncertainty ” . However , this principle is incomplete as the high uncertainty may also be caused by the aleatoric uncertainty , which misleads the exploration direction . 
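The quantile regression loss of Eq. (2) and the TD errors of Eq. (4) come through somewhat garbled in the extracted text above. The sketch below is a hedged reconstruction of the standard quantile Huber loss that QR-DQN and DSAC build on; the tensor shapes, the kappa threshold, and the reduction over quantile indices are implementation choices made here, and the entropy and clipped-double-Q terms of Eq. (4) are omitted.

```python
import torch

def quantile_huber_loss(pred, target, kappa=1.0):
    """Quantile Huber regression between predicted and target quantiles.

    pred, target: tensors of shape (batch, N) with N quantile values each,
    e.g. pred = Z_hat_j(s_t, a_t) and target = r + gamma * Z_hat_i(s_{t+1}, a_{t+1}).
    """
    batch, n = pred.shape
    tau_hat = (torch.arange(n, dtype=pred.dtype, device=pred.device) + 0.5) / n
    # Pairwise TD errors delta[b, j, i] = target_i - pred_j, as in Eq. (2).
    delta = target.unsqueeze(1) - pred.unsqueeze(2)            # (batch, N, N)
    abs_delta = delta.abs()
    huber = torch.where(abs_delta <= kappa,
                        0.5 * delta ** 2,
                        kappa * (abs_delta - 0.5 * kappa))
    # Asymmetric weight |tau_hat_j - 1{delta < 0}| from the quantile loss rho_tau.
    weight = (tau_hat.view(1, n, 1) - (delta.detach() < 0).to(pred.dtype)).abs()
    return (weight * huber / kappa).sum(dim=1).mean()
```

Summing over the predicted-quantile index and averaging over target quantiles and the batch matches the double sum in Eq. (2) up to a constant factor.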
Hence , we complement the principle by introducing a constrain : “ Not only select the action that leads to areas with high uncertainty , but also avoid the ones that only lead to the area with high aleatoric uncertainty ” . Notable , this new principle intuitively describes a good exploration ability , and we will illustrate how to quantitatively measure a policy ’ s exploration ability in the later section . By following this principle , OVD-Explorer derives the behavior policy for exploration . Intuitively , each policy π in the policy space Π has different exploration ability ( at state s ) , denoted by Fπ ( s ) . Given current state s , OVD-Explorer aims at finding the exploration policy πE with the best exploration ability by solving the following optimization problem : πE = arg max π∈Π Fπ ( s ) . ( 7 ) We propose to quantitatively measure the policy ’ s exploration ability from an information-theoretic perspective by measuring the mutual-information of multi-variables as follows : Fπ ( s ) = MI ( Z̄π ( s , a0 ) , · · · , Z̄π ( s , ak−1 ) ; π ( ·|s ) |s ) , ( 8 ) where π ( ·|s ) is the action random variable , and Z̄π ( s , ai ) the random variable describing the upper bounds of return that action ai can reach under policy π at state s. Here , ai ∈ A denotes any legal action , thus k could be infinite in continuous action space . The higher Fπ ( s ) is , the better ability that policy π has in exploring towards higher epistemic uncertainty while simultaneously avoiding higher aleatoric uncertainty , which satisfies the principle we raised . ( see analysis in Section 3.3 . ) Now we state the Theorem 1 to measure the mutual information above in continuous action space . Theorem 1 . The mutual information in Equation 8 at state s can be approximated as : Fπ ( s ) ≈ 1 C E a∼π ( ·|s ) z̄ ( s , a ) ∼Z̄π ( s , a ) [ ΦZπ ( z̄ ( s , a ) ) log ΦZπ ( z̄ ( s , a ) ) C ] . ( 9 ) Φx ( · ) is the cumulative distribution function ( CDF ) of random variable x , z̄ ( s , a ) is the sampled upper bound of return from its distribution Z̄π ( s , a ) following policy π , Zπ describes the current return distribution of the policy π , and C is a constant ( See proof in Appendix B.1 ) . Theorem 1 reveals that Fπ ( s ) is only proportional to ΦZπ ( z̄ ( s , a ) ) , by maximizing which the policy ’ s exploration ability can be improved . Concretely , given any policy πθ ( parameterized by θ ) , the derivative ∇θΦZπθ ( z̄ ( s , a ) ) can be measured , thus gradient ascent can be iteratively performed to derive a better exploration policy . ( The complete closed form solution is given in later Section 4 ) Note that , to perform above optimization procedure , we need to formulate two critical components in ΦZπ ( z̄ ( s , a ) ) : 1 the return of the policy Zπ ( s , a ) and 2 upper bound of the return of the policy Z̄π ( s , a ) . Therefore , we first details the formulations in Section 3.2 , and then analyze why maximizing ΦZπ ( z̄ ( s , a ) ) can derive a better exploration in Section 3.3 .
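Theorem 1 above reduces the exploration objective to the CDF term Φ_{Zπ}(z̄(s, a)). With the quantile representation used throughout the excerpt, that CDF can be estimated empirically as sketched below (an editorial addition, not the authors' code). How the optimistic upper bound z̄ is constructed, i.e. the OVD itself, is described later in the paper and is simply taken as an input here; the constant C and the exact log weighting of Eq. (9), which are hard to read in the extracted text, are left out.

```python
import torch

def phi_cdf(current_quantiles, z_bar):
    """Empirical CDF Phi_{Z_pi}(z_bar): fraction of quantile atoms of the current
    return distribution Z_pi(s, a) lying at or below the sampled optimistic
    upper bound z_bar(s, a). Larger values indicate actions whose optimistic
    bound exceeds most of what the current policy already achieves.
    """
    return (current_quantiles <= z_bar).to(current_quantiles.dtype).mean()
```

Per Theorem 1, ascending the gradient of this quantity with respect to the policy parameters (through the sampled actions) is what steers the behavior policy toward high epistemic and away from high aleatoric uncertainty; the paper derives a differentiable closed form in its Section 4, which lies outside the excerpt.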
This paper introduces OVD-Explorer, an efficient exploration algorithm that detects and avoids areas with high aleatoric uncertainty. More precisely, the proposed algorithm explores state-action pairs that have not been visited frequently (high epistemic uncertainty) while at the same time avoiding areas with high aleatoric uncertainty. To achieve this behavior, OVD-Explorer maximises the mutual information between the policy and the corresponding upper bounds on its return. Theoretical results show that the over-exploration issue can be tackled through the maximisation of this mutual information. Empirical analysis is conducted on a toy task and on five MuJoCo environments, including their stochastic variants.
science.mup_single_document_summarization.9393
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nOVD-Explorer: A General Information-theoretic Exploration Approach for Reinforcement Learning\n\nMain Content:\n1 INTRODUCTION . The exploration and exploitation trade-off is critical in reinforcement learning ( Thompson , 1933 ; Sutton & Barto , 2018 ) . Many exploration strategies have been proposed in the literature ( Osband et al. , 2016 ; Lillicrap et al. , 2016 ; Martin et al. , 2017 ; Pathak et al. , 2017 ; Ciosek et al. , 2019 ) . Among these strategies , those following the Optimism in the Face of Uncertainty ( OFU ) principle ( Auer et al. , 2002 ) provide efficient guidance for exploration ( Chen et al. , 2017 ; Ciosek et al. , 2019 ) . Generally , OFU-based methods regard the uncertainty as the result of insufficient exploration for the state-action pair , and refer to such uncertainty as epistemic uncertainty . Besides , there is another uncertainty called aleatoric uncertainty , which captures environmental stochasticity : • Epistemic uncertainty ( a.k.a . parametric uncertainty ) represents the ambiguity of the model arisen from insufficient knowledge , and is high at those state-action pairs seldom visited ( Dearden et al. , 1998 ; Osband et al. , 2016 ; Moerland et al. , 2017 ) . • Aleatoric uncertainty ( a.k.a . intrinsic uncertainty ) is the variation arisen from environment randomness , caused by the stochasticity of policy , reward and/or transition probability , and is characterized by return distribution . ( Bellemare et al. , 2017 ; Moerland et al. , 2017 ) . In the heteroscedastic stochastic tasks where the environment randomness of different state-action pairs differs , those OFU-based methods may be inefficient without properly tackling aleatoric uncertainty ( Nikolov et al. , 2019 ) . As shown in Figure 1 , if the epistemic uncertainty estimation is disturbed due to the volatility caused by aleatoric uncertainty , the exploration strategy ( i.e. , policy A ) optimistic about such estimated uncertainty may lead to explore areas with high aleatoric uncertainty . Visiting such areas results in unstable and risky transitions . In the real world , the aleatoric uncertainty can be caused easily , for example , unpredictable wind can shift the trajectory of an robot ’ s action . If such aleatoric uncertainty is not modelled , the RL agent may be trapped because the state transitions in such area can be wrongly considered novel and worth exploring due to the high uncertainty . This issue , to explore overly the state-action pairs visited frequently but with high aleatoric uncertainty , is referred to as the over-exploration issue . Intuitively , such issue could be solved by avoiding exploring optimistically about aleatoric uncertainty ( i.e. , policy B in Figure 1 ) . The similar concern has been raised in the discrete RL , where Nikolov et al . ( 2019 ) proposes to use the Information-Directed Sampling ( IDS ) to avoid the over-exploration issue in the environments with heteroscedastic noise . However , IDS needs to calculate information-regret ratio for each action , thus applying IDS for continuous RL with explosive or even infinite action space could be non-trivial and ineffective . Meanwhile , many advanced continuous RL approaches suffer from the over-exploration issue . 
Soft Actor-Critic ( SAC ) is a well performed continuous RL algorithm , but does not account for efficient exploration beyond maximizing policy entropy ( Haarnoja et al. , 2018 ) . OAC improves the exploration follow- ing OFU principle , but it ignores to avoid exploration towards higher aleatoric uncertainty ( Ciosek et al. , 2019 ) . Therefore , an effective exploration method to address the over-exploration issue for continuous RL is of urgent need . To address the over-exploration issue , we propose that the characterization of aleatoric uncertainty and a new exploration principle are necessary . The new principle need to properly trade-off between the epistemic and aleatoric uncertainties , so as to explore seldom visited state-action pairs while avoiding trapped in areas with high aleatoric uncertainty . In this paper , we propose a general information-theoretic exploration approach OVD-Explorer , which enriches the OFU exploration principle with a complement : “ avoid areas with high aleatoric uncertainty ” . Furthermore , to guide the exploration following the new principle , OVD-Explorer maximizes the mutual information between policy and corresponding upper bounds . Notable , the upper bounds represent the best return that policy can reach , formulated using Optimistic Value Distributions ( OVD ) , and the value distribution characterises the aleatoric uncertainty . From the theoretical derivation , we show that maximizing such mutual information urges the OVD-Explorer to guide exploration towards the areas with higher epistemic uncertainty and alleviate over-exploration issue by avoiding the areas with lower information gain , i.e. , the areas that have higher aleatoric uncertainty but low epistemic uncertainty . To make OVD-Explorer tractable for continuous action space , we derive its closed form , and propose a scheme to incorporate OVD-Explorer with any policy-based RL algorithm . Practically , we demonstrate the exploration benefits based on SAC . Evaluations on many challenging continuous RL tasks , including a toy task GridChaos , tasks in Mujoco as well as their stochastic variants are conducted , and the results verify our analysis . To the best of our knowledge , OVD-Explorer firstly addresses the negative impact of aleatoric uncertainty for exploration in continuous RL . 2 PRELIMINARIES . 2.1 DISTRIBUTIONAL RL . To model the randomness in the observed long-term return , which characterises aleatoric uncertainty , distributional RL methods ( Bellemare et al. , 2017 ; Dabney et al. , 2018b ; a ) are used to estimate Q-value distribution rather than expected Q-value ( Mnih et al. , 2015 ) . In our paper , we focus on quantile regression used in QR-DQN ( Dabney et al. , 2018b ) , where the distribution of Q-value is represented by the quantile random variable Z . Z maps the state-action pair to a uniform probability distribution supported on the values at all corresponding quantile fractions . Based on value distribution estimator parameterized as θ , we denote the value at quantile fraction τi as Ẑi ( s , a , θ ) given state-action pair ( s , a ) . Similar to the Bellman operator in the traditional Q-Learning ( Watkins & Dayan , 1992 ) , the distributional Bellman operator ( Bellemare et al. , 2017 ) T πD under policy π is given as : T πDZ ( st , at ) D = R ( st , at ) + γZ ( st+1 , at+1 ) , at+1 ∼ π ( ·|st+1 ) . ( 1 ) Notice that T πD operates on random variables , D = denotes that distributions on both sides have equal probability laws . 
Based on operator T πD , Dabney et al . ( 2018b ) propose QR-DQN to train quantile estimations via the quantile regression loss ( Koenker & Hallock , 2001 ) , which is denoted as : LQR ( θ ) = 1 N N∑ i=1 N∑ j=1 [ ρτ̂i ( δi , j ) ] , ( 2 ) where TD error δi , j = R ( st , at ) + γẐi ( st+1 , at+1 ; θ̄ ) − Ẑj ( st , at ; θ ) , the quantile Huber loss ρτ ( u ) = u ∗ ( τ − 1u < 0 ) , and τ̂i means the quantile midpoints , which is defined as τ̂i = τi+1+τi2 . 2.2 DISTRIBUTIONAL SOFT ACTOR-CRITIC METHOD . Distributional Soft Actor-Critic ( DSAC ) ( Ma et al. , 2020 ) seamlessly integrates distributional RL with Soft Actor-Critic ( SAC ) ( Haarnoja et al. , 2018 ) . Basically , based on the Equation 1 , the distributional soft Bellman operator T πDS is defined considering the maximum entropy RL as follows : T πDSZ ( st , at ) D = R ( st , at ) + γ [ Z ( st+1 , at+1 ) − αlogπ ( at+1|st+1 ) ] , ( 3 ) where at+1 ∼ π ( ·|st+1 ) , st+1 ∼ P ( ·|st , at ) . Then , the quantile regression loss differs from Equation 2 on δi , j , by extending clipped double Q-Learning ( Fujimoto et al. , 2018 ) based on the maximum entropy RL framework , to overcome overestimation of Q value estimation : δki , j = R ( st , at ) + γ [ min k=1,2 Ẑi ( st+1 , at+1 ; θ̄k ) − α log π ( at+1|st+1 ; φ̄ ) ] − Ẑj ( st , at ; θk ) , ( 4 ) where θ̄ and φ̄ represents their target networks respectively . The actor is optimized by minimizing the following actor loss , the same as SAC , Jπ ( φ ) = E st∼D , t∼N [ log π ( f ( st , t ; φ ) |st ) −Q ( st , f ( st , t ; φ ) ; θ ) ] , ( 5 ) where D is the replay buffer , f ( s , ; φ ) means sampling action with re-parameterized policy and is a noise vector sampled from any fixed distribution , like standard spherical Gaussian , and Q value is the minimum value of the expectation on certain distributions , as Q ( st , at ; θ ) = min k=1,2 Q ( st , at ; θk ) = 1 N min k=1,2 N−1∑ i=0 Ẑi ( st , at ; θk ) . ( 6 ) 3 METHODOLOGY . We introduce a general exploration method , named OVD-Explorer , to achieve efficient and robust exploration for continuous RL . Overall , OVD-Explorer estimates the upper bounds of the policy ’ s return , and uses the bounds as criteria to find a behavior policy for online exploration from the information-theoretic perspective . Innovatively , to avoid over-exploration , OVD-Explorer particularly takes the aleatoric uncertainty into account , leveraging the optimistic value distribution ( OVD ) to formulate the upper bounds of policy ’ s return . Then , OVD-Explorer derives the exploration policy via maximizing the mutual information between policy and corresponding upper bounds for an optimistic exploration . Meanwhile , the derived behavior policy avoids being optimistic to aleatoric uncertainty , thus avoiding over-exploration issue . In the following , we first give the intuition together with the theoretical derivation of OVD-Explorer . Then we practically describe how we formulate OVD based on uncertainty estimation . Lastly , analysis are given to further illustrate why OVD-Explorer can explore effectively and robustly . 3.1 OPTIMISTIC VALUE DISTRIBUTION GUIDED EXPLORATION . Most OFU-based algorithms follow the exploration principle : “ select the action that leads to areas with high uncertainty ” . However , this principle is incomplete as the high uncertainty may also be caused by the aleatoric uncertainty , which misleads the exploration direction . 
Hence , we complement the principle by introducing a constrain : “ Not only select the action that leads to areas with high uncertainty , but also avoid the ones that only lead to the area with high aleatoric uncertainty ” . Notable , this new principle intuitively describes a good exploration ability , and we will illustrate how to quantitatively measure a policy ’ s exploration ability in the later section . By following this principle , OVD-Explorer derives the behavior policy for exploration . Intuitively , each policy π in the policy space Π has different exploration ability ( at state s ) , denoted by Fπ ( s ) . Given current state s , OVD-Explorer aims at finding the exploration policy πE with the best exploration ability by solving the following optimization problem : πE = arg max π∈Π Fπ ( s ) . ( 7 ) We propose to quantitatively measure the policy ’ s exploration ability from an information-theoretic perspective by measuring the mutual-information of multi-variables as follows : Fπ ( s ) = MI ( Z̄π ( s , a0 ) , · · · , Z̄π ( s , ak−1 ) ; π ( ·|s ) |s ) , ( 8 ) where π ( ·|s ) is the action random variable , and Z̄π ( s , ai ) the random variable describing the upper bounds of return that action ai can reach under policy π at state s. Here , ai ∈ A denotes any legal action , thus k could be infinite in continuous action space . The higher Fπ ( s ) is , the better ability that policy π has in exploring towards higher epistemic uncertainty while simultaneously avoiding higher aleatoric uncertainty , which satisfies the principle we raised . ( see analysis in Section 3.3 . ) Now we state the Theorem 1 to measure the mutual information above in continuous action space . Theorem 1 . The mutual information in Equation 8 at state s can be approximated as : Fπ ( s ) ≈ 1 C E a∼π ( ·|s ) z̄ ( s , a ) ∼Z̄π ( s , a ) [ ΦZπ ( z̄ ( s , a ) ) log ΦZπ ( z̄ ( s , a ) ) C ] . ( 9 ) Φx ( · ) is the cumulative distribution function ( CDF ) of random variable x , z̄ ( s , a ) is the sampled upper bound of return from its distribution Z̄π ( s , a ) following policy π , Zπ describes the current return distribution of the policy π , and C is a constant ( See proof in Appendix B.1 ) . Theorem 1 reveals that Fπ ( s ) is only proportional to ΦZπ ( z̄ ( s , a ) ) , by maximizing which the policy ’ s exploration ability can be improved . Concretely , given any policy πθ ( parameterized by θ ) , the derivative ∇θΦZπθ ( z̄ ( s , a ) ) can be measured , thus gradient ascent can be iteratively performed to derive a better exploration policy . ( The complete closed form solution is given in later Section 4 ) Note that , to perform above optimization procedure , we need to formulate two critical components in ΦZπ ( z̄ ( s , a ) ) : 1 the return of the policy Zπ ( s , a ) and 2 upper bound of the return of the policy Z̄π ( s , a ) . Therefore , we first details the formulations in Section 3.2 , and then analyze why maximizing ΦZπ ( z̄ ( s , a ) ) can derive a better exploration in Section 3.3 .", "role": "user" }, { "content": "This paper introduces OVD-Explorer, an efficient exploration algorithm that is able to detect and avoid areas with high aleatoric uncertainty. Actually, the proposed algorithm explores state-action pairs that have not been visited frequently (high epistemic uncertainty) and at the same time avoids areas with high aleatoric uncertainty. In order to achieve this behavior, OVD-Explorer maximises the mutual information between policy and corresponding upper bounds. 
Theoretical results show that the over-exploration issue can be tackled through the maximisation of such mutual information. Empirical analysis has been conducted on a toy task and at five MuJoCo environments including their stochastic variants.", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Resistance of malaria parasites to chloroquine (CQ) and sulphadoxine-pyrimethamine (SP) is increasing in prevalence in Africa. Combination therapy can both improve treatment and provide important public health benefits if it curbs the spread of parasites harbouring resistance genes. Thus, drug combinations must be identified which minimise gametocyte emergence in treated cases, and so prevent selective transmission of parasites resistant to any of the partner drugs. ABSTRACT.METHODS AND FINDINGS: In a randomised controlled trial, 497 children with uncomplicated falciparum malaria were treated with CQ and SP (three doses and one dose respectively; n = 91), or six doses of artemether in fixed combination with lumefantrine (co-artemether [Coartem, Riamet]) (n = 406). Carriage rates of Plasmodium falciparum gametocytes and trophozoites were measured 7, 14, and 28 d after treatment. The infectiousness of venous blood from 29 children carrying P. falciparum gametocytes 7 d after treatment was tested by membrane-feeding of Anopheles mosquitoes. <p> Children treated with co-artemether were significantly less likely to carry gametocytes within the 4 weeks following treatment than those receiving CQ/SP (30 of 378 [7.94%] versus 42 of 86 [48.8%]; p < 0.0001). Carriers in the co-artemether group harboured gametocytes at significantly lower densities, for shorter periods (0.3 d versus 4.2 d; p < 0.0001) and were less infectious to mosquitoes at day 7 (p < 0.001) than carriers who had received CQ/SP. ABSTRACT.CONCLUSIONS: Co-artemether is highly effective at preventing post-treatment transmission of P. falciparum. Our results suggest that co-artemether has specific activity against immature sequestered gametocytes, and has the capacity to minimise transmission of drug-resistant parasites. Extracted Data:
[["chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)", "artemether in fixed combination with lumefantrine", "carring gametocytes within the 4 weeks following treatment", "significantly increased", "Children treated with co-artemether were significantly less likely to carry gametocytes within the 4 weeks following treatment than those receiving CQ/SP (30 of 378 [7.94%] versus 42 of 86 [48.8%]; p \u0026lt; 0.0001)."], ["chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)", "artemether in fixed combination with lumefantrine", "carring gametocytes within the 4 weeks following treatment", "significantly increased", "Children treated with co-artemether were significantly less likely to carry gametocytes within the 4 weeks following treatment than those receiving CQ/SP (30 of 378 [7.94%] versus 42 of 86 [48.8%]; p \u003c 0.0001)."], ["chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)", "artemether in fixed combination with lumefantrine", "Carriers harboured gametocytes at densities and time periods", "significantly increased", "Carriers in the co-artemether group harboured gametocytes at significantly lower densities, for shorter periods (0.3 d versus 4.2 d; p \u0026lt; 0.0001) and were less infectious to mosquitoes at day 7 (p \u0026lt; 0.001) than carriers who had received CQ/SP."], ["chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)", "artemether in fixed combination with lumefantrine", "Carriers harboured gametocytes at densities and time periods", "significantly increased", "Carriers in the co-artemether group harboured gametocytes at significantly lower densities, for shorter periods (0.3 d versus 4.2 d; p \u003c 0.0001) and were less infectious to mosquitoes at day 7 (p \u003c 0.001) than carriers who had received CQ/SP."], ["chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)", "artemether in fixed combination with lumefantrine", "infectious to mosquitoes at day 7", "significantly increased", "Carriers in the co-artemether group harboured gametocytes at significantly lower densities, for shorter periods (0.3 d versus 4.2 d; p \u0026lt; 0.0001) and were less infectious to mosquitoes at day 7 (p \u0026lt; 0.001) than carriers who had received CQ/SP."], ["chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)", "artemether in fixed combination with lumefantrine", "infectious to mosquitoes at day 7", "significantly increased", "Carriers in the co-artemether group harboured gametocytes at significantly lower densities, for shorter periods (0.3 d versus 4.2 d; p \u003c 0.0001) and were less infectious to mosquitoes at day 7 (p \u003c 0.001) than carriers who had received CQ/SP."]]
science.evidence_inference.220
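The evidence-inference record above specifies a JSON output format of five-element entries. As a small editorial aid (not part of the dataset), here is a sketch of a validator for that format; the normalisation of qualifiers such as "significantly", and all function names, are assumptions.

```python
import json

ALLOWED_IMPACTS = {"increased", "decreased", "no change"}

def validate_extraction(raw):
    """Parse an extraction string and check it is a JSON array of
    [treatment, control, result, impact, justification] entries."""
    rows = json.loads(raw)
    assert isinstance(rows, list)
    for row in rows:
        assert isinstance(row, list) and len(row) == 5
        treatment, control, result, impact, justification = row
        assert all(isinstance(x, str) for x in (treatment, result, impact, justification))
        assert control is None or isinstance(control, str)
        # Outputs in practice may prefix the label with "significantly".
        assert impact.replace("significantly ", "") in ALLOWED_IMPACTS
    return rows
```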
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Resistance of malaria parasites to chloroquine (CQ) and sulphadoxine-pyrimethamine (SP) is increasing in prevalence in Africa. Combination therapy can both improve treatment and provide important public health benefits if it curbs the spread of parasites harbouring resistance genes. Thus, drug combinations must be identified which minimise gametocyte emergence in treated cases, and so prevent selective transmission of parasites resistant to any of the partner drugs.\n\nABSTRACT.METHODS AND FINDINGS: In a randomised controlled trial, 497 children with uncomplicated falciparum malaria were treated with CQ and SP (three doses and one dose respectively; n = 91), or six doses of artemether in fixed combination with lumefantrine (co-artemether [Coartem, Riamet]) (n = 406). Carriage rates of Plasmodium falciparum gametocytes and trophozoites were measured 7, 14, and 28 d after treatment. The infectiousness of venous blood from 29 children carrying P. falciparum gametocytes 7 d after treatment was tested by membrane-feeding of Anopheles mosquitoes. <p> Children treated with co-artemether were significantly less likely to carry gametocytes within the 4 weeks following treatment than those receiving CQ/SP (30 of 378 [7.94%] versus 42 of 86 [48.8%]; p < 0.0001). Carriers in the co-artemether group harboured gametocytes at significantly lower densities, for shorter periods (0.3 d versus 4.2 d; p < 0.0001) and were less infectious to mosquitoes at day 7 (p < 0.001) than carriers who had received CQ/SP.\n\nABSTRACT.CONCLUSIONS: Co-artemether is highly effective at preventing post-treatment transmission of P. falciparum. 
Our results suggest that co-artemether has specific activity against immature sequestered gametocytes, and has the capacity to minimise transmission of drug-resistant parasites.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)\", \"artemether in fixed combination with lumefantrine\", \"carring gametocytes within the 4 weeks following treatment\", \"significantly increased\", \"Children treated with co-artemether were significantly less likely to carry gametocytes within the 4 weeks following treatment than those receiving CQ/SP (30 of 378 [7.94%] versus 42 of 86 [48.8%]; p \\u0026lt; 0.0001).\"], [\"chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)\", \"artemether in fixed combination with lumefantrine\", \"carring gametocytes within the 4 weeks following treatment\", \"significantly increased\", \"Children treated with co-artemether were significantly less likely to carry gametocytes within the 4 weeks following treatment than those receiving CQ/SP (30 of 378 [7.94%] versus 42 of 86 [48.8%]; p \\u003c 0.0001).\"], [\"chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)\", \"artemether in fixed combination with lumefantrine\", \"Carriers harboured gametocytes at densities and time periods\", \"significantly increased\", \"Carriers in the co-artemether group harboured gametocytes at significantly lower densities, for shorter periods (0.3 d versus 4.2 d; p \\u0026lt; 0.0001) and were less infectious to mosquitoes at day 7 (p \\u0026lt; 0.001) than carriers who had received CQ/SP.\"], [\"chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)\", \"artemether in fixed combination with lumefantrine\", \"Carriers harboured gametocytes at densities and time periods\", \"significantly increased\", \"Carriers in the co-artemether group harboured gametocytes at significantly lower densities, for shorter periods (0.3 d versus 4.2 d; p \\u003c 0.0001) and were less infectious to mosquitoes at day 7 (p \\u003c 0.001) than carriers who had received CQ/SP.\"], [\"chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)\", \"artemether in fixed combination with lumefantrine\", \"infectious to mosquitoes at day 7\", \"significantly increased\", \"Carriers in the co-artemether group harboured gametocytes at significantly lower densities, for shorter periods (0.3 d versus 4.2 d; p \\u0026lt; 0.0001) and were less infectious to mosquitoes at day 7 (p \\u0026lt; 0.001) than carriers who had received CQ/SP.\"], [\"chloroquine (CQ) and sulphadoxine-pyrimethamine (SP)\", \"artemether in fixed combination with lumefantrine\", \"infectious to mosquitoes at day 7\", \"significantly increased\", \"Carriers in the co-artemether group harboured gametocytes at significantly lower densities, for shorter periods (0.3 d versus 4.2 d; p \\u003c 0.0001) and were less infectious to mosquitoes at day 7 (p \\u003c 0.001) than carriers who had received CQ/SP.\"]]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Gastric lavage is an effective treatment for acute paraquat poisoning. Research Abstract: The present study retrospectively analyzed 19 patients diagnosed with paraquat (PQ) poisoning with the aim to investigate the effect of activated charcoal hemoperfusion on renal function and PQ elimination. The results indicated that 7 patients died and 12 survived. Non-oliguric renal failure occurred in all of the 7 patients who died. Among the 12 surviving patients, 10 had normal renal function and 2 developed non-oliguric renal failure. There was a linear correlation between plasma and urine paraquat concentration prior to and during activated charcoal hemoperfusion. The equation parameters together with the correlation coefficient on admission were as follows: Y=0.5820+1.7348X (R2=0.678; F=35.768; P<0.0001). The equation parameters together with the correlation coefficient were as follows during activated charcoal hemoperfusion: Y=0.6827+1.2649X (R2=0.626; F=50.308; P<0.0001). Therefore, it was concluded that in patients with normal renal function, the elimination kinetics of PQ by the kidneys were only associated with the plasma PQ concentration. Activated charcoal hemoperfusion had little effect on avoiding acute kidney injury in patients with severe PQ poisoning.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.295
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nGastric lavage is an effective treatment for acute paraquat poisoning.\n\nResearch Abstract:\nThe present study retrospectively analyzed 19 patients diagnosed with paraquat (PQ) poisoning with the aim to investigate the effect of activated charcoal hemoperfusion on renal function and PQ elimination.\nThe results indicated that 7 patients died and 12 survived.\nNon-oliguric renal failure occurred in all of the 7 patients who died.\nAmong the 12 surviving patients, 10 had normal renal function and 2 developed non-oliguric renal failure.\nThere was a linear correlation between plasma and urine paraquat concentration prior to and during activated charcoal hemoperfusion.\nThe equation parameters together with the correlation coefficient on admission were as follows: Y=0.5820+1.7348X (R2=0.678; F=35.768; P<0.0001).\nThe equation parameters together with the correlation coefficient were as follows during activated charcoal hemoperfusion: Y=0.6827+1.2649X (R2=0.626; F=50.308; P<0.0001).\nTherefore, it was concluded that in patients with normal renal function, the elimination kinetics of PQ by the kidneys were only associated with the plasma PQ concentration.\nActivated charcoal hemoperfusion had little effect on avoiding acute kidney injury in patients with severe PQ poisoning.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Order Learning and Its Application to Age Estimation Main Content: 1 INTRODUCTION . To measure the quality of something , we often compare it with other things of a similar kind . Before assigning 4 stars to a film , a critic would have thought , “ It is better than 3-star films but worse than 5-stars. ” This ranking through pairwise comparisons is done in various decision processes ( Saaty , 1977 ) . It is easier to tell the nearer one between two objects in a picture than to estimate the distance of each object directly ( Chen et al. , 2016 ; Lee & Kim , 2019a ) . Also , it is easy to tell a higher pitch between two notes , but absolute pitch is a rare ability ( Bachem , 1955 ) . Ranking through comparisons has been investigated for machine learning . In learning to rank ( LTR ) , the pairwise approach learns , between two documents , which one is more relevant to a query ( Liu , 2009 ) . Also , in ordinal regression ( Frank & Hall , 2001 ; Li & Lin , 2007 ) , to predict the rank of an object , binary classifications are performed to tell whether the rank is higher than a series of thresholds or not . In this paper , we propose order learning to learn ordering relationship between objects . Thus , order learning is related to LTR and ordinal regression . However , whereas LTR and ordinal regression assume that ranks form a total order ( Hrbacek & Jech , 1984 ) , order learning can be used for a partial order as well . Order learning is also related to metric learning ( Xing et al. , 2003 ) . While metric learning is about whether an object is ‘ similar to or dissimilar from ’ another object , order learning is about ‘ greater than or smaller than. ’ Section 2 reviews this related work . In order learning , a set of classes , Θ = { θ1 , θ2 , · · · , θn } , is ordered , where each class θi represents one or more object instances . Between two classes θi and θj , there are three possibilities : θi > θj or θi < θj or neither ( i.e . incomparable ) . These relationships are represented by the order graph . The goal of order learning is to determine the order graph and then classify an instance into one of the classes in Θ . To achieve this , we develop a pairwise comparator that determines ordering relationship between two instances x and y into one of three categories : x is ‘ greater than , ’ ‘ similar to , ’ or ‘ smaller than ’ y . Then , we use the comparator to measure an input instance against multiple reference instances in known classes . Finally , we estimate the class of the input to maximize the consistency among the comparison results . It is noted that the parameter optimization of the pairwise comparator , the selection of the references , and the discovery of the order graph are jointly performed to minimize a common loss function . Section 3 proposes this order learning . We apply order learning to facial age estimation . Order learning matches age estimation well , since it is easier to tell a younger one between two people than to estimate each person ’ s age directly ( Chang et al. , 2010 ; Zhang et al. , 2017a ) . Even when we assume that age classes are linearly ordered , the proposed age estimator performs well . The performance is further improved , when classes are divided into disjoint chains in a supervised manner using gender and ethnic group information or even in an unsupervised manner . 
Section 4 describes this age estimator and discusses its results . Finally , Section 5 concludes this work . 2 RELATED WORK . Pairwise comparison : It is a fundamental problem to estimate the priorities ( or ranks ) of objects through pairwise comparison . In the classic paper , Saaty ( 1977 ) noted that , even when direct estimates of certain quantities are unavailable , rough ratios between them are easily obtained in many cases . Thus , he proposed the scaling method to reconstruct absolute priorities using only relative priorities . The scaling method was applied to monocular depth estimation ( Lee & Kim , 2019a ) and aesthetic assessment ( Lee & Kim , 2019b ) . Ranking from a pairwise comparison matrix has been studied to handle cases , in which the matrix is huge or some elements are noisy ( Braverman & Mossel , 2008 ; Jamieson & Nowak , 2011 ; Negahban et al. , 2012 ; Wauthier et al. , 2013 ) . On the other hand , the pairwise approach to LTR learns , between two documents , which one is more relevant to a query ( Liu , 2009 ; Herbrich et al. , 1999 ; Burges et al. , 2005 ; Tsai et al. , 2007 ) . The proposed order learning is related to LTR , since it also predicts the order between objects . But , while LTR sorts multiple objects with unknown ranks and focuses on the sorting quality , order learning compares a single object x with optimally selected references with known ranks to estimate the rank of x. Ordinal regression : Ordinal regression predicts an ordinal variable ( or rank ) of an instance . Suppose that a 20-year-old is misclassified as a 50-year old and a 25-year old , respectively . The former error should be more penalized than the latter . Ordinal regression exploits this characteristic in the design of a classifier or a regressor . In Frank & Hall ( 2001 ) and Li & Lin ( 2007 ) , a conversion scheme was proposed to transform an ordinal regression problem into multiple binary classification problems . Ordinal regression based on this conversion scheme has been used in various applications , including age estimation ( Chang et al. , 2010 ; 2011 ; Niu et al. , 2016 ; Chen et al. , 2017 ) and monocular depth estimation ( Fu et al. , 2018 ) . Note that order learning is different from ordinal regression . Order learning performs pairwise comparison between objects , instead of directly estimating the rank of each object . In age estimation , ordinal regression based on the conversion scheme is concerned with the problem , “ Is a person ’ s age bigger than a threshold θ ? ” for each θ . In contrast , order learning concerns “ Between two people , who is older ? ” Conceptually , order learning is easier . Technically , if there are N ranks , the conversion scheme requires N − 1 binary classifiers , but order learning needs only a single ternary classifier . Moreover , whereas ordinal regression assumes that ranks form a total order , order learning can be used even in the case of a partial order ( Hrbacek & Jech , 1984 ) . Metric learning : A distance metric can be learned from examples of similar pairs of points and those of dissimilar pairs ( Xing et al. , 2003 ) . The similarity depends on an application and is implicitly defined by user-provided examples . If a learned metric generalizes well to unseen data , it can be used to enforce the desired similarity criterion in clustering ( Xing et al. , 2003 ) , classification ( Weinberger et al. , 2006 ) , or information retrieval ( McFee & Lanckriet , 2010 ) . 
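The related-work discussion above contrasts the conversion scheme for ordinal regression (N − 1 binary "is the rank above θ?" classifiers) with order learning's single ternary comparator. The toy sketch below makes that contrast concrete; the classifier and comparator callables are hypothetical placeholders, not models from the paper.

```python
def rank_by_conversion_scheme(x, binary_classifiers):
    """Conversion scheme: each classifier answers 'is the rank greater than
    threshold theta_k?' with a probability; the predicted rank is one plus the
    number of positive answers."""
    return 1 + sum(int(clf(x) > 0.5) for clf in binary_classifiers)

def order_relation(x, y, ternary_comparator):
    """Order learning instead asks one ternary question about the pair (x, y)
    and returns 'greater', 'similar', or 'smaller'."""
    return ternary_comparator(x, y)
```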
Both metric learning and order learning learn important binary relations in mathematics : metric and order ( Hrbacek & Jech , 1984 ) . However , a metric decides whether an object x is similar to or dissimilar from another object y , whereas an order tells whether x is greater than or smaller than y . Thus , a learned metric is useful for grouping similar data , whereas a learned order is suitable for processing ordered data . Age estimation : Human ages can be estimated from facial appearance ( Kwon & da Vitoria Lobo , 1994 ) . Geng et al . ( 2007 ) proposed the aging pattern subspace , and Guo et al . ( 2009 ) introduced biologically inspired features to age estimation . Recently , deep learning has been adopted for age estimation . Niu et al . ( 2016 ) proposed OR-CNN for age estimation , which is an ordinal regressor using the conversion scheme . Chen et al . ( 2017 ) proposed Ranking-CNN , which is another ordinal regressor . While OR-CNN uses a common feature for multiple binary classifiers , Ranking-CNN employs a separate CNN to extract a feature for each binary classifier . Tan et al . ( 2018 ) grouped adjacent ages via the group-n encoding , determined whether a face belongs to each group , and combined the results to predict the age . Pan et al . ( 2018 ) proposed the mean-variance loss to train a CNN classifier for age estimation . Shen et al . ( 2018 ) proposed the deep regression forests for age estimation . Zhang et al . ( 2019 ) developed a compact age estimator using the two-points representation . Also , Li et al . ( 2019 ) proposed a continuity-aware probabilistic network for age estimation . 3 ORDER LEARNING . 3.1 WHAT IS ORDER ? . Let us first review mathematical definitions and concepts related to order . An order ( Hrbacek & Jech , 1984 ; Bartle , 1976 ) , often denoted by ≤ , is a binary relation on a set Θ = { θ1 , θ2 , · · · , θn } that satisfies the three properties of • Reflexivity : θi ≤ θi for every θi ∈ Θ ; • Antisymmetry : If θi ≤ θj and θj ≤ θi , then θi = θj ; • Transitivity : If θi ≤ θj and θj ≤ θk , then θi ≤ θk . In real-world problems , an order describes ranks or priorities of objects . For example , in age estimation , θi ≤ θj means that people in age class θi look younger than those in θj . We may use the symbol→ , instead of≤ , to denote an order on a finite set Θ . Then , the order can be represented by a directed graph ( Gross & Yellen , 2006 ) using elements in Θ as nodes . If θi → θj , there is a directed edge from node θi to node θj . The order graph is acyclic because of antisymmetry and transitivity . For example , for n , m ∈ N , let n → m denote that m is a multiple of n. Note that it is an order on any subset of N. Figure 1 ( a ) is the graph representing this order on { 1 , . . . , 9 } . Elements θi and θj are comparable if θi → θj or θj → θi , or incomparable otherwise . In Figure 1 ( a ) , 6 and 8 are incomparable . In age estimation , it is difficult to compare apparent ages of people in different ethnic groups or of different genders . An order on a set Θ is total ( or linear ) if all elements in Θ are comparable to one another . In such a case , Θ is called a linearly ordered set . In some real-world problems , orders are not linear . In this work , a subset Θc of Θ is referred to as a chain , if Θc is linearly ordered and also maximal , i.e . there is no proper superset of Θc that is linearly ordered . In Figure 1 ( a ) , nodes 1 , 2 , 4 , and 8 form a chain . In Figure 1 ( b ) , the entire set is composed of three disjoint chains .
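The divisibility example used for Figure 1(a) above is easy to reproduce programmatically. The short sketch below (an editorial addition) builds that order graph on {1, ..., 9}, checks the incomparability of 6 and 8, and confirms that {1, 2, 4, 8} is linearly ordered.

```python
def divisibility_edges(elements):
    """Directed edges of the order graph: n -> m iff m is a (proper) multiple of n."""
    return {(n, m) for n in elements for m in elements if n != m and m % n == 0}

def comparable(n, m, edges):
    """Two elements are comparable iff one precedes the other in the order."""
    return (n, m) in edges or (m, n) in edges

elements = range(1, 10)
edges = divisibility_edges(elements)
print(comparable(6, 8, edges))   # False: 6 and 8 are incomparable
print(all(comparable(a, b, edges)
          for a in (1, 2, 4, 8) for b in (1, 2, 4, 8) if a != b))
# True: {1, 2, 4, 8} is linearly ordered, matching the chain in Figure 1(a)
```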
The paper proposes a method for learning partial orders based on learning fuzzy pairwise comparisons (smaller than / greater than / approximately equal to), together with the retention of a set of representatives in each chain so that consistent rank estimates can be obtained by consistency maximization. The method is applied to the problem of estimating age from face images. Proposed extensions include the learning of multiple disjoint chains on this dataset, either manually partitioned or learned through an iterative assignment algorithm that resembles a soft expectation-maximization principle. Extensive experiments are conducted on the age estimation problem with comparisons to existing approaches, and an application to aesthetic assessment is presented in the appendix.
science.mup_single_document_summarization.18024
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nOrder Learning and Its Application to Age Estimation\n\nMain Content:\n1 INTRODUCTION . To measure the quality of something , we often compare it with other things of a similar kind . Before assigning 4 stars to a film , a critic would have thought , “ It is better than 3-star films but worse than 5-stars. ” This ranking through pairwise comparisons is done in various decision processes ( Saaty , 1977 ) . It is easier to tell the nearer one between two objects in a picture than to estimate the distance of each object directly ( Chen et al. , 2016 ; Lee & Kim , 2019a ) . Also , it is easy to tell a higher pitch between two notes , but absolute pitch is a rare ability ( Bachem , 1955 ) . Ranking through comparisons has been investigated for machine learning . In learning to rank ( LTR ) , the pairwise approach learns , between two documents , which one is more relevant to a query ( Liu , 2009 ) . Also , in ordinal regression ( Frank & Hall , 2001 ; Li & Lin , 2007 ) , to predict the rank of an object , binary classifications are performed to tell whether the rank is higher than a series of thresholds or not . In this paper , we propose order learning to learn ordering relationship between objects . Thus , order learning is related to LTR and ordinal regression . However , whereas LTR and ordinal regression assume that ranks form a total order ( Hrbacek & Jech , 1984 ) , order learning can be used for a partial order as well . Order learning is also related to metric learning ( Xing et al. , 2003 ) . While metric learning is about whether an object is ‘ similar to or dissimilar from ’ another object , order learning is about ‘ greater than or smaller than. ’ Section 2 reviews this related work . In order learning , a set of classes , Θ = { θ1 , θ2 , · · · , θn } , is ordered , where each class θi represents one or more object instances . Between two classes θi and θj , there are three possibilities : θi > θj or θi < θj or neither ( i.e . incomparable ) . These relationships are represented by the order graph . The goal of order learning is to determine the order graph and then classify an instance into one of the classes in Θ . To achieve this , we develop a pairwise comparator that determines ordering relationship between two instances x and y into one of three categories : x is ‘ greater than , ’ ‘ similar to , ’ or ‘ smaller than ’ y . Then , we use the comparator to measure an input instance against multiple reference instances in known classes . Finally , we estimate the class of the input to maximize the consistency among the comparison results . It is noted that the parameter optimization of the pairwise comparator , the selection of the references , and the discovery of the order graph are jointly performed to minimize a common loss function . Section 3 proposes this order learning . We apply order learning to facial age estimation . Order learning matches age estimation well , since it is easier to tell a younger one between two people than to estimate each person ’ s age directly ( Chang et al. , 2010 ; Zhang et al. , 2017a ) . Even when we assume that age classes are linearly ordered , the proposed age estimator performs well . The performance is further improved , when classes are divided into disjoint chains in a supervised manner using gender and ethnic group information or even in an unsupervised manner . 
Section 4 describes this age estimator and discusses its results . Finally , Section 5 concludes this work . 2 RELATED WORK . Pairwise comparison : It is a fundamental problem to estimate the priorities ( or ranks ) of objects through pairwise comparison . In the classic paper , Saaty ( 1977 ) noted that , even when direct estimates of certain quantities are unavailable , rough ratios between them are easily obtained in many cases . Thus , he proposed the scaling method to reconstruct absolute priorities using only relative priorities . The scaling method was applied to monocular depth estimation ( Lee & Kim , 2019a ) and aesthetic assessment ( Lee & Kim , 2019b ) . Ranking from a pairwise comparison matrix has been studied to handle cases , in which the matrix is huge or some elements are noisy ( Braverman & Mossel , 2008 ; Jamieson & Nowak , 2011 ; Negahban et al. , 2012 ; Wauthier et al. , 2013 ) . On the other hand , the pairwise approach to LTR learns , between two documents , which one is more relevant to a query ( Liu , 2009 ; Herbrich et al. , 1999 ; Burges et al. , 2005 ; Tsai et al. , 2007 ) . The proposed order learning is related to LTR , since it also predicts the order between objects . But , while LTR sorts multiple objects with unknown ranks and focuses on the sorting quality , order learning compares a single object x with optimally selected references with known ranks to estimate the rank of x. Ordinal regression : Ordinal regression predicts an ordinal variable ( or rank ) of an instance . Suppose that a 20-year-old is misclassified as a 50-year old and a 25-year old , respectively . The former error should be more penalized than the latter . Ordinal regression exploits this characteristic in the design of a classifier or a regressor . In Frank & Hall ( 2001 ) and Li & Lin ( 2007 ) , a conversion scheme was proposed to transform an ordinal regression problem into multiple binary classification problems . Ordinal regression based on this conversion scheme has been used in various applications , including age estimation ( Chang et al. , 2010 ; 2011 ; Niu et al. , 2016 ; Chen et al. , 2017 ) and monocular depth estimation ( Fu et al. , 2018 ) . Note that order learning is different from ordinal regression . Order learning performs pairwise comparison between objects , instead of directly estimating the rank of each object . In age estimation , ordinal regression based on the conversion scheme is concerned with the problem , “ Is a person ’ s age bigger than a threshold θ ? ” for each θ . In contrast , order learning concerns “ Between two people , who is older ? ” Conceptually , order learning is easier . Technically , if there are N ranks , the conversion scheme requires N − 1 binary classifiers , but order learning needs only a single ternary classifier . Moreover , whereas ordinal regression assumes that ranks form a total order , order learning can be used even in the case of a partial order ( Hrbacek & Jech , 1984 ) . Metric learning : A distance metric can be learned from examples of similar pairs of points and those of dissimilar pairs ( Xing et al. , 2003 ) . The similarity depends on an application and is implicitly defined by user-provided examples . If a learned metric generalizes well to unseen data , it can be used to enforce the desired similarity criterion in clustering ( Xing et al. , 2003 ) , classification ( Weinberger et al. , 2006 ) , or information retrieval ( McFee & Lanckriet , 2010 ) . 
Both metric learning and order learning learn important binary relations in mathematics : metric and order ( Hrbacek & Jech , 1984 ) . However , a metric decides whether an object x is similar to or dissimilar from another object y , whereas an order tells whether x is greater than or smaller than y . Thus , a learned metric is useful for grouping similar data , whereas a learned order is suitable for processing ordered data . Age estimation : Human ages can be estimated from facial appearance ( Kwon & da Vitoria Lobo , 1994 ) . Geng et al . ( 2007 ) proposed the aging pattern subspace , and Guo et al . ( 2009 ) introduced biologically inspired features to age estimation . Recently , deep learning has been adopted for age estimation . Niu et al . ( 2016 ) proposed OR-CNN for age estimation , which is an ordinal regressor using the conversion scheme . Chen et al . ( 2017 ) proposed Ranking-CNN , which is another ordinal regressor . While OR-CNN uses a common feature for multiple binary classifiers , Ranking-CNN employs a separate CNN to extract a feature for each binary classifier . Tan et al . ( 2018 ) grouped adjacent ages via the group-n encoding , determined whether a face belongs to each group , and combined the results to predict the age . Pan et al . ( 2018 ) proposed the mean-variance loss to train a CNN classifier for age estimation . Shen et al . ( 2018 ) proposed the deep regression forests for age estimation . Zhang et al . ( 2019 ) developed a compact age estimator using the two-points representation . Also , Li et al . ( 2019 ) proposed a continuity-aware probabilistic network for age estimation . 3 ORDER LEARNING . 3.1 WHAT IS ORDER ? . Let us first review mathematical definitions and concepts related to order . An order ( Hrbacek & Jech , 1984 ; Bartle , 1976 ) , often denoted by ≤ , is a binary relation on a set Θ = { θ1 , θ2 , · · · , θn } that satisfies the three properties of • Reflexivity : θi ≤ θi for every θi ∈ Θ ; • Antisymmetry : If θi ≤ θj and θj ≤ θi , then θi = θj ; • Transitivity : If θi ≤ θj and θj ≤ θk , then θi ≤ θk . In real-world problems , an order describes ranks or priorities of objects . For example , in age estimation , θi ≤ θj means that people in age class θi look younger than those in θj . We may use the symbol→ , instead of≤ , to denote an order on a finite set Θ . Then , the order can be represented by a directed graph ( Gross & Yellen , 2006 ) using elements in Θ as nodes . If θi → θj , there is a directed edge from node θi to node θj . The order graph is acyclic because of antisymmetry and transitivity . For example , for n , m ∈ N , let n → m denote that m is a multiple of n. Note that it is an order on any subset of N. Figure 1 ( a ) is the graph representing this order on { 1 , . . . , 9 } . Elements θi and θj are comparable if θi → θj or θj → θi , or incomparable otherwise . In Figure 1 ( a ) , 6 and 8 are incomparable . In age estimation , it is difficult to compare apparent ages of people in different ethnic groups or of different genders . An order on a set Θ is total ( or linear ) if all elements in Θ are comparable to one another . In such a case , Θ is called a linearly ordered set . In some real-world problems , orders are not linear . In this work , a subset Θc of Θ is referred to as a chain , if Θc is linearly ordered and also maximal , i.e . there is no proper superset of Θc that is linearly ordered . In Figure 1 ( a ) , nodes 1 , 2 , 4 , and 8 form a chain . 
In Figure 1 ( b ) , the entire set is composed of three disjoint chains .", "role": "user" }, { "content": "The paper proposes a method for learning partial orders based on learning fuzzy pairwise comparisons (smaller/greater/approximatively equal to), and the retaining of a set of representants in each chain in order to allow consistent result by consistency maximization. The method is applied to the problem of estimating the age based on faces. Extensions proposed are the learning of multiple disjoint chains on this dataset, manually partitioned or learned through an iterative assignment algorithm that has similarities with a soft expectation-maximization principle. Extensive experiments are done on the age estimation problem with comparison to exisiting approaches, and an application to aesthetic assessment is proposed in appendix.", "role": "assistant" } ]
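A minimal sketch of the order definitions quoted above: the divisibility relation on {1, ..., 9}, its three axioms, the incomparability of 6 and 8, and the chain {1, 2, 4, 8}. This is an illustration written for this dump under those stated assumptions, not code from the cited papers, and every name in it is mine.

```python
# Divisibility order on {1,...,9}: a -> b iff b is a multiple of a (as in the excerpt).
from itertools import combinations

nodes = range(1, 10)
leq = {(a, b) for a in nodes for b in nodes if b % a == 0}

reflexive = all((a, a) in leq for a in nodes)
antisymmetric = all(a == b or not ((a, b) in leq and (b, a) in leq)
                    for a in nodes for b in nodes)
transitive = all((a, c) in leq
                 for (a, b) in leq for (b2, c) in leq if b == b2)
print(reflexive, antisymmetric, transitive)      # True True True

def comparable(a, b):
    return (a, b) in leq or (b, a) in leq

print(comparable(6, 8))                          # False: 6 and 8 are incomparable
print(all(comparable(a, b) for a, b in combinations([1, 2, 4, 8], 2)))  # True: a chain
```

The excerpt's contrast between the conversion scheme and order learning then amounts to replacing N - 1 threshold questions ("is the age above theta?") with a single ternary comparison ("which of the two is older, or are they about the same?").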
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Distant CREs are important for transcriptional regulation. Research Abstract: The mammalian radiation has corresponded with rapid changes in noncoding regions of the genome, but we lack a comprehensive understanding of regulatory evolution in mammals. Here, we track the evolution of promoters and enhancers active in liver across 20 mammalian species from six diverse orders by profiling genomic enrichment of H3K27 acetylation and H3K4 trimethylation. We report that rapid evolution of enhancers is a universal feature of mammalian genomes. Most of the recently evolved enhancers arise from ancestral DNA exaptation, rather than lineage-specific expansions of repeat elements. In contrast, almost all liver promoters are partially or fully conserved across these species. Our data further reveal that recently evolved enhancers can be associated with genes under positive selection, demonstrating the power of this approach for annotating regulatory adaptations in genomic sequences. These results provide important insight into the functional genetics underpinning mammalian regulatory evolution.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.228
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nDistant CREs are important for transcriptional regulation.\n\nResearch Abstract:\nThe mammalian radiation has corresponded with rapid changes in noncoding regions of the genome, but we lack a comprehensive understanding of regulatory evolution in mammals.\nHere, we track the evolution of promoters and enhancers active in liver across 20 mammalian species from six diverse orders by profiling genomic enrichment of H3K27 acetylation and H3K4 trimethylation.\nWe report that rapid evolution of enhancers is a universal feature of mammalian genomes.\nMost of the recently evolved enhancers arise from ancestral DNA exaptation, rather than lineage-specific expansions of repeat elements.\nIn contrast, almost all liver promoters are partially or fully conserved across these species.\nOur data further reveal that recently evolved enhancers can be associated with genes under positive selection, demonstrating the power of this approach for annotating regulatory adaptations in genomic sequences.\nThese results provide important insight into the functional genetics underpinning mammalian regulatory evolution.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Leaf extracts of Scoparia dulcis, is used as a herbal remedy by diabetics worldwide. Fresh Scoparia dulcis porridge elicited a low glycaemic index (GI) and anti-hyperglycaemic effects when fed to diabetic Wistar rats. Commercially produced Scoparia dulcis porridge (SDC) elicited medium GI. Present study was aimed at studying the anti-diabetic effects of consumption of commercially produced S. dulcis porridge. ABSTRACT.METHOD: A randomized crossover clinical trial with type 2 diabetic patients (n = 35) on medication, with mild and moderate diabetes [fasting blood glucose (FBG) 126–300 mg/dL, age 35–70 years] was conducted. Within the first three months (study period 1) group 1 was the test and group 2 was the control. Following a wash-out period, the two groups were crossed over (study period 2: group 1 – control; group 2 - test). Test group consumed commercially produced SDC for 3 days/week for three months and the control group any other food. At the onset and end of each study period glucose measurements [Fasting Blood Glucose (FBG), HbA1c], lipid measurements (total cholesterol, HDL-C, LDL-C, triglycerides, cholesterol ratios), toxicity parameters (liver enzymes, creatinine, CRP, eGFR) were analyzed by enzyme assay kit methods using a KONELAB 20XT auto analyzer. Significances between groups were analyzed by one way ANOVA (normal distribution) and Mann Whitney test (if the values were not normally distributed). Within group comparisons were carried out by Bonferroni post hoc test. ABSTRACT.RESULTS: During the crossover clinical trial HbA1c of group 1 decreased from 7.9 ± 0.5 to 6.5 ± 0.3 (p = 0.003) while HbA1c of group 2 decreased from 7.0 ± 0.3to 6.7 ± 0.3 while in the test group. Therefore, both test groups (1 and 2) elicited a decrease in HbA1c compared to respective control groups. Both test groups elicited a non significant decrease in FBG following the intervention (group 1 - from 174 ± 14 to 160 ± 10 mg/dL; group 2 - from 183 ± 13 to 160 ± 7 mg/dL). No significant differences (p >0.05) in insulin, cholesterol measurements (total cholesterol, LDL-C, HDL-C, triglycerides and cholesterol ratios) and atherogenic index between or within groups were observed. All other measurements (AST, ALT, ALP, creatinine, CRP, eGFR) were normal and not significantly different between or within groups. ABSTRACT.CONCLUSION: Porridge made with SDC leaf extract decreased FBG and HbA1c (p >0.05) of type 2 diabetic patients. The porridge had no effect on cholesterol measurements and no toxicity was observed at the dose tested. Therefore, the SDC porridge can be recommended as a suitable meal for diabetic patients. Extracted Data:
[["Commercially produced Scoparia dulcis porridge (SDC)", "Any other food", "Liver enzymes", "no significant difference", "All other measurements (AST, ALT, ALP, creatinine, CRP, eGFR) were normal and not significantly different between or within groups."], ["Commercially produced Scoparia dulcis porridge (SDC)", "Any other food", "Atherogenic index", "no significant difference", "No significant differences (p \u003e0.05) in insulin, cholesterol measurements (total cholesterol, LDL-C, HDL-C, triglycerides and cholesterol ratios) and atherogenic index between or within groups were observed."], ["Commercially produced Scoparia dulcis porridge (SDC)", "Any other food", "Fasting blood glucose", "no significant difference", "Both test groups elicited a non significant decrease in FBG following the intervention (group 1 - from 174 \u00b1 14 to 160 \u00b1 10 mg/dL; group 2 - from 183 \u00b1 13 to 160 \u00b1 7 mg/dL)."], ["Commercially produced Scoparia dulcis porridge (SDC)", "Any other food", "HbA1c levels", "significantly decreased", "both test groups (1 and 2) elicited a decrease in HbA1c compared to respective control groups."], ["Commercially produced Scoparia dulcis porridge (SDC)", "Any other food", "Insulin and cholesterol levels", "no significant difference", "No significant differences (p \u003e0.05) in insulin, cholesterol measurements (total cholesterol, LDL-C, HDL-C, triglycerides and cholesterol ratios) and atherogenic index between or within groups were observed."]]
science.evidence_inference.27
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Leaf extracts of Scoparia dulcis, is used as a herbal remedy by diabetics worldwide. Fresh Scoparia dulcis porridge elicited a low glycaemic index (GI) and anti-hyperglycaemic effects when fed to diabetic Wistar rats. Commercially produced Scoparia dulcis porridge (SDC) elicited medium GI. Present study was aimed at studying the anti-diabetic effects of consumption of commercially produced S. dulcis porridge.\n\nABSTRACT.METHOD: A randomized crossover clinical trial with type 2 diabetic patients (n = 35) on medication, with mild and moderate diabetes [fasting blood glucose (FBG) 126–300 mg/dL, age 35–70 years] was conducted. Within the first three months (study period 1) group 1 was the test and group 2 was the control. Following a wash-out period, the two groups were crossed over (study period 2: group 1 – control; group 2 - test). Test group consumed commercially produced SDC for 3 days/week for three months and the control group any other food. At the onset and end of each study period glucose measurements [Fasting Blood Glucose (FBG), HbA1c], lipid measurements (total cholesterol, HDL-C, LDL-C, triglycerides, cholesterol ratios), toxicity parameters (liver enzymes, creatinine, CRP, eGFR) were analyzed by enzyme assay kit methods using a KONELAB 20XT auto analyzer. Significances between groups were analyzed by one way ANOVA (normal distribution) and Mann Whitney test (if the values were not normally distributed). Within group comparisons were carried out by Bonferroni post hoc test.\n\nABSTRACT.RESULTS: During the crossover clinical trial HbA1c of group 1 decreased from 7.9 ± 0.5 to 6.5 ± 0.3 (p = 0.003) while HbA1c of group 2 decreased from 7.0 ± 0.3to 6.7 ± 0.3 while in the test group. Therefore, both test groups (1 and 2) elicited a decrease in HbA1c compared to respective control groups. Both test groups elicited a non significant decrease in FBG following the intervention (group 1 - from 174 ± 14 to 160 ± 10 mg/dL; group 2 - from 183 ± 13 to 160 ± 7 mg/dL). No significant differences (p >0.05) in insulin, cholesterol measurements (total cholesterol, LDL-C, HDL-C, triglycerides and cholesterol ratios) and atherogenic index between or within groups were observed. All other measurements (AST, ALT, ALP, creatinine, CRP, eGFR) were normal and not significantly different between or within groups.\n\nABSTRACT.CONCLUSION: Porridge made with SDC leaf extract decreased FBG and HbA1c (p >0.05) of type 2 diabetic patients. The porridge had no effect on cholesterol measurements and no toxicity was observed at the dose tested. 
Therefore, the SDC porridge can be recommended as a suitable meal for diabetic patients.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Commercially produced Scoparia dulcis porridge (SDC)\", \"Any other food\", \"Liver enzymes\", \"no significant difference\", \"All other measurements (AST, ALT, ALP, creatinine, CRP, eGFR) were normal and not significantly different between or within groups.\"], [\"Commercially produced Scoparia dulcis porridge (SDC)\", \"Any other food\", \"Atherogenic index\", \"no significant difference\", \"No significant differences (p \\u003e0.05) in insulin, cholesterol measurements (total cholesterol, LDL-C, HDL-C, triglycerides and cholesterol ratios) and atherogenic index between or within groups were observed.\"], [\"Commercially produced Scoparia dulcis porridge (SDC)\", \"Any other food\", \"Fasting blood glucose\", \"no significant difference\", \"Both test groups elicited a non significant decrease in FBG following the intervention (group 1 - from 174 \\u00b1 14 to 160 \\u00b1 10 mg/dL; group 2 - from 183 \\u00b1 13 to 160 \\u00b1 7 mg/dL).\"], [\"Commercially produced Scoparia dulcis porridge (SDC)\", \"Any other food\", \"HbA1c levels\", \"significantly decreased\", \"both test groups (1 and 2) elicited a decrease in HbA1c compared to respective control groups.\"], [\"Commercially produced Scoparia dulcis porridge (SDC)\", \"Any other food\", \"Insulin and cholesterol levels\", \"no significant difference\", \"No significant differences (p \\u003e0.05) in insulin, cholesterol measurements (total cholesterol, LDL-C, HDL-C, triglycerides and cholesterol ratios) and atherogenic index between or within groups were observed.\"]]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Converting apoE4 to apoE3 by gene editing worsens the pathology associated with apoE4 in human iPSCderived neurons. Research Abstract: Efforts to develop drugs for Alzheimer's disease (AD) have shown promise in animal studies, only to fail in human trials, suggesting a pressing need to study AD in human model systems. Using human neurons derived from induced pluripotent stem cells that expressed apolipoprotein E4 (ApoE4), a variant of the APOE gene product and the major genetic risk factor for AD, we demonstrated that ApoE4-expressing neurons had higher levels of tau phosphorylation, unrelated to their increased production of amyloid-β (Aβ) peptides, and that they displayed GABAergic neuron degeneration. ApoE4 increased Aβ production in human, but not in mouse, neurons. Converting ApoE4 to ApoE3 by gene editing rescued these phenotypes, indicating the specific effects of ApoE4. Neurons that lacked APOE behaved similarly to those expressing ApoE3, and the introduction of ApoE4 expression recapitulated the pathological phenotypes, suggesting a gain of toxic effects from ApoE4. Treatment of ApoE4-expressing neurons with a small-molecule structure corrector ameliorated the detrimental effects, thus showing that correcting the pathogenic conformation of ApoE4 is a viable therapeutic approach for ApoE4-related AD.
{"verdict": "CONTRADICT", "evidence": ["Converting ApoE4 to ApoE3 by gene editing rescued these phenotypes, indicating the specific effects of ApoE4."]}
science.scifact_entailment.191
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nConverting apoE4 to apoE3 by gene editing worsens the pathology associated with apoE4 in human iPSCderived neurons.\n\nResearch Abstract:\nEfforts to develop drugs for Alzheimer's disease (AD) have shown promise in animal studies, only to fail in human trials, suggesting a pressing need to study AD in human model systems.\nUsing human neurons derived from induced pluripotent stem cells that expressed apolipoprotein E4 (ApoE4), a variant of the APOE gene product and the major genetic risk factor for AD, we demonstrated that ApoE4-expressing neurons had higher levels of tau phosphorylation, unrelated to their increased production of amyloid-β (Aβ) peptides, and that they displayed GABAergic neuron degeneration.\nApoE4 increased Aβ production in human, but not in mouse, neurons.\nConverting ApoE4 to ApoE3 by gene editing rescued these phenotypes, indicating the specific effects of ApoE4.\nNeurons that lacked APOE behaved similarly to those expressing ApoE3, and the introduction of ApoE4 expression recapitulated the pathological phenotypes, suggesting a gain of toxic effects from ApoE4.\nTreatment of ApoE4-expressing neurons with a small-molecule structure corrector ameliorated the detrimental effects, thus showing that correcting the pathogenic conformation of ApoE4 is a viable therapeutic approach for ApoE4-related AD.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Converting ApoE4 to ApoE3 by gene editing rescued these phenotypes, indicating the specific effects of ApoE4.\"]}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions Main Content: 1 Introduction . There are plenty of multimodal data involved in the real world , and we experience the world from different modalities [ 1 ] . For example , the autonomous driving systems are usually equipped with multiple sensors to collect information from different perspectives [ 2 ] . In medical diagnosis [ 3 ] , multimodal data usually come from different types of examinations , typically including a variety of clinical data . Effectively exploiting the information of different sources to improve learning performance is a long-standing and challenging goal in machine learning . ∗Equal contribution . †Corresponding author . Please correspond to : zhangchangqing @ tju.edu.cn . 3Code : https : //github.com/MaHuanAAA/MoNIG . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . Most multimodal regression methods [ 4 , 5 , 6 , 7 ] usually focus on improving the regression performance by exploiting the complementary information among multiple modalities . Despite effectiveness , it is quite risky for these methods to be deployed in cost-sensitive applications due to the lack of reliability and interpretability . One underlying deficiency is that traditional models usually assume the quality of each modality is basically stable , which limits them to produce reliable prediction , especially when some modalities are noisy or even corrupted [ 8 , 9 ] . Moreover , existing models only output ( over-confident ) predictions [ 10 , 11 ] , which can not well support safe decision and might be disastrous for safety-critical applications . Uncertainty estimation provides a way for trustworthy prediction [ 12 , 13 ] . The decisions made by models without uncertainty estimation are untrustworthy because they are prone to be affected by noises or limited training data . Therefore , it is highly desirable to characterize uncertainty in the learning for AI-based systems . More specifically , when a model is given an input that has never been seen or is severely contaminated , it should be able to express “ I don ’ t know ” . The untrustworthy models are vulnerable to be attacked and it may also lead to wrong decisions , where the cost is often unbearable in critical domains [ 14 ] . Basically , it can endow the model with trustworthiness by dynamically modeling uncertainty . Therefore , we propose a novel algorithm that conducts multimodal regression in a trustworthy manner . Specifically , our proposed algorithm is a unified framework modeling uncertainty under a fully probabilistic framework . Our model integrates multiple modalities by introducing a Mixture of Normal-inverse Gamma distributions ( MoNIG ) , which hierarchically characterizes the uncertainties and accordingly promotes both regression accuracy and trustworthiness . In summary , the contributions of this work include : ( 1 ) We propose a novel trustworthy multimodal regression algorithm . Our method effectively fuses multiple modalities under the evidential regression framework equipped with modality-specific uncertainty and global uncertainty . 
( 2 ) To integrate different modalities , a novel MoNIG is designed to be dynamically aware of modality-specific noise/corruption with the estimated uncertainty , which promisingly supports trustworthy decision making and also significantly promotes robustness as well . ( 3 ) We conduct extensive experiments on both synthetic and real-application data , which validate the effectiveness , robustness , and reliability of the proposed model on different multimodal regression tasks ( e.g. , critical temperature prediction for superconductivity [ 15 ] , relative location of CT slices , and human multimodal sentiment analysis ) . 2 Related Work . 2.1 Uncertainty Estimation . Quantifying the uncertainty of machine learning models has received extensive attention [ 16 , 17 ] , especially when the systems are deployed in safety-critical domains , such as autonomous vehicle control [ 18 ] and medical diagnosis [ 3 ] . Bayesian neural networks [ 19 , 20 ] model the uncertainty by placing a distribution over model parameters and marginalizing these parameters to form a predictive distribution . Due to the huge parameter space of modern neural networks , Bayesian neural networks are highly non-convex and difficult in inference . To tackle this problem , [ 21 ] extends Variational Dropout [ 22 ] to the case when dropout rates are unbounded , and proposes a way to reduce the variance of the gradient estimator . A more scalable alternative way is MC Dropout [ 23 ] , which is simple to implement and has been successfully applied to downstream tasks [ 14 , 24 ] . Deep ensembles [ 25 ] have shown strong power in both classification accuracy and uncertainty estimation . It is observed that deep ensembles consistently outperform Bayesian neural networks that are trained using variational inference [ 10 ] . However , the memory and computational cost are quite high . For this issue , different deep sub-networks with shared parameters are trained for integration [ 26 ] . Deterministic uncertainty methods are designed to directly output the uncertainty and alleviate the overconfidence . Built upon RBF networks , [ 27 ] is able to identify the out-of-distribution samples . [ 28 ] introduces a new target criterion for model confidence , known as True Class Probability ( TCP ) , to ensure the low confidence for the failure predictions . The recent approach employs the focal loss to calibrate the deep neural networks [ 29 ] . [ 30 ] places Dirichlet priors over discrete classification predictions and regularizes divergence to a well-defined prior . Our model is inspired by deep evidential regression [ 31 ] which is designed for single-modal data . 2.2 Multimodal Learning . Multimodal machine learning aims to build models that can jointly exploit information from multiple modalities [ 1 , 32 ] . Existing multimodal learning methods have achieved significant progress by integrating different modalities at different stages , namely early , intermediate and late fusion [ 1 , 33 ] . The early fusion methods usually integrate the original data or preprocessed features by simply concatenation [ 34 , 35 ] . The intermediate fusion provides a more flexible strategy to exploit multimodal data for diverse practical applications [ 4 , 36 , 37 , 38 , 39 , 40 , 41 ] . In late fusion , each modality is utilized to train a separate model and the final output is obtained by combining the predictions of these multiple models [ 5 , 7 , 37 ] . 
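The MC Dropout technique cited in the related-work passage above is described as simple to implement; the toy sketch below illustrates the general idea only (dropout kept active at test time, T stochastic forward passes, predictive mean and variance read off the samples). It uses made-up weights and is not code from any of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 1))      # a tiny fixed two-layer net

def stochastic_forward(x, p_drop=0.5):
    h = np.maximum(x @ W1, 0.0)                                  # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop                          # dropout stays on at test time
    return (h * mask / (1.0 - p_drop)) @ W2

x = rng.normal(size=(1, 8))
samples = np.stack([stochastic_forward(x) for _ in range(100)])  # T = 100 stochastic passes
print(samples.mean(axis=0), samples.var(axis=0))                 # predictive mean and spread
```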
Although these multimodal learning approaches exploit the complementary information of multiple modalities from different perspectives , basically they are weak in modeling uncertainty which is important for trustworthy prediction especially when they are deployed in safety-critical domains . 3 Background . Regression has been widely used across a wide spectrum of applications . Given a dataset D = { xi , yi } Ni=1 , one principled way for regression is from a maximum likelihood perspective , where the likelihood for parameters w given the observed data D is as follows : p ( y | Dx , w ) = N∏ i=1 p ( yi | w , xi ) . ( 1 ) In practice , a widely used maximum likelihood estimation is based on Gaussian distribution , which assumes that the target y is drawn from a Gaussian distribution . Then we have the following expression of likelihood function : p ( y | Dx , w , σ2 ) = N∏ i=1 N ( yi | f ( xi , w ) , σ2 ) . ( 2 ) The mean and variance indicate the prediction and corresponding predictive uncertainty , while the predictive uncertainty consists of two parts [ 42 ] : epistemic uncertainty ( EU ) and aleatoric uncertainty ( AU ) . To learn both the aleatoric and epistemic uncertainties explicitly , the mean and variance are assumed to be drawn from Gaussian and Inverse-Gamma distributions [ 31 ] , respectively . Then the Normal Inverse-Gamma ( NIG ) distribution with parameters τ = ( δ , γ , α , β ) can be considered as a higher-order conjugate prior of the Gaussian distribution parameterized with θ = ( µm , σ 2 ) : yi ∼ N ( µ , σ2 ) , µ ∼ N ( δ , σ2γ−1 ) , σ2 ∼ Γ−1 ( α , β ) , ( 3 ) where Γ ( · ) is the gamma function . In this case , the distribution of y takes the form of a NIG distribution NIG ( δ , γ , α , β ) : p ( µ , σ2︸ ︷︷ ︸ θ | δ , γ , α , β︸ ︷︷ ︸ τ ) = βα Γ ( α ) √ γ σ √ 2π ( 1 σ2 ) α+1 exp { −2β + γ ( δ − µ ) 2 2σ2 } , ( 4 ) where δ ∈ R , γ > 0 , α > 1 , and β > 0 . During the training stage , the following loss is induced to minimize the negative log likelihood loss : LNLL ( w ) = 1 2 log ( π γ ) − α log ( Ω ) + ( α+ 1 2 ) log ( ( y − δ ) 2γ + Ω ) + log Ψ , ( 5 ) where Ω = 2β ( 1 + γ ) and Ψ = ( Γ ( α ) Γ ( α+ 12 ) ) . The total loss , L ( w ) , consists of two terms for maximizing the likelihood function and regularizing evidence : L ( w ) = LNLL ( w ) + λLR ( w ) , ( 6 ) where LR ( w ) is the penalty for incorrect evidence ( more details are shown in the supplement ) , and the coefficient λ > 0 balances these two loss terms . For multimodal regression [ 5 , 7 , 6 ] , some approaches have achieved impressive performance . However , existing multimodal regression algorithms mainly focus on improving the accuracy and provide a deterministic prediction without the information of uncertainty , making these models limited for trustworthy decision making . For this issue , we develop a novel algorithm which can capture the modality-specific uncertainty and accordingly induces a reliable overall uncertainty for the final output . Beyond trustworthy decision , our approach automatically alleviates the impact from heavily noisy or corrupted modality . Compared with the intermediate fusion [ 4 , 43 , 44 ] , our method can effectively distinguish which modality is noisy or corrupted for different samples , and accordingly can take the modality-specific uncertainty into account for robust integration . 4 Trustworthy Multimodal Regression . In this section , we introduce the proposed algorithm fusing multiple modalities at both feature and predictive distribution levels . 
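Before the method section continues below, here is a direct transcription of the NIG negative log-likelihood in Eq. (5) above, with Omega = 2*beta*(1+gamma) and the garbled Psi term read as the ratio Gamma(alpha)/Gamma(alpha + 1/2), which is how it appears in deep evidential regression. This is a sketch written for this dump rather than the authors' code, and the evidence penalty L_R is omitted because the excerpt defers its exact form to the supplement.

```python
import numpy as np
from scipy.special import gammaln

def nig_nll(y, delta, gamma, alpha, beta):
    """delta: predicted mean; gamma > 0, alpha > 1, beta > 0 (per-sample arrays)."""
    omega = 2.0 * beta * (1.0 + gamma)
    return (0.5 * np.log(np.pi / gamma)
            - alpha * np.log(omega)
            + (alpha + 0.5) * np.log((y - delta) ** 2 * gamma + omega)
            + gammaln(alpha) - gammaln(alpha + 0.5))

# toy check: the loss is finite and shrinks as the predicted mean approaches the target
y = np.array([1.0])
for delta in (3.0, 2.0, 1.0):
    print(float(nig_nll(y, delta, gamma=1.0, alpha=2.0, beta=1.0)))
```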
Specifically , we train deep neural networks to model the hyperparameters of the higher-order evidential distributions of multiple modalities ( and the pseudo modality ) and then merge these predicted distributions into one by the mixture of NIGs ( MoNIG ) . In the following subsections , we first provide discussion about existing fusion strategies and ours in multimodal regression . Then we introduce MoNIG to integrate NIG distributions of different modalities in principle . Finally , we elaborate the overall pipeline of our model in training and define the epistemic uncertainty and aleatoric uncertainty respectively . 4.1 Overview of Fusion Strategy . Consider the following multimodal regression problem : given a datasetD = { { xmi } M m=1 , yi } , where xmi is the input feature vector of the m-th modality of the i-th sample , yi is the corresponding target , and M is the total number of modalities , the intuitive goal is to learn a function fm for each modality reaching the following target : min ∑M m=1 ( fm ( xm , wm ) − y ) 2 . While to model the uncertainty , we assume that the observed target y is drawn from a Gaussian distribution , i.e. , y ∼ N ( µm , σ2m ) . Meanwhile , inspired by [ 31 ] , as shown in Eq . 3 , we assume that the mean and variance are drawn from Gaussian and Inverse-Gamma distributions , respectively . Considering multimodal fusion , there are typically two representative strategies , i.e. , feature fusion and decision fusion . Feature fusion ( Fig . 1 ( a ) ) jointly learns unified representations and based on which conducts regression . Although simple , the limitation is also obvious . If some modalities of some samples are noisy or corrupted , the algorithms based on this strategy may fail . In other words , these methods do not take the modality-specific uncertainty into account and can not be adaptively aware of quality variation of different modalities for different samples . The other representative strategy is known as decision fusion ( Fig . 1 ( b ) ) which trains one regression model for each modality , and the final prediction is regarded as a function of the predictions of all modalities . Although flexible , it is also risky for the cases that each modality is insufficient to make trustworthy decision due to partial observation . We propose MoNIG , which can elegantly address the above issues in a unified framework . Firstly , we explicitly represent both modality-specific and global uncertainties to endow our model with the ability of adaption for dynamical modality variation . The modality-specific uncertainty can dynamically guide the integration for different samples and the global uncertainty represents the uncertainty of the final prediction . Secondly , we introduce the feature level fusion producing a pseudo modality to make full use of the complementary information and , accordingly generate an enhanced branch to supplement the decision fusion ( Fig 1 ( c ) ) .
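An architectural skeleton of the setup this overview describes: one NIG head per modality, a concatenation-based pseudo modality as the feature-fusion branch, and a final combination of the per-branch NIG outputs. The excerpt does not give the actual MoNIG combination rule at this point, so the parameter averaging below is only a placeholder assumption, as are all names and shapes.

```python
import numpy as np

rng = np.random.default_rng(0)

def nig_head(feat, W):
    raw = feat @ W                                   # four raw outputs per sample
    delta = raw[..., 0]
    gamma = np.logaddexp(0.0, raw[..., 1])           # softplus keeps gamma > 0
    alpha = np.logaddexp(0.0, raw[..., 2]) + 1.0     # alpha > 1
    beta = np.logaddexp(0.0, raw[..., 3])            # beta > 0
    return np.stack([delta, gamma, alpha, beta], axis=-1)

x1, x2 = rng.normal(size=(5, 16)), rng.normal(size=(5, 8))   # two modalities, 5 samples
W1, W2 = rng.normal(size=(16, 4)), rng.normal(size=(8, 4))
W_pseudo = rng.normal(size=(24, 4))

branches = [
    nig_head(x1, W1),
    nig_head(x2, W2),
    nig_head(np.concatenate([x1, x2], axis=1), W_pseudo),    # pseudo modality (feature fusion)
]
fused = np.mean(branches, axis=0)    # placeholder combination, NOT the paper's MoNIG rule
print(fused.shape)                   # (5, 4): one NIG parameter set per sample
```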
This paper tackles a shortcoming of existing multimodal learning methods, which mainly focus on integrating information from different sources for prediction but ignore the confidence of that prediction. The authors propose a Mixture of Normal-inverse Gamma distributions (MoNIG) algorithm that estimates uncertainty for multimodal learning. Their model estimates uncertainty for each modality and also handles settings with corrupted modalities.
science.mup_single_document_summarization.10760
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nTrustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions\n\nMain Content:\n1 Introduction . There are plenty of multimodal data involved in the real world , and we experience the world from different modalities [ 1 ] . For example , the autonomous driving systems are usually equipped with multiple sensors to collect information from different perspectives [ 2 ] . In medical diagnosis [ 3 ] , multimodal data usually come from different types of examinations , typically including a variety of clinical data . Effectively exploiting the information of different sources to improve learning performance is a long-standing and challenging goal in machine learning . ∗Equal contribution . †Corresponding author . Please correspond to : zhangchangqing @ tju.edu.cn . 3Code : https : //github.com/MaHuanAAA/MoNIG . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . Most multimodal regression methods [ 4 , 5 , 6 , 7 ] usually focus on improving the regression performance by exploiting the complementary information among multiple modalities . Despite effectiveness , it is quite risky for these methods to be deployed in cost-sensitive applications due to the lack of reliability and interpretability . One underlying deficiency is that traditional models usually assume the quality of each modality is basically stable , which limits them to produce reliable prediction , especially when some modalities are noisy or even corrupted [ 8 , 9 ] . Moreover , existing models only output ( over-confident ) predictions [ 10 , 11 ] , which can not well support safe decision and might be disastrous for safety-critical applications . Uncertainty estimation provides a way for trustworthy prediction [ 12 , 13 ] . The decisions made by models without uncertainty estimation are untrustworthy because they are prone to be affected by noises or limited training data . Therefore , it is highly desirable to characterize uncertainty in the learning for AI-based systems . More specifically , when a model is given an input that has never been seen or is severely contaminated , it should be able to express “ I don ’ t know ” . The untrustworthy models are vulnerable to be attacked and it may also lead to wrong decisions , where the cost is often unbearable in critical domains [ 14 ] . Basically , it can endow the model with trustworthiness by dynamically modeling uncertainty . Therefore , we propose a novel algorithm that conducts multimodal regression in a trustworthy manner . Specifically , our proposed algorithm is a unified framework modeling uncertainty under a fully probabilistic framework . Our model integrates multiple modalities by introducing a Mixture of Normal-inverse Gamma distributions ( MoNIG ) , which hierarchically characterizes the uncertainties and accordingly promotes both regression accuracy and trustworthiness . In summary , the contributions of this work include : ( 1 ) We propose a novel trustworthy multimodal regression algorithm . Our method effectively fuses multiple modalities under the evidential regression framework equipped with modality-specific uncertainty and global uncertainty . 
( 2 ) To integrate different modalities , a novel MoNIG is designed to be dynamically aware of modality-specific noise/corruption with the estimated uncertainty , which promisingly supports trustworthy decision making and also significantly promotes robustness as well . ( 3 ) We conduct extensive experiments on both synthetic and real-application data , which validate the effectiveness , robustness , and reliability of the proposed model on different multimodal regression tasks ( e.g. , critical temperature prediction for superconductivity [ 15 ] , relative location of CT slices , and human multimodal sentiment analysis ) . 2 Related Work . 2.1 Uncertainty Estimation . Quantifying the uncertainty of machine learning models has received extensive attention [ 16 , 17 ] , especially when the systems are deployed in safety-critical domains , such as autonomous vehicle control [ 18 ] and medical diagnosis [ 3 ] . Bayesian neural networks [ 19 , 20 ] model the uncertainty by placing a distribution over model parameters and marginalizing these parameters to form a predictive distribution . Due to the huge parameter space of modern neural networks , Bayesian neural networks are highly non-convex and difficult in inference . To tackle this problem , [ 21 ] extends Variational Dropout [ 22 ] to the case when dropout rates are unbounded , and proposes a way to reduce the variance of the gradient estimator . A more scalable alternative way is MC Dropout [ 23 ] , which is simple to implement and has been successfully applied to downstream tasks [ 14 , 24 ] . Deep ensembles [ 25 ] have shown strong power in both classification accuracy and uncertainty estimation . It is observed that deep ensembles consistently outperform Bayesian neural networks that are trained using variational inference [ 10 ] . However , the memory and computational cost are quite high . For this issue , different deep sub-networks with shared parameters are trained for integration [ 26 ] . Deterministic uncertainty methods are designed to directly output the uncertainty and alleviate the overconfidence . Built upon RBF networks , [ 27 ] is able to identify the out-of-distribution samples . [ 28 ] introduces a new target criterion for model confidence , known as True Class Probability ( TCP ) , to ensure the low confidence for the failure predictions . The recent approach employs the focal loss to calibrate the deep neural networks [ 29 ] . [ 30 ] places Dirichlet priors over discrete classification predictions and regularizes divergence to a well-defined prior . Our model is inspired by deep evidential regression [ 31 ] which is designed for single-modal data . 2.2 Multimodal Learning . Multimodal machine learning aims to build models that can jointly exploit information from multiple modalities [ 1 , 32 ] . Existing multimodal learning methods have achieved significant progress by integrating different modalities at different stages , namely early , intermediate and late fusion [ 1 , 33 ] . The early fusion methods usually integrate the original data or preprocessed features by simply concatenation [ 34 , 35 ] . The intermediate fusion provides a more flexible strategy to exploit multimodal data for diverse practical applications [ 4 , 36 , 37 , 38 , 39 , 40 , 41 ] . In late fusion , each modality is utilized to train a separate model and the final output is obtained by combining the predictions of these multiple models [ 5 , 7 , 37 ] . 
Although these multimodal learning approaches exploit the complementary information of multiple modalities from different perspectives , basically they are weak in modeling uncertainty which is important for trustworthy prediction especially when they are deployed in safety-critical domains . 3 Background . Regression has been widely used across a wide spectrum of applications . Given a dataset D = { xi , yi } Ni=1 , one principled way for regression is from a maximum likelihood perspective , where the likelihood for parameters w given the observed data D is as follows : p ( y | Dx , w ) = N∏ i=1 p ( yi | w , xi ) . ( 1 ) In practice , a widely used maximum likelihood estimation is based on Gaussian distribution , which assumes that the target y is drawn from a Gaussian distribution . Then we have the following expression of likelihood function : p ( y | Dx , w , σ2 ) = N∏ i=1 N ( yi | f ( xi , w ) , σ2 ) . ( 2 ) The mean and variance indicate the prediction and corresponding predictive uncertainty , while the predictive uncertainty consists of two parts [ 42 ] : epistemic uncertainty ( EU ) and aleatoric uncertainty ( AU ) . To learn both the aleatoric and epistemic uncertainties explicitly , the mean and variance are assumed to be drawn from Gaussian and Inverse-Gamma distributions [ 31 ] , respectively . Then the Normal Inverse-Gamma ( NIG ) distribution with parameters τ = ( δ , γ , α , β ) can be considered as a higher-order conjugate prior of the Gaussian distribution parameterized with θ = ( µm , σ 2 ) : yi ∼ N ( µ , σ2 ) , µ ∼ N ( δ , σ2γ−1 ) , σ2 ∼ Γ−1 ( α , β ) , ( 3 ) where Γ ( · ) is the gamma function . In this case , the distribution of y takes the form of a NIG distribution NIG ( δ , γ , α , β ) : p ( µ , σ2︸ ︷︷ ︸ θ | δ , γ , α , β︸ ︷︷ ︸ τ ) = βα Γ ( α ) √ γ σ √ 2π ( 1 σ2 ) α+1 exp { −2β + γ ( δ − µ ) 2 2σ2 } , ( 4 ) where δ ∈ R , γ > 0 , α > 1 , and β > 0 . During the training stage , the following loss is induced to minimize the negative log likelihood loss : LNLL ( w ) = 1 2 log ( π γ ) − α log ( Ω ) + ( α+ 1 2 ) log ( ( y − δ ) 2γ + Ω ) + log Ψ , ( 5 ) where Ω = 2β ( 1 + γ ) and Ψ = ( Γ ( α ) Γ ( α+ 12 ) ) . The total loss , L ( w ) , consists of two terms for maximizing the likelihood function and regularizing evidence : L ( w ) = LNLL ( w ) + λLR ( w ) , ( 6 ) where LR ( w ) is the penalty for incorrect evidence ( more details are shown in the supplement ) , and the coefficient λ > 0 balances these two loss terms . For multimodal regression [ 5 , 7 , 6 ] , some approaches have achieved impressive performance . However , existing multimodal regression algorithms mainly focus on improving the accuracy and provide a deterministic prediction without the information of uncertainty , making these models limited for trustworthy decision making . For this issue , we develop a novel algorithm which can capture the modality-specific uncertainty and accordingly induces a reliable overall uncertainty for the final output . Beyond trustworthy decision , our approach automatically alleviates the impact from heavily noisy or corrupted modality . Compared with the intermediate fusion [ 4 , 43 , 44 ] , our method can effectively distinguish which modality is noisy or corrupted for different samples , and accordingly can take the modality-specific uncertainty into account for robust integration . 4 Trustworthy Multimodal Regression . In this section , we introduce the proposed algorithm fusing multiple modalities at both feature and predictive distribution levels . 
Specifically , we train deep neural networks to model the hyperparameters of the higher-order evidential distributions of multiple modalities ( and the pseudo modality ) and then merge these predicted distributions into one by the mixture of NIGs ( MoNIG ) . In the following subsections , we first provide discussion about existing fusion strategies and ours in multimodal regression . Then we introduce MoNIG to integrate NIG distributions of different modalities in principle . Finally , we elaborate the overall pipeline of our model in training and define the epistemic uncertainty and aleatoric uncertainty respectively . 4.1 Overview of Fusion Strategy . Consider the following multimodal regression problem : given a datasetD = { { xmi } M m=1 , yi } , where xmi is the input feature vector of the m-th modality of the i-th sample , yi is the corresponding target , and M is the total number of modalities , the intuitive goal is to learn a function fm for each modality reaching the following target : min ∑M m=1 ( fm ( xm , wm ) − y ) 2 . While to model the uncertainty , we assume that the observed target y is drawn from a Gaussian distribution , i.e. , y ∼ N ( µm , σ2m ) . Meanwhile , inspired by [ 31 ] , as shown in Eq . 3 , we assume that the mean and variance are drawn from Gaussian and Inverse-Gamma distributions , respectively . Considering multimodal fusion , there are typically two representative strategies , i.e. , feature fusion and decision fusion . Feature fusion ( Fig . 1 ( a ) ) jointly learns unified representations and based on which conducts regression . Although simple , the limitation is also obvious . If some modalities of some samples are noisy or corrupted , the algorithms based on this strategy may fail . In other words , these methods do not take the modality-specific uncertainty into account and can not be adaptively aware of quality variation of different modalities for different samples . The other representative strategy is known as decision fusion ( Fig . 1 ( b ) ) which trains one regression model for each modality , and the final prediction is regarded as a function of the predictions of all modalities . Although flexible , it is also risky for the cases that each modality is insufficient to make trustworthy decision due to partial observation . We propose MoNIG , which can elegantly address the above issues in a unified framework . Firstly , we explicitly represent both modality-specific and global uncertainties to endow our model with the ability of adaption for dynamical modality variation . The modality-specific uncertainty can dynamically guide the integration for different samples and the global uncertainty represents the uncertainty of the final prediction . Secondly , we introduce the feature level fusion producing a pseudo modality to make full use of the complementary information and , accordingly generate an enhanced branch to supplement the decision fusion ( Fig 1 ( c ) ) .", "role": "user" }, { "content": "This paper tackles a shortcoming of existing multimodal learning methods that mainly focus on the integration of the information from different sources for prediction but ignore the confidence of prediction. The authors propose a Mixture of NormalInverse Gamma distributions (MoNIG) algorithm which estimates uncertainty for multimodal learning. Their model estimates uncertainty for each modality and also in corrupted modality settings.", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Towards Unknown-aware Learning with Virtual Outlier Synthesis Main Content: 1 INTRODUCTION . Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained , yet they often struggle to handle the unknowns . In particular , neural networks have been shown to produce high posterior probability for out-of-distribution ( OOD ) test inputs ( Nguyen et al. , 2015 ) , which arise from unknown categories and should not be predicted by the model . Taking self-driving car as an example , an object detection model trained to recognize in-distribution objects ( e.g. , cars , stop signs ) can produce a high-confidence prediction for an unseen object of a moose ; see Figure 1 ( a ) . Such a failure case raises concerns in model reliability , and worse , may lead to catastrophe when deployed in safety-critical applications . The vulnerability to OOD inputs arises due to the lack explicit knowledge of unknowns during training time . In particular , neural networks are typically optimized only on the in-distribution ( ID ) data . The resulting decision boundary , despite being useful on ID tasks such as classification , can be ill-fated for OOD detection . We illustrate this in Figure 1 . The ID data ( gray ) consists of three classconditional Gaussians , on which a three-way softmax classifier is trained . The resulting classifier is overconfident for regions far away from the ID data ( see the red shade in Figure 1 ( b ) ) , causing trouble for OOD detection . Ideally , a model should learn a more compact decision boundary that produces low uncertainty for the ID data , with high OOD uncertainty elsewhere ( e.g. , Figure 1 ( c ) ) . However , achieving this goal is non-trivial due to the lack of supervision signal of unknowns . This motivates the question : Can we synthesize virtual outliers for effective model regularization ? In this paper , we propose a novel unknown-aware learning framework dubbed VOS ( Virtual Outlier Synthesis ) , which optimizes the dual objectives of both ID task and OOD detection performance . In a nutshell , VOS consists of three components tackling challenges of outlier synthesis and effective model regularization with synthesized outliers . To synthesize the outliers , we estimate the class-conditional distribution in the feature space , and sample outliers from the low-likelihood region of ID classes ( Section 3.1 ) . Key to our method , we show that sampling in the feature space is more tractable than synthesizing images in the high-dimensional pixel space ( Lee et al. , 2018a ) . Alongside , we propose a novel unknown-aware training objective , which contrastively shapes the uncertainty surface between the ID data and synthesized outliers ( Section 3.2 ) . During training , VOS simultaneously performs the ID task ( e.g. , classification or object detection ) as well as the OOD uncertainty regularization . During inference time , the uncertainty estimation branch produces a larger probabilistic score for ID data and vice versa , which enables effective OOD detection ( Section 3.3 ) . VOS offers several compelling advantages compared to existing solutions . ( 1 ) VOS is a general learning framework that is effective for both object detection and image classification tasks , whereas previous methods were primarily driven by image classification . 
Image-level detection can be limiting as an image could be OOD in certain regions while being in-distribution elsewhere . Our work bridges a critical research gap since OOD detection for object detection is timely yet underexplored in literature . ( 2 ) VOS enables adaptive outlier synthesis , which can be flexibly and conveniently used for any ID data without manual data collection or cleaning . In contrast , previous methods using outlier exposure ( Hendrycks et al. , 2019 ) require an auxiliary image dataset that is sufficiently diverse , which can be arguably prohibitive to obtain . Moreover , one needs to perform careful data cleaning to ensure the auxiliary outlier dataset does not overlap with ID data . ( 3 ) VOS synthesizes outliers that can estimate a compact decision boundary between ID and OOD data . In contrast , existing solutions use outliers that are either too trivial to regularize the OOD estimator , or too hard to be separated from ID data , resulting in sub-optimal performance . Our key contributions and results are summarized as follows : • We propose a new framework VOS addressing a pressing issue—unknown-aware deep learning that optimizes for both ID and OOD performance . VOS establishes state-of-the-art results on a challenging object detection task . Compared to the best method , VOS reduces the FPR95 by up to 7.87 % while preserving the accuracy on the ID task . • We conduct extensive ablations and reveal important insights by contrasting different outlier synthesis approaches . We show that VOS is more advantageous than generating outliers directly in the high-dimensional pixel space ( e.g. , using GAN ( Lee et al. , 2018a ) ) or using noise as outliers . • We comprehensively evaluate our method on common OOD detection benchmarks , along with a more challenging yet underexplored task in the context of object detection . Our effort facilitates future research to evaluate OOD detection in a real-world setting . 2 PROBLEM SETUP . We start by formulating the problem of OOD detection in the setting of object detection . Our framework can be easily generalized to image classification when the bounding box is the entire image ( see Section 4.2 ) . Most previous formulations of OOD detection treat entire images as anomalies , which can lead to ambiguity shown in Figure 1 . In particular , natural images are composed of numerous objects and components . Knowing which regions of an image are anomalous could allow for safer handling of unfamiliar objects . This setting is more realistic in practice , yet also more challenging as it requires reasoning OOD uncertainty at the fine-grained object level . Specifically , we denote the input and label space by X = Rd and Y = { 1 , 2 , ... , K } , respectively . Let x ∈ X be the input image , b ∈ R4 be the bounding box coordinates associated with object instances in the image , and y ∈ Y be the semantic label for K-way classification . An object detection model is trained on in-distribution data D = { ( xi , bi , yi ) } Ni=1 drawn from an unknown joint distribution P . We use neural networks with parameters θ to model the bounding box regression pθ ( b|x ) and the classification pθ ( y|x , b ) . The OOD detection can be formulated as a binary classification problem , which distinguishes between the in- vs. out-of-distribution objects . Let PX denote the marginal probability distribution on X . Given a test input x∗ ∼ PX , as well as an object instance b∗ predicted by the object detector , the goal is to predict pθ ( g|x∗ , b∗ ) . 
We use g = 1 to indicate a detected object being in-distribution , and g = 0 being out-of-distribution , with semantics outside the support of Y . 3 METHOD . Our novel unknown-aware learning framework is illustrated in Figure 2 . Our framework encompasses three novel components and addresses the following questions : ( 1 ) how to synthesize the virtual outliers ( Section 3.1 ) , ( 2 ) how to leverage the synthesized outliers for effective model regularization ( Section 3.2 ) , and ( 3 ) how to perform OOD detection during inference time ( Section 3.3 ) ? 3.1 VOS : VIRTUAL OUTLIER SYNTHESIS . Our framework VOS generates virtual outliers for model regularization , without relying on external data . While a straightforward idea is to train generative models such as GANs ( Goodfellow et al. , 2014 ; Lee et al. , 2018a ) , synthesizing images in the high-dimensional pixel space can be difficult to optimize . Instead , our key idea is to synthesize virtual outliers in the feature space , which is more tractable given lower dimensionality . Moreover , our method is based on a discriminatively trained classifier in the object detector , which circumvents the difficult optimization process in training generative models . Specifically , we assume the feature representation of object instances forms a class-conditional multivariate Gaussian distribution ( see Figure 3 ) : pθ ( h ( x , b ) |y = k ) = N ( µk , Σ ) , where µk is the Gaussian mean of class k ∈ { 1 , 2 , .. , K } , Σ is the tied covariance matrix , and h ( x , b ) ∈ Rm is the latent representation of an object instance ( x , b ) . To extract the latent representation , we use the penultimate layer of the neural network . The dimensionality m is significantly smaller than the input dimension d. To estimate the parameters of the class-conditional Gaussian , we compute empirical class mean µ̂k and covariance Σ̂ of training samples { ( xi , bi , yi ) } Ni=1 : µ̂k = 1 Nk ∑ i : yi=k h ( xi , bi ) ( 1 ) Σ̂ = 1 N ∑ k ∑ i : yi=k ( h ( xi , bi ) − µ̂k ) ( h ( xi , bi ) − µ̂k ) > , ( 2 ) where Nk is the number of objects in class k , and N is the total number of objects . We use online estimation for efficient training , where we maintain a class-conditional queue with |Qk| object instances from each class . In each iteration , we enqueue the embeddings of objects to their corresponding class-conditional queues , and dequeue the same number of object embeddings . Sampling from the feature representation space . We propose sampling the virtual outliers from the feature representation space , using the multivariate distributions estimated above . Ideally , these virtual outliers should help estimate a more compact decision boundary between ID and OOD data . To achieve this , we propose sampling the virtual outliers Vk from the -likelihood region of the estimated class-conditional distribution : Vk = { vk| 1 ( 2π ) m/2|Σ̂|1/2 exp ( −1 2 ( vk − µ̂k ) > Σ̂−1 ( vk − µ̂k ) ) < } , ( 3 ) where vk ∼ N ( µ̂k , Σ̂ ) denotes the sampled virtual outliers for class k , which are in the sublevel set based on the likelihood . is sufficiently small so that the sampled outliers are near class boundary . Classification outputs for virtual outliers . For a given sampled virtual outlier v ∈ Rm , the output of the classification branch can be derived through a linear transformation : f ( v ; θ ) =W > clsv , ( 4 ) where Wcls ∈ Rm×K is the weight of the last fully connected layer . 
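A sketch of Eqs. (1)-(3) above: estimate per-class means and a tied covariance from penultimate-layer features, then keep Gaussian samples whose class-conditional density falls below a small epsilon. This is an illustration on synthetic features, not the released VOS code, and the helper names are assumptions.

```python
import numpy as np

def fit_class_conditional_gaussians(feats, labels, num_classes):
    mus = np.stack([feats[labels == k].mean(axis=0) for k in range(num_classes)])  # Eq. (1)
    centered = feats - mus[labels]
    sigma = centered.T @ centered / len(feats)                                     # tied covariance, Eq. (2)
    return mus, sigma

def sample_virtual_outliers(mu_k, sigma, epsilon, num_draws, rng):
    m = len(mu_k)
    draws = rng.multivariate_normal(mu_k, sigma, size=num_draws)
    diff = draws - mu_k
    inv = np.linalg.inv(sigma)
    log_norm = -0.5 * (m * np.log(2 * np.pi) + np.linalg.slogdet(sigma)[1])
    log_density = log_norm - 0.5 * np.einsum("nd,dk,nk->n", diff, inv, diff)
    return draws[log_density < np.log(epsilon)]                                    # low-likelihood region, Eq. (3)

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=600)
feats = rng.normal(size=(600, 4)) + 3.0 * labels[:, None]       # three well-separated toy classes

mus, sigma = fit_class_conditional_gaussians(feats, labels, num_classes=3)
outliers = sample_virtual_outliers(mus[0], sigma, epsilon=1e-4, num_draws=10000, rng=rng)
print(len(outliers), "virtual outliers kept from the low-density region of class 0")
```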
We proceed with describing how to regularize the output of virtual outliers for improved OOD detection . 3.2 UNKNOWN-AWARE TRAINING OBJECTIVE . We now introduce a new training objective for unknown-aware learning , leveraging the virtual outliers in Section 3.1 . The key idea is to perform visual recognition task while regularizing the model to produce a low OOD score for ID data , and a high OOD score for the synthesized outlier . Uncertainty regularization for classification . For simplicity , we first describe the regularization in the multi-class classification setting . The regularization loss should ideally optimize for the separability between the ID vs. OOD data under some function that captures the data density . However , directly estimating log p ( x ) can be computationally intractable as it requires sampling from the entire space X . We note that the log partition function E ( x ; θ ) : = − log ∑K k=1 e fk ( x ; θ ) is proportional to log p ( x ) with some unknown factor , which can be seen from the following : p ( y|x ) = p ( x , y ) p ( x ) = efy ( x ; θ ) ∑K k=1 e fk ( x ; θ ) , where fy ( x ; θ ) denotes the y-th element of logit output corresponding to the label y . The negative log partition function is also known as the free energy , which was shown to be an effective uncertainty measurement for OOD detection ( Liu et al. , 2020a ) . Our idea is to explicitly perform a level-set estimation based on the energy function ( threshold at 0 ) , where the ID data has negative energy values and the synthesized outlier has positive energy : Luncertainty = Ev∼V 1 { E ( v ; θ ) > 0 } + Ex∼D 1 { E ( x ; θ ) ≤ 0 } This is a simpler objective than estimating density . Since the 0/1 loss is intractable , we replace it with the binary sigmoid loss , a smooth approximation of the 0/1 loss , yielding the following : Luncertainty = Ev∼V [ − log 1 1 + exp−θu·E ( v ; θ ) ] + Ex∼D [ − log exp −θu·E ( x ; θ ) 1 + exp−θu·E ( x ; θ ) ] . ( 5 ) Here θu is a learnable parameter modulating the slope of the sigmoid function . The learning process shapes the uncertainty surface , which predicts high probability for ID data and low probability for virtual outliers v. Liu et al . ( 2020a ) employed energy for model uncertainty regularization , however , the loss function is based on the squared hinge loss and requires tuning two margin hyperparameters . In contrast , our uncertainty regularization loss is completely hyperparameter-free and is much easier to use in practice . Moreover , VOS produces probabilistic score for OOD detection , whereas Liu et al . ( 2020a ) relies on non-probabilistic energy score . Object-level energy score . In case of object detection , we can replace the image-level energy with object-level energy score . For ID object ( x , b ) , the energy is defined as : E ( x , b ; θ ) = − log K∑ k=1 wk · expfk ( ( x , b ) ; θ ) , ( 6 ) where fk ( ( x , b ) ; θ ) = W > clsh ( x , b ) is the logit output for class k in the classification branch . The energy score for the virtual outlier can be defined in a similar way as above . In particular , we will show in Section 4 that a learnable w is more flexible than a constant w , given the inherent class imbalance in object detection datasets . Additional analysis on wk is in Appendix H. Overall training objective . 
In the case of object detection, the overall training objective combines the standard object detection loss with a regularization loss in terms of uncertainty: min_θ E_{(x,b,y)∼D} [ L_cls + L_loc ] + β · L_uncertainty, (7) where β is the weight of the uncertainty regularization, and L_cls and L_loc are the losses for classification and bounding box regression, respectively. The objective simplifies to the classification task by dropping L_loc. We provide ablation studies in Section 4.1 demonstrating the superiority of our loss function.
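To illustrate how Equations (5)-(7) fit together, the snippet below sketches the weighted object-level energy, the sigmoid-based uncertainty regularization over ID objects and virtual outliers, and the combined objective. It is a rough sketch under stated assumptions rather than the paper's implementation: `class_weights` stands for the learnable positive weights w_k (fixing them to ones recovers the standard free energy), `theta_u` is the learnable slope parameter, and β = 0.1 is only a placeholder value.

```python
import torch
import torch.nn.functional as F

def object_energy(logits, class_weights):
    """E(x, b; theta) = -log sum_k w_k * exp(f_k((x, b); theta)), cf. Eq. (6).

    logits:        (B, K) classification-branch outputs for B detected objects
    class_weights: (K,)   positive per-class weights w_k
    """
    weighted = logits + torch.log(class_weights).unsqueeze(0)    # log(w_k * exp(f_k)) = f_k + log w_k
    return -torch.logsumexp(weighted, dim=1)                     # (B,)

def uncertainty_loss(id_logits, outlier_logits, class_weights, theta_u):
    """Binary-sigmoid surrogate of the level-set objective, cf. Eq. (5).

    ID objects are pushed towards negative energy and virtual outliers towards
    positive energy; -theta_u * E(.) plays the role of a binary logit.
    """
    e_id = object_energy(id_logits, class_weights)
    e_out = object_energy(outlier_logits, class_weights)
    loss_id = F.binary_cross_entropy_with_logits(
        -theta_u * e_id, torch.ones_like(e_id))                  # ID treated as the positive class
    loss_out = F.binary_cross_entropy_with_logits(
        -theta_u * e_out, torch.zeros_like(e_out))               # virtual outliers as the negative class
    return loss_id + loss_out

def total_loss(loss_cls, loss_loc, loss_uncertainty, beta=0.1):
    """Overall objective of Eq. (7): detection losses plus the weighted regularizer."""
    return loss_cls + loss_loc + beta * loss_uncertainty
```

At inference time, the same sigmoid of −θ_u·E would give a probabilistic ID score that can be thresholded for OOD detection, in line with the uncertainty estimation branch described for Section 3.3.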
This paper proposes an effective method for OOD detection and model regularization that does not rely on real OOD datasets. Specifically, the proposed method synthesizes virtual outliers by drawing low-likelihood samples from the class-conditional distribution in feature space, and adds a novel regularizer to the original ID training objective that contrastively shapes the uncertainty surface between the ID data and the synthesized outliers. Extensive and comprehensive experiments demonstrate the effectiveness of the proposed method.
science.mup_single_document_summarization.7803
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nTowards Unknown-aware Learning with Virtual Outlier Synthesis\n\nMain Content:\n1 INTRODUCTION . Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained , yet they often struggle to handle the unknowns . In particular , neural networks have been shown to produce high posterior probability for out-of-distribution ( OOD ) test inputs ( Nguyen et al. , 2015 ) , which arise from unknown categories and should not be predicted by the model . Taking self-driving car as an example , an object detection model trained to recognize in-distribution objects ( e.g. , cars , stop signs ) can produce a high-confidence prediction for an unseen object of a moose ; see Figure 1 ( a ) . Such a failure case raises concerns in model reliability , and worse , may lead to catastrophe when deployed in safety-critical applications . The vulnerability to OOD inputs arises due to the lack explicit knowledge of unknowns during training time . In particular , neural networks are typically optimized only on the in-distribution ( ID ) data . The resulting decision boundary , despite being useful on ID tasks such as classification , can be ill-fated for OOD detection . We illustrate this in Figure 1 . The ID data ( gray ) consists of three classconditional Gaussians , on which a three-way softmax classifier is trained . The resulting classifier is overconfident for regions far away from the ID data ( see the red shade in Figure 1 ( b ) ) , causing trouble for OOD detection . Ideally , a model should learn a more compact decision boundary that produces low uncertainty for the ID data , with high OOD uncertainty elsewhere ( e.g. , Figure 1 ( c ) ) . However , achieving this goal is non-trivial due to the lack of supervision signal of unknowns . This motivates the question : Can we synthesize virtual outliers for effective model regularization ? In this paper , we propose a novel unknown-aware learning framework dubbed VOS ( Virtual Outlier Synthesis ) , which optimizes the dual objectives of both ID task and OOD detection performance . In a nutshell , VOS consists of three components tackling challenges of outlier synthesis and effective model regularization with synthesized outliers . To synthesize the outliers , we estimate the class-conditional distribution in the feature space , and sample outliers from the low-likelihood region of ID classes ( Section 3.1 ) . Key to our method , we show that sampling in the feature space is more tractable than synthesizing images in the high-dimensional pixel space ( Lee et al. , 2018a ) . Alongside , we propose a novel unknown-aware training objective , which contrastively shapes the uncertainty surface between the ID data and synthesized outliers ( Section 3.2 ) . During training , VOS simultaneously performs the ID task ( e.g. , classification or object detection ) as well as the OOD uncertainty regularization . During inference time , the uncertainty estimation branch produces a larger probabilistic score for ID data and vice versa , which enables effective OOD detection ( Section 3.3 ) . VOS offers several compelling advantages compared to existing solutions . ( 1 ) VOS is a general learning framework that is effective for both object detection and image classification tasks , whereas previous methods were primarily driven by image classification . 
Image-level detection can be limiting as an image could be OOD in certain regions while being in-distribution elsewhere . Our work bridges a critical research gap since OOD detection for object detection is timely yet underexplored in literature . ( 2 ) VOS enables adaptive outlier synthesis , which can be flexibly and conveniently used for any ID data without manual data collection or cleaning . In contrast , previous methods using outlier exposure ( Hendrycks et al. , 2019 ) require an auxiliary image dataset that is sufficiently diverse , which can be arguably prohibitive to obtain . Moreover , one needs to perform careful data cleaning to ensure the auxiliary outlier dataset does not overlap with ID data . ( 3 ) VOS synthesizes outliers that can estimate a compact decision boundary between ID and OOD data . In contrast , existing solutions use outliers that are either too trivial to regularize the OOD estimator , or too hard to be separated from ID data , resulting in sub-optimal performance . Our key contributions and results are summarized as follows : • We propose a new framework VOS addressing a pressing issue—unknown-aware deep learning that optimizes for both ID and OOD performance . VOS establishes state-of-the-art results on a challenging object detection task . Compared to the best method , VOS reduces the FPR95 by up to 7.87 % while preserving the accuracy on the ID task . • We conduct extensive ablations and reveal important insights by contrasting different outlier synthesis approaches . We show that VOS is more advantageous than generating outliers directly in the high-dimensional pixel space ( e.g. , using GAN ( Lee et al. , 2018a ) ) or using noise as outliers . • We comprehensively evaluate our method on common OOD detection benchmarks , along with a more challenging yet underexplored task in the context of object detection . Our effort facilitates future research to evaluate OOD detection in a real-world setting . 2 PROBLEM SETUP . We start by formulating the problem of OOD detection in the setting of object detection . Our framework can be easily generalized to image classification when the bounding box is the entire image ( see Section 4.2 ) . Most previous formulations of OOD detection treat entire images as anomalies , which can lead to ambiguity shown in Figure 1 . In particular , natural images are composed of numerous objects and components . Knowing which regions of an image are anomalous could allow for safer handling of unfamiliar objects . This setting is more realistic in practice , yet also more challenging as it requires reasoning OOD uncertainty at the fine-grained object level . Specifically , we denote the input and label space by X = Rd and Y = { 1 , 2 , ... , K } , respectively . Let x ∈ X be the input image , b ∈ R4 be the bounding box coordinates associated with object instances in the image , and y ∈ Y be the semantic label for K-way classification . An object detection model is trained on in-distribution data D = { ( xi , bi , yi ) } Ni=1 drawn from an unknown joint distribution P . We use neural networks with parameters θ to model the bounding box regression pθ ( b|x ) and the classification pθ ( y|x , b ) . The OOD detection can be formulated as a binary classification problem , which distinguishes between the in- vs. out-of-distribution objects . Let PX denote the marginal probability distribution on X . Given a test input x∗ ∼ PX , as well as an object instance b∗ predicted by the object detector , the goal is to predict pθ ( g|x∗ , b∗ ) . 
We use g = 1 to indicate a detected object being in-distribution , and g = 0 being out-of-distribution , with semantics outside the support of Y . 3 METHOD . Our novel unknown-aware learning framework is illustrated in Figure 2 . Our framework encompasses three novel components and addresses the following questions : ( 1 ) how to synthesize the virtual outliers ( Section 3.1 ) , ( 2 ) how to leverage the synthesized outliers for effective model regularization ( Section 3.2 ) , and ( 3 ) how to perform OOD detection during inference time ( Section 3.3 ) ? 3.1 VOS : VIRTUAL OUTLIER SYNTHESIS . Our framework VOS generates virtual outliers for model regularization , without relying on external data . While a straightforward idea is to train generative models such as GANs ( Goodfellow et al. , 2014 ; Lee et al. , 2018a ) , synthesizing images in the high-dimensional pixel space can be difficult to optimize . Instead , our key idea is to synthesize virtual outliers in the feature space , which is more tractable given lower dimensionality . Moreover , our method is based on a discriminatively trained classifier in the object detector , which circumvents the difficult optimization process in training generative models . Specifically , we assume the feature representation of object instances forms a class-conditional multivariate Gaussian distribution ( see Figure 3 ) : pθ ( h ( x , b ) |y = k ) = N ( µk , Σ ) , where µk is the Gaussian mean of class k ∈ { 1 , 2 , .. , K } , Σ is the tied covariance matrix , and h ( x , b ) ∈ Rm is the latent representation of an object instance ( x , b ) . To extract the latent representation , we use the penultimate layer of the neural network . The dimensionality m is significantly smaller than the input dimension d. To estimate the parameters of the class-conditional Gaussian , we compute empirical class mean µ̂k and covariance Σ̂ of training samples { ( xi , bi , yi ) } Ni=1 : µ̂k = 1 Nk ∑ i : yi=k h ( xi , bi ) ( 1 ) Σ̂ = 1 N ∑ k ∑ i : yi=k ( h ( xi , bi ) − µ̂k ) ( h ( xi , bi ) − µ̂k ) > , ( 2 ) where Nk is the number of objects in class k , and N is the total number of objects . We use online estimation for efficient training , where we maintain a class-conditional queue with |Qk| object instances from each class . In each iteration , we enqueue the embeddings of objects to their corresponding class-conditional queues , and dequeue the same number of object embeddings . Sampling from the feature representation space . We propose sampling the virtual outliers from the feature representation space , using the multivariate distributions estimated above . Ideally , these virtual outliers should help estimate a more compact decision boundary between ID and OOD data . To achieve this , we propose sampling the virtual outliers Vk from the -likelihood region of the estimated class-conditional distribution : Vk = { vk| 1 ( 2π ) m/2|Σ̂|1/2 exp ( −1 2 ( vk − µ̂k ) > Σ̂−1 ( vk − µ̂k ) ) < } , ( 3 ) where vk ∼ N ( µ̂k , Σ̂ ) denotes the sampled virtual outliers for class k , which are in the sublevel set based on the likelihood . is sufficiently small so that the sampled outliers are near class boundary . Classification outputs for virtual outliers . For a given sampled virtual outlier v ∈ Rm , the output of the classification branch can be derived through a linear transformation : f ( v ; θ ) =W > clsv , ( 4 ) where Wcls ∈ Rm×K is the weight of the last fully connected layer . 
We proceed with describing how to regularize the output of virtual outliers for improved OOD detection . 3.2 UNKNOWN-AWARE TRAINING OBJECTIVE . We now introduce a new training objective for unknown-aware learning , leveraging the virtual outliers in Section 3.1 . The key idea is to perform visual recognition task while regularizing the model to produce a low OOD score for ID data , and a high OOD score for the synthesized outlier . Uncertainty regularization for classification . For simplicity , we first describe the regularization in the multi-class classification setting . The regularization loss should ideally optimize for the separability between the ID vs. OOD data under some function that captures the data density . However , directly estimating log p ( x ) can be computationally intractable as it requires sampling from the entire space X . We note that the log partition function E ( x ; θ ) : = − log ∑K k=1 e fk ( x ; θ ) is proportional to log p ( x ) with some unknown factor , which can be seen from the following : p ( y|x ) = p ( x , y ) p ( x ) = efy ( x ; θ ) ∑K k=1 e fk ( x ; θ ) , where fy ( x ; θ ) denotes the y-th element of logit output corresponding to the label y . The negative log partition function is also known as the free energy , which was shown to be an effective uncertainty measurement for OOD detection ( Liu et al. , 2020a ) . Our idea is to explicitly perform a level-set estimation based on the energy function ( threshold at 0 ) , where the ID data has negative energy values and the synthesized outlier has positive energy : Luncertainty = Ev∼V 1 { E ( v ; θ ) > 0 } + Ex∼D 1 { E ( x ; θ ) ≤ 0 } This is a simpler objective than estimating density . Since the 0/1 loss is intractable , we replace it with the binary sigmoid loss , a smooth approximation of the 0/1 loss , yielding the following : Luncertainty = Ev∼V [ − log 1 1 + exp−θu·E ( v ; θ ) ] + Ex∼D [ − log exp −θu·E ( x ; θ ) 1 + exp−θu·E ( x ; θ ) ] . ( 5 ) Here θu is a learnable parameter modulating the slope of the sigmoid function . The learning process shapes the uncertainty surface , which predicts high probability for ID data and low probability for virtual outliers v. Liu et al . ( 2020a ) employed energy for model uncertainty regularization , however , the loss function is based on the squared hinge loss and requires tuning two margin hyperparameters . In contrast , our uncertainty regularization loss is completely hyperparameter-free and is much easier to use in practice . Moreover , VOS produces probabilistic score for OOD detection , whereas Liu et al . ( 2020a ) relies on non-probabilistic energy score . Object-level energy score . In case of object detection , we can replace the image-level energy with object-level energy score . For ID object ( x , b ) , the energy is defined as : E ( x , b ; θ ) = − log K∑ k=1 wk · expfk ( ( x , b ) ; θ ) , ( 6 ) where fk ( ( x , b ) ; θ ) = W > clsh ( x , b ) is the logit output for class k in the classification branch . The energy score for the virtual outlier can be defined in a similar way as above . In particular , we will show in Section 4 that a learnable w is more flexible than a constant w , given the inherent class imbalance in object detection datasets . Additional analysis on wk is in Appendix H. Overall training objective . 
In the case of object detection , the overall training objective combines the standard object detection loss , along with a regularization loss in terms of uncertainty : min θ E ( x , b , y ) ∼D [ Lcls + Lloc ] + β · Luncertainty , ( 7 ) where β is the weight of the uncertainty regularization . Lcls and Lloc are losses for classification and bounding box regression , respectively . This can be simplified to classification task without Lloc . We provide ablation studies in Section 4.1 demonstrating the superiority of our loss function .", "role": "user" }, { "content": "This paper proposes an effective method for OOD detection and model regularization, which does not rely on real OOD datasets. Specifically, the proposed method synthesizes virtual outliers by sampling low-likelihood samples in the feature space of class conditional distribution, and adds a novel regularizer to the original ID training objective, which contrastively shapes the uncertainty space between the ID data and synthesized outlier data. Extensive and comprehensive experiments demonstrate the effectiveness of the proposed method.", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: AARL: Automated Auxiliary Loss for Reinforcement Learning Main Content: 1 INTRODUCTION . Reinforcement learning ( RL ) has been a hot research topic and has shown significant progress in many fields ( Mnih et al. , 2013 ; Silver et al. , 2016 ; Gu et al. , 2017 ; Vinyals et al. , 2019 ) . Recent RL research focuses on obtaining a good representation of states as it is shown to be the key to improve sample efficiency of RL ( Laskin et al. , 2020 ) . This is because in high-dimensional environments , applying RL directly on the complex state inputs is incredibly sample-inefficient ( Lake et al. , 2017 ; Kaiser et al. , 2019 ) , making RL hard to scale to real-world tasks where interacting with the environment is costly ( Dulac-Arnold et al. , 2019 ) . Standard RL paradigm learns the representation from critic loss ( value prediction ) and / or actor loss ( maximizing cumulative reward ) , which hardly extracts informative representations in challenging environments like pixel-based RL and complex robotics systems ( Lake et al. , 2017 ; Kober et al. , 2013 ) . This motivates adding auxiliary losses in support of learning better latent representations of states . The usage of auxiliary loss encodes prior knowledge of RL and its environments , and puts regularization on training ( Shelhamer et al. , 2016 ) . Typical auxiliary losses are manually designed by human experts , including observation reconstruction ( Yarats et al. , 2019 ) , reward prediction ( Jaderberg et al. , 2017 ) and environment dynamics prediction ( Shelhamer et al. , 2016 ; De Bruin et al. , 2018 ; Ota et al. , 2020 ) . However , such auxiliary loss designs and choices rely heavily on human knowledge of what might be helpful for RL training , which requires extensive human efforts to scale to new tasks and can be suboptimal for improving RL performance ( Yang & Nachum , 2021 ) . In this paper , we rigorously treat auxiliary loss functions for RL as a first-class problem to explicitly address the question : what is the universal approach to find a good auxiliary loss function for RL ? Considering that the automated machine learning ( AutoML ) community has shown promising results with automated loss search in computer vision tasks ( Li et al. , 2019 ) , we propose to automate the process of designing auxiliary loss functions of RL . Specifically , we formulate the task as a bi-level optimization , where we try to find the best auxiliary loss function , which , to the most extent , helps train a good RL agent . The inner-level problem is a typical RL training problem , while the outer-level can be seen as a search problem . To tackle this , we first design a novel search space of auxiliary loss functions , which is a combination of two components : loss input and loss operator . As shown in Table 1 , the search space covers many existing handcrafted losses , and it is also substantially large ( as large as 4.6 × 1019 ) . We then propose an efficient search strategy to explore the space . This is done in a two-step manner . We first finalize the loss operator specification and then use an evolutionary strategy that performs mutations on configurations of loss input , to identify the top-performing loss inputs quickly . We evaluate our search framework on both pixel-based and state-based environments in the DeepMind Control suite ( Tassa et al. , 2018 ) . 
Extensive experiments show that the searched auxiliary loss functions greatly outperform state-of-the-art methods and can easily transfer to unseen environments ( that are never used during the search ) , showing that our proposed method is robust and effective . We highlight the main contributions of this paper below : • We introduce a principled and universal approach for auxiliary loss design in RL . To the best of our knowledge , we are the first to derive the optimal auxiliary loss function with an automatic process . Our framework can be easily applied to arbitrary RL tasks to search for the best auxiliary loss function . • We demonstrate that AARL significantly outperforms state-of-the-art methods in both pixel-based and state-based environments . The searched auxiliary loss functions show strong generalization ability to transfer to unseen environments . • We analyze the derived auxiliary loss functions and deliver some insightful discussions that we hope will deepen the understanding of auxiliary losses in RL . We will also open source our codebase to facilitate future research . 2 METHODOLOGY . We consider the standard Markov Decision Process ( MDP ) setting where we specify the state , action and reward at time step t as ( st , at , rt ) . The sequence of interaction data is denoted as ( st , at , rt , · · · , st+k ) , where k represents the horizon of this sequence . Suppose we have an RL agent Mω parameterized by ω and R ( Mω ; E ) is the agent performance ( i.e. , cumulative discounted reward ) in environment E . The goal of AARL is to find the optimal auxiliary loss function L , such that , when ω is optimized under an arbitrary RL objective function LRL ( e.g. , actor-critic loss ) together with L , the agent achieves the best performance . Formally , we have max L R ( Mω∗ ( L ) ; E ) , s.t . ω∗ ( L ) = argmin ω ( LRL ( ω ; E ) + λL ( ω ; E ) ) , ( 1 ) where λ is a hyper-parameter that controls the relative weight of the auxiliary task . We solve this bi-level optimization problem with AutoML techniques . The inner optimization is a standard RL training procedure . For the outer one , we define a finite and discrete search space ( Section 2.1 ) , and use a variation of evolution strategy to explore the space ( Section 2.2 ) . We will explain the details in the rest of this section . 2.1 SEARCH SPACE . An auxiliary loss function L can be viewed as a combination of two components : 1 ) loss input I and 2 ) loss operator f . We define a search space as shown in Figure 1 , where I is a pair of binary masks ( m̂ , m ) that selects from interaction data to compute ŷ and y , and f is an operator that aggregates the selected inputs into an expression of the loss function with scalar output . Loss Input Unlike supervised machine learning tasks with an explicit prediction and ground truth for the loss function , auxiliary losses in RL have no ground truths beforehand . Instead , they are generated accordingly upon interaction with the environment itself in a self-supervised manner . As shown in Figure 1 , we take some tokens from the sequence ( st , at , rt , · · · , st+k ) with binary mask vector m̂ and feed them into the encoder θ̂ which maps states to latent representations . The predictor module then tries to predict targets ( encoded with a momentum encoder θ , as is typically done in recent works ) , which are another few tokens we select from the sequence with another binary mask vectorm . Details about momentum encoder are given in Appendix A.1.2 . 
In other words , the search space for loss input is a pair of binary masks ( m̂ , m ) , each of which is up to length ( 3k + 1 ) if the length of an interaction data sequence , i.e. , horizon , is limited to k steps . In our case , we set the maximum horizon length kmax = 10 . Loss Operator We optimize ŷ to be as similar as possible to y . Therefore , we make loss operator f cover commonly-used similarity measures , including inner product ( Inner ) ( He et al. , 2020 ; Stooke et al. , 2021 ) , bilinear inner product ( Bilinear ) ( Laskin et al. , 2020 ) , cosine similarity ( Cosine ) ( Chen et al. , 2020 ) , mean squared error ( MSE ) ( Ota et al. , 2020 ; De Bruin et al. , 2018 ) and normalized mean squared error ( N-MSE ) ( Schwarzer et al. , 2020 ) . Some existing works , e.g. , contrastive objectives like InfoNCE loss ( Oord et al. , 2018 ) , also incorporate the trick to sample un-paired predictions and targets as negative samples and maximize the distances between them . We find this technique applicable to all the loss operators mentioned above and thus incorporate the discriminative version of these operators in our search space . Search Space Complexity As shown in Table 1 , many existing manually designed auxiliary losses can naturally fit into our loss space thus are our special cases , which proves that this loss space is reasonable , flexible , and general . However , this is also at the cost that the space is substantially large . In total , the size of the entire space is 10× ∑10 i=1 2 6i+2 ≈ 4.6× 1019 . The detailed derivation can be found in Appendix C . 2.2 SEARCH STRATEGY . Search Space Pruning Considering that the loss space is extremely large , an effective optimization strategy is inevitably required . Directly grid-searching over the whole space is infeasible because of unacceptable computational cost . Thus some advanced techniques such as space pruning and an elaborate search strategy are necessary . Our search space can be seen as a combination of the space for the input I and the space for the operator f . Inspired by AutoML works ( Dai et al. , 2020 ; Ying et al. , 2019 ) that search for hyper-parameters first and then neural architectures , we approximate the joint search of input and operator in Equation ( 1 ) in a two-step manner . The optimal auxiliary loss { I∗ , f∗ } can be optimized as : max L R ( Mω∗ ( L ) ; E ) = maxI , f R ( Mω∗ ( I , f ) ; E ) ≈ maxI R ( Mω∗ ( I , f∗ ) ; E ) where f∗ ≈ argmax f EI [ R ( Mω∗ ( I , f ) ; E ) ] ( 2 ) To decide the best loss operator , for every f in the loss operator space , we estimate EI [ R ( Mω∗ ( I , f ) ; E ) ] with a random sampling strategy . We run 15 trials for each loss operator and calculate the average of their normalized return on three environments . More details are available in Appendix A.1.3 . Surprisingly , as summarized in Table 2 , the simplest MSE without negative samples outperforms all other loss operators with complex designs . Therefore , this loss operator is chosen for the rest of this paper . Evolution Even with the search space pruning , the rest of the space still remains large . To efficiently identify top candidates from the search space , we adopt an evolutionary algorithm ( Real et al. , 2019 ) . The pipeline of the evolution process is illustrated in Figure 2 . In each stage , we first train and evaluate a population of candidates with size P = 100 , where each candidate corresponds to a pair of binary masks ( m̂ , m ) . 
The candidates are then sorted by the approximated area under the learning curve (AULC) (Ghiassian et al., 2020; Stadie et al., 2015), which captures both convergence speed and final performance well (Viering & Loog, 2021) and reduces the variance of RL evaluation. The top-25% candidates are selected after each training stage, and we perform four types of mutations on the selected candidates to form a new population for the next stage: replacement (50% of the population); crossover (20%); horizon decrease and horizon increase (10%), as shown in Figure 3. Moreover, the last 20% of the new population comes from random generation. For each bit of the masks, the replacement operation flips the given bit with probability p = 1 / (2 · (3k + 1)), where k is the horizon length. Crossover generates a new candidate by randomly combining the mask bits of two candidates with the same horizon length in the population. We also incorporate knowledge from existing auxiliary loss designs by bootstrapping the initial population with a prior distribution (Co-Reyes et al., 2020), so that we can quickly converge to a reasonable loss function. More implementation details are available in Appendix A.3.
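Putting the pieces of the method together, the sketch below shows how a single candidate — a pair of 0/1 masks over the (3k + 1) interaction tokens — could be scored as an auxiliary loss using the MSE operator selected in Table 2, and how the replacement mutation with p = 1 / (2 · (3k + 1)) could be applied during evolution. The `encoder`, `momentum_encoder`, and `predictor` callables, the tensor shapes, and the no-gradient treatment of the momentum target are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def select_tokens(sequence, mask):
    """Pick the interaction tokens chosen by a binary mask.

    sequence: (3k + 1, d) embeddings of the flattened tokens (s_t, a_t, r_t, ..., s_{t+k})
    mask:     (3k + 1,)   0/1 integer vector; 1 keeps the corresponding token
    """
    return sequence[mask.bool()]                                 # (n_selected, d)

def auxiliary_loss(sequence, source_mask, target_mask,
                   encoder, momentum_encoder, predictor):
    """MSE auxiliary loss for one candidate (m_hat, m), the operator chosen in Table 2.

    encoder/predictor produce the prediction y_hat from the masked source tokens;
    the momentum encoder produces the target y, treated here as a constant.
    """
    prediction = predictor(encoder(select_tokens(sequence, source_mask)))
    with torch.no_grad():
        target = momentum_encoder(select_tokens(sequence, target_mask))
    return F.mse_loss(prediction, target)

def mutate_replacement(mask, horizon_k):
    """Flip each mask bit independently with p = 1 / (2 * (3k + 1))."""
    p = 1.0 / (2.0 * (3 * horizon_k + 1))
    flip = (torch.rand(mask.shape) < p).to(mask.dtype)
    return (mask + flip) % 2                                     # XOR via modulo-2 addition
```

During the search, each candidate's auxiliary loss would be trained jointly with the RL objective as in Equation (1), candidates would be ranked by AULC, and the mutation above (together with crossover and horizon changes) would produce the next population.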
This paper introduces a method for automatically searching for the best auxiliary loss function from a huge search space, where the best auxiliary loss function is defined as the one that encourages the agent to obtain a higher return. The search space of auxiliary loss functions is finite and discrete: it combines (1) manually designed similarity measures, which encourage the prediction and the target of the auxiliary task to be similar, and (2) binary masks, which select the auxiliary loss inputs. In this work, the size of the search space is around $4.6 \times 10^{19}$. The space is first pruned by randomly sampling from the similarity-measure space and keeping the measure with the highest average performance; an evolutionary algorithm from the literature is then applied to select top candidates for the loss inputs. The chosen auxiliary losses are empirically shown to improve learning efficiency and to generalize to new tasks.
science.mup_single_document_summarization.8510
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAARL: Automated Auxiliary Loss for Reinforcement Learning\n\nMain Content:\n1 INTRODUCTION . Reinforcement learning ( RL ) has been a hot research topic and has shown significant progress in many fields ( Mnih et al. , 2013 ; Silver et al. , 2016 ; Gu et al. , 2017 ; Vinyals et al. , 2019 ) . Recent RL research focuses on obtaining a good representation of states as it is shown to be the key to improve sample efficiency of RL ( Laskin et al. , 2020 ) . This is because in high-dimensional environments , applying RL directly on the complex state inputs is incredibly sample-inefficient ( Lake et al. , 2017 ; Kaiser et al. , 2019 ) , making RL hard to scale to real-world tasks where interacting with the environment is costly ( Dulac-Arnold et al. , 2019 ) . Standard RL paradigm learns the representation from critic loss ( value prediction ) and / or actor loss ( maximizing cumulative reward ) , which hardly extracts informative representations in challenging environments like pixel-based RL and complex robotics systems ( Lake et al. , 2017 ; Kober et al. , 2013 ) . This motivates adding auxiliary losses in support of learning better latent representations of states . The usage of auxiliary loss encodes prior knowledge of RL and its environments , and puts regularization on training ( Shelhamer et al. , 2016 ) . Typical auxiliary losses are manually designed by human experts , including observation reconstruction ( Yarats et al. , 2019 ) , reward prediction ( Jaderberg et al. , 2017 ) and environment dynamics prediction ( Shelhamer et al. , 2016 ; De Bruin et al. , 2018 ; Ota et al. , 2020 ) . However , such auxiliary loss designs and choices rely heavily on human knowledge of what might be helpful for RL training , which requires extensive human efforts to scale to new tasks and can be suboptimal for improving RL performance ( Yang & Nachum , 2021 ) . In this paper , we rigorously treat auxiliary loss functions for RL as a first-class problem to explicitly address the question : what is the universal approach to find a good auxiliary loss function for RL ? Considering that the automated machine learning ( AutoML ) community has shown promising results with automated loss search in computer vision tasks ( Li et al. , 2019 ) , we propose to automate the process of designing auxiliary loss functions of RL . Specifically , we formulate the task as a bi-level optimization , where we try to find the best auxiliary loss function , which , to the most extent , helps train a good RL agent . The inner-level problem is a typical RL training problem , while the outer-level can be seen as a search problem . To tackle this , we first design a novel search space of auxiliary loss functions , which is a combination of two components : loss input and loss operator . As shown in Table 1 , the search space covers many existing handcrafted losses , and it is also substantially large ( as large as 4.6 × 1019 ) . We then propose an efficient search strategy to explore the space . This is done in a two-step manner . We first finalize the loss operator specification and then use an evolutionary strategy that performs mutations on configurations of loss input , to identify the top-performing loss inputs quickly . We evaluate our search framework on both pixel-based and state-based environments in the DeepMind Control suite ( Tassa et al. 
, 2018 ) . Extensive experiments show that the searched auxiliary loss functions greatly outperform state-of-the-art methods and can easily transfer to unseen environments ( that are never used during the search ) , showing that our proposed method is robust and effective . We highlight the main contributions of this paper below : • We introduce a principled and universal approach for auxiliary loss design in RL . To the best of our knowledge , we are the first to derive the optimal auxiliary loss function with an automatic process . Our framework can be easily applied to arbitrary RL tasks to search for the best auxiliary loss function . • We demonstrate that AARL significantly outperforms state-of-the-art methods in both pixel-based and state-based environments . The searched auxiliary loss functions show strong generalization ability to transfer to unseen environments . • We analyze the derived auxiliary loss functions and deliver some insightful discussions that we hope will deepen the understanding of auxiliary losses in RL . We will also open source our codebase to facilitate future research . 2 METHODOLOGY . We consider the standard Markov Decision Process ( MDP ) setting where we specify the state , action and reward at time step t as ( st , at , rt ) . The sequence of interaction data is denoted as ( st , at , rt , · · · , st+k ) , where k represents the horizon of this sequence . Suppose we have an RL agent Mω parameterized by ω and R ( Mω ; E ) is the agent performance ( i.e. , cumulative discounted reward ) in environment E . The goal of AARL is to find the optimal auxiliary loss function L , such that , when ω is optimized under an arbitrary RL objective function LRL ( e.g. , actor-critic loss ) together with L , the agent achieves the best performance . Formally , we have max L R ( Mω∗ ( L ) ; E ) , s.t . ω∗ ( L ) = argmin ω ( LRL ( ω ; E ) + λL ( ω ; E ) ) , ( 1 ) where λ is a hyper-parameter that controls the relative weight of the auxiliary task . We solve this bi-level optimization problem with AutoML techniques . The inner optimization is a standard RL training procedure . For the outer one , we define a finite and discrete search space ( Section 2.1 ) , and use a variation of evolution strategy to explore the space ( Section 2.2 ) . We will explain the details in the rest of this section . 2.1 SEARCH SPACE . An auxiliary loss function L can be viewed as a combination of two components : 1 ) loss input I and 2 ) loss operator f . We define a search space as shown in Figure 1 , where I is a pair of binary masks ( m̂ , m ) that selects from interaction data to compute ŷ and y , and f is an operator that aggregates the selected inputs into an expression of the loss function with scalar output . Loss Input Unlike supervised machine learning tasks with an explicit prediction and ground truth for the loss function , auxiliary losses in RL have no ground truths beforehand . Instead , they are generated accordingly upon interaction with the environment itself in a self-supervised manner . As shown in Figure 1 , we take some tokens from the sequence ( st , at , rt , · · · , st+k ) with binary mask vector m̂ and feed them into the encoder θ̂ which maps states to latent representations . The predictor module then tries to predict targets ( encoded with a momentum encoder θ , as is typically done in recent works ) , which are another few tokens we select from the sequence with another binary mask vectorm . Details about momentum encoder are given in Appendix A.1.2 . 
In other words , the search space for loss input is a pair of binary masks ( m̂ , m ) , each of which is up to length ( 3k + 1 ) if the length of an interaction data sequence , i.e. , horizon , is limited to k steps . In our case , we set the maximum horizon length kmax = 10 . Loss Operator We optimize ŷ to be as similar as possible to y . Therefore , we make loss operator f cover commonly-used similarity measures , including inner product ( Inner ) ( He et al. , 2020 ; Stooke et al. , 2021 ) , bilinear inner product ( Bilinear ) ( Laskin et al. , 2020 ) , cosine similarity ( Cosine ) ( Chen et al. , 2020 ) , mean squared error ( MSE ) ( Ota et al. , 2020 ; De Bruin et al. , 2018 ) and normalized mean squared error ( N-MSE ) ( Schwarzer et al. , 2020 ) . Some existing works , e.g. , contrastive objectives like InfoNCE loss ( Oord et al. , 2018 ) , also incorporate the trick to sample un-paired predictions and targets as negative samples and maximize the distances between them . We find this technique applicable to all the loss operators mentioned above and thus incorporate the discriminative version of these operators in our search space . Search Space Complexity As shown in Table 1 , many existing manually designed auxiliary losses can naturally fit into our loss space thus are our special cases , which proves that this loss space is reasonable , flexible , and general . However , this is also at the cost that the space is substantially large . In total , the size of the entire space is 10× ∑10 i=1 2 6i+2 ≈ 4.6× 1019 . The detailed derivation can be found in Appendix C . 2.2 SEARCH STRATEGY . Search Space Pruning Considering that the loss space is extremely large , an effective optimization strategy is inevitably required . Directly grid-searching over the whole space is infeasible because of unacceptable computational cost . Thus some advanced techniques such as space pruning and an elaborate search strategy are necessary . Our search space can be seen as a combination of the space for the input I and the space for the operator f . Inspired by AutoML works ( Dai et al. , 2020 ; Ying et al. , 2019 ) that search for hyper-parameters first and then neural architectures , we approximate the joint search of input and operator in Equation ( 1 ) in a two-step manner . The optimal auxiliary loss { I∗ , f∗ } can be optimized as : max L R ( Mω∗ ( L ) ; E ) = maxI , f R ( Mω∗ ( I , f ) ; E ) ≈ maxI R ( Mω∗ ( I , f∗ ) ; E ) where f∗ ≈ argmax f EI [ R ( Mω∗ ( I , f ) ; E ) ] ( 2 ) To decide the best loss operator , for every f in the loss operator space , we estimate EI [ R ( Mω∗ ( I , f ) ; E ) ] with a random sampling strategy . We run 15 trials for each loss operator and calculate the average of their normalized return on three environments . More details are available in Appendix A.1.3 . Surprisingly , as summarized in Table 2 , the simplest MSE without negative samples outperforms all other loss operators with complex designs . Therefore , this loss operator is chosen for the rest of this paper . Evolution Even with the search space pruning , the rest of the space still remains large . To efficiently identify top candidates from the search space , we adopt an evolutionary algorithm ( Real et al. , 2019 ) . The pipeline of the evolution process is illustrated in Figure 2 . In each stage , we first train and evaluate a population of candidates with size P = 100 , where each candidate corresponds to a pair of binary masks ( m̂ , m ) . 
The candidates are then sorted by the approximated area under learning curve ( AULC ) ( Ghiassian et al. , 2020 ; Stadie et al. , 2015 ) , which well captures both convergence speed and final performance ( Viering & Loog , 2021 ) and reduces the variance of RL evaluation . The top-25 % candidates are selected after each training stage , and we perform four types of mutations on the selected candidates to form a new population for the next stage : replacement ( 50 % of the population ) ; crossover ( 20 % ) ; horizon decrease and horizon increase ( 10 % ) , as shown in Figure 3 . Moreover , the last 20 % of the new population is from random generation . As for each bit of masks , replacement operation flips the given bit with probability p = 12· ( 3k+1 ) , where k is the horizon length . Crossover generates a new candidate by randomly combining the mask bits of two candidates with the same horizon length in the population . We also incorporate the knowledge from the existing auxiliary loss designs by bootstrapping the initial population with a prior distribution ( Co-Reyes et al. , 2020 ) , so that we can quickly converge to a reasonable loss function . More implementation details are available in Appendix A.3 .", "role": "user" }, { "content": "This paper introduces a method for searching for the best auxiliary loss function from a huge search space automatically, while the best auxiliary loss function is defined as the one that encourages the agent to get a higher return. The searching space of auxiliary loss functions is finite and discrete. It is a combination of (1) manually designed similarity measures, which is used for encouraging the prediction and target to be similar in the auxiliary task, and (2) the binary mask, which is used for selecting auxiliary loss inputs. In this work, the size of the search space is around $4.6 \\times 10^{19}$. The space is pruned firstly by random sampling from the similarity measure space and choosing the one showing the highest averaged performance, then the evolutionary algorithm is applied to select top candidates for loss inputs, which is a method in the literature. The chosen auxiliary losses are empirically shown to be helpful with increasing the learning efficiency and have generalization ability to new tasks.", "role": "assistant" } ]