journal-title | pmid | pmc | doi | article-title | abstract | related-work | references | reference_info |
---|---|---|---|---|---|---|---|---|
International Journal of Social Robotics | 30996753 | PMC6438392 | 10.1007/s12369-018-0473-8 | Directing Attention Through Gaze Hints Improves Task Solving in Human–Humanoid Interaction | In this paper, we report an experimental study designed to examine how participants perceive and interpret social hints from gaze exhibited by either a robot or a human tutor when carrying out a matching task. The underlying notion is that knowing where an agent is looking provides cues that can direct attention to an object of interest during the activity. In this regard, we asked human participants to play a card matching game in the presence of either a human or a robotic tutor under two conditions. In one case, the tutor gave hints to help the participant find the matching cards by gazing toward the correct match; in the other case, the tutor only looked at the participants and did not give them any help. Performance was measured based on the time and the number of tries taken to complete the game. Results show that gaze hints (helping tutor) made the matching task significantly easier (fewer tries) with the robot tutor. Furthermore, we found that the robot's gaze hints were recognized significantly more often than the human tutor's gaze hints, and consequently, the participants performed significantly better with the robot tutor. The reported study provides new findings towards the use of non-verbal gaze hints in human–robot interaction, and lays out new design implications, especially for robot-based educative interventions. | Related Work: Given the critical role of gaze in human communication, research into designing social gaze behaviors for robots has been extensive [2, 12, 15, 31, 35]. Andrist et al. [3] combined three functionalities, including face-tracking, head detection, and gaze aversions, to create social gaze behaviors for conversational robots. In an evaluation study, the participants indicated they perceived the designed gaze as more intentional. Admoni et al. [1] addressed the impact of frequency and duration of gaze on the perception of attention during human–robot interaction, concluding that shorter, more frequent fixations are better for signifying attention than longer, less frequent fixations. In a storytelling setting, Mutlu et al. [27] showed that participants recalled the story better when the robot looked longer at them. Yoshikawa et al. [37] explored both responsive and non-responsive gaze cues and found that the responsive gazes produced a strong "feeling of being looked at" during the interaction. Moon et al. [26] studied the effects of gaze behaviors in a handover task. They found that gaze cues can improve the hand-over timing and the subjective experience in hand-over tasks. Boucher et al. [7] studied gaze effects on the speed of communication in both human–human and human–robot collaborative tasks. Their results demonstrate that human participants can use the gaze cues of a human or a robot partner to improve their performance in physical interaction tasks. Several studies have considered the ability of people to read cues from robot gaze. For example, using a guessing game, Mutlu et al. [28] showed that participants can read and interpret leakage cues from robots' gaze, even faster when the robot is more human-like. Their designed gaze behaviors were evaluated on the Robovie and Geminoid robotic platforms, which can move their eyes independently of the head direction.
This raises the question of the degree to which simpler, more widely available robots can also perform gaze cueing effectively. In this line of research, Cuijpers et al. [13] used the NAO robot, which has no movable eyes, and measured the region of eye contact with the robot; they concluded that perception of gaze direction with the NAO robot is similar to that of a human looker. Mwangi et al. [29] examined the ability of people to correctly guess the head direction of the NAO robot towards different target positions (cards) on a table. Findings showed that participants perceive the head (gaze) direction of the NAO robot more accurately for close objects, and that they recognized card positions to the left and right of the robot with different accuracy. These related works suggest that robots without movable eyes, such as the NAO robot, can also be used to provide gaze cues. Prior work has also focused on the role of gaze in joint attention. Pfeiffer-Lessmann et al. [33] examined the timing of gaze patterns in interactions between humans and a virtual human to build a joint operational model for artificial agents. Yu et al. [38] studied the timing patterns of gaze when interacting with either a robot or a human in a word learning task. Their eye-tracking results revealed that people pay more attention to the face region of the robot than to that of the human during the task. In the present comparison study, the main aim is to determine whether gaze hints from a tutor (either human or robot) can direct player attention and therefore influence the choices of human partners during game-play. In this regard, we devised the following sub-questions: (1) are the provided gaze cues noticed, (2) are they understood as helping behavior, and (3) does the help provided by the tutor influence performance? The underlying assumption is that gaze hints can help cue attention and influence decisions and thoughts, and therefore improve performance. | [
"22563315",
"21450215",
"12428707",
"10940436",
"17592962",
"23586852",
"3526377"
] | [
{
"pmid": "22563315",
"title": "I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.",
"abstract": "Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times."
},
{
"pmid": "21450215",
"title": "Socially assistive robots in elderly care: a systematic review into effects and effectiveness.",
"abstract": "The ongoing development of robotics on the one hand and, on the other hand, the foreseen relative growth in number of elderly individuals suffering from dementia, raises the question of which contribution robotics could have to rationalize and maintain, or even improve the quality of care. The objective of this review was to assess the published effects and effectiveness of robot interventions aiming at social assistance in elderly care. We searched, using Medical Subject Headings terms and free words, in the CINAHL, MEDLINE, Cochrane, BIOMED, PUBMED, PsycINFO, and EMBASE databases. Also the IEEE Digital Library was searched. No limitations were applied for the date of publication. Only articles written in English were taken into account. Collected publications went through a selection process. In the first step, publications were collected from major databases using a search query. In the second step, 3 reviewers independently selected publications on their title, using predefined selection criteria. In the third step, publications were judged based on their abstracts by the same reviewers, using the same selection criteria. In the fourth step, one reviewer made the final selection of publications based on complete content. Finally, 41 publications were included in the review, describing 17 studies involving 4 robot systems. Most studies reported positive effects of companion-type robots on (socio)psychological (eg, mood, loneliness, and social connections and communication) and physiological (eg, stress reduction) parameters. The methodological quality of the studies was, mostly, low. Although positive effects were reported, the scientific value of the evidence was limited. The positive results described, however, prompt further effectiveness research in this field."
},
{
"pmid": "12428707",
"title": "The importance of eyes: how infants interpret adult looking behavior.",
"abstract": "Two studies assessed the gaze following of 12-, 14-, and 18-month-old infants. The experimental manipulation was whether an adult could see the targets. In Experiment 1, the adult turned to targets with either open or closed eyes. Infants at all ages looked at the adult's target more in the open- versus closed-eyes condition. In Experiment 2, an inanimate occluder, a blindfold, was compared with a headband control. Infants 14- and 18-months-old looked more at the adult's target in the headband condition. Infants were not simply responding to adult head turning, which was controlled, but were sensitive to the status of the adult's eyes. In the 2nd year, infants interpreted adult looking as object-directed--an act connecting the gazer and the object."
},
{
"pmid": "10940436",
"title": "The eyes have it: the neuroethology, function and evolution of social gaze.",
"abstract": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed."
},
{
"pmid": "17592962",
"title": "Gaze cueing of attention: visual attention, social cognition, and individual differences.",
"abstract": "During social interactions, people's eyes convey a wealth of information about their direction of attention and their emotional and mental states. This review aims to provide a comprehensive overview of past and current research into the perception of gaze behavior and its effect on the observer. This encompasses the perception of gaze direction and its influence on perception of the other person, as well as gaze-following behavior such as joint attention, in infant, adult, and clinical populations. Particular focus is given to the gaze-cueing paradigm that has been used to investigate the mechanisms of joint attention. The contribution of this paradigm has been significant and will likely continue to advance knowledge across diverse fields within psychology and neuroscience."
},
{
"pmid": "23586852",
"title": "Promoting question-asking in school-aged children with autism spectrum disorders: effectiveness of a robot intervention compared to a human-trainer intervention.",
"abstract": "OBJECTIVE\nThe purpose of the present study was to investigate the effectiveness of an applied behaviour analysis (ABA)-based intervention conducted by a robot compared to an ABA-based intervention conducted by a human trainer in promoting self-initiated questions in children with autism spectrum disorder (ASD).\n\n\nMETHODS\nData were collected in a combined crossover multiple baseline design across participants. Six children were randomly assigned to two experimental groups.\n\n\nRESULTS\nResults revealed that the number of self-initiated questions for both experimental groups increased between baseline and the first intervention and was maintained during follow-up. The high number of self-initiated questions during follow-up indicates that both groups maintained this skill.\n\n\nCONCLUSIONS\nThe interventions conducted by a robot and a human trainer were both effective in promoting self-initiated questions in children with ASD. No conclusion with regard to the differential effectiveness of both interventions could be drawn. Implications of the results and directions for future research are discussed."
}
] |
BMC Medical Informatics and Decision Making | 30935389 | PMC6444506 | 10.1186/s12911-019-0798-8 | QAnalysis: a question-answer driven analytic tool on knowledge graphs for leveraging electronic medical records for clinical research | Background: While doctors need to analyze a large amount of electronic medical record (EMR) data to conduct clinical research, the analysis process requires information technology (IT) skills, which are difficult for most doctors in China to acquire. Methods: In this paper, we build a novel tool, QAnalysis, in which doctors enter their analytic requirements in natural language and the tool returns charts and tables to the doctors. For a given question from a user, we first segment the sentence and then use a grammar parser to analyze its structure. After linking the segments to concepts and predicates in the knowledge graph, we convert the question into a set of triples connected with different kinds of operators. These triples are converted to queries in Cypher, the query language for Neo4j. Finally, the query is executed on Neo4j, and the results, shown as tables and charts, are returned to the user. Results: The tool supports the top 50 questions we gathered from two hospital departments with the Delphi method. We also gathered 161 questions with statistical requirements on EMR data from clinical research papers. Experimental results show that our tool can directly cover 78.20% of these statistical questions, with precision as high as 96.36%. Extension to further question types is easy to achieve with the knowledge-graph technology we have adopted. The recorded demo can be accessed from https://github.com/NLP-BigDataLab/QAnalysis-project. Conclusions: Our tool shows great flexibility in processing different kinds of statistical questions, which provides a convenient way for doctors to get statistical results directly in natural language. | Related work: Our work is closely related to two important research topics: question answering using a knowledge base (KB-based QA) and question answering on statistical linked data. The traditional approach to KB-based QA is based on semantic parsing [6–13]. It resolves the natural-language question into a logical representation expressing the semantics of the question, and the logical expressions are then translated into structured queries. Answers are found by executing the queries on the knowledge base. There are many challenges in this process. In particular, [12] uses an integer linear program to solve several disambiguation tasks jointly in this pipelined process, including phrase segmentation, phrase grounding, and the construction of SPARQL triple patterns. To represent the natural-language question, [13] proposes a semantic query graph, and the RDF QA problem is reduced to a subgraph-matching problem. In this way, they solve the disambiguation problem that traditionally has an exponential search space and results in slow responses. To find logical predicates for a given question on Freebase, [7] uses machine learning, and [6] deals with the situation in which a structurally simple question is mapped to a k-ary relation in the knowledge base. The overall performance of the semantic-parsing approach is not promising, since errors may occur in each step of converting the original question to logical form. Another way to do QA on a large knowledge base is to convert both questions and answers to similar representations and check similarities between the two representations.
Also, [14] transforms original questions into feature graphs and the relevant nodes in Freebase into topic graphs, then extracts and combines features from the two graphs to treat QA on Freebase as a binary classification task. With the prevalence of word-embedding learning, knowledge-representation tasks have become much easier. With vector embeddings of words and KB triples, the representations of questions and their corresponding answers can be made similar in the embedding space, which forms the basis of deep learning-based QA methods [15–18]. While earlier work [15, 17] simply encodes questions as a bag of words, more recent work utilizes more structured information. For example, [16] relies on multicolumn convolutional neural networks to represent questions from three different aspects: answer path, answer context, and answer type. Both QALD and TREC have dedicated tasks for the medical field. Recently, the TREC-CDS (clinical decision support) track has required systems to retrieve relevant scientific articles that contain the answers to medical questions. Goodwin et al. [19] propose a two-step approach: they discover the answers by utilizing a probabilistic medical knowledge graph constructed from many EMRs, and then select and rank scientific articles that contain the answer. TREC-CDS focuses more on retrieving texts than on question answering. Liu Fang et al. [20] used template-based methods to implement medical question answering in Chinese, defining 300 templates in the medical field. However, the paper does not address any challenges in this process (e.g., template conflicts or terminology segmentation). QALD 2016 has a special task on question answering over statistical linked data [21]. The application context of answering statistical questions is quite similar to ours. However, their queries are fixed on cube-based multidimensional RDF data. The cube uses notions from the OLAP model, such as dimensions, measures, and attributes, which are introduced in [22]. The CubeQA algorithm designed in [21] uses a template-based pipeline approach similar to [14] and achieves a global F1 score of 0.43 on the QALD6T3-test benchmark. Faceted search can be regarded as an alternative approach to QA: users interact with the system and select different facets to filter the dataset. Such systems include Ontogator [23], Slash-Facet [24], BrowseRDF [25], gFacet [26], VisiNav [27], and SemFacet [28]. Cubix [7] and Linked Data Query Wizard [29] also support the OLAP model. The cube-based approach requires the original graph-based dataset to be converted into multidimensional data, which not only demands considerable skill and effort but also limits the ways in which questions can be raised. The faceted-search approach, in general, does not support complex logical operations, such as negation or relations between events, which are important in the medical field. | [
"29297414"
] | [
{
"pmid": "29297414",
"title": "An automatic approach for constructing a knowledge base of symptoms in Chinese.",
"abstract": "BACKGROUND\nWhile a large number of well-known knowledge bases (KBs) in life science have been published as Linked Open Data, there are few KBs in Chinese. However, KBs in Chinese are necessary when we want to automatically process and analyze electronic medical records (EMRs) in Chinese. Of all, the symptom KB in Chinese is the most seriously in need, since symptoms are the starting point of clinical diagnosis.\n\n\nRESULTS\nWe publish a public KB of symptoms in Chinese, including symptoms, departments, diseases, medicines, and examinations as well as relations between symptoms and the above related entities. To the best of our knowledge, there is no such KB focusing on symptoms in Chinese, and the KB is an important supplement to existing medical resources. Our KB is constructed by fusing data automatically extracted from eight mainstream healthcare websites, three Chinese encyclopedia sites, and symptoms extracted from a larger number of EMRs as supplements.\n\n\nMETHODS\nFirstly, we design data schema manually by reference to the Unified Medical Language System (UMLS). Secondly, we extract entities from eight mainstream healthcare websites, which are fed as seeds to train a multi-class classifier and classify entities from encyclopedia sites and train a Conditional Random Field (CRF) model to extract symptoms from EMRs. Thirdly, we fuse data to solve the large-scale duplication between different data sources according to entity type alignment, entity mapping, and attribute mapping. Finally, we link our KB to UMLS to investigate similarities and differences between symptoms in Chinese and English.\n\n\nCONCLUSIONS\nAs a result, the KB has more than 26,000 distinct symptoms in Chinese including 3968 symptoms in traditional Chinese medicine and 1029 synonym pairs for symptoms. The KB also includes concepts such as diseases and medicines as well as relations between symptoms and the above related entities. We also link our KB to the Unified Medical Language System and analyze the differences between symptoms in the two KBs. We released the KB as Linked Open Data and a demo at https://datahub.io/dataset/symptoms-in-chinese ."
}
] |
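The QAnalysis row above describes converting a natural-language question into a set of triples and then into a Cypher query that is executed on Neo4j. The following is a minimal, hypothetical sketch of that last step; it is not the authors' implementation, and the node labels, relationship types, and property names are invented placeholders rather than the paper's actual schema.

```python
# Minimal sketch (not the authors' implementation): turning parsed question
# triples into a Cypher counting query, in the spirit of the QAnalysis
# pipeline described above. Labels such as Patient, HAS_DIAGNOSIS, and the
# "name" property are hypothetical placeholders.

def triples_to_cypher(triples, count_var="p"):
    """Build a Cypher counting query from (subject, predicate, object) triples."""
    match_clauses = []
    for i, (subj, pred, obj) in enumerate(triples):
        # Each triple becomes one MATCH pattern; the object value acts as a literal filter.
        match_clauses.append(
            f"MATCH ({count_var}:{subj})-[:{pred}]->(o{i} {{name: '{obj}'}})"
        )
    return "\n".join(match_clauses) + f"\nRETURN count(DISTINCT {count_var}) AS n"


# Example question: "How many patients diagnosed with diabetes were also prescribed metformin?"
query = triples_to_cypher([
    ("Patient", "HAS_DIAGNOSIS", "diabetes"),
    ("Patient", "TAKES_MEDICATION", "metformin"),
])
print(query)
```

Running the sketch prints two MATCH clauses joined by a shared patient variable followed by a RETURN count, which is the general shape of query such a pipeline could emit before execution on Neo4j.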
BMC Medical Informatics and Decision Making | 30943972 | PMC6448175 | 10.1186/s12911-019-0787-y | Entity recognition in Chinese clinical text using attention-based CNN-LSTM-CRF | Background: Clinical entity recognition, as a fundamental task of clinical text processing, has attracted a great deal of attention during the last decade. However, most studies focus on clinical text in English rather than other languages. Recently, a few researchers have begun to study entity recognition in Chinese clinical text. Methods: In this paper, a novel deep neural network, called attention-based CNN-LSTM-CRF, is proposed to recognize entities in Chinese clinical text. Attention-based CNN-LSTM-CRF extends LSTM-CRF by introducing a CNN (convolutional neural network) layer after the input layer to capture local context information of words of interest, and an attention layer before the CRF layer to select relevant words in the same sentence. Results: In order to evaluate the proposed method, we compare it with two other currently popular methods, CRF (conditional random field) and LSTM-CRF, on two benchmark datasets. One of the datasets is publicly available and contains only contiguous clinical entities, and the other is constructed by us and contains both contiguous and discontiguous clinical entities. Experimental results show that attention-based CNN-LSTM-CRF outperforms CRF and LSTM-CRF. Conclusions: The CNN and the attention mechanism are individually beneficial to an LSTM-CRF-based Chinese clinical entity recognition system, no matter whether discontiguous clinical entities are considered. The contribution of the attention mechanism is greater than that of the CNN. | Related work: Clinical entity representation is very important for recognition. As both contiguous and discontiguous entities exist in clinical text, the named entity representations used in the newswire domain cannot be adopted directly for clinical entities. In order to represent contiguous and discontiguous clinical entities in a unified schema, Tang et al. [6, 7] extended schemas such as "BIO" and "BIOES" by introducing new labels indicating whether a contiguous word fragment is shared by discontiguous clinical entities, yielding "BIOHD" and "BIOHD1234". Wu et al. [8] proposed a schema called "Multi-label" that gives each word multiple labels, each of which corresponds to the label of the token in one clinical entity. In the past several years, as a number of manually annotated corpora have become publicly available for clinical entity recognition through challenges such as the Center for Informatics for Integrating Biology & the Bedside (i2b2) [4, 9–11], the ShARe/CLEF eHealth Evaluation Lab (SHEL) [12, 13], and SemEval (Semantic Evaluation) [14–17], many machine learning methods, such as support vector machines (SVM), hidden Markov models (HMM), conditional random fields (CRF), structured support vector machines (SSVM), and deep neural networks, have been applied to clinical named entity recognition. Among these methods, CRF is the most frequently used; its performance relies on manually crafted features, whereas deep neural networks, especially LSTM-CRF, which avoid feature engineering, have recently been introduced for clinical entity recognition. Common features, such as N-grams and part-of-speech tags, and domain-specific features, such as section information and domain dictionaries, are usually adopted in CRF. For LSTM-CRF, there are a few variants, such as [18, 19], which extend the basic LSTM-CRF by introducing character-level word embeddings or attention mechanisms.
| [
"20819854",
"23564629",
"26225918",
"28699566",
"24347408"
] | [
{
"pmid": "20819854",
"title": "Extracting medication information from clinical text.",
"abstract": "The Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records focused on the identification of medications, their dosages, modes (routes) of administration, frequencies, durations, and reasons for administration in discharge summaries. This challenge is referred to as the medication challenge. For the medication challenge, i2b2 released detailed annotation guidelines along with a set of annotated discharge summaries. Twenty teams representing 23 organizations and nine countries participated in the medication challenge. The teams produced rule-based, machine learning, and hybrid systems targeted to the task. Although rule-based systems dominated the top 10, the best performing system was a hybrid. Of all medication-related fields, durations and reasons were the most difficult for all systems to detect. While medications themselves were identified with better than 0.75 F-measure by all of the top 10 systems, the best F-measure for durations and reasons were 0.525 and 0.459, respectively. State-of-the-art natural language processing systems go a long way toward extracting medication names, dosages, modes, and frequencies. However, they are limited in recognizing duration and reason fields and would benefit from future research."
},
{
"pmid": "23564629",
"title": "Evaluating temporal relations in clinical text: 2012 i2b2 Challenge.",
"abstract": "BACKGROUND\nThe Sixth Informatics for Integrating Biology and the Bedside (i2b2) Natural Language Processing Challenge for Clinical Records focused on the temporal relations in clinical narratives. The organizers provided the research community with a corpus of discharge summaries annotated with temporal information, to be used for the development and evaluation of temporal reasoning systems. 18 teams from around the world participated in the challenge. During the workshop, participating teams presented comprehensive reviews and analysis of their systems, and outlined future research directions suggested by the challenge contributions.\n\n\nMETHODS\nThe challenge evaluated systems on the information extraction tasks that targeted: (1) clinically significant events, including both clinical concepts such as problems, tests, treatments, and clinical departments, and events relevant to the patient's clinical timeline, such as admissions, transfers between departments, etc; (2) temporal expressions, referring to the dates, times, durations, or frequencies phrases in the clinical text. The values of the extracted temporal expressions had to be normalized to an ISO specification standard; and (3) temporal relations, between the clinical events and temporal expressions. Participants determined pairs of events and temporal expressions that exhibited a temporal relation, and identified the temporal relation between them.\n\n\nRESULTS\nFor event detection, statistical machine learning (ML) methods consistently showed superior performance. While ML and rule based methods seemed to detect temporal expressions equally well, the best systems overwhelmingly adopted a rule based approach for value normalization. For temporal relation classification, the systems using hybrid approaches that combined ML and heuristics based methods produced the best results."
},
{
"pmid": "26225918",
"title": "Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1.",
"abstract": "The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured four tracks. The first of these was the de-identification track focused on identifying protected health information (PHI) in longitudinal clinical narratives. The longitudinal nature of clinical narratives calls particular attention to details of information that, while benign on their own in separate records, can lead to identification of patients in combination in longitudinal records. Accordingly, the 2014 de-identification track addressed a broader set of entities and PHI than covered by the Health Insurance Portability and Accountability Act - the focus of the de-identification shared task that was organized in 2006. Ten teams tackled the 2014 de-identification task and submitted 22 system outputs for evaluation. Each team was evaluated on their best performing system output. Three of the 10 systems achieved F1 scores over .90, and seven of the top 10 scored over .75. The most successful systems combined conditional random fields and hand-written rules. Our findings indicate that automated systems can be very effective for this task, but that de-identification is not yet a solved problem."
},
{
"pmid": "28699566",
"title": "Entity recognition from clinical texts via recurrent neural network.",
"abstract": "BACKGROUND\nEntity recognition is one of the most primary steps for text analysis and has long attracted considerable attention from researchers. In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), widely exist in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a large number of traditional machine learning methods, such as support vector machine and conditional random field, have been deployed to recognize entities from clinical texts in the past few years. In recent years, recurrent neural network (RNN), one of deep learning methods that has shown great potential on many problems including named entity recognition, also has been gradually used for entity recognition from clinical texts.\n\n\nMETHODS\nIn this paper, we comprehensively investigate the performance of LSTM (long-short term memory), a representative variant of RNN, on clinical entity recognition and protected health information recognition. The LSTM model consists of three layers: input layer - generates representation of each word of a sentence; LSTM layer - outputs another word representation sequence that captures the context information of each word in this sentence; Inference layer - makes tagging decisions according to the output of LSTM layer, that is, outputting a label sequence.\n\n\nRESULTS\nExperiments conducted on corpora of the 2010, 2012 and 2014 i2b2 NLP challenges show that LSTM achieves highest micro-average F1-scores of 85.81% on the 2010 i2b2 medical concept extraction, 92.29% on the 2012 i2b2 clinical event detection, and 94.37% on the 2014 i2b2 de-identification, which is considerably competitive with other state-of-the-art systems.\n\n\nCONCLUSIONS\nLSTM that requires no hand-crafted feature has great potential on entity recognition from clinical texts. It outperforms traditional machine learning methods that suffer from fussy feature engineering. A possible future direction is how to integrate knowledge bases widely existing in the clinical domain into LSTM, which is a case of our future work. Moreover, how to use LSTM to recognize entities in specific formats is also another possible future direction."
},
{
"pmid": "24347408",
"title": "A comprehensive study of named entity recognition in Chinese clinical text.",
"abstract": "OBJECTIVE\nNamed entity recognition (NER) is one of the fundamental tasks in natural language processing. In the medical domain, there have been a number of studies on NER in English clinical notes; however, very limited NER research has been carried out on clinical notes written in Chinese. The goal of this study was to systematically investigate features and machine learning algorithms for NER in Chinese clinical text.\n\n\nMATERIALS AND METHODS\nWe randomly selected 400 admission notes and 400 discharge summaries from Peking Union Medical College Hospital in China. For each note, four types of entity-clinical problems, procedures, laboratory test, and medications-were annotated according to a predefined guideline. Two-thirds of the 400 notes were used to train the NER systems and one-third for testing. We investigated the effects of different types of feature including bag-of-characters, word segmentation, part-of-speech, and section information, and different machine learning algorithms including conditional random fields (CRF), support vector machines (SVM), maximum entropy (ME), and structural SVM (SSVM) on the Chinese clinical NER task. All classifiers were trained on the training dataset and evaluated on the test set, and micro-averaged precision, recall, and F-measure were reported.\n\n\nRESULTS\nOur evaluation on the independent test set showed that most types of feature were beneficial to Chinese NER systems, although the improvements were limited. The system achieved the highest performance by combining word segmentation and section information, indicating that these two types of feature complement each other. When the same types of optimized feature were used, CRF and SSVM outperformed SVM and ME. More specifically, SSVM achieved the highest performance of the four algorithms, with F-measures of 93.51% and 90.01% for admission notes and discharge summaries, respectively."
}
] |
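The related work in the row above discusses tagging schemas such as "BIO", "BIOES", and the extended "BIOHD" labels for representing clinical entities. As a small illustration of the basic idea only, the sketch below encodes contiguous entity spans with plain BIO tags; the tokens and span offsets are made-up examples, and the discontiguous-entity extensions described in the paper are not implemented here.

```python
# Illustrative sketch only: encoding contiguous entity spans with the plain
# "BIO" scheme mentioned above. The extended "BIOHD" labels for discontiguous
# entities build further tags on top of this idea; the example sentence and
# span offsets are invented for demonstration.

def spans_to_bio(tokens, spans):
    """spans: list of (start_index, end_index_exclusive, entity_type)."""
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        tags[start] = f"B-{etype}"          # first token of the entity
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"          # tokens inside the entity
    return tags


tokens = ["患者", "主诉", "胸闷", "气短", "三", "天"]
# One contiguous "symptom" entity covering tokens 2-3 ("胸闷 气短").
print(list(zip(tokens, spans_to_bio(tokens, [(2, 4, "SYMPTOM")]))))
```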
BMC Medical Informatics and Decision Making | 30943955 | PMC6448179 | 10.1186/s12911-019-0783-2 | Parsing clinical text using the state-of-the-art deep learning based parsers: a systematic comparison | Background: A shareable repository of clinical notes is critical for advancing natural language processing (NLP) research, and therefore a goal of many NLP researchers is to create a shareable repository of clinical notes that has breadth (from multiple institutions) as well as depth (as much individual data as possible). Methods: We aimed to assess the degree to which individuals would be willing to contribute their health data to such a repository. A compact e-survey probed willingness to share demographic and clinical data categories. Participants were faculty, staff, and students in two geographically diverse major medical centers (Utah and New York). Such a sample could be expected to respond like a typical potential participant from the general public who is given complete and fully informed consent about the pros and cons of participating in a research study. Results: 2140 respondents completed the surveys. 56% of respondents were "somewhat/definitely willing" to share clinical data with identifiers, while 89% were "somewhat (17%)/definitely willing (72%)" to share without identifiers. Results were consistent across gender, age, and education, but there were some differences by geographical region. Individuals were most reluctant (50–74%) to share mental health, substance abuse, and domestic violence data. Conclusions: We conclude that a substantial fraction of potential patient participants, once educated about risks and benefits, would be willing to donate de-identified clinical data to a shared research repository. A slight majority would even be willing to share absent de-identification, suggesting that perceptions about data misuse are not a major concern. Such a repository of clinical notes should be invaluable for clinical NLP research and advancement. | Related work: One potential solution to address the above challenges is the application of deep learning based, i.e., multi-layer neural network based, approaches. Recently, there have been increasing research efforts on deep learning based dependency parsing, especially using LSTM (long short-term memory) RNNs (recurrent neural networks) [17]. This line of work is based on two assumptions: first, low-dimensional embedding (distributional representation) features can alleviate the data sparsity problem; second, the LSTM structure has the potential to represent arbitrary feature combinations implicitly, reducing the need to explicitly implement an explosive set of feature combinations [16]. Current works attempt to tailor deep learning frameworks to dependency parsing from two aspects: (1) feature design: instead of the previous templates of sparse, binary features, dense core features (i.e., words, part-of-speech (POS) tags, and dependency labels) are encoded, concatenated, and fed into non-linear classifiers such as multi-layer perceptrons [16–20]; (2) novel neural network architectures for feature encoding: since the design of neural network architectures is coupled with the feature representation of parsers, stack-LSTMs [21] are used to describe the configurations (stack and buffer) of transition-based parsers, and hierarchical LSTMs [22, 23] are used to encode the hierarchy of parse trees.
Accordingly, the elements in the LSTMs are compositional representations of nodes in the parse trees. Le and Zuidema (2014) [22] and Zhu et al. (2015) [24] also employ rerankers; the inputs to the rerankers are encoded compositional representations capturing the structure around each node. Currently, clinical NLP systems are actively applied to narrative notes in EHRs to extract important information, facilitating various clinical and translational applications [5, 25]. Deep learning based methods have been applied to clinical NLP tasks such as concept recognition and relation extraction and have obtained better performance than traditional machine learning methods [26]. Although syntactic parsers play a critical role in NLP pipelines, existing dependency parsers that perform well on open-domain text, such as the Stanford Parser, are usually applied directly in these systems [27, 28]. Although some previous studies extended the traditional Stanford Parser with medical lexicons to tune it for clinical text [29], few efforts have been spent on investigating deep learning based dependency parsers for the clinical domain. In our previous work, we systematically evaluated three state-of-the-art constituency parsers from the open domain, including the Stanford parser, the Charniak parser, and the Berkeley parser, and found that re-training the parsers using treebanks annotated on clinical text improved their performance greatly [30]. Given the advantage of deep learning approaches for dependency parsing shown on general English text [16, 18, 19, 21–23], it is timely to explore the performance of existing deep learning based dependency parsers, to establish state-of-the-art performance and inform novel parsing approaches for clinical text. | [
"25488240",
"25155030",
"25954443",
"24680097",
"21083794",
"26958273",
"20709188",
"23567779",
"25661593",
"23907286",
"23355458",
"27219127"
] | [
{
"pmid": "25488240",
"title": "Unsupervised information extraction from italian clinical records.",
"abstract": "This paper discusses the application of an unsupervised text mining technique for the extraction of information from clinical records in Italian. The approach includes two steps. First of all, a metathesaurus is exploited together with natural language processing tools to extract the domain entities. Then, clustering is applied to explore relations between entity pairs. The results of a preliminary experiment, performed on the text extracted from 57 medical records containing more than 20,000 potential relations, show how the clustering should be based on the cosine similarity distance rather than the City Block or Hamming ones."
},
{
"pmid": "25155030",
"title": "University of California, Irvine-Pathology Extraction Pipeline: the pathology extraction pipeline for information extraction from pathology reports.",
"abstract": "We describe Pathology Extraction Pipeline (PEP)--a new Open Health Natural Language Processing pipeline that we have developed for information extraction from pathology reports, with the goal of populating the extracted data into a research data warehouse. Specifically, we have built upon Medical Knowledge Analysis Tool pipeline (MedKATp), which is an extraction framework focused on pathology reports. Our particular contributions include additional customization and development on MedKATp to extract data elements and relationships from cancer pathology reports in richer detail than at present, an abstraction layer that provides significantly easier configuration of MedKATp for extraction tasks, and a machine-learning-based approach that makes the extraction more resilient to deviations from the common reporting format in a pathology reports corpus. We present experimental results demonstrating the effectiveness of our pipeline for information extraction in a real-world task, demonstrating performance improvement due to our approach for increasing extractor resilience to format deviation, and finally demonstrating the scalability of the pipeline across pathology reports for different cancer types."
},
{
"pmid": "25954443",
"title": "Automated extraction of family history information from clinical notes.",
"abstract": "Despite increased functionality for obtaining family history in a structured format within electronic health record systems, clinical notes often still contain this information. We developed and evaluated an Unstructured Information Management Application (UIMA)-based natural language processing (NLP) module for automated extraction of family history information with functionality for identifying statements, observations (e.g., disease or procedure), relative or side of family with attributes (i.e., vital status, age of diagnosis, certainty, and negation), and predication (\"indicator phrases\"), the latter of which was used to establish relationships between observations and family member. The family history NLP system demonstrated F-scores of 66.9, 92.4, 82.9, 57.3, 97.7, and 61.9 for detection of family history statements, family member identification, observation identification, negation identification, vital status, and overall extraction of the predications between family members and observations, respectively. While the system performed well for detection of family history statements and predication constituents, further work is needed to improve extraction of certainty and temporal modifications."
},
{
"pmid": "24680097",
"title": "Statistical parsing of varieties of clinical Finnish.",
"abstract": "OBJECTIVES\nIn this paper, we study the development and domain-adaptation of statistical syntactic parsers for three different clinical domains in Finnish.\n\n\nMETHODS AND MATERIALS\nThe materials include text from daily nursing notes written by nurses in an intensive care unit, physicians' notes from cardiology patients' health records, and daily nursing notes from cardiology patients' health records. The parsing is performed with the statistical parser of Bohnet (http://code.google.com/p/mate-tools/, accessed: 22 November 2013).\n\n\nRESULTS\nA parser trained only on general language performs poorly in all clinical subdomains, the labelled attachment score (LAS) ranging from 59.4% to 71.4%, whereas domain data combined with general language gives better results, the LAS varying between 67.2% and 81.7%. However, even a small amount of clinical domain data quickly outperforms this and also clinical data from other domains is more beneficial (LAS 71.3-80.0%) than general language only. The best results (LAS 77.4-84.6%) are achieved by using as training data the combination of all the clinical treebanks.\n\n\nCONCLUSIONS\nIn order to develop a good syntactic parser for clinical language variants, a general language resource is not mandatory, while data from clinical fields is. However, in addition to the exact same clinical domain, also data from other clinical domains is useful."
},
{
"pmid": "21083794",
"title": "Natural language processing for the development of a clinical registry: a validation study in intraductal papillary mucinous neoplasms.",
"abstract": "BACKGROUND\nMedical natural language processing (NLP) systems have been developed to identify, extract and encode information within clinical narrative text. However, the role of NLP in clinical research and patient care remains limited. Pancreatic cysts are common. Some pancreatic cysts, such as intraductal papillary mucinous neoplasms (IPMNs), have malignant potential and require extended periods of surveillance. We seek to develop a novel NLP system that could be applied in our clinical network to develop a functional registry of IPMN patients.\n\n\nOBJECTIVES\nThis study aims to validate the accuracy of our novel NLP system in the identification of surgical patients with pathologically confirmed IPMN in comparison with our pre-existing manually created surgical database (standard reference).\n\n\nMETHODS\nThe Regenstrief EXtraction Tool (REX) was used to extract pancreatic cyst patient data from medical text files from Indiana University Health. The system was assessed periodically by direct sampling and review of medical records. Results were compared with the standard reference.\n\n\nRESULTS\nNatural language processing detected 5694 unique patients with pancreas cysts, in 215 of whom surgical pathology had confirmed IPMN. The NLP software identified all but seven patients present in the surgical database and identified an additional 37 IPMN patients not previously included in the surgical database. Using the standard reference, the sensitivity of the NLP program was 97.5% (95% confidence interval [CI] 94.8-98.9%) and its positive predictive value was 95.5% (95% CI 92.3-97.5%).\n\n\nCONCLUSIONS\nNatural language processing is a reliable and accurate method for identifying selected patient cohorts and may facilitate the identification and follow-up of patients with IPMN."
},
{
"pmid": "26958273",
"title": "A Study of Neural Word Embeddings for Named Entity Recognition in Clinical Text.",
"abstract": "Clinical Named Entity Recognition (NER) is a critical task for extracting important patient information from clinical text to support clinical and translational research. This study explored the neural word embeddings derived from a large unlabeled clinical corpus for clinical NER. We systematically compared two neural word embedding algorithms and three different strategies for deriving distributed word representations. Two neural word embeddings were derived from the unlabeled Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) II corpus (403,871 notes). The results from both 2010 i2b2 and 2014 Semantic Evaluation (SemEval) data showed that the binarized word embedding features outperformed other strategies for deriving distributed word representations. The binarized embedding features improved the F1-score of the Conditional Random Fields based clinical NER system by 2.3% on i2b2 data and 2.4% on SemEval data. The combined feature from the binarized embeddings and the Brown clusters improved the F1-score of the clinical NER system by 2.9% on i2b2 data and 2.7% on SemEval data. Our study also showed that the distributed word embedding features derived from a large unlabeled corpus can be better than the widely used Brown clusters. Further analysis found that the neural word embeddings captured a wide range of semantic relations, which could be discretized into distributed word representations to benefit the clinical NER system. The low-cost distributed feature representation can be adapted to any other clinical natural language processing research."
},
{
"pmid": "20709188",
"title": "Detecting hedge cues and their scope in biomedical text with conditional random fields.",
"abstract": "OBJECTIVE\nHedging is frequently used in both the biological literature and clinical notes to denote uncertainty or speculation. It is important for text-mining applications to detect hedge cues and their scope; otherwise, uncertain events are incorrectly identified as factual events. However, due to the complexity of language, identifying hedge cues and their scope in a sentence is not a trivial task. Our objective was to develop an algorithm that would automatically detect hedge cues and their scope in biomedical literature.\n\n\nMETHODOLOGY\nWe used conditional random fields (CRFs), a supervised machine-learning algorithm, to train models to detect hedge cue phrases and their scope in biomedical literature. The models were trained on the publicly available BioScope corpus. We evaluated the performance of the CRF models in identifying hedge cue phrases and their scope by calculating recall, precision and F1-score. We compared our models with three competitive baseline systems.\n\n\nRESULTS\nOur best CRF-based model performed statistically better than the baseline systems, achieving an F1-score of 88% and 86% in detecting hedge cue phrases and their scope in biological literature and an F1-score of 93% and 90% in detecting hedge cue phrases and their scope in clinical notes.\n\n\nCONCLUSIONS\nOur approach is robust, as it can identify hedge cues and their scope in both biological and clinical text. To benefit text-mining applications, our system is publicly available as a Java API and as an online application at http://hedgescope.askhermes.org. To our knowledge, this is the first publicly available system to detect hedge cues and their scope in biomedical literature."
},
{
"pmid": "23567779",
"title": "Improving case definition of Crohn's disease and ulcerative colitis in electronic medical records using natural language processing: a novel informatics approach.",
"abstract": "BACKGROUND\nPrevious studies identifying patients with inflammatory bowel disease using administrative codes have yielded inconsistent results. Our objective was to develop a robust electronic medical record-based model for classification of inflammatory bowel disease leveraging the combination of codified data and information from clinical text notes using natural language processing.\n\n\nMETHODS\nUsing the electronic medical records of 2 large academic centers, we created data marts for Crohn's disease (CD) and ulcerative colitis (UC) comprising patients with ≥1 International Classification of Diseases, 9th edition, code for each disease. We used codified (i.e., International Classification of Diseases, 9th edition codes, electronic prescriptions) and narrative data from clinical notes to develop our classification model. Model development and validation was performed in a training set of 600 randomly selected patients for each disease with medical record review as the gold standard. Logistic regression with the adaptive LASSO penalty was used to select informative variables.\n\n\nRESULTS\nWe confirmed 399 CD cases (67%) in the CD training set and 378 UC cases (63%) in the UC training set. For both, a combined model including narrative and codified data had better accuracy (area under the curve for CD 0.95; UC 0.94) than models using only disease International Classification of Diseases, 9th edition codes (area under the curve 0.89 for CD; 0.86 for UC). Addition of natural language processing narrative terms to our final model resulted in classification of 6% to 12% more subjects with the same accuracy.\n\n\nCONCLUSIONS\nInclusion of narrative concepts identified using natural language processing improves the accuracy of electronic medical records case definition for CD and UC while simultaneously identifying more subjects compared with models using codified data alone."
},
{
"pmid": "25661593",
"title": "Domain adaption of parsing for operative notes.",
"abstract": "BACKGROUND\nFull syntactic parsing of clinical text as a part of clinical natural language processing (NLP) is critical for a wide range of applications. Several robust syntactic parsers are publicly available to produce linguistic representations for sentences. However, these existing parsers are mostly trained on general English text and may require adaptation for optimal performance on clinical text. Our objective was to adapt an existing general English parser for the clinical text of operative reports via lexicon augmentation, statistics adjusting, and grammar rules modification based on operative reports.\n\n\nMETHOD\nThe Stanford unlexicalized probabilistic context-free grammar (PCFG) parser lexicon was expanded with SPECIALIST lexicon along with statistics collected from a limited set of operative notes tagged by two POS taggers (GENIA tagger and MedPost). The most frequently occurring verb entries of the SPECIALIST lexicon were adjusted based on manual review of verb usage in operative notes. Stanford parser grammar production rules were also modified based on linguistic features of operative reports. An analogous approach was then applied to the GENIA corpus to test the generalizability of this approach to biologic text.\n\n\nRESULTS\nThe new unlexicalized PCFG parser extended with the extra lexicon from SPECIALIST along with accurate statistics collected from an operative note corpus tagged with GENIA POS tagger improved the F-score by 2.26% from 87.64% to 89.90%. There was a progressive improvement with the addition of multiple approaches. Lexicon augmentation combined with statistics from the operative notes corpus provided the greatest improvement of parser performance. Application of this approach on the GENIA corpus increased the F-score by 3.81% with a simple new grammar and addition of the GENIA corpus lexicon.\n\n\nCONCLUSION\nUsing statistics collected from clinical text tagged with POS taggers along with proper modification of grammars and lexicons of an unlexicalized PCFG parser may improve parsing performance of existing parsers on specialized clinical text."
},
{
"pmid": "23907286",
"title": "Syntactic parsing of clinical text: guideline and corpus development with handling ill-formed sentences.",
"abstract": "OBJECTIVE\nTo develop, evaluate, and share: (1) syntactic parsing guidelines for clinical text, with a new approach to handling ill-formed sentences; and (2) a clinical Treebank annotated according to the guidelines. To document the process and findings for readers with similar interest.\n\n\nMETHODS\nUsing random samples from a shared natural language processing challenge dataset, we developed a handbook of domain-customized syntactic parsing guidelines based on iterative annotation and adjudication between two institutions. Special considerations were incorporated into the guidelines for handling ill-formed sentences, which are common in clinical text. Intra- and inter-annotator agreement rates were used to evaluate consistency in following the guidelines. Quantitative and qualitative properties of the annotated Treebank, as well as its use to retrain a statistical parser, were reported.\n\n\nRESULTS\nA supplement to the Penn Treebank II guidelines was developed for annotating clinical sentences. After three iterations of annotation and adjudication on 450 sentences, the annotators reached an F-measure agreement rate of 0.930 (while intra-annotator rate was 0.948) on a final independent set. A total of 1100 sentences from progress notes were annotated that demonstrated domain-specific linguistic features. A statistical parser retrained with combined general English (mainly news text) annotations and our annotations achieved an accuracy of 0.811 (higher than models trained purely with either general or clinical sentences alone). Both the guidelines and syntactic annotations are made available at https://sourceforge.net/projects/medicaltreebank.\n\n\nCONCLUSIONS\nWe developed guidelines for parsing clinical text and annotated a corpus accordingly. The high intra- and inter-annotator agreement rates showed decent consistency in following the guidelines. The corpus was shown to be useful in retraining a statistical parser that achieved moderate accuracy."
},
{
"pmid": "23355458",
"title": "Towards comprehensive syntactic and semantic annotations of the clinical narrative.",
"abstract": "OBJECTIVE\nTo create annotated clinical narratives with layers of syntactic and semantic labels to facilitate advances in clinical natural language processing (NLP). To develop NLP algorithms and open source components.\n\n\nMETHODS\nManual annotation of a clinical narrative corpus of 127 606 tokens following the Treebank schema for syntactic information, PropBank schema for predicate-argument structures, and the Unified Medical Language System (UMLS) schema for semantic information. NLP components were developed.\n\n\nRESULTS\nThe final corpus consists of 13 091 sentences containing 1772 distinct predicate lemmas. Of the 766 newly created PropBank frames, 74 are verbs. There are 28 539 named entity (NE) annotations spread over 15 UMLS semantic groups, one UMLS semantic type, and the Person semantic category. The most frequent annotations belong to the UMLS semantic groups of Procedures (15.71%), Disorders (14.74%), Concepts and Ideas (15.10%), Anatomy (12.80%), Chemicals and Drugs (7.49%), and the UMLS semantic type of Sign or Symptom (12.46%). Inter-annotator agreement results: Treebank (0.926), PropBank (0.891-0.931), NE (0.697-0.750). The part-of-speech tagger, constituency parser, dependency parser, and semantic role labeler are built from the corpus and released open source. A significant limitation uncovered by this project is the need for the NLP community to develop a widely agreed-upon schema for the annotation of clinical concepts and their relations.\n\n\nCONCLUSIONS\nThis project takes a foundational step towards bringing the field of clinical NLP up to par with NLP in the general domain. The corpus creation and NLP components provide a resource for research and application development that would have been previously impossible."
},
{
"pmid": "27219127",
"title": "MIMIC-III, a freely accessible critical care database.",
"abstract": "MIMIC-III ('Medical Information Mart for Intensive Care') is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework."
}
] |
BMC Medical Informatics and Decision Making | 30943973 | PMC6448182 | 10.1186/s12911-019-0782-3 | Identifying peer experts in online health forums | BackgroundOnline health forums have become increasingly popular over the past several years. They provide members with a platform to network with peers and share information, experiential advice, and support. Among the members of health forums, we define “peer experts” as a set of lay users who have gained expertise on the particular health topic through personal experience, and who demonstrate credibility in responding to questions from other members. This paper aims to motivate the need to identify peer experts in health forums and study their characteristics.MethodsWe analyze profiles and activity of members of a popular online health forum and characterize the interaction behavior of peer experts. We study the temporal patterns of comments posted by lay users and peer experts to uncover how peer expertise is developed. We further train a supervised classifier to identify peer experts based on their activity level, textual features, and temporal progression of posts.ResultA support vector machine classifier with radial basis function kernel was found to be the most suitable model among those studied. Features capturing the key semantic word classes and higher mean user activity were found to be most significant features.ConclusionWe define a new class of members of health forums called peer experts, and present preliminary, yet promising, approaches to distinguish peer experts from novice users. Identifying such peer expertise could potentially help improve the perceived reliability and trustworthiness of information in community health forums. | Related workThere have been recent works that demonstrate the growing usage of health forums. Hoffman-Goetz et al. [4] have analyzed the content posted in response to queries about Type II Diabetes on an online health forum. They found that responses and recommendations provided were in high accordance with the clinical best practice guidelines. They argue that there exist such knowledgeable users, who we call peer experts, who have high health literacy skills and are interested in sharing this information. The work by Tanis [5] claims that the surge in the usage of health-related forums can also be attributed to various social factors, one of them being its affordance of anonymity. Patients who might feel stigmatized by their health condition are more comfortable participating in online discussion “anonymously” and maintain connections with similar patients.The domain of community generated websites has also been explored widely. Adamic et al. [6] explored Yahoo! Answers, a popular question-answering website, and categorized different types of interactions and user behavior on the website. They proposed using user attributes and answer characteristics to predict whether a given answer to a question will be selected as the best answer or not.There has been some relevant work done in the area of finding experts in online communities. Liu et al. [7] used information retrieval techniques by treating the query posted by a novice user as the query and the member profiles as the candidate documents. The retrieval techniques they used were language models such as the query likelihood models, relevance models, and cluster-based language models. The work of Pal and Konstan [8] introduced a new concept called ‘question selection bias’. 
They claimed that experts tend to answer only those questions that don’t already have good answers and showed that it was possible to find peer experts based on this selection bias. They focus on two datasets from other domains, namely the TurboTax community (personal finance) and StackOverflow (computer programming). Riahi et al. [9] tackle the same problem of finding experts on StackOverflow using topic modeling. They find that topic models perform much better than other retrieval techniques to find a set of best experts to answer a question. They were also able to show that Segmented Topic Model performed better than the Latent Dirichlet Model for this task. Two other papers by Jurczyk and Agichtein [10] and Zhang et al. [11] focus on using network connections and link analyses to predict experts in an online community. They use algorithms such as PageRank [12] and HITS [13] to find members with high influence in a network. The approaches followed by these authors have not been studied over health forums.In this paper, we aim to study peer expertise in online health forums and how to identify them. Our approach differs from the previous ones in that we focus on text features to identify peer experts and understand how they evolve over time using temporal pattern analysis. This notion of using temporal patterns as features for machine learning has already been explored in other research works. Deushl et al. [14] have proposed using it to classify tremors based on tremor time series analysis. They found the waveform analysis to be highly informative to distinguish physiologic tremors in normal people from patients with Parkinson’s disease. Another work by Toshniwal and Joshi [15] used time weighted moments to compute the similarity between time series data. The main intuition behind using moments is that the centroid values within a given time interval is an effective way to represent the data trend that might be dense otherwise. We take a similar approach to summarize a user activity behavior and use central moments as features for the peer expert classification. As will be described in the next section, we compute the central moments of time series data that represents the activity level of each user and use it as features for our classification task. | [
"19412842",
"18958781"
] | [
{
"pmid": "19412842",
"title": "Clinical guidelines about diabetes and the accuracy of peer information in an unmoderated online health forum for retired persons.",
"abstract": "The objective of this study was to determine whether peer recommendations made in response to user queries about non-insulin dependent type II diabetes in an online health forum for retired persons were in agreement with diabetes clinical practice guidelines. A content analysis was conducted on type II diabetes conversations occurring in an online health forum for Canadian retired persons from 1 January to 31 December 2006. Recommendations responding to posted questions about diabetes were compared with published Canadian diabetes clinical practice guidelines. Seven diabetes-related questions generated 17 responses and 35 recommendations. Comparison of recommendations with evidence-based sources indicated that 91% (32/35) were in agreement with the best practice clinical guidelines for type II diabetes. Discussion themes included diabetic signs and symptoms, glycemic control, neuropathy, retinopathy, diet and physical activity recommendations and interactions of prednisone with glucose control. Concerns about the accuracy of online peer recommendations about type II diabetes care and management have not supported these results. This forum presents information sharing among a group of knowledgeable older adults with high interactive health literacy skills. Future research is needed to determine whether deviations from 'accurate' online information are truly harmful or represent lay expert adaptations to self-care routines."
},
{
"pmid": "18958781",
"title": "Health-related on-line forums: what's the big attraction?",
"abstract": "This study investigates what motivates people to make use of health-related online forums, and how people feel that using these forums helps them in coping with their situation. Results are based on an online questionnaire (N = 189) among users of a variety of health forums. Findings show an overall positive effect of using forums on the degree to which people are better able to cope with the situation they are facing, both socially and with their condition. This especially holds for people who find forums a convenient tool for inclusion or gathering information. A negative effect on coping, however, is found for people who primarily use forums for discussion. The study also shows that features that often are mentioned in literature on computer-mediated communication (i.e., the anonymity it affords, its text-based character, and the possibility it offers for network expansion) are recognized but appreciated differently by users. Users who feel stigmatized especially appreciate the anonymity of online forums, while people who are restricted in their mobility appreciate the possibilities for network expansion."
}
] |
BMC Medical Informatics and Decision Making | 30943960 | PMC6448186 | 10.1186/s12911-019-0781-4 | Clinical text classification with rule-based features and knowledge-guided convolutional neural networks | BackgroundClinical text classification is a fundamental problem in medical natural language processing. Existing studies have conventionally focused on rule- or knowledge source-based feature engineering, but only a limited number of studies have exploited the effective representation learning capability of deep learning methods.MethodsIn this study, we propose a new approach which combines rule-based features and knowledge-guided deep learning models for effective disease classification. Critical steps of our method include recognizing trigger phrases, predicting classes with very few examples using trigger phrases, and training a convolutional neural network (CNN) with word embeddings and Unified Medical Language System (UMLS) entity embeddings.ResultsWe evaluated our method on the 2008 Integrating Informatics with Biology and the Bedside (i2b2) obesity challenge. The results demonstrate that our method outperforms the state-of-the-art methods.ConclusionWe showed that the CNN model is powerful for learning effective hidden features, and that CUI embeddings are helpful for building clinical text representations. This shows that integrating domain knowledge into CNN models is promising. | Related workClinical text classificationA systematic literature review of clinical coding and classification systems has been conducted by Stanfill et al. [11]. Some challenge tasks in biomedical text mining also focus on clinical text classification, e.g., Informatics for Integrating Biology and the Bedside (i2b2) hosted text classification tasks on determining smoking status [10], and predicting obesity and its co-morbidities [12]. In this work, we focus on the obesity challenge [12]. Among the top ten systems of the obesity challenge, most are rule-based systems, and the top four systems are purely rule-based.Many approaches for clinical text classification rely on biomedical knowledge sources [3]. A common approach is to first map narrative text to concepts from knowledge sources like the Unified Medical Language System (UMLS), then train classifiers on document representations that include UMLS Concept Unique Identifiers (CUIs) as features [6]. More knowledge-intensive approaches enrich the feature set with related concepts [4] or apply semantic kernels that project documents that contain related concepts closer together in a feature space [7]. Similarly, Yao et al. [13] proposed to improve distributed document representations with medical concept descriptions for traditional Chinese medicine clinical records classification.On the other hand, some clinical text classification studies use various types of information instead of knowledge sources. For instance, effective classifiers have been designed based on regular expression discovery [14] and semi-supervised learning [15, 16]. Active learning [17] has been applied in the clinical domain, which leverages unlabeled corpora to improve the classification of clinical text.Although these methods use rules, knowledge sources, or different types of information in many ways, they seldom use effective feature learning methods, while deep learning methods have recently been widely used for text classification and have shown powerful feature learning capabilities.Deep learning for clinical data miningRecently, deep learning methods have been successfully applied to clinical data mining. 
Two representative deep models are convolutional neural networks (CNN) [18, 19] and recurrent neural networks (RNN) [20, 21]. They achieve state of the art performances on a number of clinical data mining tasks. Beaulieu-Jones et al. [22] designed a neural network approach to construct phenotypes for classifying patient disease status. The model performed better than decision trees, random forests and Support Vector Machines (SVM). They also showed to successfully learn the structure of high-dimensional EHR data for phenotype stratification. Gehrmann et al. [23] compared CNN to the traditional rule-based entity extraction systems using the cTAKES and Logistic Regression (LR) with n-gram features. They tested ten different phenotyping tasks on discharge summaries. CNN outperformed other phenotyping algorithms on the prediction of the ten phenotypes, and they concluded that deep learning-based NLP methods improved the patient phenotyping performance compared to other methods. Luo et al. applied both CNN, RNN, and Graph Convolutional Networks (GCN) to classify the semantic relations between medical concepts in discharge summaries from the i2b2-VA challenge dataset [24] and showed that CNN, RNN and GCN with only word embedding features can obtain similar or better performances compared to state-of-the-art systems by challenge participants with heavy feature engineering [25–27]. Wu et al. [28] applied CNN using pre-trained embeddings on clinical text for named entity recognization. They showed that their models outperformed the conditional random fields (CRF) baseline. Geraci et al. [29] applied deep learning models to identify youth depression in unstructured text notes. They obtained a sensitivity of 93.5% and a specificity of 68%. Jagannatha et al. [30, 31] experimented with RNN, long short-term memory (LSTM), gated recurrent units (GRU), bidirectional LSTM, combinations of LSTM with CRF, to extract clinical concepts from texts. They demonstrated that all RNN variants outperformed the CRF baseline. Lipton et al. [32] evaluated LSTM in phenotype prediction using multivariate time series clinical measurements. They showed that their model outperformed multi-layer perceptron (MLP) and LR. They also concluded that combining MLP and LSTM leads to the best performance. Che et al. [33] also applied deep neural networks to model time series in ICU data. They introduced a Laplacian regularization process on the sigmoid layer based on medical knowledge bases and other structured knowledge. In addition, they designed an incremental training procedure to iteratively add neurons to the hidden layer. They then used causal inference to analyze and interpret hidden layer representations. They showed that their method improved the performance of phenotype identification, the model also converges faster and has better interpretation.Although deep learning techniques have been well studied in clinical data mining, most of these works do not focus on long clinical text classification (e.g., an entire clinical note) or utilize knowledge sources, while we propose a novel knowledge-guided deep learning method for clinical text classification. | [
"19683066",
"12668687",
"19390101",
"23077130",
"22580178",
"17947624",
"20962126",
"19390096",
"24578357",
"23845911",
"22707743",
"27744022",
"29447188",
"21685143",
"28694119",
"26262126",
"28739578",
"27219127",
"20442139",
"29191207"
] | [
{
"pmid": "19683066",
"title": "What can natural language processing do for clinical decision support?",
"abstract": "Computerized clinical decision support (CDS) aims to aid decision making of health care providers and the public by providing easily accessible health-related information at the point and time it is needed. natural language processing (NLP) is instrumental in using free-text information to drive CDS, representing clinical knowledge and CDS interventions in standardized formats, and leveraging clinical narrative. The early innovative NLP research of clinical narrative was followed by a period of stable research conducted at the major clinical centers and a shift of mainstream interest to biomedical NLP. This review primarily focuses on the recently renewed interest in development of fundamental NLP methods and advances in the NLP systems for CDS. The current solutions to challenges posed by distinct sublanguages, intended user groups, and support goals are discussed."
},
{
"pmid": "12668687",
"title": "The role of domain knowledge in automating medical text report classification.",
"abstract": "OBJECTIVE\nTo analyze the effect of expert knowledge on the inductive learning process in creating classifiers for medical text reports.\n\n\nDESIGN\nThe authors converted medical text reports to a structured form through natural language processing. They then inductively created classifiers for medical text reports using varying degrees and types of expert knowledge and different inductive learning algorithms. The authors measured performance of the different classifiers as well as the costs to induce classifiers and acquire expert knowledge.\n\n\nMEASUREMENTS\nThe measurements used were classifier performance, training-set size efficiency, and classifier creation cost.\n\n\nRESULTS\nExpert knowledge was shown to be the most significant factor affecting inductive learning performance, outweighing differences in learning algorithms. The use of expert knowledge can affect comparisons between learning algorithms. This expert knowledge may be obtained and represented separately as knowledge about the clinical task or about the data representation used. The benefit of the expert knowledge is more than that of inductive learning itself, with less cost to obtain.\n\n\nCONCLUSION\nFor medical text report classification, expert knowledge acquisition is more significant to performance and more cost-effective to obtain than knowledge discovery. Building classifiers should therefore focus more on acquiring knowledge from experts than trying to learn this knowledge inductively."
},
{
"pmid": "19390101",
"title": "Semantic classification of diseases in discharge summaries using a context-aware rule-based classifier.",
"abstract": "OBJECTIVE Automated and disease-specific classification of textual clinical discharge summaries is of great importance in human life science, as it helps physicians to make medical studies by providing statistically relevant data for analysis. This can be further facilitated if, at the labeling of discharge summaries, semantic labels are also extracted from text, such as whether a given disease is present, absent, questionable in a patient, or is unmentioned in the document. The authors present a classification technique that successfully solves the semantic classification task. DESIGN The authors introduce a context-aware rule-based semantic classification technique for use on clinical discharge summaries. The classification is performed in subsequent steps. First, some misleading parts are removed from the text; then the text is partitioned into positive, negative, and uncertain context segments, then a sequence of binary classifiers is applied to assign the appropriate semantic labels. Measurement For evaluation the authors used the documents of the i2b2 Obesity Challenge and adopted its evaluation measures: F(1)-macro and F(1)-micro for measurements. RESULTS On the two subtasks of the Obesity Challenge (textual and intuitive classification) the system performed very well, and achieved a F(1)-macro = 0.80 for the textual and F(1)-macro = 0.67 for the intuitive tasks, and obtained second place at the textual and first place at the intuitive subtasks of the challenge. CONCLUSIONS The authors show in the paper that a simple rule-based classifier can tackle the semantic classification task more successfully than machine learning techniques, if the training data are limited and some semantic labels are very sparse."
},
{
"pmid": "23077130",
"title": "Knowledge-based biomedical word sense disambiguation: an evaluation and application to clinical document classification.",
"abstract": "BACKGROUND\nWord sense disambiguation (WSD) methods automatically assign an unambiguous concept to an ambiguous term based on context, and are important to many text-processing tasks. In this study we developed and evaluated a knowledge-based WSD method that uses semantic similarity measures derived from the Unified Medical Language System (UMLS) and evaluated the contribution of WSD to clinical text classification.\n\n\nMETHODS\nWe evaluated our system on biomedical WSD datasets and determined the contribution of our WSD system to clinical document classification on the 2007 Computational Medicine Challenge corpus.\n\n\nRESULTS\nOur system compared favorably with other knowledge-based methods. Machine learning classifiers trained on disambiguated concepts significantly outperformed those trained using all concepts.\n\n\nCONCLUSIONS\nWe developed a WSD system that achieves high disambiguation accuracy on standard biomedical WSD datasets and showed that our WSD system improves clinical document classification.\n\n\nDATA SHARING\nWe integrated our WSD system with MetaMap and the clinical Text Analysis and Knowledge Extraction System, two popular biomedical natural language processing systems. All codes required to reproduce our results and all tools developed as part of this study are released as open source, available under http://code.google.com/p/ytex."
},
{
"pmid": "22580178",
"title": "Ontology-guided feature engineering for clinical text classification.",
"abstract": "In this study we present novel feature engineering techniques that leverage the biomedical domain knowledge encoded in the Unified Medical Language System (UMLS) to improve machine-learning based clinical text classification. Critical steps in clinical text classification include identification of features and passages relevant to the classification task, and representation of clinical text to enable discrimination between documents of different classes. We developed novel information-theoretic techniques that utilize the taxonomical structure of the Unified Medical Language System (UMLS) to improve feature ranking, and we developed a semantic similarity measure that projects clinical text into a feature space that improves classification. We evaluated these methods on the 2008 Integrating Informatics with Biology and the Bedside (I2B2) obesity challenge. The methods we developed improve upon the results of this challenge's top machine-learning based system, and may improve the performance of other machine-learning based clinical text classification systems. We have released all tools developed as part of this study as open source, available at http://code.google.com/p/ytex."
},
{
"pmid": "17947624",
"title": "Identifying patient smoking status from medical discharge records.",
"abstract": "The authors organized a Natural Language Processing (NLP) challenge on automatically determining the smoking status of patients from information found in their discharge records. This challenge was issued as a part of the i2b2 (Informatics for Integrating Biology to the Bedside) project, to survey, facilitate, and examine studies in medical language understanding for clinical narratives. This article describes the smoking challenge, details the data and the annotation process, explains the evaluation metrics, discusses the characteristics of the systems developed for the challenge, presents an analysis of the results of received system runs, draws conclusions about the state of the art, and identifies directions for future research. A total of 11 teams participated in the smoking challenge. Each team submitted up to three system runs, providing a total of 23 submissions. The submitted system runs were evaluated with microaveraged and macroaveraged precision, recall, and F-measure. The systems submitted to the smoking challenge represented a variety of machine learning and rule-based algorithms. Despite the differences in their approaches to smoking status identification, many of these systems provided good results. There were 12 system runs with microaveraged F-measures above 0.84. Analysis of the results highlighted the fact that discharge summaries express smoking status using a limited number of textual features (e.g., \"smok\", \"tobac\", \"cigar\", Social History, etc.). Many of the effective smoking status identifiers benefit from these features."
},
{
"pmid": "20962126",
"title": "A systematic literature review of automated clinical coding and classification systems.",
"abstract": "Clinical coding and classification processes transform natural language descriptions in clinical text into data that can subsequently be used for clinical care, research, and other purposes. This systematic literature review examined studies that evaluated all types of automated coding and classification systems to determine the performance of such systems. Studies indexed in Medline or other relevant databases prior to March 2009 were considered. The 113 studies included in this review show that automated tools exist for a variety of coding and classification purposes, focus on various healthcare specialties, and handle a wide variety of clinical document types. Automated coding and classification systems themselves are not generalizable, nor are the results of the studies evaluating them. Published research shows these systems hold promise, but these data must be considered in context, with performance relative to the complexity of the task and the desired outcome."
},
{
"pmid": "19390096",
"title": "Recognizing obesity and comorbidities in sparse data.",
"abstract": "In order to survey, facilitate, and evaluate studies of medical language processing on clinical narratives, i2b2 (Informatics for Integrating Biology to the Bedside) organized its second challenge and workshop. This challenge focused on automatically extracting information on obesity and fifteen of its most common comorbidities from patient discharge summaries. For each patient, obesity and any of the comorbidities could be Present, Absent, or Questionable (i.e., possible) in the patient, or Unmentioned in the discharge summary of the patient. i2b2 provided data for, and invited the development of, automated systems that can classify obesity and its comorbidities into these four classes based on individual discharge summaries. This article refers to obesity and comorbidities as diseases. It refers to the categories Present, Absent, Questionable, and Unmentioned as classes. The task of classifying obesity and its comorbidities is called the Obesity Challenge. The data released by i2b2 was annotated for textual judgments reflecting the explicitly reported information on diseases, and intuitive judgments reflecting medical professionals' reading of the information presented in discharge summaries. There were very few examples of some disease classes in the data. The Obesity Challenge paid particular attention to the performance of systems on these less well-represented classes. A total of 30 teams participated in the Obesity Challenge. Each team was allowed to submit two sets of up to three system runs for evaluation, resulting in a total of 136 submissions. The submissions represented a combination of rule-based and machine learning approaches. Evaluation of system runs shows that the best predictions of textual judgments come from systems that filter the potentially noisy portions of the narratives, project dictionaries of disease names onto the remaining text, apply negation extraction, and process the text through rules. Information on disease-related concepts, such as symptoms and medications, and general medical knowledge help systems infer intuitive judgments on the diseases."
},
{
"pmid": "24578357",
"title": "Learning regular expressions for clinical text classification.",
"abstract": "OBJECTIVES\nNatural language processing (NLP) applications typically use regular expressions that have been developed manually by human experts. Our goal is to automate both the creation and utilization of regular expressions in text classification.\n\n\nMETHODS\nWe designed a novel regular expression discovery (RED) algorithm and implemented two text classifiers based on RED. The RED+ALIGN classifier combines RED with an alignment algorithm, and RED+SVM combines RED with a support vector machine (SVM) classifier. Two clinical datasets were used for testing and evaluation: the SMOKE dataset, containing 1091 text snippets describing smoking status; and the PAIN dataset, containing 702 snippets describing pain status. We performed 10-fold cross-validation to calculate accuracy, precision, recall, and F-measure metrics. In the evaluation, an SVM classifier was trained as the control.\n\n\nRESULTS\nThe two RED classifiers achieved 80.9-83.0% in overall accuracy on the two datasets, which is 1.3-3% higher than SVM's accuracy (p<0.001). Similarly, small but consistent improvements have been observed in precision, recall, and F-measure when RED classifiers are compared with SVM alone. More significantly, RED+ALIGN correctly classified many instances that were misclassified by the SVM classifier (8.1-10.3% of the total instances and 43.8-53.0% of SVM's misclassifications).\n\n\nCONCLUSIONS\nMachine-generated regular expressions can be effectively used in clinical text classification. The regular expression-based classifier can be combined with other classifiers, like SVM, to improve classification performance."
},
{
"pmid": "23845911",
"title": "Semi-supervised clinical text classification with Laplacian SVMs: an application to cancer case management.",
"abstract": "OBJECTIVE\nTo compare linear and Laplacian SVMs on a clinical text classification task; to evaluate the effect of unlabeled training data on Laplacian SVM performance.\n\n\nBACKGROUND\nThe development of machine-learning based clinical text classifiers requires the creation of labeled training data, obtained via manual review by clinicians. Due to the effort and expense involved in labeling data, training data sets in the clinical domain are of limited size. In contrast, electronic medical record (EMR) systems contain hundreds of thousands of unlabeled notes that are not used by supervised machine learning approaches. Semi-supervised learning algorithms use both labeled and unlabeled data to train classifiers, and can outperform their supervised counterparts.\n\n\nMETHODS\nWe trained support vector machines (SVMs) and Laplacian SVMs on a training reference standard of 820 abdominal CT, MRI, and ultrasound reports labeled for the presence of potentially malignant liver lesions that require follow up (positive class prevalence 77%). The Laplacian SVM used 19,845 randomly sampled unlabeled notes in addition to the training reference standard. We evaluated SVMs and Laplacian SVMs on a test set of 520 labeled reports.\n\n\nRESULTS\nThe Laplacian SVM trained on labeled and unlabeled radiology reports significantly outperformed supervised SVMs (Macro-F1 0.773 vs. 0.741, Sensitivity 0.943 vs. 0.911, Positive Predictive value 0.877 vs. 0.883). Performance improved with the number of labeled and unlabeled notes used to train the Laplacian SVM (pearson's ρ=0.529 for correlation between number of unlabeled notes and macro-F1 score). These results suggest that practical semi-supervised methods such as the Laplacian SVM can leverage the large, unlabeled corpora that reside within EMRs to improve clinical text classification."
},
{
"pmid": "22707743",
"title": "Active learning for clinical text classification: is it better than random sampling?",
"abstract": "OBJECTIVE\nThis study explores active learning algorithms as a way to reduce the requirements for large training sets in medical text classification tasks.\n\n\nDESIGN\nThree existing active learning algorithms (distance-based (DIST), diversity-based (DIV), and a combination of both (CMB)) were used to classify text from five datasets. The performance of these algorithms was compared to that of passive learning on the five datasets. We then conducted a novel investigation of the interaction between dataset characteristics and the performance results.\n\n\nMEASUREMENTS\nClassification accuracy and area under receiver operating characteristics (ROC) curves for each algorithm at different sample sizes were generated. The performance of active learning algorithms was compared with that of passive learning using a weighted mean of paired differences. To determine why the performance varies on different datasets, we measured the diversity and uncertainty of each dataset using relative entropy and correlated the results with the performance differences.\n\n\nRESULTS\nThe DIST and CMB algorithms performed better than passive learning. With a statistical significance level set at 0.05, DIST outperformed passive learning in all five datasets, while CMB was found to be better than passive learning in four datasets. We found strong correlations between the dataset diversity and the DIV performance, as well as the dataset uncertainty and the performance of the DIST algorithm.\n\n\nCONCLUSION\nFor medical text classification, appropriate active learning algorithms can yield performance comparable to that of passive learning with considerably smaller training sets. In particular, our results suggest that DIV performs better on data with higher diversity and DIST on data with lower uncertainty."
},
{
"pmid": "27744022",
"title": "Semi-supervised learning of the electronic health record for phenotype stratification.",
"abstract": "Patient interactions with health care providers result in entries to electronic health records (EHRs). EHRs were built for clinical and billing purposes but contain many data points about an individual. Mining these records provides opportunities to extract electronic phenotypes, which can be paired with genetic data to identify genes underlying common human diseases. This task remains challenging: high quality phenotyping is costly and requires physician review; many fields in the records are sparsely filled; and our definitions of diseases are continuing to improve over time. Here we develop and evaluate a semi-supervised learning method for EHR phenotype extraction using denoising autoencoders for phenotype stratification. By combining denoising autoencoders with random forests we find classification improvements across multiple simulation models and improved survival prediction in ALS clinical trial data. This is particularly evident in cases where only a small number of patients have high quality phenotypes, a common scenario in EHR-based research. Denoising autoencoders perform dimensionality reduction enabling visualization and clustering for the discovery of new subtypes of disease. This method represents a promising approach to clarify disease subtypes and improve genotype-phenotype association studies that leverage EHRs."
},
{
"pmid": "29447188",
"title": "Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives.",
"abstract": "In secondary analysis of electronic health records, a crucial task consists in correctly identifying the patient cohort under investigation. In many cases, the most valuable and relevant information for an accurate classification of medical conditions exist only in clinical narratives. Therefore, it is necessary to use natural language processing (NLP) techniques to extract and evaluate these narratives. The most commonly used approach to this problem relies on extracting a number of clinician-defined medical concepts from text and using machine learning techniques to identify whether a particular patient has a certain condition. However, recent advances in deep learning and NLP enable models to learn a rich representation of (medical) language. Convolutional neural networks (CNN) for text classification can augment the existing techniques by leveraging the representation of language to learn which phrases in a text are relevant for a given medical condition. In this work, we compare concept extraction based methods with CNNs and other commonly used models in NLP in ten phenotyping tasks using 1,610 discharge summaries from the MIMIC-III database. We show that CNNs outperform concept extraction based methods in almost all of the tasks, with an improvement in F1-score of up to 26 and up to 7 percentage points in area under the ROC curve (AUC). We additionally assess the interpretability of both approaches by presenting and evaluating methods that calculate and extract the most salient phrases for a prediction. The results indicate that CNNs are a valid alternative to existing approaches in patient phenotyping and cohort identification, and should be further investigated. Moreover, the deep learning approach presented in this paper can be used to assist clinicians during chart review or support the extraction of billing codes from text by identifying and highlighting relevant phrases for various medical conditions."
},
{
"pmid": "21685143",
"title": "2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text.",
"abstract": "The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate."
},
{
"pmid": "28694119",
"title": "Recurrent neural networks for classifying relations in clinical notes.",
"abstract": "We proposed the first models based on recurrent neural networks (more specifically Long Short-Term Memory - LSTM) for classifying relations from clinical notes. We tested our models on the i2b2/VA relation classification challenge dataset. We showed that our segment LSTM model, with only word embedding feature and no manual feature engineering, achieved a micro-averaged f-measure of 0.661 for classifying medical problem-treatment relations, 0.800 for medical problem-test relations, and 0.683 for medical problem-medical problem relations. These results are comparable to those of the state-of-the-art systems on the i2b2/VA relation classification challenge. We compared the segment LSTM model with the sentence LSTM model, and demonstrated the benefits of exploring the difference between concept text and context text, and between different contextual parts in the sentence. We also evaluated the impact of word embedding on the performance of LSTM models and showed that medical domain word embedding help improve the relation classification. These results support the use of LSTM models for classifying relations between medical concepts, as they show comparable performance to previously published systems while requiring no manual feature engineering."
},
{
"pmid": "26262126",
"title": "Named Entity Recognition in Chinese Clinical Text Using Deep Neural Network.",
"abstract": "Rapid growth in electronic health records (EHRs) use has led to an unprecedented expansion of available clinical data in electronic formats. However, much of the important healthcare information is locked in the narrative documents. Therefore Natural Language Processing (NLP) technologies, e.g., Named Entity Recognition that identifies boundaries and types of entities, has been extensively studied to unlock important clinical information in free text. In this study, we investigated a novel deep learning method to recognize clinical entities in Chinese clinical documents using the minimal feature engineering approach. We developed a deep neural network (DNN) to generate word embeddings from a large unlabeled corpus through unsupervised learning and another DNN for the NER task. The experiment results showed that the DNN with word embeddings trained from the large unlabeled corpus outperformed the state-of-the-art CRF's model in the minimal feature engineering setting, achieving the highest F1-score of 0.9280. Further analysis showed that word embeddings derived through unsupervised learning from large unlabeled corpus remarkably improved the DNN with randomized embedding, denoting the usefulness of unsupervised feature learning."
},
{
"pmid": "28739578",
"title": "Applying deep neural networks to unstructured text notes in electronic medical records for phenotyping youth depression.",
"abstract": "BACKGROUND\nWe report a study of machine learning applied to the phenotyping of psychiatric diagnosis for research recruitment in youth depression, conducted with 861 labelled electronic medical records (EMRs) documents. A model was built that could accurately identify individuals who were suitable candidates for a study on youth depression.\n\n\nOBJECTIVE\nOur objective was a model to identify individuals who meet inclusion criteria as well as unsuitable patients who would require exclusion.\n\n\nMETHODS\nOur methods included applying a system that coded the EMR documents by removing personally identifying information, using two psychiatrists who labelled a set of EMR documents (from which the 861 came), using a brute force search and training a deep neural network for this task.\n\n\nFINDINGS\nAccording to a cross-validation evaluation, we describe a model that had a specificity of 97% and a sensitivity of 45% and a second model with a specificity of 53% and a sensitivity of 89%. We combined these two models into a third one (sensitivity 93.5%; specificity 68%; positive predictive value (precision) 77%) to generate a list of most suitable candidates in support of research recruitment.\n\n\nCONCLUSION\nOur efforts are meant to demonstrate the potential for this type of approach for patient recruitment purposes but it should be noted that a larger sample size is required to build a truly reliable recommendation system.\n\n\nCLINICAL IMPLICATIONS\nFuture efforts will employ alternate neural network algorithms available and other machine learning methods."
},
{
"pmid": "27219127",
"title": "MIMIC-III, a freely accessible critical care database.",
"abstract": "MIMIC-III ('Medical Information Mart for Intensive Care') is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework."
},
{
"pmid": "20442139",
"title": "An overview of MetaMap: historical perspective and recent advances.",
"abstract": "MetaMap is a widely available program providing access to the concepts in the unified medical language system (UMLS) Metathesaurus from biomedical text. This study reports on MetaMap's evolution over more than a decade, concentrating on those features arising out of the research needs of the biomedical informatics community both within and outside of the National Library of Medicine. Such features include the detection of author-defined acronyms/abbreviations, the ability to browse the Metathesaurus for concepts even tenuously related to input text, the detection of negation in situations in which the polarity of predications is important, word sense disambiguation (WSD), and various technical and algorithmic features. Near-term plans for MetaMap development include the incorporation of chemical name recognition and enhanced WSD."
},
{
"pmid": "29191207",
"title": "Medical subdomain classification of clinical notes using a machine learning-based natural language processing approach.",
"abstract": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. Portable classifiers may also be used across datasets from different institutions."
}
] |
Frontiers in Neurorobotics | 30983987 | PMC6448581 | 10.3389/fnbot.2019.00009 | Body Randomization Reduces the Sim-to-Real Gap for Compliant Quadruped Locomotion | Designing controllers for compliant, underactuated robots is challenging and usually requires a learning procedure. Learning robotic control in simulated environments can speed up the process whilst lowering risk of physical damage. Since perfect simulations are unfeasible, several techniques are used to improve transfer to the real world. Here, we investigate the impact of randomizing body parameters during learning of CPG controllers in simulation. The controllers are evaluated on our physical quadruped robot. We find that body randomization in simulation increases chances of finding gaits that function well on the real robot. | 1.1. Related WorkThe transfer of knowledge obtained in one domain to a new domain is important to speed up learning. Knowledge transfer can be applied across tasks, where knowledge from a learned task is utilized to speed up learning a new task by the same model (Hamer et al., 2013; Um et al., 2014). For instance, transfer of a quadruped gait learned in a specific environment, speeds up learning in other environments (Degrave et al., 2015). Knowledge transfer can also be applied across models, for instance if knowledge obtained by a first robot is utilized by a second robot (Gupta et al., 2017) or if a model is trained in simulation and then applied to a physical robot (Peng et al., 2018). However, the transfer of knowledge from simulation to reality has proven challenging for locomotion controllers due to discrepancies between simulation and reality, the so-called simulation-reality gap (Lipson and Pollack, 2000). This gap can easily cause a controller that is optimized in simulation to fail in the real world. Different methods have been developed to decrease the gap, they can generally be divided into two categories: (i) improving simulation accuracy and (ii) improving controller robustness.System identification improves simulation accuracy by tuning the simulation parameters to match the behavior of the physical system. In the embodiment theory framework (Füchslin et al., 2013), the relation between environment, body and controller is described from a dynamical view point, where each entity can be modeled as a non-linear filter. Improving the simulator accuracy is then reduced to matching the transfer function of these filters. Urbain et al. (2018) provides an automated and parametrized calibration method that improves simulation accuracy by treating both the physical robot and its parametrized model as black box dynamical systems. It optimizes the similarity between the transfer functions by matching their sensor response to a given actuation input.Similarly, simulation accuracy can be improved with machine learning techniques. For instance, in computer vision tasks (e.g., Taigman et al., 2016; Bousmalis et al., 2017) and visually guided robotic grasping tasks (Bousmalis et al., 2018), synthetic data has been augmented with generative adversarial networks (GANs). The augmentation improves the realism of the synthetic data and hence results in better models.Another approach for minimizing the simulation-reality gap is by increasing robustness of the learned controllers. This can be achieved by perturbing the simulated robot during learning or by adding noise to the simulated environment (domain randomization, Jakobi, 1998; Tobin et al., 2017). 
The assumption is that if the model is trained on a sufficiently broad range of simulated environments, the real world will seem like just another variation to the model. Similarly, dynamics randomization is achieved by randomizing physical properties. Tan et al. (2018) found that dynamics randomization decreased performance but increased stability of a non-compliant quadruped robot. In Mordatch et al. (2015), optimization on ensembles of models instead of only the nominal model enables functional gaits on a small humanoid. In Peng et al. (2018), dynamics randomization was necessary for sim-to-real transfer of a robotic arm controller. | [
"28179882",
"23186344",
"25022259",
"10984047",
"18006736"
] | [
{
"pmid": "28179882",
"title": "Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform.",
"abstract": "Combined efforts in the fields of neuroscience, computer science, and biology allowed to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models that, at the current stage, cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that allows to easily establish a communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition to that, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 \"Neurorobotics\" of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking embedding a retina model on the iCub humanoid robot. These use-cases allow to assess the applicability of the Neurorobotics Platform for robotic tasks as well as in neuroscientific experiments."
},
{
"pmid": "23186344",
"title": "Morphological computation and morphological control: steps toward a formal theory and applications.",
"abstract": "Morphological computation can be loosely defined as the exploitation of the shape, material properties, and physical dynamics of a physical system to improve the efficiency of a computation. Morphological control is the application of morphological computing to a control task. In its theoretical part, this article sharpens and extends these definitions by suggesting new formalized definitions and identifying areas in which the definitions we propose are still inadequate. We go on to describe three ongoing studies, in which we are applying morphological control to problems in medicine and in chemistry. The first involves an inflatable support system for patients with impaired movement, and is based on macroscopic physics and concepts already tested in robotics. The two other case studies (self-assembly of chemical microreactors; models of induced cell repair in radio-oncology) describe processes and devices on the micrometer scale, in which the emergent dynamics of the underlying physical system (e.g., phase transitions) are dominated by stochastic processes such as diffusion."
},
{
"pmid": "25022259",
"title": "Soft Robotics: New Perspectives for Robot Bodyware and Control.",
"abstract": "The remarkable advances of robotics in the last 50 years, which represent an incredible wealth of knowledge, are based on the fundamental assumption that robots are chains of rigid links. The use of soft materials in robotics, driven not only by new scientific paradigms (biomimetics, morphological computation, and others), but also by many applications (biomedical, service, rescue robots, and many more), is going to overcome these basic assumptions and makes the well-known theories and techniques poorly applicable, opening new perspectives for robot design and control. The current examples of soft robots represent a variety of solutions for actuation and control. Though very first steps, they have the potential for a radical technological change. Soft robotics is not just a new direction of technological development, but a novel approach to robotics, unhinging its fundamentals, with the potential to produce a new generation of robots, in the support of humans in our natural environments."
},
{
"pmid": "10984047",
"title": "Automatic design and manufacture of robotic lifeforms.",
"abstract": "Biological life is in control of its own means of reproduction, which generally involves complex, autocatalysing chemical reactions. But this autonomy of design and manufacture has not yet been realized artificially. Robots are still laboriously designed and constructed by teams of human engineers, usually at considerable expense. Few robots are available because these costs must be absorbed through mass production, which is justified only for toys, weapons and industrial systems such as automatic teller machines. Here we report the results of a combined computational and experimental approach in which simple electromechanical systems are evolved through simulations from basic building blocks (bars, actuators and artificial neurons); the 'fittest' machines (defined by their locomotive ability) are then fabricated robotically using rapid manufacturing technology. We thus achieve autonomy of design and construction using evolution in a 'limited universe' physical simulation coupled to automatic fabrication."
},
{
"pmid": "18006736",
"title": "Self-organization, embodiment, and biologically inspired robotics.",
"abstract": "Robotics researchers increasingly agree that ideas from biology and self-organization can strongly benefit the design of autonomous robots. Biological organisms have evolved to perform and survive in a world characterized by rapid changes, high uncertainty, indefinite richness, and limited availability of information. Industrial robots, in contrast, operate in highly controlled environments with no or very little uncertainty. Although many challenges remain, concepts from biologically inspired (bio-inspired) robotics will eventually enable researchers to engineer machines for the real world that possess at least some of the desirable properties of biological organisms, such as adaptivity, robustness, versatility, and agility."
}
] |
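For orientation, the Neurorobotics Platform entry above mentions a Braitenberg task on a mobile robot as one of its example use cases. The short Python sketch below illustrates only the generic idea of such a cross-coupled sensor-to-motor controller; the gain and speed values are arbitrary assumptions for illustration, and the code does not use or represent the platform's actual API.

```python
# Generic, self-contained sketch of a Braitenberg-style sensor-to-motor
# coupling, in the spirit of the mobile-robot use case mentioned in the
# Neurorobotics Platform abstract above. Gain and speed values are arbitrary
# assumptions; this does not use or represent the platform's actual API.

def braitenberg_step(left_sensor: float, right_sensor: float,
                     base_speed: float = 0.3, gain: float = 1.0):
    """Cross-coupled wiring: each wheel is driven by the opposite sensor,
    so the robot turns toward the stronger stimulus."""
    left_wheel = base_speed + gain * right_sensor
    right_wheel = base_speed + gain * left_sensor
    return left_wheel, right_wheel

if __name__ == "__main__":
    # Stimulus slightly to the robot's left -> the left sensor reads more.
    commands = braitenberg_step(left_sensor=0.8, right_sensor=0.2)
    print(commands)  # approximately (0.5, 1.1): right wheel faster, robot turns left
```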
eNeuro | 30963103 | PMC6451157 | 10.1523/ENEURO.0111-18.2019 | Perceived Target Range Shapes Human Sound-Localization Behavior | Abstract: The auditory system relies on binaural differences and spectral pinna cues to localize sounds in azimuth and elevation. However, the acoustic input can be unreliable, due to uncertainty about the environment, and neural noise. A possible strategy to reduce sound-location uncertainty is to integrate the sensory observations with sensorimotor information from previous experience, to infer where sounds are more likely to occur. We investigated whether and how human sound localization performance is affected by the spatial distribution of target sounds, and changes thereof. We tested three different open-loop paradigms, in which we varied the spatial range of sounds in different ways. For the narrowest ranges, target-response gains were highly idiosyncratic and deviated from an optimal gain predicted by error-minimization; in the horizontal plane the deviation typically consisted of a response overshoot. Moreover, participants adjusted their behavior by rapidly adapting their gain to the target range, both in elevation and in azimuth, yielding behavior closer to optimal for larger target ranges. Notably, gain changes occurred without any exogenous feedback about performance. We discuss how the findings can be explained by a sub-optimal model in which the motor-control system reduces its response error across trials to within an acceptable range, rather than strictly minimizing the error. | Related work: Many studies have demonstrated response adaptation to changes in the environment. Most studies used explicit (visual) feedback to influence response behavior. For example, manipulation of the perceived errors of eye-hand control through noisy visual feedback showed that the brain derives the underlying error distribution across trials through Bayesian inference (Körding and Wolpert, 2004). The Bayesian formalism also extends to audiovisual integration (Körding et al., 2007), movement planning (Hudson et al., 2007), ventriloquism (Alais and Burr, 2004), visual speed perception (Stocker and Simoncelli, 2006), and auditory spatial learning (Carlile et al., 2014). Furthermore, it may explain learning of the underlying distribution of target locations in a visual estimation task (Berniker et al., 2010). Also, sound-localization behavior adapts to chronic and acute changes in the acoustics-to-spatial mapping (Hofman et al., 1998; Shinn-Cunningham et al., 1998; Zwiers et al., 2003; King et al., 2011; Otte et al., 2013; Carlile et al., 2014). Minimizing the MAE, as described in the Introduction (Eq. 2), is mathematically equivalent to the optimal Bayesian decision rule on Gaussian distributions that selects the maximum of the posterior distribution (the maximum-a-posteriori, or MAP, strategy; Körding and Wolpert, 2004; Ege et al., 2018): (11) POST(ε*|ε) ∝ L(ε|ε*)·P(ε*) and R = argmax[POST(ε*|ε)], with L(ε|ε*) the likelihood function of the noisy sensory input for a target presented at ε*, with uncertainty, σT; P(ε*) is the prior distribution, or expectation, of potential target locations, and R is the selected MAP response. For a fixed prior, the MAP strategy provides an optimal trade-off between mean absolute localization error (accuracy) and response variability (precision). For Gaussian distributions, the MAP rule predicts that the stimulus-response gain depends on the sensory noise, σT, and the prior width, σP, by: (12) g ≡ R/ε* = σP²/(σP² + σT²)
Recently, we (Ege et al., 2018) found that for a fixed target range, the human sound localization system might indeed rely on such a Bayesian decision rule, as the results indicated that the localization gain g depended on the sensory noise, σT, in a systematic fashion (a brief numerical sketch of this gain rule is given below, after this row's reference entries). In our current experiments, the prior width may have varied with the expected target range: σP = σP(ΔT). The idiosyncratic differences in initial gains, observed in this study, could thus be partially due to idiosyncratic differences in initial priors. The present study challenged the auditory system to update its prior only on the basis of endogenous signals. Several studies have shown that the auditory system rapidly adapts to the statistics of environmental acoustics, without overt exogenous feedback. For example, neurons in the inferior colliculus (IC) of anesthetized guinea pigs shift their sound-level tuning curves according to the mean and variance of sound levels (Dean et al., 2005). Interestingly, these rapid adjustments already manifest at the auditory nerve (Wen et al., 2009). Likewise, ILD tuning of IC neurons in anesthetized ferrets adjusts to the ILD statistics of dichotic sounds, while these same stimuli induce perceptual shifts in ILD sensitivity in humans (Dahmen et al., 2010). Finally, it has been shown that head-orienting reaction times to audiovisual stimuli depend systematically on trial history, and on the probability of perceived audiovisual spatial alignment, without providing exogenous feedback (Van Wanrooij et al., 2010). | [
"14761661",
"20844766",
"20053901",
"25234999",
"20620878",
"16286934",
"30401920",
"9187290",
"10368392",
"9604358",
"10196533",
"17898140",
"21414354",
"14724638",
"17895984",
"12398464",
"9874370",
"2018391",
"23319012",
"24711409",
"14121113",
"9637047",
"18354398",
"16547513",
"23463919",
"22131411",
"15930391",
"20584180",
"15496665",
"19889991",
"2926000",
"12524547"
] | [
{
"pmid": "14761661",
"title": "The ventriloquist effect results from near-optimal bimodal integration.",
"abstract": "Ventriloquism is the ancient art of making one's voice appear to come from elsewhere, an art exploited by the Greek and Roman oracles, and possibly earlier. We regularly experience the effect when watching television and movies, where the voices seem to emanate from the actors' lips rather than from the actual sound source. Originally, ventriloquism was explained by performers projecting sound to their puppets by special techniques, but more recently it is assumed that ventriloquism results from vision \"capturing\" sound. In this study we investigate spatial localization of audio-visual stimuli. When visual localization is good, vision does indeed dominate and capture sound. However, for severely blurred visual stimuli (that are poorly localized), the reverse holds: sound captures vision. For less blurred stimuli, neither sense dominates and perception follows the mean position. Precision of bimodal localization is usually better than either the visual or the auditory unimodal presentation. All the results are well explained not by one sense capturing the other, but by a simple model of optimal combination of visual and auditory information."
},
{
"pmid": "20844766",
"title": "Learning priors for Bayesian computations in the nervous system.",
"abstract": "Our nervous system continuously combines new information from our senses with information it has acquired throughout life. Numerous studies have found that human subjects manage this by integrating their observations with their previous experience (priors) in a way that is close to the statistical optimum. However, little is known about the way the nervous system acquires or learns priors. Here we present results from experiments where the underlying distribution of target locations in an estimation task was switched, manipulating the prior subjects should use. Our experimental design allowed us to measure a subject's evolving prior while they learned. We confirm that through extensive practice subjects learn the correct prior for the task. We found that subjects can rapidly learn the mean of a new prior while the variance is learned more slowly and with a variable learning rate. In addition, we found that a Bayesian inference model could predict the time course of the observed learning while offering an intuitive explanation for the findings. The evidence suggests the nervous system continuously updates its priors to enable efficient behavior."
},
{
"pmid": "20053901",
"title": "Pinna cues determine orienting response modes to synchronous sounds in elevation.",
"abstract": "To program a goal-directed orienting response toward a sound source embedded in an acoustic scene, the audiomotor system should detect and select the target against a background. Here, we focus on whether the system can segregate synchronous sounds in the midsagittal plane (elevation), a task requiring the auditory system to dissociate the pinna-induced spectral localization cues. Human listeners made rapid head-orienting responses toward either a single sound source (broadband buzzer or Gaussian noise) or toward two simultaneously presented sounds (buzzer and noise) at a wide variety of locations in the midsagittal plane. In the latter case, listeners had to orient to the buzzer (target) and ignore the noise (nontarget). In the single-sound condition, localization was accurate. However, in the double-sound condition, response endpoints depended on relative sound level and spatial disparity. The loudest sound dominated the responses, regardless of whether it was the target or the nontarget. When the sounds had about equal intensities and their spatial disparity was sufficiently small, endpoint distributions were well described by weighted averaging. However, when spatial disparities exceeded approximately 45 degrees, response endpoint distributions became bimodal. Similar response behavior has been reported for visuomotor experiments, for which averaging and bimodal endpoint distributions are thought to arise from neural interactions within retinotopically organized visuomotor maps. We show, however, that the auditory-evoked responses can be well explained by the idiosyncratic acoustics of the pinnae. Hence basic principles of target representation and selection for audition and vision appear to differ profoundly."
},
{
"pmid": "25234999",
"title": "Accommodating to new ears: the effects of sensory and sensory-motor feedback.",
"abstract": "Changing the shape of the outer ear using small in-ear molds degrades sound localization performance consistent with the distortion of monaural spectral cues to location. It has been shown recently that adult listeners re-calibrate to these new spectral cues for locations both inside and outside the visual field. This raises the question as to the teacher signal for this remarkable functional plasticity. Furthermore, large individual differences in the extent and rate of accommodation suggests a number of factors may be driving this process. A training paradigm exploiting multi-modal and sensory-motor feedback during accommodation was examined to determine whether it might accelerate this process. So as to standardize the modification of the spectral cues, molds filling 40% of the volume of each outer ear were custom made for each subject. Daily training sessions for about an hour, involving repetitive auditory stimuli and exploratory behavior by the subject, significantly improved the extent of accommodation measured by both front-back confusions and polar angle localization errors, with some improvement in the rate of accommodation demonstrated by front-back confusion errors. This work has implications for both the process by which a coherent representation of auditory space is maintained and for accommodative training for hearing aid wearers."
},
{
"pmid": "20620878",
"title": "Adaptation to stimulus statistics in the perception and neural representation of auditory space.",
"abstract": "Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is still known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position."
},
{
"pmid": "16286934",
"title": "Neural population coding of sound level adapts to stimulus statistics.",
"abstract": "Mammals can hear sounds extending over a vast range of sound levels with remarkable accuracy. How auditory neurons code sound level over such a range is unclear; firing rates of individual neurons increase with sound level over only a very limited portion of the full range of hearing. We show that neurons in the auditory midbrain of the guinea pig adjust their responses to the mean, variance and more complex statistics of sound level distributions. We demonstrate that these adjustments improve the accuracy of the neural population code close to the region of most commonly occurring sound levels. This extends the range of sound levels that can be accurately encoded, fine-tuning hearing to the local acoustic environment."
},
{
"pmid": "30401920",
"title": "Accuracy-Precision Trade-off in Human Sound Localisation.",
"abstract": "Sensory representations are typically endowed with intrinsic noise, leading to variability and inaccuracies in perceptual responses. The Bayesian framework accounts for an optimal strategy to deal with sensory-motor uncertainty, by combining the noisy sensory input with prior information regarding the distribution of stimulus properties. The maximum-a-posteriori (MAP) estimate selects the perceptual response from the peak (mode) of the resulting posterior distribution that ensure optimal accuracy-precision trade-off when the underlying distributions are Gaussians (minimal mean-squared error, with minimum response variability). We tested this model on human eye- movement responses toward broadband sounds, masked by various levels of background noise, and for head movements to sounds with poor spectral content. We report that the response gain (accuracy) and variability (precision) of the elevation response components changed systematically with the signal-to-noise ratio of the target sound: gains were high for high SNRs and decreased for low SNRs. In contrast, the azimuth response components maintained high gains for all conditions, as predicted by maximum-likelihood estimation. However, we found that the elevation data did not follow the MAP prediction. Instead, results were better described by an alternative decision strategy, in which the response results from taking a random sample from the posterior in each trial. We discuss two potential implementations of a simple posterior sampling scheme in the auditory system that account for the results and argue that although the observed response strategies for azimuth and elevation are sub-optimal with respect to their variability, it allows the auditory system to actively explore the environment in the absence of adequate sensory evidence."
},
{
"pmid": "9187290",
"title": "Human eye-head coordination in two dimensions under different sensorimotor conditions.",
"abstract": "The coordination between eye and head movements during a rapid orienting gaze shift has been investigated mainly when subjects made horizontal movements towards visual targets with the eyes starting at the centre of the orbit. Under these conditions, it is difficult to identify the signals driving the two motor systems, because their initial motor errors are identical and equal to the coordinates of the sensory stimulus (i.e. retinal error). In this paper, we investigate head-free gaze saccades of human subjects towards visual as well as auditory stimuli presented in the two-dimensional frontal plane, under both aligned and unaligned initial fixation conditions. Although the basic patterns for eye and head movements were qualitatively comparable for both stimulus modalities, systematic differences were also obtained under aligned conditions, suggesting a task-dependent movement strategy. Auditory-evoked gaze shifts were endowed with smaller eye-head latency differences, consistently larger head movements and smaller concomitant ocular saccades than visually triggered movements. By testing gaze control for eccentric initial eye positions, we found that the head displacement vector was best related to the initial head motor-error (target-re-head), rather than to the initial gaze error (target-re-eye), regardless of target modality. These findings suggest an independent control of the eye and head motor systems by commands in different frames of reference. However, we also observed a systematic influence of the oculomotor response on the properties of the evoked head movements, indicating a subtle coupling between the two systems. The results are discussed in view of current eye-head coordination models."
},
{
"pmid": "10368392",
"title": "Influence of head position on the spatial representation of acoustic targets.",
"abstract": "Sound localization in humans relies on binaural differences (azimuth cues) and monaural spectral shape information (elevation cues) and is therefore the result of a neural computational process. Despite the fact that these acoustic cues are referenced with respect to the head, accurate eye movements can be generated to sounds in complete darkness. This ability necessitates the use of eye position information. So far, however, sound localization has been investigated mainly with a fixed head position, usually straight ahead. Yet the auditory system may rely on head motor information to maintain a stable and spatially accurate representation of acoustic targets in the presence of head movements. We therefore studied the influence of changes in eye-head position on auditory-guided orienting behavior of human subjects. In the first experiment, we used a visual-auditory double-step paradigm. Subjects made saccadic gaze shifts in total darkness toward brief broadband sounds presented before an intervening eye-head movement that was evoked by an earlier visual target. The data show that the preceding displacements of both eye and head are fully accounted for, resulting in spatially accurate responses. This suggests that auditory target information may be transformed into a spatial (or body-centered) frame of reference. To further investigate this possibility, we exploited the unique property of the auditory system that sound elevation is extracted independently from pinna-related spectral cues. In the absence of such cues, accurate elevation detection is not possible, even when head movements are made. This is shown in a second experiment where pure tones were localized at a fixed elevation that depended on the tone frequency rather than on the actual target elevation, both under head-fixed and -free conditions. To test, in a third experiment, whether the perceived elevation of tones relies on a head- or space-fixed target representation, eye movements were elicited toward pure tones while subjects kept their head in different vertical positions. It appeared that each tone was localized at a fixed, frequency-dependent elevation in space that shifted to a limited extent with changes in head elevation. Hence information about head position is used under static conditions too. Interestingly, the influence of head position also depended on the tone frequency. Thus tone-evoked ocular saccades typically showed a partial compensation for changes in static head position, whereas noise-evoked eye-head saccades fully compensated for intervening changes in eye-head position. We propose that the auditory localization system combines the acoustic input with head-position information to encode targets in a spatial (or body-centered) frame of reference. In this way, accurate orienting responses may be programmed despite intervening eye-head movements. A conceptual model, based on the tonotopic organization of the auditory system, is presented that may account for our findings."
},
{
"pmid": "9604358",
"title": "Spectro-temporal factors in two-dimensional human sound localization.",
"abstract": "This paper describes the effect of spectro-temporal factors on human sound localization performance in two dimensions (2D). Subjects responded with saccadic eye movements to acoustic stimuli presented in the frontal hemisphere. Both the horizontal (azimuth) and vertical (elevation) stimulus location were varied randomly. Three types of stimuli were used, having different spectro-temporal patterns, but identically shaped broadband averaged power spectra: noise bursts, frequency-modulated tones, and trains of short noise bursts. In all subjects, the elevation components of the saccadic responses varied systematically with the different temporal parameters, whereas the azimuth response components remained equally accurate for all stimulus conditions. The data show that the auditory system does not calculate a final elevation estimate from a long-term (order 100 ms) integration of sensory input. Instead, the results suggest that the auditory system may apply a \"multiple-look\" strategy in which the final estimate is calculated from consecutive short-term (order few ms) estimates. These findings are incorporated in a conceptual model that accounts for the data and proposes a scheme for the temporal processing of spectral sensory information into a dynamic estimate of sound elevation."
},
{
"pmid": "10196533",
"title": "Relearning sound localization with new ears.",
"abstract": "Because the inner ear is not organized spatially, sound localization relies on the neural processing of implicit acoustic cues. To determine a sound's position, the brain must learn and calibrate these cues, using accurate spatial feedback from other sensorimotor systems. Experimental evidence for such a system has been demonstrated in barn owls, but not in humans. Here, we demonstrate the existence of ongoing spatial calibration in the adult human auditory system. The spectral elevation cues of human subjects were disrupted by modifying their outer ears (pinnae) with molds. Although localization of sound elevation was dramatically degraded immediately after the modification, accurate performance was steadily reacquired. Interestingly, learning the new spectral cues did not interfere with the neural representation of the original cues, as subjects could localize sounds with both normal and modified pinnae."
},
{
"pmid": "17898140",
"title": "Movement planning with probabilistic target information.",
"abstract": "We examined how subjects plan speeded reaching movements when the precise target of the movement is not known at movement onset. Before each reach, subjects were given only a probability distribution on possible target positions. Only after completing part of the movement did the actual target appear. In separate experiments we varied the location of the mode and the scale of the prior distribution for possible targets. In both cases we found that subjects made use of prior probability information when planning reaches. We also devised two tests (Composite Benefit and Row Dominance tests) to determine whether subjects' performance met necessary conditions for optimality (defined as maximizing expected gain). We could not reject the hypothesis of optimality in the experiment where we varied the mode of the prior, but departures from optimality were found in response to changes in the scale of prior distributions."
},
{
"pmid": "21414354",
"title": "Neural circuits underlying adaptation and learning in the perception of auditory space.",
"abstract": "Sound localization mechanisms are particularly plastic during development, when the monaural and binaural acoustic cues that form the basis for spatial hearing change in value as the body grows. Recent studies have shown that the mature brain retains a surprising capacity to relearn to localize sound in the presence of substantially altered auditory spatial cues. In addition to the long-lasting changes that result from learning, behavioral and electrophysiological studies have demonstrated that auditory spatial processing can undergo rapid adjustments in response to changes in the statistics of recent stimulation, which help to maintain sensitivity over the range where most stimulus values occur. Through a combination of recording studies and methods for selectively manipulating the activity of specific neuronal populations, progress is now being made in identifying the cortical and subcortical circuits in the brain that are responsible for the dynamic coding of auditory spatial information."
},
{
"pmid": "14724638",
"title": "Bayesian integration in sensorimotor learning.",
"abstract": "When we learn a new motor skill, such as playing an approaching tennis ball, both our sensors and the task possess variability. Our sensors provide imperfect information about the ball's velocity, so we can only estimate it. Combining information from multiple modalities can reduce the error in this estimate. On a longer time scale, not all velocities are a priori equally probable, and over the course of a match there will be a probability distribution of velocities. According to bayesian theory, an optimal estimate results from combining information about the distribution of velocities-the prior-with evidence from sensory feedback. As uncertainty increases, when playing in fog or at dusk, the system should increasingly rely on prior knowledge. To use a bayesian strategy, the brain would need to represent the prior distribution and the level of uncertainty in the sensory feedback. Here we control the statistical variations of a new sensorimotor task and manipulate the uncertainty of the sensory feedback. We show that subjects internally represent both the statistical distribution of the task and their sensory uncertainty, combining them in a manner consistent with a performance-optimizing bayesian process. The central nervous system therefore employs probabilistic models during sensorimotor learning."
},
{
"pmid": "17895984",
"title": "Causal inference in multisensory perception.",
"abstract": "Perceptual events derive their significance to an animal from their meaning about the world, that is from the information they carry about their causes. The brain should thus be able to efficiently infer the causes underlying our sensory events. Here we use multisensory cue combination to study causal inference in perception. We formulate an ideal-observer model that infers whether two sensory cues originate from the same location and that also estimates their location(s). This model accurately predicts the nonlinear integration of cues by human subjects in two auditory-visual localization tasks. The results show that indeed humans can efficiently infer the causal structure as well as the location of causes. By combining insights from the study of causal inference with the ideal-observer approach to sensory cue combination, we show that the capacity to infer causal structure is not limited to conscious, high-level cognition; it is also performed continually and effortlessly in perception."
},
{
"pmid": "12398464",
"title": "Contribution of spectral cues to human sound localization.",
"abstract": "The contribution of spectral cues to human sound localization was investigated by removing cues in 1/2-, 1- or 2-octave bands in the frequency range above 4 kHz. Localization responses were given by placing an acoustic pointer at the same apparent position as a virtual target. The pointer was generated by filtering a 100-ms harmonic complex with equalized head-related transfer functions (HRTFs). Listeners controlled the pointer via a hand-held stick that rotated about a fixed point. In the baseline condition, the target, a 200-ms noise burst, was filtered with the same HRTFs as the pointer. In other conditions, the spectral information within a certain frequency band was removed by replacing the directional transfer function within this band with the average transfer of this band. Analysis of the data showed that removing cues in 1/2-octave bands did not affect localization, whereas for the 2-octave band correct localization was virtually impossible. The results obtained for the 1-octave bands indicate that up-down cues are located mainly in the 6-12-kHz band, and front-back cues in the 8-16-kHz band. The interindividual spread in response patterns suggests that different listeners use different localization cues. The response patterns in the median plane can be predicted using a model based on spectral comparison of directional transfer functions for target and response directions."
},
{
"pmid": "9874370",
"title": "Role of spectral detail in sound-source localization.",
"abstract": "Sounds heard over headphones are typically perceived inside the head (internalized), unlike real sound sources which are perceived outside the head (externalized). If the acoustical waveforms from a real sound source are reproduced precisely using headphones, auditory images are appropriately externalized and localized. The filtering (relative boosting, attenuation and delaying of component frequencies) of a sound by the head and outer ear provides information about the location of a sound source by means of the differences in the frequency spectra between the ears as well as the overall spectral shape. This location-dependent filtering is explicitly described by the head-related transfer function (HRTF) from sound source to ear canal. Here we present sounds to subjects through open-canal tube-phones and investigate how accurately the HRTFs must be reproduced to achieve true three-dimensional perception of auditory signals in anechoic space. Listeners attempted to discriminate between 'real' sounds presented from a loudspeaker and 'virtual' sounds presented over tube-phones. Our results show that the HRTFs can be smoothed significantly in frequency without affecting the perceived location of a sound. Listeners cannot distinguish real from virtual sources until the HRTF has lost most of its detailed variation in frequency, at which time the perceived elevation of the image is the reported cue."
},
{
"pmid": "2018391",
"title": "Sound localization by human listeners.",
"abstract": "In keeping with our promise earlier in this review, we summarize here the process by which we believe spatial cues are used for localizing a sound source in a free-field listening situation. We believe it entails two parallel processes: 1. The azimuth of the source is determined using differences in interaural time or interaural intensity, whichever is present. Wightman and colleagues (1989) believe the low-frequency temporal information is dominant if both are present. 2. The elevation of the source is determined from spectral shape cues. The received sound spectrum, as modified by the pinna, is in effect compared with a stored set of directional transfer functions. These are actually the spectra of a nearly flat source heard at various elevations. The elevation that corresponds to the best-matching transfer function is selected as the locus of the sound. Pinnae are similar enough between people that certain general rules (e.g. Blauert's boosted bands or Butler's covert peaks) can describe this process. Head motion is probably not a critical part of the localization process, except in cases where time permits a very detailed assessment of location, in which case one tries to localize the source by turning the head toward the putative location. Sound localization is only moderately more precise when the listener points directly toward the source. The process is not analogous to localizing a visual source on the fovea of the retina. Thus, head motion provides only a moderate increase in localization accuracy. Finally, current evidence does not support the view that auditory motion perception is anything more than detection of changes in static location over time."
},
{
"pmid": "23319012",
"title": "Age-related hearing loss and ear morphology affect vertical but not horizontal sound-localization performance.",
"abstract": "Several studies have attributed deterioration of sound localization in the horizontal (azimuth) and vertical (elevation) planes to an age-related decline in binaural processing and high-frequency hearing loss (HFHL). The latter might underlie decreased elevation performance of older adults. However, as the pinnae keep growing throughout life, we hypothesized that larger ears might enable older adults to localize sounds in elevation on the basis of lower frequencies, thus (partially) compensating their HFHL. In addition, it is not clear whether sound localization has already matured at a very young age, when the body is still growing, and the binaural and monaural sound-localization cues change accordingly. The present study investigated sound-localization performance of children (7-11 years), young adults (20-34 years), and older adults (63-80 years) under open-loop conditions in the two-dimensional frontal hemifield. We studied the effect of age-related hearing loss and ear size on localization responses to brief broadband sound bursts with different bandwidths. We found similar localization abilities in azimuth for all listeners, including the older adults with HFHL. Sound localization in elevation for the children and young adult listeners with smaller ears improved when stimuli contained frequencies above 7 kHz. Subjects with larger ears could also judge the elevation of sound sources restricted to lower frequency content. Despite increasing ear size, sound localization in elevation deteriorated in older adults with HFHL. We conclude that the binaural localization cues are successfully used well into later stages of life, but that pinna growth cannot compensate the more profound HFHL with age."
},
{
"pmid": "24711409",
"title": "Natural auditory scene statistics shapes human spatial hearing.",
"abstract": "Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing."
},
{
"pmid": "9637047",
"title": "Adapting to supernormal auditory localization cues. I. Bias and resolution.",
"abstract": "Head-related transfer functions (HRTFs) were used to create spatialized stimuli for presentation through earphones. Subjects performed forced-choice, identification tests during which allowed response directions were indicated visually. In each experimental session, subjects were first presented with auditory stimuli in which the stimulus HRTFs corresponded to the allowed response directions. The correspondence between the HRTFs used to generate the stimuli and the directions was then changed so that response directions no longer corresponded to the HRTFs in the natural way. Feedback was used to train subjects as to which spatial cues corresponded to which of the allowed responses. Finally, the normal correspondence between direction and HRTFs was reinstated. This basic experimental paradigm was used to explore the effects of the type of feedback provided, the complexity of the stimulated acoustic scene, the number of allowed response positions, and the magnitude of the HRTF transformation subjects had to learn. Data showed that (1) although subjects may not adapt completely to a new relationship between physical stimuli and direction, response bias decreases substantially with training, and (2) the ability to resolve different HRTFs depends both on the stimuli presented and on the state of adaptation of the subject."
},
{
"pmid": "18354398",
"title": "Multisensory integration: current issues from the perspective of the single neuron.",
"abstract": "For thousands of years science philosophers have been impressed by how effectively the senses work together to enhance the salience of biologically meaningful events. However, they really had no idea how this was accomplished. Recent insights into the underlying physiological mechanisms reveal that, in at least one circuit, this ability depends on an intimate dialogue among neurons at multiple levels of the neuraxis; this dialogue cannot take place until long after birth and might require a specific kind of experience. Understanding the acquisition and usage of multisensory integration in the midbrain and cerebral cortex of mammals has been aided by a multiplicity of approaches. Here we examine some of the fundamental advances that have been made and some of the challenging questions that remain."
},
{
"pmid": "16547513",
"title": "Noise characteristics and prior expectations in human visual speed perception.",
"abstract": "Human visual speed perception is qualitatively consistent with a Bayesian observer that optimally combines noisy measurements with a prior preference for lower speeds. Quantitative validation of this model, however, is difficult because the precise noise characteristics and prior expectations are unknown. Here, we present an augmented observer model that accounts for the variability of subjective responses in a speed discrimination task. This allowed us to infer the shape of the prior probability as well as the internal noise characteristics directly from psychophysical data. For all subjects, we found that the fitted model provides an accurate description of the data across a wide range of stimulus parameters. The inferred prior distribution shows significantly heavier tails than a Gaussian, and the amplitude of the internal noise is approximately proportional to stimulus speed and depends inversely on stimulus contrast. The framework is general and should prove applicable to other experiments and perceptual modalities."
},
{
"pmid": "23463919",
"title": "The influence of static eye and head position on the ventriloquist effect.",
"abstract": "Orienting responses to audiovisual events have shorter reaction times and better accuracy and precision when images and sounds in the environment are aligned in space and time. How the brain constructs an integrated audiovisual percept is a computational puzzle because the auditory and visual senses are represented in different reference frames: the retina encodes visual locations with respect to the eyes; whereas the sound localisation cues are referenced to the head. In the well-known ventriloquist effect, the auditory spatial percept of the ventriloquist's voice is attracted toward the synchronous visual image of the dummy, but does this visual bias on sound localisation operate in a common reference frame by correctly taking into account eye and head position? Here we studied this question by independently varying initial eye and head orientations, and the amount of audiovisual spatial mismatch. Human subjects pointed head and/or gaze to auditory targets in elevation, and were instructed to ignore co-occurring visual distracters. Results demonstrate that different initial head and eye orientations are accurately and appropriately incorporated into an audiovisual response. Effectively, sounds and images are perceptually fused according to their physical locations in space independent of an observer's point of view. Implications for neurophysiological findings and modelling efforts that aim to reconcile sensory and motor signals for goal-directed behaviour are discussed."
},
{
"pmid": "22131411",
"title": "Influence of static eye and head position on tone-evoked gaze shifts.",
"abstract": "The auditory system represents sound-source directions initially in head-centered coordinates. To program eye-head gaze shifts to sounds, the orientation of eyes and head should be incorporated to specify the target relative to the eyes. Here we test (1) whether this transformation involves a stage in which sounds are represented in a world- or a head-centered reference frame, and (2) whether acoustic spatial updating occurs at a topographically organized motor level representing gaze shifts, or within the tonotopically organized auditory system. Human listeners generated head-unrestrained gaze shifts from a large range of initial eye and head positions toward brief broadband sound bursts, and to tones at different center frequencies, presented in the midsagittal plane. Tones were heard at a fixed illusory elevation, regardless of their actual location, that depended in an idiosyncratic way on initial head and eye position, as well as on the tone's frequency. Gaze shifts to broadband sounds were accurate, fully incorporating initial eye and head positions. The results support the hypothesis that the auditory system represents sounds in a supramodal reference frame, and that signals about eye and head orientation are incorporated at a tonotopic stage."
},
{
"pmid": "15930391",
"title": "Relearning sound localization with a new ear.",
"abstract": "Human sound localization results primarily from the processing of binaural differences in sound level and arrival time for locations in the horizontal plane (azimuth) and of spectral shape cues generated by the head and pinnae for positions in the vertical plane (elevation). The latter mechanism incorporates two processing stages: a spectral-to-spatial mapping stage and a binaural weighting stage that determines the contribution of each ear to perceived elevation as function of sound azimuth. We demonstrated recently that binaural pinna molds virtually abolish the ability to localize sound-source elevation, but, after several weeks, subjects regained normal localization performance. It is not clear which processing stage underlies this remarkable plasticity, because the auditory system could have learned the new spectral cues separately for each ear (spatial-mapping adaptation) or for one ear only, while extending its contribution into the contralateral hemifield (binaural-weighting adaptation). To dissociate these possibilities, we applied a long-term monaural spectral perturbation in 13 subjects. Our results show that, in eight experiments, listeners learned to localize accurately with new spectral cues that differed substantially from those provided by their own ears. Interestingly, five subjects, whose spectral cues were not sufficiently perturbed, never yielded stable localization performance. Our findings indicate that the analysis of spectral cues may involve a correlation process between the sensory input and a stored spectral representation of the subject's ears and that learning acts predominantly at a spectral-to-spatial mapping level rather than at the level of binaural weighting."
},
{
"pmid": "20584180",
"title": "Acquired prior knowledge modulates audiovisual integration.",
"abstract": "Orienting responses to audiovisual events in the environment can benefit markedly by the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials would repeatedly favour aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results."
},
{
"pmid": "15496665",
"title": "Dynamic sound localization during rapid eye-head gaze shifts.",
"abstract": "Human sound localization relies on implicit head-centered acoustic cues. However, to create a stable and accurate representation of sounds despite intervening head movements, the acoustic input should be continuously combined with feedback signals about changes in head orientation. Alternatively, the auditory target coordinates could be updated in advance by using either the preprogrammed gaze-motor command or the sensory target coordinates to which the intervening gaze shift is made (\"predictive remapping\"). So far, previous experiments cannot dissociate these alternatives. Here, we study whether the auditory system compensates for ongoing saccadic eye and head movements in two dimensions that occur during target presentation. In this case, the system has to deal with dynamic changes of the acoustic cues as well as with rapid changes in relative eye and head orientation that cannot be preprogrammed by the audiomotor system. We performed visual-auditory double-step experiments in two dimensions in which a brief sound burst was presented while subjects made a saccadic eye-head gaze shift toward a previously flashed visual target. Our results show that localization responses under these dynamic conditions remain accurate. Multiple linear regression analysis revealed that the intervening eye and head movements are fully accounted for. Moreover, elevation response components were more accurate for longer-duration sounds (50 msec) than for extremely brief sounds (3 msec), for all localization conditions. Taken together, these results cannot be explained by a predictive remapping scheme. Rather, we conclude that the human auditory system adequately processes dynamically varying acoustic cues that result from self-initiated rapid head movements to construct a stable representation of the target in world coordinates. This signal is subsequently used to program accurate eye and head localization responses."
},
{
"pmid": "19889991",
"title": "Dynamic range adaptation to sound level statistics in the auditory nerve.",
"abstract": "The auditory system operates over a vast range of sound pressure levels (100-120 dB) with nearly constant discrimination ability across most of the range, well exceeding the dynamic range of most auditory neurons (20-40 dB). Dean et al. (2005) have reported that the dynamic range of midbrain auditory neurons adapts to the distribution of sound levels in a continuous, dynamic stimulus by shifting toward the most frequently occurring level. Here, we show that dynamic range adaptation, distinct from classic firing rate adaptation, also occurs in primary auditory neurons in anesthetized cats for tone and noise stimuli. Specifically, the range of sound levels over which firing rates of auditory nerve (AN) fibers grows rapidly with level shifts nearly linearly with the most probable levels in a dynamic sound stimulus. This dynamic range adaptation was observed for fibers with all characteristic frequencies and spontaneous discharge rates. As in the midbrain, dynamic range adaptation improved the precision of level coding by the AN fiber population for the prevailing sound levels in the stimulus. However, dynamic range adaptation in the AN was weaker than in the midbrain and not sufficient (0.25 dB/dB, on average, for broadband noise) to prevent a significant degradation of the precision of level coding by the AN population above 60 dB SPL. These findings suggest that adaptive processing of sound levels first occurs in the auditory periphery and is enhanced along the auditory pathway."
},
{
"pmid": "2926000",
"title": "Headphone simulation of free-field listening. I: Stimulus synthesis.",
"abstract": "This article describes techniques used to synthesize headphone-presented stimuli that simulate the ear-canal waveforms produced by free-field sources. The stimulus synthesis techniques involve measurement of each subject's free-field-to-eardrum transfer functions for sources at a large number of locations in free field, and measurement of headphone-to-eardrum transfer functions with the subject wearing headphones. Digital filters are then constructed from the transfer function measurements, and stimuli are passed through these digital filters. Transfer function data from ten subjects and 144 source positions are described in this article, along with estimates of the various sources of error in the measurements. The free-field-to-eardrum transfer function data are consistent with comparable data reported elsewhere in the literature. A comparison of ear-canal waveforms produced by free-field sources with ear-canal waveforms produced by headphone-presented simulations shows that the simulations duplicate free-field waveforms within a few dB of magnitude and a few degrees of phase at frequencies up to 14 kHz."
},
{
"pmid": "12524547",
"title": "Plasticity in human sound localization induced by compressed spatial vision.",
"abstract": "Auditory and visual target locations are encoded differently in the brain, but must be co-calibrated to maintain cross-sensory concordance. Mechanisms that adjust spatial calibration across modalities have been described (for example, prism adaptation in owls), though rudimentarily in humans. We quantified the adaptation of human sound localization in response to spatially compressed vision (0.5x lenses for 2-3 days). This induced a corresponding compression of auditory localization that was most pronounced for azimuth (minimal for elevation) and was restricted to the visual field of the lenses. Sound localization was also affected outside the field of visual-auditory interaction (shifted centrally, not compressed). These results suggest that spatially modified vision induces adaptive changes in adult human sound localization, including novel mechanisms that account for spatial compression. Findings are consistent with a model in which the central processing of sound location is encoded by recruitment rather than by a place code."
}
] |
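As a numerical illustration of the Gaussian MAP gain rule quoted in the related-work text of this row (Eqs. 11–12), the minimal Python sketch below simply evaluates g = σP²/(σP² + σT²) for a few widths. The specific σ values and the 30-degree test target are assumptions chosen for illustration, not values taken from the study.

```python
# Minimal numerical sketch of the Gaussian MAP gain rule quoted above
# (Eqs. 11-12): with a Gaussian likelihood of width sigma_T and a Gaussian
# prior over target locations of width sigma_P, the MAP response equals the
# true target scaled by a gain g = sigma_P**2 / (sigma_P**2 + sigma_T**2).
# The sigma values and the 30-degree test target below are illustrative
# assumptions, not data from the study.

def map_gain(sigma_prior: float, sigma_sensory: float) -> float:
    """Stimulus-response gain predicted by the MAP rule (Eq. 12)."""
    return sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)

def map_response(target_deg: float, sigma_prior: float, sigma_sensory: float) -> float:
    """MAP response to a target at target_deg, with the prior centered on 0 deg."""
    return map_gain(sigma_prior, sigma_sensory) * target_deg

if __name__ == "__main__":
    sigma_sensory = 5.0  # assumed sensory noise (deg)
    for sigma_prior in (5.0, 15.0, 45.0):  # narrow -> wide expected target range
        g = map_gain(sigma_prior, sigma_sensory)
        r = map_response(30.0, sigma_prior, sigma_sensory)
        print(f"prior width {sigma_prior:4.1f} deg -> gain {g:.2f}, "
              f"response to a 30 deg target: {r:.1f} deg")
```

As the assumed prior widens relative to the sensory noise, the predicted gain approaches 1, which parallels the report above that response gains were closer to optimal for larger target ranges.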
BMC Medical Informatics and Decision Making | 30961602 | PMC6454584 | 10.1186/s12911-019-0770-7 | A fine-grained Chinese word segmentation and part-of-speech tagging corpus for clinical text | Background: Chinese word segmentation (CWS) and part-of-speech (POS) tagging are two fundamental tasks of Chinese text processing. They are usually preliminary steps for many Chinese natural language processing (NLP) tasks. There have been a large number of studies on CWS and POS tagging in various domains; however, few studies have been proposed for CWS and POS tagging in the clinical domain, as it is not easy to determine the granularity of words. Methods: In this paper, we investigated CWS and POS tagging for Chinese clinical text at a fine-granularity level, and manually annotated a corpus. On the corpus, we compared two state-of-the-art methods, i.e., conditional random fields (CRF) and bidirectional long short-term memory (BiLSTM) with a CRF layer. In order to validate the plausibility of the fine-grained annotation, we further investigated the effect of CWS and POS tagging on Chinese clinical named entity recognition (NER) on another independent corpus. Results: When only CWS was considered, CRF achieved higher precision, recall and F-measure than BiLSTM-CRF. When both CWS and POS tagging were considered, CRF also gained an advantage over BiLSTM. CRF outperformed BiLSTM-CRF by 0.14% in F-measure on CWS and by 0.34% in F-measure on POS tagging. The CWS information brought an improvement of up to 0.34% in F-measure, while the CWS&POS information brought an improvement of up to 0.74% in F-measure. Conclusions: Our proposed fine-grained CWS and POS tagging corpus is reliable and meaningful, as the output of the CWS and POS tagging systems developed on this corpus improved the performance of a Chinese clinical NER system on another independent corpus. | Related work: CWS and POS tagging have been widely investigated for a long time, as they are two fundamental tasks in NLP. Both are commonly treated as sequence labeling problems, and a large number of machine learning methods have been proposed for them, including methods relying on manually-crafted features, such as maximum entropy Markov models [1], conditional random fields [2], structural support vector machines [3], etc., and deep learning methods that do not need manual feature engineering, such as BiLSTM-CRF [4], CNN (convolutional neural network)-CRF [5] and their variants [6–8]. The deep learning methods usually show better performance than the machine learning methods relying on manually-crafted features. Most of these studies focused on algorithms rather than other aspects such as domain transfer [9, 10] and multiple labeling criteria [11]. However, in recent years, application needs for NLP in specific domains such as the clinical, financial and legal domains have grown considerably, and a few researchers have begun to investigate domain-specific NLP techniques. In the Chinese clinical domain, there have been a small number of studies on NLP tasks, including CWS, POS tagging, latent syntactic analysis, parsing, de-identification, NER, temporal information extraction, etc. In the case of CWS and POS tagging, the existing work was mainly carried out from a linguistics perspective, and might not be suitable for actual applications. Therefore, we investigated CWS and POS tagging for Chinese clinical text at a fine-grained level according to application needs. | [] | []
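The related work of the row above casts CWS as character-level sequence labeling solved with CRF or BiLSTM-CRF taggers. The sketch below shows only this shared tagging formulation using the common B/M/E/S scheme; the example segmentation is a hypothetical clinical phrase, and the scheme is assumed here for illustration rather than taken from the paper's annotation guideline.

```python
# Minimal sketch of casting CWS as character-level sequence labeling,
# the shared formulation behind the CRF and BiLSTM-CRF taggers mentioned
# above. The B/M/E/S scheme and the example segmentation are assumptions
# for illustration, not the paper's actual annotation guideline or data.

def words_to_bmes(words):
    """Convert a list of segmented words into per-character B/M/E/S tags."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")  # single-character word
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

def bmes_to_words(chars, tags):
    """Recover words from characters plus predicted B/M/E/S tags."""
    words, buf = [], ""
    for ch, tag in zip(chars, tags):
        buf += ch
        if tag in ("E", "S"):  # a word boundary has been reached
            words.append(buf)
            buf = ""
    if buf:  # tolerate a dangling B/M at the end of a prediction
        words.append(buf)
    return words

if __name__ == "__main__":
    segmented = ["左", "肾", "小", "结石"]  # hypothetical fine-grained segmentation
    chars = [c for w in segmented for c in w]
    tags = words_to_bmes(segmented)
    print(list(zip(chars, tags)))  # [('左', 'S'), ('肾', 'S'), ('小', 'S'), ('结', 'B'), ('石', 'E')]
    assert bmes_to_words(chars, tags) == segmented
```

Either a feature-based CRF or a BiLSTM-CRF can then be trained to predict these per-character tags; the conversion above is only the problem formulation, not the models compared in the paper.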
BMC Medical Informatics and Decision Making | 30961614 | PMC6454594 | 10.1186/s12911-019-0765-4 | A hybrid neural network model for predicting kidney disease in hypertension patients based on electronic health records | Background: Disease prediction based on Electronic Health Records (EHR) has become a hot research topic in the biomedical community. Existing work mainly focuses on the prediction of one target disease, and little work has been proposed for predicting multiple associated diseases. Meanwhile, an EHR usually contains two main types of information: the textual description and the physical indicators. However, existing work largely adopts statistical models with discrete features from the numerical physical indicators in EHR and fails to make full use of the textual description information. Methods: In this paper, we study the problem of kidney disease prediction in hypertension patients using a neural network model. Specifically, we first model the prediction problem as a binary classification task. Then we propose a hybrid neural network which incorporates Bidirectional Long Short-Term Memory (BiLSTM) and Autoencoder networks to fully capture the information in EHR. Results: We construct a dataset from a large number of raw EHR records. The dataset consists of 35,332 records from hypertension patients in total. Experimental results show that the proposed neural model achieves 89.7% accuracy for the task. Conclusions: A hybrid neural network model was presented. Based on the constructed dataset, the comparison of different models demonstrated the effectiveness of the proposed neural model, which outperformed traditional statistical models with discrete features as well as neural baseline systems. | Related work: Disease prediction, especially of chronic diseases, has received increasing attention from researchers in the biomedical field [19–22]. Early research mainly focused on numerical factors, including physical examination factors, laboratory test features, and demographic information. For example, Wilson et al. (1998) predicted the risk of coronary heart disease using a Logistic Regression model with an array of discrete factors [8]. Follow-up studies tried to estimate coronary heart disease risk by considering more non-traditional risk factors in order to yield better performance [19, 23]. However, these studies focus on the prediction of a single target disease, and the methods mainly use discrete models with hand-crafted features. About ten years ago, researchers began to predict disease risks from genetic studies and tried to find the underlying molecular mechanisms of diseases [24–26]. For example, Wray et al. (2007) proposed to assess the genetic risk of a disease in healthy individuals based on dense genome-wide Single-Nucleotide Polymorphism (SNP) panels [26]. More recently, some studies explored the genes associated with particular diseases to better understand their pathobiological mechanisms [13, 14]. However, there is still a lack of studies on predicting multiple associated diseases. In recent years, neural network models have been used extensively for various NLP tasks, achieving competitive results [27–29]. The representative neural models include the Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM) and Autoencoder, among others. Neural models are gradually being applied to tasks in the biomedical field [30–34].
For example, Zhao et al. (2016) trained a deep multi-layer neural network model to extract protein-protein interaction information from biomedical literature [31]. However, neural networks have not been used for the task of predicting multiple associated diseases. In this paper, we explore a hybrid neural model for predicting kidney disease in hypertension patients (a minimal sketch of such a hybrid network follows this record's reference list). | [
"11804985",
"14712421",
"16415374",
"18573856",
"28270835",
"28284520",
"9603539",
"14505774",
"25819077",
"22513989",
"26338084",
"28851283",
"20424251",
"12063776",
"17785532",
"28699566",
"29084508"
] | [
{
"pmid": "11804985",
"title": "Simple scoring scheme for calculating the risk of acute coronary events based on the 10-year follow-up of the prospective cardiovascular Münster (PROCAM) study.",
"abstract": "BACKGROUND\nThe absolute risk of an acute coronary event depends on the totality of risk factors exhibited by an individual, the so-called global risk profile. Although several scoring schemes have been suggested to calculate this profile, many omit information on important variables such as family history of coronary heart disease or LDL cholesterol.\n\n\nMETHODS AND RESULTS\nBased on 325 acute coronary events occurring within 10 years of follow-up among 5389 men 35 to 65 years of age at recruitment into the Prospective Cardiovascular Münster (PROCAM) study, we developed a Cox proportional hazards model using the following 8 independent risk variables, ranked in order of importance: age, LDL cholesterol, smoking, HDL cholesterol, systolic blood pressure, family history of premature myocardial infarction, diabetes mellitus, and triglycerides. We then derived a simple point scoring system based on the beta-coefficients of this model. The accuracy of this point scoring scheme was comparable to coronary event prediction when the continuous variables themselves were used. The scoring system accurately predicted observed coronary events with an area under the receiver-operating characteristics curve of 82.4% compared with 82.9% for the Cox model with continuous variables.\n\n\nCONCLUSIONS\nOur scoring system is a simple and accurate way of predicting global risk of myocardial infarction in clinical practice and will therefore allow more accurate targeting of preventive therapy."
},
{
"pmid": "14712421",
"title": "Prevention of radiocontrast nephropathy with N-acetylcysteine in patients with chronic kidney disease: a meta-analysis of randomized, controlled trials.",
"abstract": "BACKGROUND\nRadiocontrast nephropathy (RCN) is a common cause of hospital-acquired acute renal failure. Results of several studies using N-acetylcysteine (NAC) for the prevention of RCN have yielded conflicting results. We performed a meta-analysis of group data extracted from previously published studies to assess the effect of NAC on the prevention of RCN in patients with pre-existing chronic kidney disease (CKD).\n\n\nMETHODS\nOvid's multidatabase search for MEDLINE, Cochrane Central Registry of Controlled Trials, Cochrane Database of Systematic Reviews, and HealthSTAR were used to identify candidate articles. Abstracts from proceedings of scientific meetings also were screened. We selected blinded and unblinded randomized controlled trials (RCTs) performed in humans 18 years and older with pre-existing CKD, defined by a mean baseline serum creatinine level of 1.2 mg/dL or greater (> or =106.1 micromol/L) or creatinine clearance less than 70 mL/min (<1.17 mL/s). The overall risk ratio (RR) for the development of RCN was computed using a random-effects model.\n\n\nRESULTS\nEight RCTs (n = 885 patients) published in full-text articles were included in the primary analysis. In the control group, the overall rate of RCN was 18.5% (95% confidence interval [CI], 15 to 22). In the primary analysis, overall RR for RCN associated with the use of NAC was 0.41 (95% CI, 0.22 to 0.79; P = 0.007). In a sensitivity analysis that included 4 additional RCTs published in abstract form, RR remained significant at 0.55 (95% CI, 0.34 to 0.91; P = 0.020).\n\n\nCONCLUSION\nNAC reduces the risk for RCN in patients with CKD."
},
{
"pmid": "16415374",
"title": "Adult hypertension and kidney disease: the role of fetal programming.",
"abstract": "Hypertension (HTN) and chronic kidney disease are highly prevalent diseases that tend to occur more frequently among disadvantaged populations, in whom prenatal care also tends to be poor. More and more evidence is emerging highlighting the important role of fetal programming in the development of adult disease, suggesting a possible common pathophysiologic denominator in the development of these disorders. Epidemiologic evidence accumulated over the past 2 decades has demonstrated an association between low birth weight and subsequent adult HTN, diabetes, and cardiovascular disease. More recently, a similar association has been found with chronic kidney disease. Animal studies and indirect evidence from human studies support the hypothesis that low birth weight, as a marker of adverse intrauterine circumstances, is associated with a congenital deficit in nephron number. The precise mechanism of the reduction in nephron number has not been established, but several hypotheses have been put forward, including changes in DNA methylation, increased apoptosis in the developing kidney, alterations in renal renin-angiotensin system activity, and increased fetal glucocorticoid exposure. A reduction in nephron number is associated with compensatory glomerular hypertrophy and an increased susceptibility to renal disease progression. HTN in low birth weight individuals also appears to be mediated in part through a reduction in nephron number. Increased awareness of the implications of low birth weight and inadequate prenatal care should lead to public health policies that may have long-term benefits in curbing the epidemics of HTN, diabetes, and kidney disease in generations to come."
},
{
"pmid": "18573856",
"title": "Predicting cardiovascular risk in England and Wales: prospective derivation and validation of QRISK2.",
"abstract": "OBJECTIVE\nTo develop and validate version two of the QRISK cardiovascular disease risk algorithm (QRISK2) to provide accurate estimates of cardiovascular risk in patients from different ethnic groups in England and Wales and to compare its performance with the modified version of Framingham score recommended by the National Institute for Health and Clinical Excellence (NICE).\n\n\nDESIGN\nProspective open cohort study with routinely collected data from general practice, 1 January 1993 to 31 March 2008.\n\n\nSETTING\n531 practices in England and Wales contributing to the national QRESEARCH database.\n\n\nPARTICIPANTS\n2.3 million patients aged 35-74 (over 16 million person years) with 140,000 cardiovascular events. Overall population (derivation and validation cohorts) comprised 2.22 million people who were white or whose ethnic group was not recorded, 22,013 south Asian, 11,595 black African, 10,402 black Caribbean, and 19,792 from Chinese or other Asian or other ethnic groups.\n\n\nMAIN OUTCOME MEASURES\nFirst (incident) diagnosis of cardiovascular disease (coronary heart disease, stroke, and transient ischaemic attack) recorded in general practice records or linked Office for National Statistics death certificates. Risk factors included self assigned ethnicity, age, sex, smoking status, systolic blood pressure, ratio of total serum cholesterol:high density lipoprotein cholesterol, body mass index, family history of coronary heart disease in first degree relative under 60 years, Townsend deprivation score, treated hypertension, type 2 diabetes, renal disease, atrial fibrillation, and rheumatoid arthritis.\n\n\nRESULTS\nThe validation statistics indicated that QRISK2 had improved discrimination and calibration compared with the modified Framingham score. The QRISK2 algorithm explained 43% of the variation in women and 38% in men compared with 39% and 35%, respectively, by the modified Framingham score. Of the 112,156 patients classified as high risk (that is, >or=20% risk over 10 years) by the modified Framingham score, 46,094 (41.1%) would be reclassified at low risk with QRISK2. The 10 year observed risk among these reclassified patients was 16.6% (95% confidence interval 16.1% to 17.0%)-that is, below the 20% treatment threshold. Of the 78 024 patients classified at high risk on QRISK2, 11,962 (15.3%) would be reclassified at low risk by the modified Framingham score. The 10 year observed risk among these patients was 23.3% (22.2% to 24.4%)-that is, above the 20% threshold. In the validation cohort, the annual incidence rate of cardiovascular events among those with a QRISK2 score of >or=20% was 30.6 per 1000 person years (29.8 to 31.5) for women and 32.5 per 1000 person years (31.9 to 33.1) for men. The corresponding figures for the modified Framingham equation were 25.7 per 1000 person years (25.0 to 26.3) for women and 26.4 (26.0 to 26.8) for men). At the 20% threshold, the population identified by QRISK2 was at higher risk of a CV event than the population identified by the Framingham score.\n\n\nCONCLUSIONS\nIncorporating ethnicity, deprivation, and other clinical conditions into the QRISK2 algorithm for risk of cardiovascular disease improves the accuracy of identification of those at high risk in a nationally representative population. At the 20% threshold, QRISK2 is likely to be a more efficient and equitable tool for treatment decisions for the primary prevention of cardiovascular disease. 
As the validation was performed in a similar population to the population from which the algorithm was derived, it potentially has a \"home advantage.\" Further validation in other populations is therefore advised."
},
{
"pmid": "9603539",
"title": "Prediction of coronary heart disease using risk factor categories.",
"abstract": "BACKGROUND\nThe objective of this study was to examine the association of Joint National Committee (JNC-V) blood pressure and National Cholesterol Education Program (NCEP) cholesterol categories with coronary heart disease (CHD) risk, to incorporate them into coronary prediction algorithms, and to compare the discrimination properties of this approach with other noncategorical prediction functions.\n\n\nMETHODS AND RESULTS\nThis work was designed as a prospective, single-center study in the setting of a community-based cohort. The patients were 2489 men and 2856 women 30 to 74 years old at baseline with 12 years of follow-up. During the 12 years of follow-up, a total of 383 men and 227 women developed CHD, which was significantly associated with categories of blood pressure, total cholesterol, LDL cholesterol, and HDL cholesterol (all P<.001). Sex-specific prediction equations were formulated to predict CHD risk according to age, diabetes, smoking, JNC-V blood pressure categories, and NCEP total cholesterol and LDL cholesterol categories. The accuracy of this categorical approach was found to be comparable to CHD prediction when the continuous variables themselves were used. After adjustment for other factors, approximately 28% of CHD events in men and 29% in women were attributable to blood pressure levels that exceeded high normal (> or =130/85). The corresponding multivariable-adjusted attributable risk percent associated with elevated total cholesterol (> or =200 mg/dL) was 27% in men and 34% in women.\n\n\nCONCLUSIONS\nRecommended guidelines of blood pressure, total cholesterol, and LDL cholesterol effectively predict CHD risk in a middle-aged white population sample. A simple coronary disease prediction algorithm was developed using categorical variables, which allows physicians to predict multivariate CHD risk in patients without overt CHD."
},
{
"pmid": "14505774",
"title": "Coronary heart disease risk prediction in the Atherosclerosis Risk in Communities (ARIC) study.",
"abstract": "Risk prediction functions for incident coronary heart disease (CHD) were estimated using data from the Atherosclerosis Risk in Communities (ARIC) Study, a prospective study of CHD in 15,792 persons recruited in 1987-1989 from four U.S. communities, with follow-up through 1998. Predictivity of which individuals had incident CHD was assessed by increase in area under ROC curves resulting from adding nontraditional risk factors and markers of subclinical disease to a basic model containing only traditional risk factors. We also assessed the increase in population attributable risk. The additional factors were body mass index; waist-hip ratio; sport activity index; forced expiratory volume; plasma fibrinogen, factor VIII, von Willebrand factor, and Lp(a); heart rate; Keys score; pack-years smoking; and subclinical disease marker carotid intima-media thickness. These factors substantially improved prediction of future CHD for men, less for women, and also increased attributable risks."
},
{
"pmid": "25819077",
"title": "Identification of a small set of plasma signalling proteins using neural network for prediction of Alzheimer's disease.",
"abstract": "MOTIVATION\nAlzheimer's disease (AD) is a dementia that gets worse with time resulting in loss of memory and cognitive functions. The life expectancy of AD patients following diagnosis is ∼7 years. In 2006, researchers estimated that 0.40% of the world population (range 0.17-0.89%) was afflicted by AD, and that the prevalence rate would be tripled by 2050. Usually, examination of brain tissues is required for definite diagnosis of AD. So, it is crucial to diagnose AD at an early stage via some alternative methods. As the brain controls many functions via releasing signalling proteins through blood, we analyse blood plasma proteins for diagnosis of AD.\n\n\nRESULTS\nHere, we use a radial basis function (RBF) network for feature selection called feature selection RBF network for selection of plasma proteins that can help diagnosis of AD. We have identified a set of plasma proteins, smaller in size than previous study, with comparable prediction accuracy. We have also analysed mild cognitive impairment (MCI) samples with our selected proteins. We have used neural networks and support vector machines as classifiers. The principle component analysis, Sammmon projection and heat-map of the selected proteins have been used to demonstrate the proteins' discriminating power for diagnosis of AD. We have also found a set of plasma signalling proteins that can distinguish incipient AD from MCI at an early stage. Literature survey strongly supports the AD diagnosis capability of the selected plasma proteins."
},
{
"pmid": "22513989",
"title": "Alternative dietary indices both strongly predict risk of chronic disease.",
"abstract": "The Healthy Eating Index-2005 (HEI-2005) measures adherence to the 2005 Dietary Guidelines for Americans, but the association between the HEI-2005 and risk of chronic disease is not known. The Alternative Healthy Eating Index (AHEI), which is based on foods and nutrients predictive of chronic disease risk, was associated inversely with chronic disease risk previously. We updated the AHEI, including additional dietary factors involved in the development of chronic disease, and assessed the associations between the AHEI-2010 and the HEI-2005 and risk of major chronic disease prospectively among 71,495 women from the Nurses' Health Study and 41,029 men from the Health Professionals Follow-Up Study who were free of chronic disease at baseline. During ≥24 y of follow-up, we documented 26,759 and 15,558 incident chronic diseases (cardiovascular disease, diabetes, cancer, or nontrauma death) among women and men, respectively. The RR (95% CI) of chronic disease comparing the highest with the lowest quintile was 0.84 (0.81, 0.87) for the HEI-2005 and 0.81 (0.77, 0.85) for the AHEI-2010. The AHEI-2010 and HEI-2005 were most strongly associated with coronary heart disease (CHD) and diabetes, and for both outcomes the AHEI-2010 was more strongly associated with risk than the HEI-2005 (P-difference = 0.002 and <0.001, respectively). The 2 indices were similarly associated with risk of stroke and cancer. These findings suggest that closer adherence to the 2005 Dietary Guidelines may lower risk of major chronic disease. However, the AHEI-2010, which included additional dietary information, was more strongly associated with chronic disease risk, particularly CHD and diabetes."
},
{
"pmid": "26338084",
"title": "Diet-related chronic disease in the northeastern United States: a model-based clustering approach.",
"abstract": "BACKGROUND\nObesity and diabetes are global public health concerns. Studies indicate a relationship between socioeconomic, demographic and environmental variables and the spatial patterns of diet-related chronic disease. In this paper, we propose a methodology using model-based clustering and variable selection to predict rates of obesity and diabetes. We test this method through an application in the northeastern United States.\n\n\nMETHODS\nWe use model-based clustering, an unsupervised learning approach, to find latent clusters of similar US counties based on a set of socioeconomic, demographic, and environmental variables chosen through the process of variable selection. We then use Analysis of Variance and Post-hoc Tukey comparisons to examine differences in rates of obesity and diabetes for the clusters from the resulting clustering solution.\n\n\nRESULTS\nWe find access to supermarkets, median household income, population density and socioeconomic status to be important in clustering the counties of two northeastern states. The results of the cluster analysis can be used to identify two sets of counties with significantly lower rates of diet-related chronic disease than those observed in the other identified clusters. These relatively healthy clusters are distinguished by the large central and large fringe metropolitan areas contained in their component counties. However, the relationship of socio-demographic factors and diet-related chronic disease is more complicated than previous research would suggest. Additionally, we find evidence of low food access in two clusters of counties adjacent to large central and fringe metropolitan areas. While food access has previously been seen as a problem of inner-city or remote rural areas, this study offers preliminary evidence of declining food access in suburban areas.\n\n\nCONCLUSIONS\nModel-based clustering with variable selection offers a new approach to the analysis of socioeconomic, demographic, and environmental data for diet-related chronic disease prediction. In a test application to two northeastern states, this method allows us to identify two sets of metropolitan counties with significantly lower diet-related chronic disease rates than those observed in most rural and suburban areas. Our method could be applied to larger geographic areas or other countries with comparable data sets, offering a promising method for researchers interested in the global increase in diet-related chronic disease."
},
{
"pmid": "28851283",
"title": "Performance of risk prediction for inflammatory bowel disease based on genotyping platform and genomic risk score method.",
"abstract": "BACKGROUND\nPredicting risk of disease from genotypes is being increasingly proposed for a variety of diagnostic and prognostic purposes. Genome-wide association studies (GWAS) have identified a large number of genome-wide significant susceptibility loci for Crohn's disease (CD) and ulcerative colitis (UC), two subtypes of inflammatory bowel disease (IBD). Recent studies have demonstrated that including only loci that are significantly associated with disease in the prediction model has low predictive power and that power can substantially be improved using a polygenic approach.\n\n\nMETHODS\nWe performed a comprehensive analysis of risk prediction models using large case-control cohorts genotyped for 909,763 GWAS SNPs or 123,437 SNPs on the custom designed Immunochip using four prediction methods (polygenic score, best linear genomic prediction, elastic-net regularization and a Bayesian mixture model). We used the area under the curve (AUC) to assess prediction performance for discovery populations with different sample sizes and number of SNPs within cross-validation.\n\n\nRESULTS\nOn average, the Bayesian mixture approach had the best prediction performance. Using cross-validation we found little differences in prediction performance between GWAS and Immunochip, despite the GWAS array providing a 10 times larger effective genome-wide coverage. The prediction performance using Immunochip is largely due to the power of the initial GWAS for its marker selection and its low cost that enabled larger sample sizes. The predictive ability of the genomic risk score based on Immunochip was replicated in external data, with AUC of 0.75 for CD and 0.70 for UC. CD patients with higher risk scores demonstrated clinical characteristics typically associated with a more severe disease course including ileal location and earlier age at diagnosis.\n\n\nCONCLUSIONS\nOur analyses demonstrate that the power of genomic risk prediction for IBD is mainly due to strongly associated SNPs with considerable effect sizes. Additional SNPs that are only tagged by high-density GWAS arrays and low or rare-variants over-represented in the high-density region on the Immunochip contribute little to prediction accuracy. Although a quantitative assessment of IBD risk for an individual is not currently possible, we show sufficient power of genomic risk scores to stratify IBD risk among individuals at diagnosis."
},
{
"pmid": "20424251",
"title": "Coronary artery calcium score and risk classification for coronary heart disease prediction.",
"abstract": "CONTEXT\nThe coronary artery calcium score (CACS) has been shown to predict future coronary heart disease (CHD) events. However, the extent to which adding CACS to traditional CHD risk factors improves classification of risk is unclear.\n\n\nOBJECTIVE\nTo determine whether adding CACS to a prediction model based on traditional risk factors improves classification of risk.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nCACS was measured by computed tomography in 6814 participants from the Multi-Ethnic Study of Atherosclerosis (MESA), a population-based cohort without known cardiovascular disease. Recruitment spanned July 2000 to September 2002; follow-up extended through May 2008. Participants with diabetes were excluded from the primary analysis. Five-year risk estimates for incident CHD were categorized as 0% to less than 3%, 3% to less than 10%, and 10% or more using Cox proportional hazards models. Model 1 used age, sex, tobacco use, systolic blood pressure, antihypertensive medication use, total and high-density lipoprotein cholesterol, and race/ethnicity. Model 2 used these risk factors plus CACS. We calculated the net reclassification improvement and compared the distribution of risk using model 2 vs model 1.\n\n\nMAIN OUTCOME MEASURES\nIncident CHD events.\n\n\nRESULTS\nDuring a median of 5.8 years of follow-up among a final cohort of 5878, 209 CHD events occurred, of which 122 were myocardial infarction, death from CHD, or resuscitated cardiac arrest. Model 2 resulted in significant improvements in risk prediction compared with model 1 (net reclassification improvement = 0.25; 95% confidence interval, 0.16-0.34; P < .001). In model 1, 69% of the cohort was classified in the highest or lowest risk categories compared with 77% in model 2. An additional 23% of those who experienced events were reclassified as high risk, and an additional 13% without events were reclassified as low risk using model 2.\n\n\nCONCLUSION\nIn this multi-ethnic cohort, addition of CACS to a prediction model based on traditional risk factors significantly improved the classification of risk and placed more individuals in the most extreme risk categories."
},
{
"pmid": "12063776",
"title": "Implications of the human genome project for the identification of genetic risk of coronary heart disease and its prevention in children.",
"abstract": "Most male citizens of Western countries already have some degree of atherosclerosis by the age of 18, indicating that initiation of atherosclerosis in childhood is a virtually ubiquitous process. This process has a strong genetic component. However, identifying the exact nature of that component is not an easy task, because in the overwhelming majority of cases atherosclerosis is due not to disorders in single genes but to the effects of many genes operating together against a variable environmental background. The preliminary results of the sequencing of the human genome indicate fewer genes, but more complexity in the regulation of the expression of these genes, than was previously thought. For these reasons it is likely that prediction and management of atherosclerotic risk in children in the next years will depend not on the results of genetic testing, but on the differentiated analysis of classical risk factors. These issues are discussed in detail in this review."
},
{
"pmid": "17785532",
"title": "Prediction of individual genetic risk to disease from genome-wide association studies.",
"abstract": "Empirical studies suggest that the effect sizes of individual causal risk alleles underlying complex genetic diseases are small, with most genotype relative risks in the range of 1.1-2.0. Although the increased risk of disease for a carrier is small for any single locus, knowledge of multiple-risk alleles throughout the genome could allow the identification of individuals that are at high risk. In this study, we investigate the number and effect size of risk loci that underlie complex disease constrained by the disease parameters of prevalence and heritability. Then we quantify the value of prediction of genetic risk to disease using a range of realistic combinations of the number, size, and distribution of risk effects that underlie complex diseases. We propose an approach to assess the genetic risk of a disease in healthy individuals, based on dense genome-wide SNP panels. We test this approach using simulation. When the number of loci contributing to the disease is >50, a large case-control study is needed to identify a set of risk loci for use in predicting the disease risk of healthy people not included in the case-control study. For diseases controlled by 1000 loci of mean relative risk of only 1.04, a case-control study with 10,000 cases and controls can lead to selection of approximately 75 loci that explain >50% of the genetic variance. The 5% of people with the highest predicted risk are three to seven times more likely to suffer the disease than the population average, depending on heritability and disease prevalence. Whether an individual with known genetic risk develops the disease depends on known and unknown environmental factors."
},
{
"pmid": "28699566",
"title": "Entity recognition from clinical texts via recurrent neural network.",
"abstract": "BACKGROUND\nEntity recognition is one of the most primary steps for text analysis and has long attracted considerable attention from researchers. In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), widely exist in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a large number of traditional machine learning methods, such as support vector machine and conditional random field, have been deployed to recognize entities from clinical texts in the past few years. In recent years, recurrent neural network (RNN), one of deep learning methods that has shown great potential on many problems including named entity recognition, also has been gradually used for entity recognition from clinical texts.\n\n\nMETHODS\nIn this paper, we comprehensively investigate the performance of LSTM (long-short term memory), a representative variant of RNN, on clinical entity recognition and protected health information recognition. The LSTM model consists of three layers: input layer - generates representation of each word of a sentence; LSTM layer - outputs another word representation sequence that captures the context information of each word in this sentence; Inference layer - makes tagging decisions according to the output of LSTM layer, that is, outputting a label sequence.\n\n\nRESULTS\nExperiments conducted on corpora of the 2010, 2012 and 2014 i2b2 NLP challenges show that LSTM achieves highest micro-average F1-scores of 85.81% on the 2010 i2b2 medical concept extraction, 92.29% on the 2012 i2b2 clinical event detection, and 94.37% on the 2014 i2b2 de-identification, which is considerably competitive with other state-of-the-art systems.\n\n\nCONCLUSIONS\nLSTM that requires no hand-crafted feature has great potential on entity recognition from clinical texts. It outperforms traditional machine learning methods that suffer from fussy feature engineering. A possible future direction is how to integrate knowledge bases widely existing in the clinical domain into LSTM, which is a case of our future work. Moreover, how to use LSTM to recognize entities in specific formats is also another possible future direction."
},
{
"pmid": "29084508",
"title": "Long short-term memory RNN for biomedical named entity recognition.",
"abstract": "BACKGROUND\nBiomedical named entity recognition(BNER) is a crucial initial step of information extraction in biomedical domain. The task is typically modeled as a sequence labeling problem. Various machine learning algorithms, such as Conditional Random Fields (CRFs), have been successfully used for this task. However, these state-of-the-art BNER systems largely depend on hand-crafted features.\n\n\nRESULTS\nWe present a recurrent neural network (RNN) framework based on word embeddings and character representation. On top of the neural network architecture, we use a CRF layer to jointly decode labels for the whole sentence. In our approach, contextual information from both directions and long-range dependencies in the sequence, which is useful for this task, can be well modeled by bidirectional variation and long short-term memory (LSTM) unit, respectively. Although our models use word embeddings and character embeddings as the only features, the bidirectional LSTM-RNN (BLSTM-RNN) model achieves state-of-the-art performance - 86.55% F1 on BioCreative II gene mention (GM) corpus and 73.79% F1 on JNLPBA 2004 corpus.\n\n\nCONCLUSIONS\nOur neural network architecture can be successfully used for BNER without any manual feature engineering. Experimental results show that domain-specific pre-trained word embeddings and character-level representation can improve the performance of the LSTM-RNN models. On the GM corpus, we achieve comparable performance compared with other systems using complex hand-crafted features. Considering the JNLPBA corpus, our model achieves the best results, outperforming the previously top performing systems. The source code of our method is freely available under GPL at https://github.com/lvchen1989/BNER ."
}
] |
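The kidney-disease record above describes a hybrid network that runs a BiLSTM over the textual part of an EHR and an autoencoder over the numerical physical indicators, then fuses the two for binary classification. Below is a minimal PyTorch sketch of that general idea; all dimensions, the mean-pooling fusion, the single-logit output head and the reconstruction-loss weight are assumptions for illustration, not the paper's actual architecture.

```python
# Hedged sketch of a hybrid BiLSTM + autoencoder classifier for EHR records:
# the BiLSTM encodes the token sequence of the textual description, the
# autoencoder compresses the numerical indicators, and both codes are fused.
import torch
import torch.nn as nn


class HybridEHRClassifier(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, lstm_hidden=64,
                 n_indicators=20, ae_hidden=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, lstm_hidden, batch_first=True,
                              bidirectional=True)
        # Autoencoder over the numerical indicators.
        self.encoder = nn.Sequential(nn.Linear(n_indicators, ae_hidden), nn.ReLU())
        self.decoder = nn.Linear(ae_hidden, n_indicators)
        self.classifier = nn.Linear(2 * lstm_hidden + ae_hidden, 1)

    def forward(self, token_ids, indicators):
        text_states, _ = self.bilstm(self.embed(token_ids))
        text_code = text_states.mean(dim=1)          # mean-pool over time steps
        ind_code = self.encoder(indicators)
        recon = self.decoder(ind_code)               # used for reconstruction loss
        logit = self.classifier(torch.cat([text_code, ind_code], dim=-1))
        return logit.squeeze(-1), recon


model = HybridEHRClassifier()
tokens = torch.randint(1, 5000, (8, 50))             # batch of 8 toy records
indicators = torch.randn(8, 20)
labels = torch.randint(0, 2, (8,)).float()

logit, recon = model(tokens, indicators)
loss = (nn.functional.binary_cross_entropy_with_logits(logit, labels)
        + 0.1 * nn.functional.mse_loss(recon, indicators))
loss.backward()
```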
BMC Medical Informatics and Decision Making | 30961594 | PMC6454602 | 10.1186/s12911-019-0763-6 | Inverse reinforcement learning for intelligent mechanical ventilation and sedative dosing in intensive care units | Background: Reinforcement learning (RL) provides a promising technique to solve complex sequential decision making problems in health care domains. To enable such applications, an explicit reward function encoding domain knowledge should be specified beforehand to indicate the goal of the task. However, there is usually no explicit information regarding the reward function in medical records. It is then necessary to consider an approach whereby the reward function can be learned from a set of presumably optimal treatment trajectories using retrospective real medical data. This paper applies inverse RL to infer the reward functions that clinicians have in mind during their decisions on weaning of mechanical ventilation and sedative dosing in Intensive Care Units (ICUs). Methods: We model the decision making problem as a Markov Decision Process and use a batch RL method, Fitted Q Iteration with Gradient Boosting Decision Trees, to learn a suitable ventilator weaning policy from real trajectories in retrospective ICU data. A Bayesian inverse RL method is then applied to infer the latent reward functions in terms of weights that trade off various aspects of the evaluation criteria. We then evaluate how well the policy learned using the Bayesian inverse RL method matches the policy given by clinicians, as compared to other policies learned with fixed reward functions. Results: Results show that the inverse RL method is capable of extracting meaningful indicators for recommending extubation readiness and sedative dosage, indicating that clinicians pay more attention to patients' physiological stability (e.g., heart rate and respiration rate) than to the oxygenation criteria (FiO2, PEEP and SpO2) supported by previous RL methods. Moreover, by discovering the optimal weights, new effective treatment protocols can be suggested. Conclusions: Inverse RL is an effective approach to discovering clinicians' underlying reward functions for designing better treatment protocols for ventilation weaning and sedative dosing in future ICUs. | Related work: With the development of ubiquitous monitoring techniques, a plethora of ICU data has been generated in a variety of formats such as free-text clinical notes, images, physiological waveforms, and vital sign time series, enabling optimal diagnosis, treatment, and mortality prediction for ICU patients [15]. Thus far, a great number of theoretical and experimental studies have employed RL techniques and models for decision support in critical care. Nemati et al. developed deep RL algorithms that learn an optimal heparin dosing policy from real trials in large electronic medical records [19, 20]. Sandu et al. studied the blood pressure regulation problem in post-cardiac surgery patients using RL [21]. Padmanabhan et al. resorted to RL for the control of continuous intravenous infusion of propofol for ICU patients, both considering the anesthetic effect and regulating the mean arterial pressure to a desired range [8]. Raghu et al. proposed an approach to deduce treatment policies for septic patients using continuous deep RL methods [22], and Weng et al. applied RL to learn personalized optimal glycemic treatments for severely ill septic patients [9].
The most related work is that by Prasad et al., who applied batch RL algorithms, fitted Q iteration with extremely randomized trees, to determine the best weaning time of invasive mechanical ventilation, and the associated personalized sedative dosage [18]. Results demonstrate that the learned policies show promise in recommending weaning protocols with improved outcomes, in terms of minimizing rates of reintubation and regulating physiological stability. However, all these studies are built upon a well predefined reward function that requires heavy domain knowledge and manual engineering.Ng and Russell first introduced IRL to describe the problem of recovering a reward function of an MDP from demonstrations [10]. Numerous IRL methods have been proposed afterwards, including Apprenticeship Learning [11], Maximum Entropy IRL [23], Bayesian IRL [24], and nonlinear representations of the reward function using Gaussian processes [25]. Most of these methods need to solve an RL problem in each step of reward learning, requiring an accurate model of the system’s dynamics that is either given a priori or can be estimated well enough from demonstrations. However, such accurate models are rarely available in clinical settings. How to guarantee the performance of the RL solutions in an IRL process is an unsolved issue in IRL applications, especially in clinical settings where the only available information is the observations of a clinician’s treatment data that are subject to unavoidable noise, bias and censoring issues. | [
"28815137",
"29034482",
"25091172",
"21799585",
"28030999",
"27219127"
] | [
{
"pmid": "28815137",
"title": "Combining Kernel and Model Based Learning for HIV Therapy Selection.",
"abstract": "We present a mixture-of-experts approach for HIV therapy selection. The heterogeneity in patient data makes it difficult for one particular model to succeed at providing suitable therapy predictions for all patients. An appropriate means for addressing this heterogeneity is through combining kernel and model-based techniques. These methods capture different kinds of information: kernel-based methods are able to identify clusters of similar patients, and work well when modelling the viral response for these groups. In contrast, model-based methods capture the sequential process of decision making, and are able to find simpler, yet accurate patterns in response for patients outside these groups. We take advantage of this information by proposing a mixture-of-experts model that automatically selects between the methods in order to assign the most appropriate therapy choice to an individual. Overall, we verify that therapy combinations proposed using this approach significantly outperform previous methods."
},
{
"pmid": "29034482",
"title": "Deep reinforcement learning for automated radiation adaptation in lung cancer.",
"abstract": "PURPOSE\nTo investigate deep reinforcement learning (DRL) based on historical treatment plans for developing automated radiation adaptation protocols for nonsmall cell lung cancer (NSCLC) patients that aim to maximize tumor local control at reduced rates of radiation pneumonitis grade 2 (RP2).\n\n\nMETHODS\nIn a retrospective population of 114 NSCLC patients who received radiotherapy, a three-component neural networks framework was developed for deep reinforcement learning (DRL) of dose fractionation adaptation. Large-scale patient characteristics included clinical, genetic, and imaging radiomics features in addition to tumor and lung dosimetric variables. First, a generative adversarial network (GAN) was employed to learn patient population characteristics necessary for DRL training from a relatively limited sample size. Second, a radiotherapy artificial environment (RAE) was reconstructed by a deep neural network (DNN) utilizing both original and synthetic data (by GAN) to estimate the transition probabilities for adaptation of personalized radiotherapy patients' treatment courses. Third, a deep Q-network (DQN) was applied to the RAE for choosing the optimal dose in a response-adapted treatment setting. This multicomponent reinforcement learning approach was benchmarked against real clinical decisions that were applied in an adaptive dose escalation clinical protocol. In which, 34 patients were treated based on avid PET signal in the tumor and constrained by a 17.2% normal tissue complication probability (NTCP) limit for RP2. The uncomplicated cure probability (P+) was used as a baseline reward function in the DRL.\n\n\nRESULTS\nTaking our adaptive dose escalation protocol as a blueprint for the proposed DRL (GAN + RAE + DQN) architecture, we obtained an automated dose adaptation estimate for use at ∼2/3 of the way into the radiotherapy treatment course. By letting the DQN component freely control the estimated adaptive dose per fraction (ranging from 1-5 Gy), the DRL automatically favored dose escalation/de-escalation between 1.5 and 3.8 Gy, a range similar to that used in the clinical protocol. The same DQN yielded two patterns of dose escalation for the 34 test patients, but with different reward variants. First, using the baseline P+ reward function, individual adaptive fraction doses of the DQN had similar tendencies to the clinical data with an RMSE = 0.76 Gy; but adaptations suggested by the DQN were generally lower in magnitude (less aggressive). Second, by adjusting the P+ reward function with higher emphasis on mitigating local failure, better matching of doses between the DQN and the clinical protocol was achieved with an RMSE = 0.5 Gy. Moreover, the decisions selected by the DQN seemed to have better concordance with patients eventual outcomes. In comparison, the traditional temporal difference (TD) algorithm for reinforcement learning yielded an RMSE = 3.3 Gy due to numerical instabilities and lack of sufficient learning.\n\n\nCONCLUSION\nWe demonstrated that automated dose adaptation by DRL is a feasible and a promising approach for achieving similar results to those chosen by clinicians. The process may require customization of the reward function if individual cases were to be considered. However, development of this framework into a fully credible autonomous system for clinical decision support would require further validation on larger multi-institutional datasets."
},
{
"pmid": "25091172",
"title": "Optimization of anemia treatment in hemodialysis patients via reinforcement learning.",
"abstract": "OBJECTIVE\nAnemia is a frequent comorbidity in hemodialysis patients that can be successfully treated by administering erythropoiesis-stimulating agents (ESAs). ESAs dosing is currently based on clinical protocols that often do not account for the high inter- and intra-individual variability in the patient's response. As a result, the hemoglobin level of some patients oscillates around the target range, which is associated with multiple risks and side-effects. This work proposes a methodology based on reinforcement learning (RL) to optimize ESA therapy.\n\n\nMETHODS\nRL is a data-driven approach for solving sequential decision-making problems that are formulated as Markov decision processes (MDPs). Computing optimal drug administration strategies for chronic diseases is a sequential decision-making problem in which the goal is to find the best sequence of drug doses. MDPs are particularly suitable for modeling these problems due to their ability to capture the uncertainty associated with the outcome of the treatment and the stochastic nature of the underlying process. The RL algorithm employed in the proposed methodology is fitted Q iteration, which stands out for its ability to make an efficient use of data.\n\n\nRESULTS\nThe experiments reported here are based on a computational model that describes the effect of ESAs on the hemoglobin level. The performance of the proposed method is evaluated and compared with the well-known Q-learning algorithm and with a standard protocol. Simulation results show that the performance of Q-learning is substantially lower than FQI and the protocol. When comparing FQI and the protocol, FQI achieves an increment of 27.6% in the proportion of patients that are within the targeted range of hemoglobin during the period of treatment. In addition, the quantity of drug needed is reduced by 5.13%, which indicates a more efficient use of ESAs.\n\n\nCONCLUSION\nAlthough prospective validation is required, promising results demonstrate the potential of RL to become an alternative to current protocols."
},
{
"pmid": "21799585",
"title": "Informing sequential clinical decision-making through reinforcement learning: an empirical study.",
"abstract": "This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia."
},
{
"pmid": "28030999",
"title": "Seizure Control in a Computational Model Using a Reinforcement Learning Stimulation Paradigm.",
"abstract": "Neuromodulation technologies such as vagus nerve stimulation and deep brain stimulation, have shown some efficacy in controlling seizures in medically intractable patients. However, inherent patient-to-patient variability of seizure disorders leads to a wide range of therapeutic efficacy. A patient specific approach to determining stimulation parameters may lead to increased therapeutic efficacy while minimizing stimulation energy and side effects. This paper presents a reinforcement learning algorithm that optimizes stimulation frequency for controlling seizures with minimum stimulation energy. We apply our method to a computational model called the epileptor. The epileptor model simulates inter-ictal and ictal local field potential data. In order to apply reinforcement learning to the Epileptor, we introduce a specialized reward function and state-space discretization. With the reward function and discretization fixed, we test the effectiveness of the temporal difference reinforcement learning algorithm (TD(0)). For periodic pulsatile stimulation, we derive a relation that describes, for any stimulation frequency, the minimal pulse amplitude required to suppress seizures. The TD(0) algorithm is able to identify parameters that control seizures quickly. Additionally, our results show that the TD(0) algorithm refines the stimulation frequency to minimize stimulation energy thereby converging to optimal parameters reliably. An advantage of the TD(0) algorithm is that it is adaptive so that the parameters necessary to control the seizures can change over time. We show that the algorithm can converge on the optimal solution in simulation with slow and fast inter-seizure intervals."
},
{
"pmid": "27219127",
"title": "MIMIC-III, a freely accessible critical care database.",
"abstract": "MIMIC-III ('Medical Information Mart for Intensive Care') is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework."
}
] |
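The ventilation-weaning record above learns its policy with Fitted Q Iteration (FQI) using gradient-boosted trees as the regressor, before Bayesian inverse RL infers the reward weights. The snippet below is a hedged sketch of the generic FQI loop only, with scikit-learn's GradientBoostingRegressor standing in for the paper's learner; the state features, two-action set, discount factor, rewards and iteration count are placeholders, and the inverse-RL step is not shown.

```python
# Generic Fitted Q Iteration over a batch of retrospective transitions
# (s, a, r, s'): each iteration regresses Q(s, a) onto r + gamma * max_a' Q(s', a').
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n, state_dim, actions, gamma = 500, 6, [0, 1], 0.9   # toy sizes, placeholder action set

S = rng.normal(size=(n, state_dim))                  # states (e.g. vital-sign features)
A = rng.integers(0, len(actions), size=n)            # actions taken (e.g. wean vs. keep)
R = rng.normal(size=n)                               # placeholder rewards
S_next = rng.normal(size=(n, state_dim))             # next states


def featurize(states, acts):
    """Concatenate state features with the action index."""
    return np.hstack([states, np.asarray(acts).reshape(-1, 1)])


q_model, targets = None, R.copy()
for _ in range(30):                                  # FQI iterations
    q_model = GradientBoostingRegressor(n_estimators=100, max_depth=3)
    q_model.fit(featurize(S, A), targets)
    # Bootstrapped target: immediate reward plus discounted best next-state value.
    next_q = np.column_stack(
        [q_model.predict(featurize(S_next, np.full(n, a))) for a in actions])
    targets = R + gamma * next_q.max(axis=1)


def greedy_action(state):
    """Greedy policy: pick the action with the highest fitted Q-value."""
    q_vals = [q_model.predict(featurize(state.reshape(1, -1), [a]))[0] for a in actions]
    return actions[int(np.argmax(q_vals))]


print(greedy_action(S[0]))
```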
BMC Medical Informatics and Decision Making | 30961582 | PMC6454670 | 10.1186/s12911-019-0771-6 | On building a diabetes centric knowledge base via mining the web | Background: Diabetes has become one of the hot topics in life science research. To support the analytical procedures, researchers and analysts expend substantial labor to collect experimental data, a process that is also error-prone. To reduce the cost and to ensure data quality, there is a growing trend of extracting clinical events in the form of knowledge from electronic medical records (EMRs). To do so, we first need a high-coverage knowledge base (KB) of a specific disease to support the above extraction tasks, called KB-based extraction. Methods: We propose an approach to build a diabetes-centric knowledge base (a.k.a. DKB) via mining the Web. In particular, we first extract knowledge from semi-structured contents of vertical portals, fuse the individual knowledge from each site, and further map it to a unified KB. The target DKB is then extracted from the overall KB based on a distance-based Expectation-Maximization (EM) algorithm. Results: During the experiments, we selected eight popular vertical portals in China as data sources to construct the DKB. There are 7703 instances and 96,041 edges in the final diabetes KB, covering diseases, symptoms, western medicines, traditional Chinese medicines, examinations, departments, and body structures. The accuracy of the DKB is 95.91%. Besides the quality assessment of the knowledge extracted from vertical portals, we also carried out detailed experiments to evaluate the knowledge fusion performance as well as the convergence of the distance-based EM algorithm, with positive results. Conclusions: In this paper, we introduced an approach to constructing the DKB. A knowledge extraction and fusion pipeline was first used to extract semi-structured data from vertical portals, and the individual KBs were further fused into a unified knowledge base. After that, we developed a distance-based Expectation-Maximization algorithm to extract a subset from the overall knowledge base, forming the target DKB. Experiments showed that the data in the DKB are rich and of high quality. | Related work: There are three lines of research related to the problem we solve. Details are discussed as follows.
Existing Knowledge Bases Constructed via Mining the Web: Over the past decade, a number of automatically constructed large-scale knowledge bases built via Web mining have emerged, which contain millions or even billions of items of knowledge. Such knowledge bases usually employ information extraction techniques to extract knowledge from the Web (e.g. Wikipedia articles or general Web pages). Notable endeavors in the academic community include Open Information Extraction [13], DBpedia [14], and YAGO [15]. During this period, a number of large-scale Chinese knowledge bases have also emerged, including Zhishi.me [16], SSCO [17] and the commercial knowledge bases Sogou Zhilifang [18] and Baidu Zhixin [19] supporting Chinese search engines. Medical Knowledge Bases: There exist many different types of medical KBs. For example, UMLS [20] and SNOMED-CT [21] promote standardization and inter-operability for biomedical information systems and services. DrugBank [22] and SIDER [23] contain drug-related information. These knowledge bases are built and maintained manually with heavy human effort. There are also some studies in the medical field that construct knowledge bases using automatic algorithms. Knowlife [24] is a knowledge graph for the biomedical sciences that extracts and fuses data from scientific publications, encyclopedic health care portals and online communities. Its authors used a distant supervision algorithm in the extraction phase and employed logical reasoning for consistency checking. Different from the above-mentioned work, our extraction targets diabetes and diabetes-related entities, and the types of data sources and methods used are quite different. Diabetes Knowledge Bases: T1Dbase [25], T2D-Db [26], T2DGADB [27] and T2D@ZJU [28] are actively serving KBs for Type I and Type II diabetes. T1Dbase supports the Type I diabetes (T1D) community with genetics and genomics of Type I diabetes susceptibility. The T2D-Db database provides an integrated platform for a better molecular-level understanding of Type II diabetes mellitus and its pathologies. It manually curated 330 candidate genes from the PubMed literature and provided their corresponding information. T2DGADB collected 701 publications on T2D genetic association studies, and T2D@ZJU contains heterogeneous connections associated with Type II diabetes. These databases concentrate on genetic association studies as well as more integrated resources involving gene expression, pathways and protein-protein interactions. However, all the existing diabetes KBs rely on English resources, and Chinese DKBs remain rather limited, which is the focus of our work (an illustrative sketch of the distance-based EM selection step follows this record's reference list). | [
"21879900",
"18605991"
] | [
{
"pmid": "21879900",
"title": "Electronic health records and quality of diabetes care.",
"abstract": "BACKGROUND\nAvailable studies have shown few quality-related advantages of electronic health records (EHRs) over traditional paper records. We compared achievement of and improvement in quality standards for diabetes at practices using EHRs with those at practices using paper records. All practices, including many safety-net primary care practices, belonged to a regional quality collaborative and publicly reported performance.\n\n\nMETHODS\nWe used generalized estimating equations to calculate the percentage-point difference between EHR-based and paper-based practices with respect to achievement of composite standards for diabetes care (including four component standards) and outcomes (five standards), after adjusting for covariates and accounting for clustering. In addition to insurance type (Medicare, commercial, Medicaid, or uninsured), patient-level covariates included race or ethnic group (white, black, Hispanic, or other), age, sex, estimated household income, and level of education. Analyses were conducted separately for the overall sample and for safety-net practices.\n\n\nRESULTS\nFrom July 2009 through June 2010, data were reported for 27,207 adults with diabetes seen at 46 practices; safety-net practices accounted for 38% of patients. After adjustment for covariates, achievement of composite standards for diabetes care was 35.1 percentage points higher at EHR sites than at paper-based sites (P<0.001), and achievement of composite standards for outcomes was 15.2 percentage points higher (P=0.005). EHR sites were associated with higher achievement on eight of nine component standards. Such sites were also associated with greater improvement in care (a difference of 10.2 percentage points in annual improvement, P<0.001) and outcomes (a difference of 4.1 percentage points in annual improvement, P=0.02). Across all insurance types, EHR sites were associated with significantly higher achievement of care and outcome standards and greater improvement in diabetes care. Results confined to safety-net practices were similar.\n\n\nCONCLUSIONS\nThese findings support the premise that federal policies encouraging the meaningful use of EHRs may improve the quality of care across insurance types."
},
{
"pmid": "18605991",
"title": "T2D-Db: an integrated platform to study the molecular basis of Type 2 diabetes.",
"abstract": "BACKGROUND\nType 2 Diabetes Mellitus (T2DM) is a non insulin dependent, complex trait disease that develops due to genetic predisposition and environmental factors. The advanced stage in type 2 diabetes mellitus leads to several micro and macro vascular complications like nephropathy, neuropathy, retinopathy, heart related problems etc. Studies performed on the genetics, biochemistry and molecular biology of this disease to understand the pathophysiology of type 2 diabetes mellitus has led to the generation of a surfeit of data on candidate genes and related aspects. The research is highly progressive towards defining the exact etiology of this disease.\n\n\nRESULTS\nT2D-Db (Type 2 diabetes Database) is a comprehensive web resource, which provides integrated and curated information on almost all known molecular components involved in the pathogenesis of type 2 diabetes mellitus in the three widely studied mammals namely human, mouse and rat. Information on candidate genes, SNPs (Single Nucleotide Polymorphism) in candidate genes or candidate regions, genome wide association studies (GWA), tissue specific gene expression patterns, EST (Expressed Sequence Tag) data, expression information from microarray data, pathways, protein-protein interactions and disease associated risk factors or complications have been structured in this on line resource.\n\n\nCONCLUSION\nInformation available in T2D-Db provides an integrated platform for the better molecular level understanding of type 2 diabetes mellitus and its pathogenesis. Importantly, the resource facilitates graphical presentation of the gene/genome wide map of SNP markers and protein-protein interaction networks, besides providing the heat map diagram of the selected gene(s) in an organism across microarray expression experiments from either single or multiple studies. These features aid to the data interpretation in an integrative way. T2D-Db is to our knowledge the first publicly available resource that can cater to the needs of researchers working on different aspects of type 2 diabetes mellitus."
}
] |
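The distant-supervision extraction step mentioned in the related work above (as used by Knowlife-style systems) can be made concrete with a minimal sketch: sentences that mention an entity pair already related in a seed knowledge base are harvested as weakly labelled training examples for that relation. The seed triples, relation names, and sentences below are hypothetical placeholders rather than content from any cited knowledge base; real pipelines add entity linking, negative sampling, and noise-tolerant training on top of this core idea.

```python
# Minimal, illustrative sketch of distant supervision for relation extraction.
# SEED_KB and SENTENCES are toy placeholders, not data from the cited resources.

SEED_KB = {
    ("metformin", "type 2 diabetes"): "treats",
    ("insulin resistance", "type 2 diabetes"): "risk_factor_of",
}

SENTENCES = [
    "Metformin is commonly prescribed for type 2 diabetes.",
    "Insulin resistance often precedes type 2 diabetes.",
    "Type 2 diabetes prevalence is rising worldwide.",
]

def distant_supervision(sentences, seed_kb):
    """Weakly label a sentence with a relation when it mentions a known entity pair."""
    examples = []
    for sent in sentences:
        low = sent.lower()
        for (head, tail), relation in seed_kb.items():
            if head in low and tail in low:
                examples.append({"sentence": sent, "head": head,
                                 "tail": tail, "relation": relation})
    return examples

for ex in distant_supervision(SENTENCES, SEED_KB):
    print(ex["relation"], "<-", ex["sentence"])
```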
BMC Medical Informatics and Decision Making | 30961606 | PMC6454675 | 10.1186/s12911-019-0755-6 | Incorporating causal factors into reinforcement learning for dynamic treatment regimes in HIV | Background: Reinforcement learning (RL) provides a promising technique to solve complex sequential decision making problems in health care domains. However, existing studies simply apply naive RL algorithms to discover optimal treatment strategies for a targeted problem. This kind of direct application ignores the abundant causal relationships between treatment options and the associated outcomes that are inherent in medical domains. Methods: This paper investigates how to integrate causal factors into an RL process in order to improve the final learning performance and increase the explainability of learned strategies. A causal policy gradient algorithm is proposed and evaluated in dynamic treatment regimes (DTRs) for HIV based on a simulated computational model. Results: Simulations prove the effectiveness of the proposed algorithm for designing more efficient treatment protocols in HIV, and different definitions of the causal factors can have a significant influence on the final learning performance, indicating the necessity of human prior knowledge for defining suitable causal relationships for a given problem. Conclusions: More efficient and robust DTRs for HIV can be derived by incorporating causal factors between the options of anti-HIV drugs and the associated treatment outcomes. | Related work RL has been applied to DTRs in HIV by several studies. Ernst et al. [9] first introduced RL techniques for computing Structured Treatment Interruption (STI) strategies for HIV-infected patients. Using a mathematical model [8] to artificially generate the clinical data, a batch RL method, i.e., fitted Q iteration (FQI) with extremely randomized trees, was applied to learn an optimal drug prescription strategy in an off-line manner. The derived STI strategy features cycling between the two main classes of anti-HIV drugs: Reverse Transcriptase Inhibitors (RTI) and Protease Inhibitors (PI). Using the same mathematical model, Parbhoo [10] further applied three kinds of batch RL methods, namely FQI with extremely randomized trees, neural FQI, and least-squares policy iteration (LSPI), to the problem of drug scheduling and HIV treatment design. Results indicated that each learning technique had its own advantages and disadvantages. Moreover, testing on ten years of real clinical data from 250 HIV-infected patients at Charlotte Maxeke Johannesburg Academic Hospital, South Africa, verified that the RL methods were capable of suggesting treatments reasonably consistent with those suggested by clinicians. The authors in [11] applied the Q-learning algorithm to HIV treatment and achieved good performance in controlling free virions for both certain and uncertain HIV models. A mixture-of-experts approach was proposed in [2] to combine the strengths of both kernel-based regression methods (i.e., a history-alignment model) and RL (i.e., a model-based Bayesian POMDP model) for HIV therapy selection. Making use of a subset of the EuResist database consisting of HIV genotype and treatment response data for 32,960 patients, together with the 312 most common drug combinations in the cohort, the therapies derived by the mixture-of-experts approach outperformed those derived using each method alone. Marivate et al.
[12] formalized a routine to accommodate multiple sources of uncertainty in batch RL methods in order to better evaluate the effectiveness of treatments across subpopulations of HIV patients. Killian et al. [13] similarly attempted to address and identify variations across subpopulations in the development of HIV treatment policies by transferring knowledge between task instances. Unlike the above studies, which mainly focus on value-based RL for developing treatment policies in HIV, we are the first to evaluate policy gradient RL methods on such problems. Moreover, in this paper we aim to model causal relationships between the options of anti-HIV drugs and the associated treatment effects, and to introduce such causal factors into the policy gradient learning process, in order to facilitate learning and increase its interpretability (a minimal illustrative sketch of a causally weighted policy gradient update is given below, after the reference entries). | [
"28815137",
"29034482",
"21799585",
"20369969"
] | [
{
"pmid": "28815137",
"title": "Combining Kernel and Model Based Learning for HIV Therapy Selection.",
"abstract": "We present a mixture-of-experts approach for HIV therapy selection. The heterogeneity in patient data makes it difficult for one particular model to succeed at providing suitable therapy predictions for all patients. An appropriate means for addressing this heterogeneity is through combining kernel and model-based techniques. These methods capture different kinds of information: kernel-based methods are able to identify clusters of similar patients, and work well when modelling the viral response for these groups. In contrast, model-based methods capture the sequential process of decision making, and are able to find simpler, yet accurate patterns in response for patients outside these groups. We take advantage of this information by proposing a mixture-of-experts model that automatically selects between the methods in order to assign the most appropriate therapy choice to an individual. Overall, we verify that therapy combinations proposed using this approach significantly outperform previous methods."
},
{
"pmid": "29034482",
"title": "Deep reinforcement learning for automated radiation adaptation in lung cancer.",
"abstract": "PURPOSE\nTo investigate deep reinforcement learning (DRL) based on historical treatment plans for developing automated radiation adaptation protocols for nonsmall cell lung cancer (NSCLC) patients that aim to maximize tumor local control at reduced rates of radiation pneumonitis grade 2 (RP2).\n\n\nMETHODS\nIn a retrospective population of 114 NSCLC patients who received radiotherapy, a three-component neural networks framework was developed for deep reinforcement learning (DRL) of dose fractionation adaptation. Large-scale patient characteristics included clinical, genetic, and imaging radiomics features in addition to tumor and lung dosimetric variables. First, a generative adversarial network (GAN) was employed to learn patient population characteristics necessary for DRL training from a relatively limited sample size. Second, a radiotherapy artificial environment (RAE) was reconstructed by a deep neural network (DNN) utilizing both original and synthetic data (by GAN) to estimate the transition probabilities for adaptation of personalized radiotherapy patients' treatment courses. Third, a deep Q-network (DQN) was applied to the RAE for choosing the optimal dose in a response-adapted treatment setting. This multicomponent reinforcement learning approach was benchmarked against real clinical decisions that were applied in an adaptive dose escalation clinical protocol. In which, 34 patients were treated based on avid PET signal in the tumor and constrained by a 17.2% normal tissue complication probability (NTCP) limit for RP2. The uncomplicated cure probability (P+) was used as a baseline reward function in the DRL.\n\n\nRESULTS\nTaking our adaptive dose escalation protocol as a blueprint for the proposed DRL (GAN + RAE + DQN) architecture, we obtained an automated dose adaptation estimate for use at ∼2/3 of the way into the radiotherapy treatment course. By letting the DQN component freely control the estimated adaptive dose per fraction (ranging from 1-5 Gy), the DRL automatically favored dose escalation/de-escalation between 1.5 and 3.8 Gy, a range similar to that used in the clinical protocol. The same DQN yielded two patterns of dose escalation for the 34 test patients, but with different reward variants. First, using the baseline P+ reward function, individual adaptive fraction doses of the DQN had similar tendencies to the clinical data with an RMSE = 0.76 Gy; but adaptations suggested by the DQN were generally lower in magnitude (less aggressive). Second, by adjusting the P+ reward function with higher emphasis on mitigating local failure, better matching of doses between the DQN and the clinical protocol was achieved with an RMSE = 0.5 Gy. Moreover, the decisions selected by the DQN seemed to have better concordance with patients eventual outcomes. In comparison, the traditional temporal difference (TD) algorithm for reinforcement learning yielded an RMSE = 3.3 Gy due to numerical instabilities and lack of sufficient learning.\n\n\nCONCLUSION\nWe demonstrated that automated dose adaptation by DRL is a feasible and a promising approach for achieving similar results to those chosen by clinicians. The process may require customization of the reward function if individual cases were to be considered. However, development of this framework into a fully credible autonomous system for clinical decision support would require further validation on larger multi-institutional datasets."
},
{
"pmid": "21799585",
"title": "Informing sequential clinical decision-making through reinforcement learning: an empirical study.",
"abstract": "This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia."
},
{
"pmid": "20369969",
"title": "Dynamic multidrug therapies for hiv: optimal and sti control approaches.",
"abstract": "We formulate a dynamic mathematical model that describes the interaction of the immune system with the human immunodeficiency virus (HIV) and that permits drug \"cocktail \" therapies. We derive HIV therapeutic strategies by formulating and analyzing an optimal control problem using two types of dynamic treatments representing reverse transcriptase (RT) in hibitors and protease inhibitors (PIs). Continuous optimal therapies are found by solving the corresponding optimality systems. In addition, using ideas from dynamic programming, we formulate and derive suboptimal structured treatment interruptions (STI)in antiviral therapy that include drug-free periods of immune-mediated control of HIV. Our numerical results support a scenario in which STI therapies can lead to long-term control of HIV by the immune response system after discontinuation of therapy."
}
] |
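As a rough illustration of how a causal factor could enter a policy gradient update of the kind described in the record above, the following sketch applies a REINFORCE-style update in which a per-action "causal weight" rescales the gradient contribution of each chosen action. The toy environment, reward, weights, and hyperparameters are assumptions made purely for demonstration; this is neither the paper's causal policy gradient algorithm nor the HIV simulation model it evaluates on.

```python
# Illustrative sketch only: REINFORCE with a per-action "causal weight" that
# rescales each step's gradient contribution. Environment, reward, and weights
# are toy placeholders, not the HIV model or algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 4                 # e.g. 4 combinations of RTI/PI on or off
theta = np.zeros((N_STATES, N_ACTIONS))    # tabular softmax policy parameters
causal_w = np.array([1.0, 1.3, 1.3, 0.7])  # hypothetical causal weights per action

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sample_episode(horizon=10):
    """Roll out a toy episode: random transitions, reward favours action 1."""
    traj, s = [], rng.integers(N_STATES)
    for _ in range(horizon):
        probs = softmax(theta[s])
        a = rng.choice(N_ACTIONS, p=probs)
        r = 1.0 if a == 1 else 0.0         # placeholder reward
        traj.append((s, a, r))
        s = rng.integers(N_STATES)         # placeholder dynamics
    return traj

def causal_reinforce_update(traj, lr=0.1, gamma=0.95):
    """REINFORCE update with each step's gradient scaled by its causal weight."""
    G = 0.0
    for s, a, r in reversed(traj):
        G = r + gamma * G                  # return-to-go
        probs = softmax(theta[s])
        grad_logp = -probs
        grad_logp[a] += 1.0                # grad of log pi(a|s) for tabular softmax
        theta[s] += lr * causal_w[a] * G * grad_logp

for _ in range(200):
    causal_reinforce_update(sample_episode())

print("learned action preferences in state 0:", softmax(theta[0]).round(3))
```

In a real DTR setting the state would encode patient measurements (e.g., viral load and CD4+ counts) and the causal weights would come from domain knowledge or estimated treatment effects rather than fixed constants.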
Frontiers in Genetics | 31001322 | PMC6456706 | 10.3389/fgene.2019.00276 | UltraStrain: An NGS-Based Ultra Sensitive Strain Typing Method for Salmonella enterica | In the last few years, advances in next-generation sequencing (NGS) technology for whole genome sequencing (WGS) of foodborne pathogens have provided drastic improvements in food pathogen outbreak surveillance. WGS of foodborne pathogens enables identification of pathogens from food or environmental samples, including difficult-to-detect pathogens in culture-negative infections. Compared to traditional low-resolution methods such as pulsed-field gel electrophoresis (PFGE), WGS can differentiate even closely related strains of the same species, thus enabling rapid identification of the food source associated with a pathogen outbreak event and a fast mitigation plan. In this paper, we present UltraStrain, a fast and ultra-sensitive pathogen detection and strain typing method for Salmonella enterica (S. enterica) based on WGS data analysis. In the proposed method, a noise filtering step is first performed in which the raw sequencing data are mapped to a synthetic species-specific reference genome generated from S. enterica specific marker sequences, to avoid potential interference from closely related species in low-spike samples. After that, a statistical learning-based method is used to identify candidate strains, from a database of known S. enterica strains, that best explain the retained S. enterica specific reads. Finally, a refinement step is performed by mapping all the reads before filtering onto the identified top candidate strains, and recalculating the probability of presence for each candidate strain. Experimental results using both synthetic and real sequencing data show that the proposed method is able to identify the correct S. enterica strains from low-spike samples, and outperforms several existing strain-typing methods in terms of sensitivity and accuracy. | 2. Related Work Taxonomic profiling of metagenome data can be done by aligning every read to a large database of genomic sequences using BLAST (https://blast.ncbi.nlm.nih.gov/Blast.cgi). However, this is often not clinically practical due to the large data volume. Other methods for strain typing from metagenome data include de novo assembly-based methods and mapping-based methods. Depending on how the reference sequence library is constructed, mapping-based methods further include k-mer and marker-gene-based methods, and those that map reads to full reference genomes.
Metagenomic assembly of single isolates can be used to identify strains of uncharacterized species with high sensitivity. Strain-level metagenomic assembly methods, such as the Lineage (OBrien et al., 2014) and DESMAN (Quince et al., 2017) algorithms, typically use contig binning and statistical analysis of base frequencies across different strains in the sample to resolve ambiguities. The intuition behind this is that the frequencies of variants associated with a strain fluctuate with the abundance of that strain. However, metagenomic assembly for multiple strains is computationally challenging. In addition, especially for complex clinical samples in which multiple similar strains co-exist, it is generally impossible for assembly-based methods to achieve high accuracy at the strain level due to the conserved regions shared between strains.
Instead, direct assembly of multiple similar strains typically produces highly fragmented assemblies that represent aggregates of the co-occurring strains. Therefore, it is difficult to generalize assembly-based approaches to large sets of metagenomes and low-abundance microbes.
Mapping-based methods align the reads to a target reference library and apply statistical and probabilistic analysis techniques to the alignment results to identify the multiple strains present in the sample. Raw reads of a metagenome can be aligned against full reference genomes for microbe identification if the library of target reference genomes can be constructed. Short-read alignment-based methods can achieve high accuracy in strain-level identification and are considerably faster than metagenome assembly-based methods. Sigma (Ahn et al., 2015) is a read mapping-based method that maps the metagenomic dataset onto a user-defined database of reference genomes. A probabilistic model is used to identify and quantify genomes, and the reads are assigned to their most likely reference genomes for variant calling. PathoScope2 (Hong et al., 2014b) builds a complete pipeline for taxonomic profiling and abundance estimation from metagenomic data, integrating modules for read quality control (Hong et al., 2014a), reference library preparation, filtering of host and non-target reads (Byrd et al., 2014), alignment, and Bayesian statistical inference to estimate the posterior probability profiles of identified organisms (Francis et al., 2013). It can quantify the proportions of reads from individual microbial strains in metagenomic data from environmental or clinical samples.
To speed up the alignment process, the reference library may contain only those parts of the reference genomes that have differentiating power among different but closely related strains. In such methods, metagenomic reads are aligned to a set of preselected marker sequences, e.g., k-mers, marker genes, or even pangenomes, and assigned to their most likely origin according to the alignment results. The taxonomic classification can be inferred from phylogenetic distances to these marker sequences. These methods differ in terms of the selection of the markers and the probabilistic algorithms for read assignment. The performance also heavily depends on the completeness of the reference database and on how the marker sequences are extracted.
Kraken (Wood and Salzberg, 2014) is a fast k-mer-based method for metagenomic sequence classification. Kraken builds a database whose records consist of a k-mer and the lowest common ancestor (LCA) of all organisms whose genomes contain that k-mer. The database is built from a user-specified library of genomes and allows quick look-up of the most specific node in the taxonomic tree, leading to fast and accurate strain identification (a minimal illustrative sketch of this k-mer-to-LCA lookup appears below). StrainSeeker (Roosaare et al., 2017) constructs a list of specific k-mers for each node of a given guide tree, whose leaves are all the strains, and analyzes the observed and expected fractions of node-specific k-mers to test the presence of each node in the sample. MetaPhlAn (Segata et al., 2012) is a taxonomic profiling method using marker genes. The method estimates the relative abundance of microbial cells by mapping reads against a reduced set of clade-specific marker sequences that unequivocally identify specific microbial clades at the species level and cover all of the main functional categories.
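To make the k-mer-based classification idea above concrete, the following sketch looks up each k-mer of a read in a toy k-mer-to-taxon table and reports the best-supported taxon. The tiny table, reads, and simple majority vote are illustrative assumptions only; Kraken's actual classifier stores LCAs in a compact database and scores root-to-leaf paths in the taxonomy rather than taking a plain vote.

```python
# Illustrative sketch only: Kraken-style k-mer lookup against a tiny, made-up
# k-mer -> taxon table, with a simple majority vote standing in for Kraken's
# root-to-leaf path scoring.
from collections import Counter

K = 5
KMER_DB = {
    "ACGTA": "S. enterica Typhimurium",
    "CGTAC": "S. enterica Typhimurium",
    "GTACG": "S. enterica (genus-level LCA)",
    "TTGCA": "S. enterica Enteritidis",
}

def classify_read(read, kmer_db, k=K):
    """Count database hits of the read's k-mers and return the best-supported taxon."""
    hits = Counter()
    for i in range(len(read) - k + 1):
        taxon = kmer_db.get(read[i:i + k])
        if taxon is not None:
            hits[taxon] += 1
    if not hits:
        return "unclassified"
    return hits.most_common(1)[0][0]

print(classify_read("ACGTACGT", KMER_DB))   # -> S. enterica Typhimurium
print(classify_read("GGGGGGGG", KMER_DB))   # -> unclassified
```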
MetaPhlAn2 (Truong et al., 2015) further extends the reference library from species-level markers to subspecies markers that enable strain-level analysis, and increases the accuracy of taxonomic composition reconstruction. PanPhlAn (Scholz et al., 2016) builds a pangenome of the species of interest by extracting all genes from available reference genomes and merging them into gene family clusters. The method then leverages gene family co-abundance within a metagenomic sample to identify strain-specific gene repertoires, under the assumption that single-copy genes from the same genome should have comparable sequencing coverage within the sample. | [
"25266224",
"23024404",
"27041363",
"25091138",
"23843222",
"7512093",
"25225611",
"22199392",
"27530840",
"20843356",
"28824552",
"19453749",
"23965924",
"27429480",
"28077169",
"29234176",
"28533988",
"26451363",
"21192848",
"26999001",
"22688413",
"26418763",
"28167665",
"24580807",
"28649236",
"25762776"
] | [
{
"pmid": "25266224",
"title": "Sigma: strain-level inference of genomes from metagenomic analysis for biosurveillance.",
"abstract": "MOTIVATION\nMetagenomic sequencing of clinical samples provides a promising technique for direct pathogen detection and characterization in biosurveillance. Taxonomic analysis at the strain level can be used to resolve serotypes of a pathogen in biosurveillance. Sigma was developed for strain-level identification and quantification of pathogens using their reference genomes based on metagenomic analysis.\n\n\nRESULTS\nSigma provides not only accurate strain-level inferences, but also three unique capabilities: (i) Sigma quantifies the statistical uncertainty of its inferences, which includes hypothesis testing of identified genomes and confidence interval estimation of their relative abundances; (ii) Sigma enables strain variant calling by assigning metagenomic reads to their most likely reference genomes; and (iii) Sigma supports parallel computing for fast analysis of large datasets. The algorithm performance was evaluated using simulated mock communities and fecal samples with spike-in pathogen strains.\n\n\nAVAILABILITY AND IMPLEMENTATION\nSigma was implemented in C++ with source codes and binaries freely available at http://sigma.omicsbio.org.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "23024404",
"title": "A universal method for the identification of bacteria based on general PCR primers.",
"abstract": "The Universal Method (UM) described here will allow the detection of any bacterial rDNA leading to the identification of that bacterium. The method should allow prompt and accurate identification of bacteria. The principle of the method is simple; when a pure PCR product of the 16S gene is obtained, sequenced, and aligned against bacterial DNA data base, then the bacterium can be identified. Confirmation of identity may follow. In this work, several general 16S primers were designed, mixed and applied successfully against 101 different bacterial isolates. One mixture, the Golden mixture7 (G7) detected all tested isolates (67/67). Other golden mixtures; G11, G10, G12, and G5 were useful as well. The overall sensitivity of the UM was 100% since all 101 isolates were detected yielding intended PCR amplicons. A selected PCR band from each of 40 isolates was sequenced and the bacterium identified to species or genus level using BLAST. The results of the UM were consistent with bacterial identities as validated with other identification methods; cultural, API 20E, API 20NE, or genera and species specific PCR primers. Bacteria identified in the study, covered 34 species distributed among 24 genera. The UM should allow the identification of species, genus, novel species or genera, variations within species, and detection of bacterial DNA in otherwise sterile samples such as blood, cerebrospinal fluid, manufactured products, medical supplies, cosmetics, and other samples. Applicability of the method to identifying members of bacterial communities is discussed. The approach itself can be applied to other taxa such as protists and nematodes."
},
{
"pmid": "27041363",
"title": "Recent and emerging innovations in Salmonella detection: a food and environmental perspective.",
"abstract": "Salmonella is a diverse genus of Gram-negative bacilli and a major foodborne pathogen responsible for more than a million illnesses annually in the United States alone. Rapid, reliable detection and identification of this pathogen in food and environmental sources is key to safeguarding the food supply. Traditional microbiological culture techniques have been the 'gold standard' for State and Federal regulators. Unfortunately, the time to result is too long to effectively monitor foodstuffs, especially those with very short shelf lives. Advances in traditional microbiology and molecular biology over the past 25 years have greatly improved the speed at which this pathogen is detected. Nonetheless, food and environmental samples possess a distinctive set of challenges for these newer, more rapid methodologies. Furthermore, more detailed identification and subtyping strategies still rely heavily on the availability of a pure isolate. However, major shifts in DNA sequencing technologies are meeting this challenge by advancing the detection, identification and subtyping of Salmonella towards a culture-independent diagnostic framework. This review will focus on current approaches and state-of-the-art next-generation advances in the detection, identification and subtyping of Salmonella from food and environmental sources."
},
{
"pmid": "25091138",
"title": "Clinical PathoScope: rapid alignment and filtration for accurate pathogen identification in clinical samples using unassembled sequencing data.",
"abstract": "BACKGROUND\nThe use of sequencing technologies to investigate the microbiome of a sample can positively impact patient healthcare by providing therapeutic targets for personalized disease treatment. However, these samples contain genomic sequences from various sources that complicate the identification of pathogens.\n\n\nRESULTS\nHere we present Clinical PathoScope, a pipeline to rapidly and accurately remove host contamination, isolate microbial reads, and identify potential disease-causing pathogens. We have accomplished three essential tasks in the development of Clinical PathoScope. First, we developed an optimized framework for pathogen identification using a computational subtraction methodology in concordance with read trimming and ambiguous read reassignment. Second, we have demonstrated the ability of our approach to identify multiple pathogens in a single clinical sample, accurately identify pathogens at the subspecies level, and determine the nearest phylogenetic neighbor of novel or highly mutated pathogens using real clinical sequencing data. Finally, we have shown that Clinical PathoScope outperforms previously published pathogen identification methods with regard to computational speed, sensitivity, and specificity.\n\n\nCONCLUSIONS\nClinical PathoScope is the only pathogen identification method currently available that can identify multiple pathogens from mixed samples and distinguish between very closely related species and strains in samples with very few reads per pathogen. Furthermore, Clinical PathoScope does not rely on genome assembly and thus can more rapidly complete the analysis of a clinical sample when compared with current assembly-based methods. Clinical PathoScope is freely available at: http://sourceforge.net/projects/pathoscope/."
},
{
"pmid": "23843222",
"title": "Pathoscope: species identification and strain attribution with unassembled sequencing data.",
"abstract": "Emerging next-generation sequencing technologies have revolutionized the collection of genomic data for applications in bioforensics, biosurveillance, and for use in clinical settings. However, to make the most of these new data, new methodology needs to be developed that can accommodate large volumes of genetic data in a computationally efficient manner. We present a statistical framework to analyze raw next-generation sequence reads from purified or mixed environmental or targeted infected tissue samples for rapid species identification and strain attribution against a robust database of known biological agents. Our method, Pathoscope, capitalizes on a Bayesian statistical framework that accommodates information on sequence quality, mapping quality, and provides posterior probabilities of matches to a known database of target genomes. Importantly, our approach also incorporates the possibility that multiple species can be present in the sample and considers cases when the sample species/strain is not in the reference database. Furthermore, our approach can accurately discriminate between very closely related strains of the same species with very little coverage of the genome and without the need for multiple alignment steps, extensive homology searches, or genome assembly--which are time-consuming and labor-intensive steps. We demonstrate the utility of our approach on genomic data from purified and in silico \"environmental\" samples from known bacterial agents impacting human health for accuracy assessment and comparison with other approaches."
},
{
"pmid": "7512093",
"title": "PCR primers and probes for the 16S rRNA gene of most species of pathogenic bacteria, including bacteria found in cerebrospinal fluid.",
"abstract": "A set of broad-range PCR primers for the 16S rRNA gene in bacteria were tested, along with three series of oligonucleotide probes to detect the PCR product. The first series of probes is broad in range and consists of a universal bacterial probe, a gram-positive probe, a Bacteroides-Flavobacterium probe, and two probes for other gram-negative species. The second series was designed to detect PCR products from seven major bacterial species or groups frequently causing meningitis: Neisseria meningitidis, Haemophilus influenzae, Streptococcus pneumoniae, S. agalactiae, Escherichia coli and other enteric bacteria, Listeria monocytogenes, and Staphylococcus aureus. The third series was designed for the detection of DNA from species or genera commonly considered potential contaminants of clinical samples, including cerebrospinal fluid (CSF): Bacillus, Corynebacterium, Propionibacterium, and coagulase-negative Staphylococcus spp. The primers amplified DNA from all 124 different species of bacteria tested. Southern hybridization testing of the broad-range probes with washes containing 3 M tetramethylammonium chloride indicated that this set of probes correctly identified all but two of the 102 bacterial species tested, the exceptions being Deinococcus radiopugnans and Gardnerella vaginalis. The gram-negative and gram-positive probes hybridized to isolates of two newly characterized bacteria, Alloiococcus otitis and Rochalimaea henselii, as predicted by Gram stain characteristics. The CSF pathogen and contaminant probe sequences were compared with available sequence information and with sequencing data for 32 different species. Testing of the CSF pathogen and contaminant probes against DNA from over 60 different strains indicated that, with the exception of the coagulase-negative Staphylococcus probes, these probes provided the correct identification of bacterial species known to be found in CSF."
},
{
"pmid": "25225611",
"title": "PathoScope 2.0: a complete computational framework for strain identification in environmental or clinical sequencing samples.",
"abstract": "BACKGROUND\nRecent innovations in sequencing technologies have provided researchers with the ability to rapidly characterize the microbial content of an environmental or clinical sample with unprecedented resolution. These approaches are producing a wealth of information that is providing novel insights into the microbial ecology of the environment and human health. However, these sequencing-based approaches produce large and complex datasets that require efficient and sensitive computational analysis workflows. Many recent tools for analyzing metagenomic-sequencing data have emerged, however, these approaches often suffer from issues of specificity, efficiency, and typically do not include a complete metagenomic analysis framework.\n\n\nRESULTS\nWe present PathoScope 2.0, a complete bioinformatics framework for rapidly and accurately quantifying the proportions of reads from individual microbial strains present in metagenomic sequencing data from environmental or clinical samples. The pipeline performs all necessary computational analysis steps; including reference genome library extraction and indexing, read quality control and alignment, strain identification, and summarization and annotation of results. We rigorously evaluated PathoScope 2.0 using simulated data and data from the 2011 outbreak of Shiga-toxigenic Escherichia coli O104:H4.\n\n\nCONCLUSIONS\nThe results show that PathoScope 2.0 is a complete, highly sensitive, and efficient approach for metagenomic analysis that outperforms alternative approaches in scope, speed, and accuracy. The PathoScope 2.0 pipeline software is freely available for download at: http://sourceforge.net/projects/pathoscope/."
},
{
"pmid": "22199392",
"title": "ART: a next-generation sequencing read simulator.",
"abstract": "UNLABELLED\nART is a set of simulation tools that generate synthetic next-generation sequencing reads. This functionality is essential for testing and benchmarking tools for next-generation sequencing data analysis including read alignment, de novo assembly and genetic variation discovery. ART generates simulated sequencing reads by emulating the sequencing process with built-in, technology-specific read error models and base quality value profiles parameterized empirically in large sequencing datasets. We currently support all three major commercial next-generation sequencing platforms: Roche's 454, Illumina's Solexa and Applied Biosystems' SOLiD. ART also allows the flexibility to use customized read error model parameters and quality profiles.\n\n\nAVAILABILITY\nBoth source and binary software packages are available at http://www.niehs.nih.gov/research/resources/software/art."
},
{
"pmid": "27530840",
"title": "Plasmid metagenomics reveals multiple antibiotic resistance gene classes among the gut microbiomes of hospitalised patients.",
"abstract": "Antibiotic resistance genes are rapidly spread between pathogens and the normal flora, with plasmids playing an important role in their circulation. This study aimed to investigate antibiotic resistance plasmids in the gut microbiome of hospitalised patients. Stool samples were collected from seven inpatients at Siriraj Hospital (Bangkok, Thailand) and were compared with a sample from a healthy volunteer. Plasmids from the gut microbiomes extracted from the stool samples were subjected to high-throughput DNA sequencing (GS Junior). Newbler-assembled DNA reads were categorised into known and unknown sequences (using >80% alignment length as the cut-off), and ResFinder was used to classify the antibiotic resistance gene pools. Plasmid replicon modules were used for plasmid typing. Forty-six genes conferring resistance to several classes of antibiotics were identified in the stool samples. Several antibiotic resistance genes were shared by the patients; interestingly, most were reported previously in food animals and healthy humans. Four antibiotic resistance genes were found in the healthy subject. One gene (aph3-III) was identified in the patients and the healthy subject and was related to that in cattle. Uncommon genes of hospital origin such as blaTEM-124-like and fosA, which confer resistance to extended-spectrum β-lactams and fosfomycin, respectively, were identified. The resistance genes did not match the patients' drug treatments. In conclusion, several plasmid types were identified in the gut microbiome; however, it was difficult to link these to the antibiotic resistance genes identified. That the antibiotic resistance genes came from hospital and community environments is worrying."
},
{
"pmid": "20843356",
"title": "Pan-genome sequence analysis using Panseq: an online tool for the rapid analysis of core and accessory genomic regions.",
"abstract": "BACKGROUND\nThe pan-genome of a bacterial species consists of a core and an accessory gene pool. The accessory genome is thought to be an important source of genetic variability in bacterial populations and is gained through lateral gene transfer, allowing subpopulations of bacteria to better adapt to specific niches. Low-cost and high-throughput sequencing platforms have created an exponential increase in genome sequence data and an opportunity to study the pan-genomes of many bacterial species. In this study, we describe a new online pan-genome sequence analysis program, Panseq.\n\n\nRESULTS\nPanseq was used to identify Escherichia coli O157:H7 and E. coli K-12 genomic islands. Within a population of 60 E. coli O157:H7 strains, the existence of 65 accessory genomic regions identified by Panseq analysis was confirmed by PCR. The accessory genome and binary presence/absence data, and core genome and single nucleotide polymorphisms (SNPs) of six L. monocytogenes strains were extracted with Panseq and hierarchically clustered and visualized. The nucleotide core and binary accessory data were also used to construct maximum parsimony (MP) trees, which were compared to the MP tree generated by multi-locus sequence typing (MLST). The topology of the accessory and core trees was identical but differed from the tree produced using seven MLST loci. The Loci Selector module found the most variable and discriminatory combinations of four loci within a 100 loci set among 10 strains in 1 s, compared to the 449 s required to exhaustively search for all possible combinations; it also found the most discriminatory 20 loci from a 96 loci E. coli O157:H7 SNP dataset.\n\n\nCONCLUSION\nPanseq determines the core and accessory regions among a collection of genomic sequences based on user-defined parameters. It readily extracts regions unique to a genome or group of genomes, identifies SNPs within shared core genomic regions, constructs files for use in phylogeny programs based on both the presence/absence of accessory regions and SNPs within core regions and produces a graphical overview of the output. Panseq also includes a loci selector that calculates the most variable and discriminatory loci among sets of accessory loci or core gene SNPs.\n\n\nAVAILABILITY\nPanseq is freely available online at http://76.70.11.198/panseq. Panseq is written in Perl."
},
{
"pmid": "28824552",
"title": "Pan-genome Analyses of the Species Salmonella enterica, and Identification of Genomic Markers Predictive for Species, Subspecies, and Serovar.",
"abstract": "Food safety is a global concern, with upward of 2.2 million deaths due to enteric disease every year. Current whole-genome sequencing platforms allow routine sequencing of enteric pathogens for surveillance, and during outbreaks; however, a remaining challenge is the identification of genomic markers that are predictive of strain groups that pose the most significant health threats to humans, or that can persist in specific environments. We have previously developed the software program Panseq, which identifies the pan-genome among a group of sequences, and the SuperPhy platform, which utilizes this pan-genome information to identify biomarkers that are predictive of groups of bacterial strains. In this study, we examined the pan-genome of 4893 genomes of Salmonella enterica, an enteric pathogen responsible for the loss of more disability adjusted life years than any other enteric pathogen. We identified a pan-genome of 25.3 Mbp, a strict core of 1.5 Mbp present in all genomes, and a conserved core of 3.2 Mbp found in at least 96% of these genomes. We also identified 404 genomic regions of 1000 bp that were specific to the species S. enterica. These species-specific regions were found to encode mostly hypothetical proteins, effectors, and other proteins related to virulence. For each of the six S. enterica subspecies, markers unique to each were identified. No serovar had pan-genome regions that were present in all of its genomes and absent in all other serovars; however, each serovar did have genomic regions that were universally present among all constituent members, and statistically predictive of the serovar. The phylogeny based on SNPs within the conserved core genome was found to be highly concordant to that produced by a phylogeny using the presence/absence of 1000 bp regions of the entire pan-genome. Future studies could use these predictive regions as components of a vaccine to prevent salmonellosis, as well as in simple and rapid diagnostic tests for both in silico and wet-lab applications, with uses ranging from food safety to public health. Lastly, the tools and methods described in this study could be applied as a pan-genomics framework to other population genomic studies seeking to identify markers for other bacterial species and their sub-groups."
},
{
"pmid": "19453749",
"title": "Bacterial strain typing in the genomic era.",
"abstract": "Bacterial strain typing, or identifying bacteria at the strain level, is particularly important for diagnosis, treatment, and epidemiological surveillance of bacterial infections. This is especially the case for bacteria exhibiting high levels of antibiotic resistance or virulence, and those involved in nosocomial or pandemic infections. Strain typing also has applications in studying bacterial population dynamics. Over the last two decades, molecular methods have progressively replaced phenotypic assays to type bacterial strains. In this article, we review the current bacterial genotyping methods and classify them into three main categories: (1) DNA banding pattern-based methods, which classify bacteria according to the size of fragments generated by amplification and/or enzymatic digestion of genomic DNA, (2) DNA sequencing-based methods, which study the polymorphism of DNA sequences, and (3) DNA hybridization-based methods using nucleotidic probes. We described and compared the applications of genotyping methods to the study of bacterial strain diversity. We also discussed the selection of appropriate genotyping methods and the challenges of bacterial strain typing, described the current trends of genotyping methods, and investigated the progresses allowed by the availability of genomic sequences."
},
{
"pmid": "23965924",
"title": "Reporting of foodborne illness by U.S. consumers and healthcare professionals.",
"abstract": "During 2009-2010, a total of 1,527 foodborne disease outbreaks were reported by the Centers for Disease Control and Prevention (CDC) (2013). However, in a 2011 CDC report, Scallan et al. estimated about 48 million people contract a foodborne illness annually in the United States. Public health officials are concerned with this under-reporting; thus, the purpose of this study was to identify why consumers and healthcare professionals don't report foodborne illness. Focus groups were conducted with 35 consumers who reported a previous experience with foodborne illness and with 16 healthcare professionals. Also, interviews with other healthcare professionals with responsibility of diagnosing foodborne illness were conducted. Not knowing who to contact, being too ill, being unsure of the cause, and believing reporting would not be beneficial were all identified by consumers as reasons for not reporting foodborne illness. Healthcare professionals that participated in the focus groups indicated the amount of time between patients' consumption of food and seeking treatment and lack of knowledge were barriers to diagnosing foodborne illness. Issues related to stool samples such as knowledge, access and cost were noted by both groups. Results suggest that barriers identified could be overcome with targeted education and improved access and information about the reporting process."
},
{
"pmid": "27429480",
"title": "Targeted Treatment for Bacterial Infections: Prospects for Pathogen-Specific Antibiotics Coupled with Rapid Diagnostics.",
"abstract": "Antibiotics are a cornerstone of modern medicine and have significantly reduced the burden of infectious diseases. However, commonly used broad-spectrum antibiotics can cause major collateral damage to the human microbiome, causing complications ranging from antibiotic-associated colitis to the rapid spread of resistance. Employing narrower spectrum antibiotics targeting specific pathogens may alleviate this predicament as well as provide additional tools to expand an antibiotic repertoire threatened by the inevitability of resistance. Improvements in clinical diagnosis will be required to effectively utilize pathogen-specific antibiotics and new molecular diagnostics are poised to fulfill this need. Here we review recent trends and the future prospects of deploying narrower spectrum antibiotics coupled with rapid diagnostics. Further, we discuss the theoretical advantages and limitations of this emerging approach to controlling bacterial infectious diseases."
},
{
"pmid": "28077169",
"title": "Distinct 5-methylcytosine profiles in poly(A) RNA from mouse embryonic stem cells and brain.",
"abstract": "BACKGROUND\nRecent work has identified and mapped a range of posttranscriptional modifications in mRNA, including methylation of the N6 and N1 positions in adenine, pseudouridylation, and methylation of carbon 5 in cytosine (m5C). However, knowledge about the prevalence and transcriptome-wide distribution of m5C is still extremely limited; thus, studies in different cell types, tissues, and organisms are needed to gain insight into possible functions of this modification and implications for other regulatory processes.\n\n\nRESULTS\nWe have carried out an unbiased global analysis of m5C in total and nuclear poly(A) RNA of mouse embryonic stem cells and murine brain. We show that there are intriguing differences in these samples and cell compartments with respect to the degree of methylation, functional classification of methylated transcripts, and position bias within the transcript. Specifically, we observe a pronounced accumulation of m5C sites in the vicinity of the translational start codon, depletion in coding sequences, and mixed patterns of enrichment in the 3' UTR. Degree and pattern of methylation distinguish transcripts modified in both embryonic stem cells and brain from those methylated in either one of the samples. We also analyze potential correlations between m5C and micro RNA target sites, binding sites of RNA binding proteins, and N6-methyladenosine.\n\n\nCONCLUSION\nOur study presents the first comprehensive picture of cytosine methylation in the epitranscriptome of pluripotent and differentiated stages in the mouse. These data provide an invaluable resource for future studies of function and biological significance of m5C in mRNA in mammals."
},
{
"pmid": "29234176",
"title": "Multilocus Sequence Typing of the Clinical Isolates of Salmonella Enterica Serovar Typhimurium in Tehran Hospitals.",
"abstract": "BACKGROUND\nSalmonella enterica serovar Typhimurium is one of the most important serovars of Salmonella enterica and is associated with human salmonellosis worldwide. Many epidemiological studies have focused on the characteristics of Salmonella Typhimurium in many countries as well as in Asia. This study was conducted to investigate the genetic characteristics of Salmonella Typhimurium using multilocus sequence typing (MLST).\n\n\nMETHODS\nClinical samples (urine, blood, and stool) were collected from patients, who were admitted to 2 hospitals in Tehran between April and September, 2015. Salmonella Typhimurium strains were identified by conventional standard biochemical and serological testing. The antibiotic susceptibility patterns of the Salmonella Typhimurium isolates against 16 antibiotics was determined using the disk diffusion assay. The clonal relationship between the strains of Salmonella Typhimurium was analyzed using MLST.\n\n\nRESULTS\nAmong the 68 Salmonella isolates, 31% (n=21) were Salmonella Typhimurium. Of the total 21 Salmonella Typhimurium isolates, 76% (n=16) were multidrug-resistant and showed resistance to 3 or more antibiotic families. The Salmonella Typhimurium isolates were assigned to 2 sequence types: ST19 and ST328. ST19 was more common (86%). Both sequence types were further assigned to 1 eBURST group.\n\n\nCONCLUSION\nThis is the first study of its kind in Iran to determine the sequence types of the clinical isolates of Salmonella Typhimurium in Tehran hospitals using MLST. ST19 was detected as the major sequence type of Salmonella Typhimurium."
},
{
"pmid": "28533988",
"title": "StrainSeeker: fast identification of bacterial strains from raw sequencing reads using user-provided guide trees.",
"abstract": "BACKGROUND\nFast, accurate and high-throughput identification of bacterial isolates is in great demand. The present work was conducted to investigate the possibility of identifying isolates from unassembled next-generation sequencing reads using custom-made guide trees.\n\n\nRESULTS\nA tool named StrainSeeker was developed that constructs a list of specific k-mers for each node of any given Newick-format tree and enables the identification of bacterial isolates in 1-2 min. It uses a novel algorithm, which analyses the observed and expected fractions of node-specific k-mers to test the presence of each node in the sample. This allows StrainSeeker to determine where the isolate branches off the guide tree and assign it to a clade whereas other tools assign each read to a reference genome. Using a dataset of 100 Escherichia coli isolates, we demonstrate that StrainSeeker can predict the clades of E. coli with 92% accuracy and correct tree branch assignment with 98% accuracy. Twenty-five thousand Illumina HiSeq reads are sufficient for identification of the strain.\n\n\nCONCLUSION\nStrainSeeker is a software program that identifies bacterial isolates by assigning them to nodes or leaves of a custom-made guide tree. StrainSeeker's web interface and pre-computed guide trees are available at http://bioinfo.ut.ee/strainseeker. Source code is stored at GitHub: https://github.com/bioinfo-ut/StrainSeeker."
},
{
"pmid": "26451363",
"title": "Challenges of the Unknown: Clinical Application of Microbial Metagenomics.",
"abstract": "Availability of fast, high throughput and low cost whole genome sequencing holds great promise within public health microbiology, with applications ranging from outbreak detection and tracking transmission events to understanding the role played by microbial communities in health and disease. Within clinical metagenomics, identifying microorganisms from a complex and host enriched background remains a central computational challenge. As proof of principle, we sequenced two metagenomic samples, a known viral mixture of 25 human pathogens and an unknown complex biological model using benchtop technology. The datasets were then analysed using a bioinformatic pipeline developed around recent fast classification methods. A targeted approach was able to detect 20 of the viruses against a background of host contamination from multiple sources and bacterial contamination. An alternative untargeted identification method was highly correlated with these classifications, and over 1,600 species were identified when applied to the complex biological model, including several species captured at over 50% genome coverage. In summary, this study demonstrates the great potential of applying metagenomics within the clinical laboratory setting and that this can be achieved using infrastructure available to nondedicated sequencing centres."
},
{
"pmid": "21192848",
"title": "Foodborne illness acquired in the United States--major pathogens.",
"abstract": "Estimates of foodborne illness can be used to direct food safety policy and interventions. We used data from active and passive surveillance and other sources to estimate that each year 31 major pathogens acquired in the United States caused 9.4 million episodes of foodborne illness (90% credible interval [CrI] 6.6-12.7 million), 55,961 hospitalizations (90% CrI 39,534-75,741), and 1,351 deaths (90% CrI 712-2,268). Most (58%) illnesses were caused by norovirus, followed by nontyphoidal Salmonella spp. (11%), Clostridium perfringens (10%), and Campylobacter spp. (9%). Leading causes of hospitalization were nontyphoidal Salmonella spp. (35%), norovirus (26%), Campylobacter spp. (15%), and Toxoplasma gondii (8%). Leading causes of death were nontyphoidal Salmonella spp. (28%), T. gondii (24%), Listeria monocytogenes (19%), and norovirus (11%). These estimates cannot be compared with prior (1999) estimates to assess trends because different methods were used. Additional data and more refined methods can improve future estimates."
},
{
"pmid": "26999001",
"title": "Strain-level microbial epidemiology and population genomics from shotgun metagenomics.",
"abstract": "Identifying microbial strains and characterizing their functional potential is essential for pathogen discovery, epidemiology and population genomics. We present pangenome-based phylogenomic analysis (PanPhlAn; http://segatalab.cibio.unitn.it/tools/panphlan), a tool that uses metagenomic data to achieve strain-level microbial profiling resolution. PanPhlAn recognized outbreak strains, produced the largest strain-level population genomic study of human-associated bacteria and, in combination with metatranscriptomics, profiled the transcriptional activity of strains in complex communities."
},
{
"pmid": "22688413",
"title": "Metagenomic microbial community profiling using unique clade-specific marker genes.",
"abstract": "Metagenomic shotgun sequencing data can identify microbes populating a microbial community and their proportions, but existing taxonomic profiling methods are inefficient for increasingly large data sets. We present an approach that uses clade-specific marker genes to unambiguously assign reads to microbial clades more accurately and >50× faster than current approaches. We validated our metagenomic phylogenetic analysis tool, MetaPhlAn, on terabases of short reads and provide the largest metagenomic profiling to date of the human gut. It can be accessed at http://huttenhower.sph.harvard.edu/metaphlan/."
},
{
"pmid": "28167665",
"title": "Microbial strain-level population structure and genetic diversity from metagenomes.",
"abstract": "Among the human health conditions linked to microbial communities, phenotypes are often associated with only a subset of strains within causal microbial groups. Although it has been critical for decades in microbial physiology to characterize individual strains, this has been challenging when using culture-independent high-throughput metagenomics. We introduce StrainPhlAn, a novel metagenomic strain identification approach, and apply it to characterize the genetic structure of thousands of strains from more than 125 species in more than 1500 gut metagenomes drawn from populations spanning North and South American, European, Asian, and African countries. The method relies on per-sample dominant sequence variant reconstruction within species-specific marker genes. It identified primarily subject-specific strain variants (<5% inter-subject strain sharing), and we determined that a single strain typically dominated each species and was retained over time (for >70% of species). Microbial population structure was correlated in several distinct ways with the geographic structure of the host population. In some cases, discrete subspecies (e.g., for Eubacterium rectale and Prevotella copri) or continuous microbial genetic variations (e.g., for Faecalibacterium prausnitzii) were associated with geographically distinct human populations, whereas few strains occurred in multiple unrelated cohorts. We further estimated the genetic variability of gut microbes, with Bacteroides species appearing remarkably consistent (0.45% median number of nucleotide variants between strains), whereas P. copri was among the most plastic gut colonizers. We thus characterize here the population genetics of previously inaccessible intestinal microbes, providing a comprehensive strain-level genetic overview of the gut microbial diversity."
},
{
"pmid": "24580807",
"title": "Kraken: ultrafast metagenomic sequence classification using exact alignments.",
"abstract": "Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/."
},
{
"pmid": "28649236",
"title": "The Validation and Implications of Using Whole Genome Sequencing as a Replacement for Traditional Serotyping for a National Salmonella Reference Laboratory.",
"abstract": "Salmonella serotyping remains the gold-standard tool for the classification of Salmonella isolates and forms the basis of Canada's national surveillance program for this priority foodborne pathogen. Public health officials have been increasingly looking toward whole genome sequencing (WGS) to provide a large set of data from which all the relevant information about an isolate can be mined. However, rigorous validation and careful consideration of potential implications in the replacement of traditional surveillance methodologies with WGS data analysis tools is needed. Two in silico tools for Salmonella serotyping have been developed, the Salmonella in silico Typing Resource (SISTR) and SeqSero, while seven gene MLST for serovar prediction can be adapted for in silico analysis. All three analysis methods were assessed and compared to traditional serotyping techniques using a set of 813 verified clinical and laboratory isolates, including 492 Canadian clinical isolates and 321 isolates of human and non-human sources. Successful results were obtained for 94.8, 88.2, and 88.3% of the isolates tested using SISTR, SeqSero, and MLST, respectively, indicating all would be suitable for maintaining historical records, surveillance systems, and communication structures currently in place and the choice of the platform used will ultimately depend on the users need. Results also pointed to the need to reframe serotyping in the genomic era as a test to understand the genes that are carried by an isolate, one which is not necessarily congruent with what is antigenically expressed. The adoption of WGS for serotyping will provide the simultaneous collection of information that can be used by multiple programs within the current surveillance paradigm; however, this does not negate the importance of the various programs or the role of serotyping going forward."
},
{
"pmid": "25762776",
"title": "Salmonella serotype determination utilizing high-throughput genome sequencing data.",
"abstract": "Serotyping forms the basis of national and international surveillance networks for Salmonella, one of the most prevalent foodborne pathogens worldwide (1-3). Public health microbiology is currently being transformed by whole-genome sequencing (WGS), which opens the door to serotype determination using WGS data. SeqSero (www.denglab.info/SeqSero) is a novel Web-based tool for determining Salmonella serotypes using high-throughput genome sequencing data. SeqSero is based on curated databases of Salmonella serotype determinants (rfb gene cluster, fliC and fljB alleles) and is predicted to determine serotype rapidly and accurately for nearly the full spectrum of Salmonella serotypes (more than 2,300 serotypes), from both raw sequencing reads and genome assemblies. The performance of SeqSero was evaluated by testing (i) raw reads from genomes of 308 Salmonella isolates of known serotype; (ii) raw reads from genomes of 3,306 Salmonella isolates sequenced and made publicly available by GenomeTrakr, a U.S. national monitoring network operated by the Food and Drug Administration; and (iii) 354 other publicly available draft or complete Salmonella genomes. We also demonstrated Salmonella serotype determination from raw sequencing reads of fecal metagenomes from mice orally infected with this pathogen. SeqSero can help to maintain the well-established utility of Salmonella serotyping when integrated into a platform of WGS-based pathogen subtyping and characterization."
}
] |
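The Kraken entry above describes taxonomic classification of sequencing reads by exact k-mer matching against a reference database. Purely as an illustrative toy, and in no way Kraken's actual data structures or algorithm, a minimal sketch of that idea with invented reference sequences could look like this:

```python
from collections import Counter

K = 5  # toy k-mer length; real tools use much longer k-mers

# Invented "reference genomes", each labelled with a taxon (illustration only).
references = {
    "taxon_A": "ACGTACGTGGCATGCATGACGT",
    "taxon_B": "TTGACCGTAGGCTAGCTAGGTT",
}

def kmers(seq, k=K):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Exact k-mer -> taxa lookup table built from the references.
index = {}
for taxon, genome in references.items():
    for km in kmers(genome):
        index.setdefault(km, set()).add(taxon)

def classify(read):
    """Assign the read to the taxon with the most exact k-mer hits ('unclassified' if none)."""
    votes = Counter()
    for km in kmers(read):
        for taxon in index.get(km, ()):
            votes[taxon] += 1
    return votes.most_common(1)[0][0] if votes else "unclassified"

print(classify("ACGTACGTGGCAT"))  # all of its 5-mers occur in taxon_A -> "taxon_A"
```

Kraken itself maps each k-mer to the lowest common ancestor of the genomes containing it and uses a far more compact index; the sketch above only conveys the exact k-mer matching idea.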
Frontiers in Computational Neuroscience | 31019458 | PMC6458299 | 10.3389/fncom.2019.00018 | Deep Learning With Asymmetric Connections and Hebbian Updates | We show that deep networks can be trained using Hebbian updates yielding similar performance to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers, implicit in back-propagation, the feedback weights are separate from the feedforward weights. The feedback weights are also updated with a local rule, the same as the feedforward weights: a weight is updated solely based on the product of activity of the units it connects. With fixed feedback weights as proposed in Lillicrap et al. (2016), performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain the same throughout training, thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, and the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with tied weights across space, which is not biologically plausible. We show that similar performance is achieved with untied layers, also known as locally connected layers, corresponding to the connectivity implied by the convolutional layers, but where weights are untied and updated separately. In the linear case we show theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights. | 2. Related work: As indicated in the introduction, the issue of the weight symmetry required for feedback computation in back-propagation was already raised by Zipser and Rumelhart (1990), and the idea of separating the feedback connections from the feedforward connections was proposed there. They then suggested updating each feedforward connection and feedback connection with the same increment. Assuming all weights are initialized at the same value, the resulting computation is equivalent to back-propagation. The problem is that this reintroduces the implausible symmetry, since the feedback and feedforward weights end up being identical. In Lillicrap et al. (2016) the simple idea of having fixed random feedback connections was explored and found to work well for shallow networks. However, the performance degrades as the depth of the network increases. It was noted that in shallow networks the feedforward weights gradually align with the fixed feedback weights, so that in the long run an approximate back-propagation is being computed, hence the name feedback alignment. In Liao et al. (2016) the performance degradation of feedback alignment with depth was addressed by using layer-wise normalization of the outputs. This yielded results with fixed random feedback (FRFB) that are close to momentum-based gradient descent of the back-propagation algorithm for certain network architectures. However, the propagation of the gradient through the normalization layer is complex and it is unclear how to implement it in a network. Furthermore, Liao et al. (2016) showed that a simple transfer of information on the sign of the actual back-propagation gradient yields an improvement over using the purely random back-propagation matrix.
It is, however, unclear how such information could be transmitted between different synapses. In Whittington and Bogacz (2017) a model for training a multilayer network is proposed using a predictive coding framework. However, it appears that the model assumes symmetric connections, i.e., the strength of the connection from an error node to a variable in the preceding layer is the same as that of the reverse connection. A similar issue arises in Roelfsema and Holtmaat (2018), where in the analysis of their algorithm they assume that in the long run, since the updates are the same, the synaptic values are the same. This is approximately true, in the sense that the correlations between feedforward and feedback weights increase, but significant improvements in error rates are observed even early on, when the correlations are weak. Burbank (2015) implements a proposal similar to Zipser and Rumelhart (1990) in the context of an autoencoder and attempts to find STDP rules that can implement the same increment for the feedforward and feedback connections. Again it is assumed that the initial conditions are very similar, so that at each step the feedforward and feedback weights are closely aligned. A recently archived paper (Pozzi et al., 2018) also goes back to the proposal in Zipser and Rumelhart (1990). However, as in our paper, they experiment with different initializations of the feedforward and feedback connections. They introduce a pairing of feedback and feedforward units to model the gating of information from the feedforward pass and the feedback pass. Algorithmically, the only substantial difference from our proposal is in the error signal produced by the output layer: only connections to the output unit representing the correct class are updated. Here we show that there is a natural way to update all units in the output layer so that subsequent synaptic modifications in the back-propagation are all Hebbian. The correct class unit is activated at the value 1 if the input is below a threshold, and the other classes are activated as −μ if the input is above a threshold. Thus, corrections occur through top-down feedback in the system when the inputs of any of the output units are not of sufficient magnitude and of the correct sign. We show that this approach works well even in much deeper networks with several convolutional layers and with more challenging data sets. We also present a mathematical analysis of the linearized version of this algorithm and show that the error converges faster when the feedback weights are updated, compared to when they are held fixed as in Lillicrap et al. (2016). Lee et al. (2015) and Bartunov et al. (2018) study target propagation, where an error signal is computed in each hidden unit as the difference between the feedforward activity of that unit and a target value propagated from above with feedback connections that are separate from the feedforward connections. The feedback connections between each pair of consecutive layers are trained to approximate the inverse of the feedforward function between those layers, i.e., the non-linearity applied to the linear transformation of the lower layer. In Bartunov et al. (2018) they analyze the performance of this method on a number of image classification problems and use locally connected layers instead of convolutional layers.
In target propagation the losses for both the forward and the backward connections rely on magnitudes of differences between signals, requiring a more complex synaptic modification mechanism than the simple products of activities of pre- and post-synaptic neurons proposed in our model. Such synaptic modification mechanisms are studied in Guerguiev et al. (2017). A biological model for the neuronal units is presented that combines the feedforward and feedback signals within each neuron, and produces an error signal assuming fixed feedback weights as in Lillicrap et al. (2016). The idea is to divide the neuron into two separate compartments, one computing feedforward signals and one computing feedback signals, with different phases of learning involving different combinations of these two signals. In addition to computing an error signal internally to the neuron, this model avoids the need to compute signed errors, which imply negative as well as positive neuronal activity. However, this is done by assuming the neuron can internally compute the difference in average voltage between two time intervals. In Sacramento et al. (2018) this model is extended to include an inhibitory neuron attached to each hidden unit neuron, with plastic synaptic connections to and from the hidden unit. They claim that this eliminates the need to compute the feedback error in separate phases from the feedforward error. In our model we simply assume that once the feedforward phase is complete, the feedback signal replaces the feedforward signal at a unit, at the proper timing, to allow for the proper update of the incoming feedforward and outgoing feedback synapses.
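To make the scheme just described concrete, the following is a minimal NumPy sketch of one training step with separate feedforward (W) and feedback (B) weights, purely local outer-product (Hebbian) updates applied to both, and the thresholded output-layer rule summarized above. It is an illustrative toy under assumed layer sizes, learning rate, threshold theta and margin mu, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and constants (assumptions for the example only).
n_in, n_hid, n_out = 20, 30, 5
lr, theta, mu = 0.05, 1.0, 0.1          # learning rate, output threshold, margin

# Feedforward (W) and feedback (B) weights are separate and independently initialized.
W1 = rng.normal(0, 0.1, (n_hid, n_in));  B1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid)); B2 = rng.normal(0, 0.1, (n_out, n_hid))

def relu(a):  return np.maximum(a, 0.0)
def drelu(a): return (a > 0.0).astype(float)

def train_step(x, label):
    global W1, B1, W2, B2
    # Forward pass.
    a1 = W1 @ x
    h1 = relu(a1)
    a2 = W2 @ h1                         # inputs to the output units

    # Output error from the thresholded rule: push the correct class toward 1
    # while its input is below theta; push the other classes by -mu while their
    # input is above -theta; otherwise no correction.
    e = np.zeros(n_out)
    e[label] = 1.0 if a2[label] < theta else 0.0
    others = np.arange(n_out) != label
    e[others] = np.where(a2[others] > -theta, -mu, 0.0)

    # Backward pass uses the separate feedback matrix B2, not W2.T.
    d1 = (B2.T @ e) * drelu(a1)

    # Local Hebbian updates: each weight changes by the product of the activities
    # of the two units it connects; W and B receive the same increment.
    dW2 = lr * np.outer(e, h1);  W2 += dW2;  B2 += dW2
    dW1 = lr * np.outer(d1, x);  W1 += dW1;  B1 += dW1

# Example usage on random data.
for _ in range(200):
    train_step(rng.normal(size=n_in), label=int(rng.integers(n_out)))
```

If W and B were initialized identically they would stay equal under these shared increments and the procedure would reduce to back-propagation; with independent random initialization it is the asymmetric, locally updated variant discussed above.
 | [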
"12842160",
"22737121",
"26633645",
"12929920",
"30108488",
"29205151",
"18244442",
"28532370",
"15333209",
"27824044",
"27683554",
"29449713",
"28333583",
"26906502"
] | [
{
"pmid": "12842160",
"title": "An integrated network for invariant visual detection and recognition.",
"abstract": "We describe an architecture for invariant visual detection and recognition. Learning is performed in a single central module. The architecture makes use of a replica module consisting of copies of retinotopic layers of local features, with a particular design of inputs and outputs, that allows them to be primed either to attend to a particular location, or to attend to a particular object representation. In the former case the data at a selected location can be classified in the central module. In the latter case all instances of the selected object are detected in the field of view. The architecture is used to explain a number of psychophysical and physiological observations: object based attention, the different response time slopes of target detection among distractors, and observed attentional modulation of neuronal responses. We hypothesize that the organization of visual cortex in columns of neurons responding to the same feature at the same location may provide the copying architecture needed for translation invariance."
},
{
"pmid": "22737121",
"title": "Recurrent network of perceptrons with three state synapses achieves competitive classification on real inputs.",
"abstract": "We describe an attractor network of binary perceptrons receiving inputs from a retinotopic visual feature layer. Each class is represented by a random subpopulation of the attractor layer, which is turned on in a supervised manner during learning of the feed forward connections. These are discrete three state synapses and are updated based on a simple field dependent Hebbian rule. For testing, the attractor layer is initialized by the feedforward inputs and then undergoes asynchronous random updating until convergence to a stable state. Classification is indicated by the sub-population that is persistently activated. The contribution of this paper is two-fold. This is the first example of competitive classification rates of real data being achieved through recurrent dynamics in the attractor layer, which is only stable if recurrent inhibition is introduced. Second, we demonstrate that employing three state synapses with feedforward inhibition is essential for achieving the competitive classification rates due to the ability to effectively employ both positive and negative informative features."
},
{
"pmid": "26633645",
"title": "Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.",
"abstract": "The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks."
},
{
"pmid": "12929920",
"title": "Spike-driven synaptic plasticity for learning correlated patterns of mean firing rates.",
"abstract": "Long term synaptic changes induced by neural spike activity are believed to underlie learning and memory. Spike-driven long-term synaptic plasticity has been investigated in simplified situations in which the patterns of mean rates to be encoded were statistically independent. An additional regulatory mechanism is required to extend the learning capability to more complex and natural stimuli. This mechanism can be provided by those effects of the action potentials that are believed to be responsible for spike-timing dependent plasticity. These effects, when combined with the dependence of synaptic plasticity on the post-synaptic depolarization, produce the non-monotonic learning rule needed for storing correlated patterns of mean rates."
},
{
"pmid": "30108488",
"title": "Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of NeoHebbian Three-Factor Learning Rules.",
"abstract": "Most elementary behaviors such as moving the arm to grasp an object or walking into the next room to explore a museum evolve on the time scale of seconds; in contrast, neuronal action potentials occur on the time scale of a few milliseconds. Learning rules of the brain must therefore bridge the gap between these two different time scales. Modern theories of synaptic plasticity have postulated that the co-activation of pre- and postsynaptic neurons sets a flag at the synapse, called an eligibility trace, that leads to a weight change only if an additional factor is present while the flag is set. This third factor, signaling reward, punishment, surprise, or novelty, could be implemented by the phasic activity of neuromodulators or specific neuronal inputs signaling special events. While the theoretical framework has been developed over the last decades, experimental evidence in support of eligibility traces on the time scale of seconds has been collected only during the last few years. Here we review, in the context of three-factor rules of synaptic plasticity, four key experiments that support the role of synaptic eligibility traces in combination with a third factor as a biological implementation of neoHebbian three-factor learning rules."
},
{
"pmid": "29205151",
"title": "Towards deep learning with segregated dendrites.",
"abstract": "Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons."
},
{
"pmid": "18244442",
"title": "A comparison of methods for multiclass support vector machines.",
"abstract": "Support vector machines (SVMs) were originally designed for binary classification. How to effectively extend it for multiclass classification is still an ongoing research issue. Several methods have been proposed where typically we construct a multiclass classifier by combining several binary classifiers. Some authors also proposed methods that consider all classes at once. As it is computationally more expensive to solve multiclass problems, comparisons of these methods using large-scale problems have not been seriously conducted. Especially for methods solving multiclass SVM in one step, a much larger optimization problem is required so up to now experiments are limited to small data sets. In this paper we give decomposition implementations for two such \"all-together\" methods. We then compare their performance with three methods based on binary classifications: \"one-against-all,\" \"one-against-one,\" and directed acyclic graph SVM (DAGSVM). Our experiments indicate that the \"one-against-one\" and DAG methods are more suitable for practical use than the other methods. Results also show that for large problems methods by considering all data at once in general need fewer support vectors."
},
{
"pmid": "28532370",
"title": "Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.",
"abstract": "Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision."
},
{
"pmid": "15333209",
"title": "Minimal models of adapted neuronal response to in vivo-like input currents.",
"abstract": "Rate models are often used to study the behavior of large networks of spiking neurons. Here we propose a procedure to derive rate models that take into account the fluctuations of the input current and firing-rate adaptation, two ubiquitous features in the central nervous system that have been previously overlooked in constructing rate models. The procedure is general and applies to any model of firing unit. As examples, we apply it to the leaky integrate-and-fire (IF) neuron, the leaky IF neuron with reversal potentials, and to the quadratic IF neuron. Two mechanisms of adaptation are considered, one due to an afterhyperpolarization current and the other to an adapting threshold for spike emission. The parameters of these simple models can be tuned to match experimental data obtained from neocortical pyramidal neurons. Finally, we show how the stationary model can be used to predict the time-varying activity of a large population of adapting neurons."
},
{
"pmid": "27824044",
"title": "Random synaptic feedback weights support error backpropagation for deep learning.",
"abstract": "The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning."
},
{
"pmid": "27683554",
"title": "Toward an Integration of Deep Learning and Neuroscience.",
"abstract": "Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses."
},
{
"pmid": "29449713",
"title": "Control of synaptic plasticity in deep cortical networks.",
"abstract": "Humans and many other animals have an enormous capacity to learn about sensory stimuli and to master new skills. However, many of the mechanisms that enable us to learn remain to be understood. One of the greatest challenges of systems neuroscience is to explain how synaptic connections change to support maximally adaptive behaviour. Here, we provide an overview of factors that determine the change in the strength of synapses, with a focus on synaptic plasticity in sensory cortices. We review the influence of neuromodulators and feedback connections in synaptic plasticity and suggest a specific framework in which these factors can interact to improve the functioning of the entire network."
},
{
"pmid": "28333583",
"title": "An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity.",
"abstract": "To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output."
},
{
"pmid": "26906502",
"title": "Using goal-driven deep learning models to understand sensory cortex.",
"abstract": "Fueled by innovation in the computer vision and artificial intelligence communities, recent developments in computational neuroscience have used goal-driven hierarchical convolutional neural networks (HCNNs) to make strides in modeling neural single-unit and population responses in higher visual cortical areas. In this Perspective, we review the recent progress in a broader modeling context and describe some of the key technical innovations that have supported it. We then outline how the goal-driven HCNN approach can be used to delve even more deeply into understanding the development and organization of sensory cortical processing."
}
] |
Frontiers in Medicine | 31058150 | PMC6478793 | 10.3389/fmed.2019.00066 | The Revival of the Notes Field: Leveraging the Unstructured Content in Electronic Health Records | Problem: Clinical practice requires the production of a great amount of notes, which is time- and resource-consuming. They contain relevant information, but their secondary use is almost impossible, due to their unstructured nature. Researchers are trying to address these problems with both traditional and promising novel techniques. Application in real hospital settings does not seem possible yet, though, both because of relatively small and dirty datasets and because of the lack of language-specific pre-trained models. Aim: Our aim is to demonstrate the potential of the above techniques, but also to raise awareness of the still open challenges that the scientific communities of IT and medical practitioners must jointly address to realize the full potential of the unstructured content that is produced and digitized daily in hospital settings, both to improve its data quality and to leverage the insights from data-driven predictive models. Methods: To this end, we present a narrative literature review of the most recent and relevant contributions to the application of Natural Language Processing techniques to the free-text content of electronic patient records. In particular, we focused on four selected application domains, namely: data quality, information extraction, sentiment analysis and predictive models, and automated patient cohort selection. Then, we present a few empirical studies that we undertook at a major teaching hospital specializing in musculoskeletal diseases. Results: We provide the reader with some simple and affordable pipelines, which demonstrate the feasibility of reaching literature performance levels with a single-institution, non-English dataset. In this way, we bridged the literature and real-world needs, taking a step further toward the revival of the notes field. | 3.4.1. Related Works: We report in Figure 10 a diagram that summarizes the most relevant methods related to the topic of “cohort selection” found in the literature, which we describe in more detail below. Figure 10 (caption): the figure represents a schematization of cohort selection; after an information retrieval process, concepts are mapped to standard medical classifications and used to select the relevant EHRs. As said above, we will briefly talk about ‘automatic patient cohort selection’ for clinical trials. An important step in clinical trials is the selection of the patients that will participate in the trials. As a matter of fact, patients are selected randomly, and this is the first problem in obtaining valuable results in clinical trials (91). Moreover, another problem is that building cohorts for epidemiologic studies usually relies on a time-consuming and laborious manual selection of appropriate cases. There are many works that address this topic by trying to extract information from EHRs to select a specific patient cohort automatically (4, 92–96). The study conducted by Liao et al. (4) showed that the addition of NLP techniques to structured data improved the classification sensitivity compared to algorithms that use only structured data. Another example (although this is not strictly related to clinical trials) is the research by Sada et al. (97), which tried to identify patients with HCC (hepatocellular carcinoma) using EHRs directly.
Reports were first manually classified as diagnostic of HCC or not; then NLP techniques from the Automated Retrieval Console (ARC) were applied to classify the documents using the Clinical Text Analysis and Knowledge Extraction System (cTAKES). The results showed that the classification performance improved when using a combined approach of ICD9 codes and NLP techniques. EMRs can be used to enable large-scale clinical studies. The aim of the research conducted by Kumar et al. (98) was to create an EMR cohort of T2D (type 2 diabetes) patients. NLP was performed on narrative notes using the previously described cTAKES platform, which extracts medical concepts. Then, a logistic regression algorithm was implemented to perform the classification using codified data (ICD9) and narrative NLP data. The results showed good identification of the patient cohort, with a specificity of 97% and a positive predictive value (PPV) of 0.97. Clinical research eligibility criteria specify the medical, demographic, or social characteristics of eligible clinical research volunteers. Their free-text format remains a significant barrier to computer-based decision support for electronic patient eligibility determination. EliXR is a semi-automated approach, developed by Weng et al. (99), that standardizes eligibility concept encoding through UMLS coding and uses syntactic parsing to reduce the complexity of the patterns. The generated labels were then used to produce semi-structured eligibility criteria.
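As an illustration of the kind of pipeline summarized here, combining codified data (e.g., ICD9 codes) with NLP-derived text features and a logistic regression classifier for cohort selection, the following is a minimal scikit-learn sketch. The column names and toy records are assumptions for the example; this is not the pipeline of any of the cited studies.

```python
# Toy cohort-selection classifier: structured codes + free-text notes -> logistic regression.
# All column names and example records are invented for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

records = pd.DataFrame({
    "icd9_codes": ["250.00 401.9", "401.9", "250.00 585.9", "786.50"],
    "note_text": [
        "patient with type 2 diabetes on metformin",
        "hypertension, no diabetes mentioned",
        "T2DM with chronic kidney disease",
        "chest pain, cardiac cause ruled out",
    ],
    "in_cohort": [1, 0, 1, 0],           # e.g., labels from manual chart review
})

features = ColumnTransformer([
    ("codes", CountVectorizer(token_pattern=r"[^ ]+"), "icd9_codes"),  # one feature per code
    ("text", TfidfVectorizer(ngram_range=(1, 2)), "note_text"),        # word/bigram TF-IDF
])

clf = Pipeline([("features", features),
                ("model", LogisticRegression(max_iter=1000))])

clf.fit(records[["icd9_codes", "note_text"]], records["in_cohort"])

new_patient = pd.DataFrame({"icd9_codes": ["250.00"],
                            "note_text": ["follow-up visit for type 2 diabetes"]})
print(clf.predict(new_patient))          # 1 = include in the cohort
```

In a real setting the text features would come from a clinical NLP system such as cTAKES (extracted concepts rather than raw words), and performance would be reported with sensitivity, specificity and PPV as in the studies cited above.
 | [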
"29989977",
"25825667",
"23549579",
"25911572",
"30666882",
"29029685",
"28419261",
"15120659",
"19250696",
"10495728",
"29779718",
"26185244",
"27089187",
"29162496",
"27919387",
"28729030",
"26911811",
"27388877",
"30010941",
"27185194",
"21862746",
"30687797",
"25882031",
"15187068",
"25982909",
"24664671",
"30118854",
"29849998",
"27130217",
"23410888",
"25943550",
"29605561",
"26464024",
"12123149",
"25791500",
"15802475",
"30032970",
"26022228",
"22586067",
"28096249",
"26456569",
"16872495",
"26318122",
"20819864",
"20819853",
"24296907",
"26054428",
"26911826",
"23605114",
"24212118",
"23954311",
"23911344",
"29025149",
"9865037",
"16386470",
"26851224",
"14759819",
"28494618",
"27521897",
"28328520",
"29353160",
"26302085",
"29677975",
"15614517",
"29888090",
"24302669",
"24201027",
"22627647",
"24108448",
"24303276",
"23929403",
"21807647"
] | [
{
"pmid": "29989977",
"title": "Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis.",
"abstract": "The past decade has seen an explosion in the amount of digital information stored in electronic health records (EHRs). While primarily designed for archiving patient information and performing administrative healthcare tasks like billing, many researchers have found secondary use of these records for various clinical informatics applications. Over the same period, the machine learning community has seen widespread advances in the field of deep learning. In this review, we survey the current research on applying deep learning to clinical tasks based on EHR data, where we find a variety of deep learning techniques and frameworks being applied to several types of clinical applications including information extraction, representation learning, outcome prediction, phenotyping, and deidentification. We identify several limitations of current research involving topics such as model interpretability, data heterogeneity, and lack of universal benchmarks. We conclude by summarizing the state of the field and identifying avenues of future deep EHR research."
},
{
"pmid": "25825667",
"title": "Big data analytics in healthcare: promise and potential.",
"abstract": "OBJECTIVE\nTo describe the promise and potential of big data analytics in healthcare.\n\n\nMETHODS\nThe paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions.\n\n\nRESULTS\nThe paper provides a broad overview of big data analytics for healthcare researchers and practitioners.\n\n\nCONCLUSIONS\nBig data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however there remain challenges to overcome."
},
{
"pmid": "30666882",
"title": "The elephant in the record: On the multiplicity of data recording work.",
"abstract": "This article focuses on the production side of clinical data work, or data recording work, and in particular, on its multiplicity in terms of data variability. We report the findings from two case studies aimed at assessing the multiplicity that can be observed when the same medical phenomenon is recorded by multiple competent experts, yet the recorded data enable the knowledgeable management of illness trajectories. Often framed in terms of the latent unreliability of medical data, and then treated as a problem to solve, we argue that practitioners in the health informatics field must gain a greater awareness of the natural variability of data inscribing work, assess it, and design solutions that allow actors on both sides of clinical data work, that is, the production and care, as well as the primary and secondary uses of data to aptly inform each other's practices."
},
{
"pmid": "29029685",
"title": "Using structured and unstructured data to identify patients' need for services that address the social determinants of health.",
"abstract": "INTRODUCTION\nIncreasingly, health care providers are adopting population health management approaches that address the social determinants of health (SDH). However, effectively identifying patients needing services that address a SDH in primary care settings is challenging. The purpose of the current study is to explore how various data sources can identify adult primary care patients that are in need of services that address SDH.\n\n\nMETHODS\nA cross-sectional study described patients in need of SDH services offered by a safety-net hospital's federally qualified health center clinics. SDH services of social work, behavioral health, nutrition counseling, respiratory therapy, financial planning, medical-legal partnership assistance, patient navigation, and pharmacist consultation were offered on a co-located basis and were identified using structured billing and scheduling data, and unstructured electronic health record data. We report the prevalence of the eight different SDH service needs and the patient characteristics associated with service need. Moreover, characteristics of patients with SDH services need documented in structured data sources were compared with those documented by unstructured data sources.\n\n\nRESULTS\nMore than half (53%) of patients needed SDH services. Those in need of such services tended to be female, older, more medically complex, and higher utilizers of services. Structured and unstructured data sources exhibited poor agreement on patient SDH services need. Patients with SDH services need documented by unstructured data tended to be more complex.\n\n\nDISCUSSION\nThe need for SDH services among a safety-net population is high. Identifying patients in need of such services requires multiple data sources with structured and unstructured data."
},
{
"pmid": "28419261",
"title": "Challenges in adapting existing clinical natural language processing systems to multiple, diverse health care settings.",
"abstract": "OBJECTIVE\nWidespread application of clinical natural language processing (NLP) systems requires taking existing NLP systems and adapting them to diverse and heterogeneous settings. We describe the challenges faced and lessons learned in adapting an existing NLP system for measuring colonoscopy quality.\n\n\nMATERIALS AND METHODS\nColonoscopy and pathology reports from 4 settings during 2013-2015, varying by geographic location, practice type, compensation structure, and electronic health record.\n\n\nRESULTS\nThough successful, adaptation required considerably more time and effort than anticipated. Typical NLP challenges in assembling corpora, diverse report structures, and idiosyncratic linguistic content were greatly magnified.\n\n\nDISCUSSION\nStrategies for addressing adaptation challenges include assessing site-specific diversity, setting realistic timelines, leveraging local electronic health record expertise, and undertaking extensive iterative development. More research is needed on how to make it easier to adapt NLP systems to new clinical settings.\n\n\nCONCLUSIONS\nA key challenge in widespread application of NLP is adapting existing systems to new clinical settings."
},
{
"pmid": "15120659",
"title": "Incorporating ideas from computer-supported cooperative work.",
"abstract": "Many information systems have failed when deployed into complex health-care settings. We believe that one cause of these failures is the difficulty in systematically accounting for the collaborative and exception-filled nature of medical work. In this methodological review paper, we highlight research from the field of computer-supported cooperative work (CSCW) that could help biomedical informaticists recognize and design around the kinds of challenges that lead to unanticipated breakdowns and eventual abandonment of their systems. The field of CSCW studies how people collaborate with each other and the role that technology plays in this collaboration for a wide variety of organizational settings. Thus, biomedical informaticists could benefit from the lessons learned by CSCW researchers. In this paper, we provide a focused review of CSCW methods and ideas-we review aspects of the field that could be applied to improve the design and deployment of medical information systems. To make our discussion concrete, we use electronic medical record systems as an example medical information system, and present three specific principles from CSCW: accounting for incentive structures, understanding workflow, and incorporating awareness."
},
{
"pmid": "19250696",
"title": "Hospital factors associated with clinical data quality.",
"abstract": "OBJECTIVES\nAs chronic conditions affect the evaluation, treatment, and possible clinical outcomes of patients, accurate reporting of chronic diseases into the patient record is expected. In some countries, the reported magnitude of comorbidity inaccuracy and incompleteness is compelling. Beyond incentives provided in payment systems, the role and significance of other factors that contribute to inaccurate and incomplete reporting of chronic conditions is not well understood. A complementary approach that identifies factors associated with inaccurate and incomplete data is proposed.\n\n\nMETHODS\nIn a two-step process, the method links hospitalizations of patients who are repeatedly hospitalized over a determined period and identifies characteristics associated with accurate and complete reporting of chronic conditions. These methods leverage the high prevalence of chronic conditions amongst patients with multiple hospitalizations. The study is based on retrospective analysis of longitudinal hospital discharge data from a cohort of Ontario (Canada) patients.\n\n\nRESULTS\nThere are a multitude of factors associated with incomplete clinical data reporting. Patients discharged from community or small hospitals, discharged alive, or transferred to another acute inpatient hospital tend to have less complete comorbidity reporting. For some chronic diseases, very old age affects chronic disease reporting.\n\n\nCONCLUSIONS\nLongitudinally analyzing chronically ill patients is a novel approach to identifying incompletely reported clinical data. Using these results, coding quality initiatives can be focused in a directed manner."
},
{
"pmid": "10495728",
"title": "Natural language processing and its future in medicine.",
"abstract": "If accurate clinical information were available electronically, automated applications could be developed to use this information to improve patient care and lower costs. However, to be fully retrievable, clinical information must be structured or coded. Many online patient reports are not coded, but are recorded in natural-language text that cannot be reliably accessed. Natural language processing (NLP) can solve this problem by extracting and structuring text-based clinical information, making clinical data available for use. NLP systems are quite difficult to develop, as they require substantial amounts of knowledge, but progress has definitely been made. Some NLP systems have been developed and tested and have demonstrated promising performance in practical clinical applications; some of these systems have already been deployed. The authors provide background information about NLP, briefly describe some of the systems that have been recently developed, and discuss the future of NLP in medicine."
},
{
"pmid": "29779718",
"title": "The impact of three discharge coding methods on the accuracy of diagnostic coding and hospital reimbursement for inpatient medical care.",
"abstract": "BACKGROUND\nCoding of diagnoses is important for patient care, hospital management and research. However coding accuracy is often poor and may reflect methods of coding. This study investigates the impact of three alternative coding methods on the inaccuracy of diagnosis codes and hospital reimbursement.\n\n\nMETHODS\nComparisons of coding inaccuracy were made between a list of coded diagnoses obtained by a coder using (i)the discharge summary alone, (ii)case notes and discharge summary, and (iii)discharge summary with the addition of medical input. For each method, inaccuracy was determined for the primary, secondary diagnoses, Healthcare Resource Group (HRG) and estimated hospital reimbursement. These data were then compared with a gold standard derived by a consultant and coder.\n\n\nRESULTS\n107 consecutive patient discharges were analysed. Inaccuracy of diagnosis codes was highest when a coder used the discharge summary alone, and decreased significantly when the coder used the case notes (70% vs 58% respectively, p < 0.0001) or coded from the discharge summary with medical support (70% vs 60% respectively, p < 0.0001). When compared with the gold standard, the percentage of incorrect HRGs was 42% for discharge summary alone, 31% for coding with case notes, and 35% for coding with medical support. The three coding methods resulted in an annual estimated loss of hospital remuneration of between £1.8 M and £16.5 M.\n\n\nCONCLUSION\nThe accuracy of diagnosis codes and percentage of correct HRGs improved when coders used either case notes or medical support in addition to the discharge summary. Further emphasis needs to be placed on improving the standard of information recorded in discharge summaries."
},
{
"pmid": "26185244",
"title": "Advances in natural language processing.",
"abstract": "Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area."
},
{
"pmid": "27089187",
"title": "Natural Language Processing in Radiology: A Systematic Review.",
"abstract": "Radiological reporting has generated large quantities of digital content within the electronic health record, which is potentially a valuable source of information for improving clinical care and supporting research. Although radiology reports are stored for communication and documentation of diagnostic imaging, harnessing their potential requires efficient and automated information extraction: they exist mainly as free-text clinical narrative, from which it is a major challenge to obtain structured data. Natural language processing (NLP) provides techniques that aid the conversion of text into a structured representation, and thus enables computers to derive meaning from human (ie, natural language) input. Used on radiology reports, NLP techniques enable automatic identification and extraction of information. By exploring the various purposes for their use, this review examines how radiology benefits from NLP. A systematic literature search identified 67 relevant publications describing NLP methods that support practical applications in radiology. This review takes a close look at the individual studies in terms of tasks (ie, the extracted information), the NLP methodology and tools used, and their application purpose and performance results. Additionally, limitations, future challenges, and requirements for advancing NLP in radiology will be discussed."
},
{
"pmid": "29162496",
"title": "Clinical information extraction applications: A literature review.",
"abstract": "BACKGROUND\nWith the rapid adoption of electronic health records (EHRs), it is desirable to harvest information and knowledge from EHRs to support automated systems at the point of care and to enable secondary use of EHRs for clinical and translational research. One critical component used to facilitate the secondary use of EHR data is the information extraction (IE) task, which automatically extracts and encodes clinical information from text.\n\n\nOBJECTIVES\nIn this literature review, we present a review of recent published research on clinical information extraction (IE) applications.\n\n\nMETHODS\nA literature search was conducted for articles published from January 2009 to September 2016 based on Ovid MEDLINE In-Process & Other Non-Indexed Citations, Ovid MEDLINE, Ovid EMBASE, Scopus, Web of Science, and ACM Digital Library.\n\n\nRESULTS\nA total of 1917 publications were identified for title and abstract screening. Of these publications, 263 articles were selected and discussed in this review in terms of publication venues and data sources, clinical IE tools, methods, and applications in the areas of disease- and drug-related studies, and clinical workflow optimizations.\n\n\nCONCLUSIONS\nClinical IE has been used for a wide range of applications, however, there is a considerable gap between clinical studies using EHR data and studies using clinical IE. This study enabled us to gain a more concrete understanding of the gap and to provide potential solutions to bridge this gap."
},
{
"pmid": "27919387",
"title": "Impacts of structuring the electronic health record: Results of a systematic literature review from the perspective of secondary use of patient data.",
"abstract": "PURPOSE\nTo explore the impacts that structuring of electronic health records (EHRs) has had from the perspective of secondary use of patient data as reflected in currently published literature. This paper presents the results of a systematic literature review aimed at answering the following questions; (1) what are the common methods of structuring patient data to serve secondary use purposes; (2) what are the common methods of evaluating patient data structuring in the secondary use context, and (3) what impacts or outcomes of EHR structuring have been reported from the secondary use perspective.\n\n\nMETHODS\nThe reported study forms part of a wider systematic literature review on the impacts of EHR structuring methods and evaluations of their impact. The review was based on a 12-step systematic review protocol adapted from the Cochrane methodology. Original articles included in the study were divided into three groups for analysis and reporting based on their use focus: nursing documentation, medical use and secondary use (presented in this paper). The analysis from the perspective of secondary use of data includes 85 original articles from 1975 to 2010 retrieved from 15 bibliographic databases.\n\n\nRESULTS\nThe implementation of structured EHRs can be roughly divided into applications for documenting patient data at the point of care and application for retrieval of patient data (post hoc structuring). Two thirds of the secondary use articles concern EHR structuring methods which were still under development or in the testing phase.\n\n\nMETHODS\nof structuring patient data such as codes, terminologies, reference information models, forms or templates and documentation standards were usually applied in combination. Most of the identified benefits of utilizing structured EHR data for secondary use purposes concentrated on information content and quality or on technical quality and reliability, particularly in the case of Natural Language Processing (NLP) studies. A few individual articles evaluated impacts on care processes, productivity and costs, patient safety, care quality or other health impacts. In most articles these endpoints were usually discussed as goals of secondary use and less as evidence-supported impacts, resulting from the use of structured EHR data for secondary purposes.\n\n\nCONCLUSIONS\nFurther studies and more sound evaluation methods are needed for evidence on how EHRs are utilized for secondary purposes, and how structured documentation methods can serve different users' needs, e.g. administration, statistics and research and development, in parallel to medical use purposes."
},
{
"pmid": "28729030",
"title": "Natural language processing systems for capturing and standardizing unstructured clinical information: A systematic review.",
"abstract": "We followed a systematic approach based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses to identify existing clinical natural language processing (NLP) systems that generate structured information from unstructured free text. Seven literature databases were searched with a query combining the concepts of natural language processing and structured data capture. Two reviewers screened all records for relevance during two screening phases, and information about clinical NLP systems was collected from the final set of papers. A total of 7149 records (after removing duplicates) were retrieved and screened, and 86 were determined to fit the review criteria. These papers contained information about 71 different clinical NLP systems, which were then analyzed. The NLP systems address a wide variety of important clinical and research tasks. Certain tasks are well addressed by the existing systems, while others remain as open challenges that only a small number of systems attempt, such as extraction of temporal information or normalization of concepts to standard terminologies. This review has identified many NLP systems capable of processing clinical free text and generating structured output, and the information collected and evaluated here will be important for prioritizing development of new approaches for clinical NLP."
},
{
"pmid": "26911811",
"title": "Extracting information from the text of electronic medical records to improve case detection: a systematic review.",
"abstract": "BACKGROUND\nElectronic medical records (EMRs) are revolutionizing health-related research. One key issue for study quality is the accurate identification of patients with the condition of interest. Information in EMRs can be entered as structured codes or unstructured free text. The majority of research studies have used only coded parts of EMRs for case-detection, which may bias findings, miss cases, and reduce study quality. This review examines whether incorporating information from text into case-detection algorithms can improve research quality.\n\n\nMETHODS\nA systematic search returned 9659 papers, 67 of which reported on the extraction of information from free text of EMRs with the stated purpose of detecting cases of a named clinical condition. Methods for extracting information from text and the technical accuracy of case-detection algorithms were reviewed.\n\n\nRESULTS\nStudies mainly used US hospital-based EMRs, and extracted information from text for 41 conditions using keyword searches, rule-based algorithms, and machine learning methods. There was no clear difference in case-detection algorithm accuracy between rule-based and machine learning methods of extraction. Inclusion of information from text resulted in a significant improvement in algorithm sensitivity and area under the receiver operating characteristic in comparison to codes alone (median sensitivity 78% (codes + text) vs 62% (codes), P = .03; median area under the receiver operating characteristic 95% (codes + text) vs 88% (codes), P = .025).\n\n\nCONCLUSIONS\nText in EMRs is accessible, especially with open source information extraction algorithms, and significantly improves case detection when combined with codes. More harmonization of reporting within EMR studies is needed, particularly standardized reporting of algorithm accuracy metrics like positive predictive value (precision) and sensitivity (recall)."
},
{
"pmid": "27388877",
"title": "Using automatically extracted information from mammography reports for decision-support.",
"abstract": "OBJECTIVE\nTo evaluate a system we developed that connects natural language processing (NLP) for information extraction from narrative text mammography reports with a Bayesian network for decision-support about breast cancer diagnosis. The ultimate goal of this system is to provide decision support as part of the workflow of producing the radiology report.\n\n\nMATERIALS AND METHODS\nWe built a system that uses an NLP information extraction system (which extract BI-RADS descriptors and clinical information from mammography reports) to provide the necessary inputs to a Bayesian network (BN) decision support system (DSS) that estimates lesion malignancy from BI-RADS descriptors. We used this integrated system to predict diagnosis of breast cancer from radiology text reports and evaluated it with a reference standard of 300 mammography reports. We collected two different outputs from the DSS: (1) the probability of malignancy and (2) the BI-RADS final assessment category. Since NLP may produce imperfect inputs to the DSS, we compared the difference between using perfect (\"reference standard\") structured inputs to the DSS (\"RS-DSS\") vs NLP-derived inputs (\"NLP-DSS\") on the output of the DSS using the concordance correlation coefficient. We measured the classification accuracy of the BI-RADS final assessment category when using NLP-DSS, compared with the ground truth category established by the radiologist.\n\n\nRESULTS\nThe NLP-DSS and RS-DSS had closely matched probabilities, with a mean paired difference of 0.004±0.025. The concordance correlation of these paired measures was 0.95. The accuracy of the NLP-DSS to predict the correct BI-RADS final assessment category was 97.58%.\n\n\nCONCLUSION\nThe accuracy of the information extracted from mammography reports using the NLP system was sufficient to provide accurate DSS results. We believe our system could ultimately reduce the variation in practice in mammography related to assessment of malignant lesions and improve management decisions."
},
{
"pmid": "30010941",
"title": "Conversational agents in healthcare: a systematic review.",
"abstract": "Objective\nOur objective was to review the characteristics, current applications, and evaluation measures of conversational agents with unconstrained natural language input capabilities used for health-related purposes.\n\n\nMethods\nWe searched PubMed, Embase, CINAHL, PsycInfo, and ACM Digital using a predefined search strategy. Studies were included if they focused on consumers or healthcare professionals; involved a conversational agent using any unconstrained natural language input; and reported evaluation measures resulting from user interaction with the system. Studies were screened by independent reviewers and Cohen's kappa measured inter-coder agreement.\n\n\nResults\nThe database search retrieved 1513 citations; 17 articles (14 different conversational agents) met the inclusion criteria. Dialogue management strategies were mostly finite-state and frame-based (6 and 7 conversational agents, respectively); agent-based strategies were present in one type of system. Two studies were randomized controlled trials (RCTs), 1 was cross-sectional, and the remaining were quasi-experimental. Half of the conversational agents supported consumers with health tasks such as self-care. The only RCT evaluating the efficacy of a conversational agent found a significant effect in reducing depression symptoms (effect size d = 0.44, p = .04). Patient safety was rarely evaluated in the included studies.\n\n\nConclusions\nThe use of conversational agents with unconstrained natural language input capabilities for health-related purposes is an emerging field of research, where the few published studies were mainly quasi-experimental, and rarely evaluated efficacy or safety. Future studies would benefit from more robust experimental designs and standardized reporting.\n\n\nProtocol Registration\nThe protocol for this systematic review is registered at PROSPERO with the number CRD42017065917."
},
{
"pmid": "27185194",
"title": "Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records.",
"abstract": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems."
},
{
"pmid": "21862746",
"title": "Automated identification of postoperative complications within an electronic medical record using natural language processing.",
"abstract": "CONTEXT\nCurrently most automated methods to identify patient safety occurrences rely on administrative data codes; however, free-text searches of electronic medical records could represent an additional surveillance approach.\n\n\nOBJECTIVE\nTo evaluate a natural language processing search-approach to identify postoperative surgical complications within a comprehensive electronic medical record.\n\n\nDESIGN, SETTING, AND PATIENTS\nCross-sectional study involving 2974 patients undergoing inpatient surgical procedures at 6 Veterans Health Administration (VHA) medical centers from 1999 to 2006.\n\n\nMAIN OUTCOME MEASURES\nPostoperative occurrences of acute renal failure requiring dialysis, deep vein thrombosis, pulmonary embolism, sepsis, pneumonia, or myocardial infarction identified through medical record review as part of the VA Surgical Quality Improvement Program. We determined the sensitivity and specificity of the natural language processing approach to identify these complications and compared its performance with patient safety indicators that use discharge coding information.\n\n\nRESULTS\nThe proportion of postoperative events for each sample was 2% (39 of 1924) for acute renal failure requiring dialysis, 0.7% (18 of 2327) for pulmonary embolism, 1% (29 of 2327) for deep vein thrombosis, 7% (61 of 866) for sepsis, 16% (222 of 1405) for pneumonia, and 2% (35 of 1822) for myocardial infarction. Natural language processing correctly identified 82% (95% confidence interval [CI], 67%-91%) of acute renal failure cases compared with 38% (95% CI, 25%-54%) for patient safety indicators. Similar results were obtained for venous thromboembolism (59%, 95% CI, 44%-72% vs 46%, 95% CI, 32%-60%), pneumonia (64%, 95% CI, 58%-70% vs 5%, 95% CI, 3%-9%), sepsis (89%, 95% CI, 78%-94% vs 34%, 95% CI, 24%-47%), and postoperative myocardial infarction (91%, 95% CI, 78%-97%) vs 89%, 95% CI, 74%-96%). Both natural language processing and patient safety indicators were highly specific for these diagnoses.\n\n\nCONCLUSION\nAmong patients undergoing inpatient surgical procedures at VA medical centers, natural language processing analysis of electronic medical records to identify postoperative complications had higher sensitivity and lower specificity compared with patient safety indicators based on discharge coding."
},
{
"pmid": "30687797",
"title": "Natural language generation for electronic health records.",
"abstract": "One broad goal of biomedical informatics is to generate fully-synthetic, faithfully representative electronic health records (EHRs) to facilitate data sharing between healthcare providers and researchers and promote methodological research. A variety of methods existing for generating synthetic EHRs, but they are not capable of generating unstructured text, like emergency department (ED) chief complaints, history of present illness, or progress notes. Here, we use the encoder-decoder model, a deep learning algorithm that features in many contemporary machine translation systems, to generate synthetic chief complaints from discrete variables in EHRs, like age group, gender, and discharge diagnosis. After being trained end-to-end on authentic records, the model can generate realistic chief complaint text that appears to preserve the epidemiological information encoded in the original record-sentence pairs. As a side effect of the model's optimization goal, these synthetic chief complaints are also free of relatively uncommon abbreviation and misspellings, and they include none of the personally identifiable information (PII) that was in the training data, suggesting that this model may be used to support the de-identification of text in EHRs. When combined with algorithms like generative adversarial networks (GANs), our model could be used to generate fully-synthetic EHRs, allowing healthcare providers to share faithful representations of multimodal medical data without compromising patient privacy. This is an important advance that we hope will facilitate the development of machine-learning methods for clinical decision support, disease surveillance, and other data-hungry applications in biomedical informatics."
},
{
"pmid": "25882031",
"title": "Automated methods for the summarization of electronic health records.",
"abstract": "OBJECTIVES\nThis review examines work on automated summarization of electronic health record (EHR) data and in particular, individual patient record summarization. We organize the published research and highlight methodological challenges in the area of EHR summarization implementation.\n\n\nTARGET AUDIENCE\nThe target audience for this review includes researchers, designers, and informaticians who are concerned about the problem of information overload in the clinical setting as well as both users and developers of clinical summarization systems.\n\n\nSCOPE\nAutomated summarization has been a long-studied subject in the fields of natural language processing and human-computer interaction, but the translation of summarization and visualization methods to the complexity of the clinical workflow is slow moving. We assess work in aggregating and visualizing patient information with a particular focus on methods for detecting and removing redundancy, describing temporality, determining salience, accounting for missing data, and taking advantage of encoded clinical knowledge. We identify and discuss open challenges critical to the implementation and use of robust EHR summarization systems."
},
{
"pmid": "15187068",
"title": "Automated encoding of clinical documents based on natural language processing.",
"abstract": "OBJECTIVE\nThe aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method.\n\n\nMETHODS\nAn existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts.\n\n\nRESULTS\nRecall of the system for UMLS coding of all terms was .77 (95% CI.72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91.\n\n\nCONCLUSION\nExtraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval."
},
{
"pmid": "25982909",
"title": "Sentiment analysis in medical settings: New opportunities and challenges.",
"abstract": "OBJECTIVE\nClinical documents reflect a patient's health status in terms of observations and contain objective information such as descriptions of examination results, diagnoses and interventions. To evaluate this information properly, assessing positive or negative clinical outcomes or judging the impact of a medical condition on patient's well being are essential. Although methods of sentiment analysis have been developed to address these tasks, they have not yet found broad application in the medical domain.\n\n\nMETHODS AND MATERIAL\nIn this work, we characterize the facets of sentiment in the medical sphere and identify potential use cases. Through a literature review, we summarize the state of the art in healthcare settings. To determine the linguistic peculiarities of sentiment in medical texts and to collect open research questions of sentiment analysis in medicine, we perform a quantitative assessment with respect to word usage and sentiment distribution of a dataset of clinical narratives and medical social media derived from six different sources.\n\n\nRESULTS\nWord usage in clinical narratives differs from that in medical social media: Nouns predominate. Even though adjectives are also frequently used, they mainly describe body locations. Between 12% and 15% of sentiment terms are determined in medical social media datasets when applying existing sentiment lexicons. In contrast, in clinical narratives only between 5% and 11% opinionated terms were identified. This proves the less subjective use of language in clinical narratives, requiring adaptations to existing methods for sentiment analysis.\n\n\nCONCLUSIONS\nMedical sentiment concerns the patient's health status, medical conditions and treatment. Its analysis and extraction from texts has multiple applications, even for clinical narratives that remained so far unconsidered. Given the varying usage and meanings of terms, sentiment analysis from medical documents requires a domain-specific sentiment source and complementary context-dependent features to be able to correctly interpret the implicit sentiment."
},
{
"pmid": "24664671",
"title": "Using natural language processing and machine learning to identify gout flares from electronic clinical notes.",
"abstract": "OBJECTIVE\nGout flares are not well documented by diagnosis codes, making it difficult to conduct accurate database studies. We implemented a computer-based method to automatically identify gout flares using natural language processing (NLP) and machine learning (ML) from electronic clinical notes.\n\n\nMETHODS\nOf 16,519 patients, 1,264 and 1,192 clinical notes from 2 separate sets of 100 patients were selected as the training and evaluation data sets, respectively, which were reviewed by rheumatologists. We created separate NLP searches to capture different aspects of gout flares. For each note, the NLP search outputs became the ML system inputs, which provided the final classification decisions. The note-level classifications were grouped into patient-level gout flares. Our NLP+ML results were validated using a gold standard data set and compared with the claims-based method used by prior literatures.\n\n\nRESULTS\nFor 16,519 patients with a diagnosis of gout and a prescription for a urate-lowering therapy, we identified 18,869 clinical notes as gout flare positive (sensitivity 82.1%, specificity 91.5%): 1,402 patients with ≥3 flares (sensitivity 93.5%, specificity 84.6%), 5,954 with 1 or 2 flares, and 9,163 with no flare (sensitivity 98.5%, specificity 96.4%). Our method identified more flare cases (18,869 versus 7,861) and patients with ≥3 flares (1,402 versus 516) when compared to the claims-based method.\n\n\nCONCLUSION\nWe developed a computer-based method (NLP and ML) to identify gout flares from the clinical notes. Our method was validated as an accurate tool for identifying gout flares with higher sensitivity and specificity compared to previous studies."
},
{
"pmid": "30118854",
"title": "A convolutional route to abbreviation disambiguation in clinical text.",
"abstract": "OBJECTIVE\nAbbreviations sense disambiguation is a special case of word sense disambiguation. Machine learning methods based on neural networks showed promising results for word sense disambiguation (Festag and Spreckelsen, 2017) [1] and, here we assess their effectiveness for abbreviation sense disambiguation.\n\n\nMETHODS\nConvolutional Neural Network (CNN) models were trained, one for each abbreviation, to disambiguate abbreviation senses. A reverse substitution (of long forms with short forms) method from a previous study was used on clinical narratives from Cleveland Clinic, USA, to auto-generate training data. Accuracy of the CNN and traditional Support Vector Machine (SVM) models were studied using: (a) 5-fold cross validation on the auto-generated training data; (b) a manually created, set-aside gold standard; and (c) 10-fold cross validation on a publicly available dataset from a previous study.\n\n\nRESULTS\nCNN improved accuracy by 1-4 percentage points on all the three datasets compared to SVM, and the improvement was the most for the set-aside dataset. The improvement was statistically significant at p < 0.05 on the auto-generated dataset. We found that for some common abbreviations, sense distributions mismatch between the test and auto generated training data, and mitigating the mismatch significantly improved the model accuracy.\n\n\nCONCLUSION\nThe neural network models work well in disambiguating abbreviations in clinical narratives, and they are robust across datasets. This avoids feature-engineering for each dataset. Coupled with an enhanced auto-training data generation, neural networks can simplify development of a practical abbreviation disambiguation system."
},
{
"pmid": "29849998",
"title": "Data Processing and Text Mining Technologies on Electronic Medical Records: A Review.",
"abstract": "Currently, medical institutes generally use EMR to record patient's condition, including diagnostic information, procedures performed, and treatment results. EMR has been recognized as a valuable resource for large-scale analysis. However, EMR has the characteristics of diversity, incompleteness, redundancy, and privacy, which make it difficult to carry out data mining and analysis directly. Therefore, it is necessary to preprocess the source data in order to improve data quality and improve the data mining results. Different types of data require different processing technologies. Most structured data commonly needs classic preprocessing technologies, including data cleansing, data integration, data transformation, and data reduction. For semistructured or unstructured data, such as medical text, containing more health information, it requires more complex and challenging processing methods. The task of information extraction for medical texts mainly includes NER (named-entity recognition) and RE (relation extraction). This paper focuses on the process of EMR processing and emphatically analyzes the key techniques. In addition, we make an in-depth study on the applications developed based on text mining together with the open challenges and research issues for future work."
},
{
"pmid": "27130217",
"title": "Quality of EHR data extractions for studies of preterm birth in a tertiary care center: guidelines for obtaining reliable data.",
"abstract": "BACKGROUND\nThe use of Electronic Health Records (EHR) has increased significantly in the past 15 years. This study compares electronic vs. manual data abstractions from an EHR for accuracy. While the dataset is limited to preterm birth data, our work is generally applicable. We enumerate challenges to reliable extraction, and state guidelines to maximize reliability.\n\n\nMETHODS\nAn Epic™ EHR data extraction of structured data values from 1,772 neonatal records born between the years 2001-2011 was performed. The data were directly compared to a manually-abstracted database. Specific data values important to studies of perinatology were chosen to compare discrepancies between the two databases.\n\n\nRESULTS\nDiscrepancy rates between the EHR extraction and the manual database were calculated for gestational age in weeks (2.6 %), birthweight (9.7 %), first white blood cell count (3.2 %), initial hemoglobin (11.9 %), peak total and direct bilirubin (11.4 % and 4.9 %), and patent ductus arteriosus (PDA) diagnosis (12.8 %). Using the discrepancies, errors were quantified in both datasets using chart review. The EHR extraction errors were significantly fewer than manual abstraction errors for PDA and laboratory values excluding neonates transferred from outside hospitals, but significantly greater for birth weight. Reasons for the observed errors are discussed.\n\n\nCONCLUSIONS\nWe show that an EHR not modified specifically for research purposes had discrepancy ranges comparable to a manually created database. We offer guidelines to minimize EHR extraction errors in future study designs. As EHRs become more research-friendly, electronic chart extractions should be more efficient and have lower error rates compared to manual abstractions."
},
{
"pmid": "23410888",
"title": "An enhanced CRFs-based system for information extraction from radiology reports.",
"abstract": "We discuss the problem of performing information extraction from free-text radiology reports via supervised learning. In this task, segments of text (not necessarily coinciding with entire sentences, and possibly crossing sentence boundaries) need to be annotated with tags representing concepts of interest in the radiological domain. In this paper we present two novel approaches to IE for radiology reports: (i) a cascaded, two-stage method based on pipelining two taggers generated via the well known linear-chain conditional random fields (LC-CRFs) learner and (ii) a confidence-weighted ensemble method that combines standard LC-CRFs and the proposed two-stage method. We also report on the use of \"positional features\", a novel type of feature intended to aid in the automatic annotation of texts in which the instances of a given concept may be hypothesized to systematically occur in specific areas of the text. We present experiments on a dataset of mammography reports in which the proposed ensemble is shown to outperform a traditional, single-stage CRFs system in two different, applicatively interesting scenarios."
},
{
"pmid": "25943550",
"title": "An end-to-end hybrid algorithm for automated medication discrepancy detection.",
"abstract": "BACKGROUND\nIn this study we implemented and developed state-of-the-art machine learning (ML) and natural language processing (NLP) technologies and built a computerized algorithm for medication reconciliation. Our specific aims are: (1) to develop a computerized algorithm for medication discrepancy detection between patients' discharge prescriptions (structured data) and medications documented in free-text clinical notes (unstructured data); and (2) to assess the performance of the algorithm on real-world medication reconciliation data.\n\n\nMETHODS\nWe collected clinical notes and discharge prescription lists for all 271 patients enrolled in the Complex Care Medical Home Program at Cincinnati Children's Hospital Medical Center between 1/1/2010 and 12/31/2013. A double-annotated, gold-standard set of medication reconciliation data was created for this collection. We then developed a hybrid algorithm consisting of three processes: (1) a ML algorithm to identify medication entities from clinical notes, (2) a rule-based method to link medication names with their attributes, and (3) a NLP-based, hybrid approach to match medications with structured prescriptions in order to detect medication discrepancies. The performance was validated on the gold-standard medication reconciliation data, where precision (P), recall (R), F-value (F) and workload were assessed.\n\n\nRESULTS\nThe hybrid algorithm achieved 95.0%/91.6%/93.3% of P/R/F on medication entity detection and 98.7%/99.4%/99.1% of P/R/F on attribute linkage. The medication matching achieved 92.4%/90.7%/91.5% (P/R/F) on identifying matched medications in the gold-standard and 88.6%/82.5%/85.5% (P/R/F) on discrepant medications. By combining all processes, the algorithm achieved 92.4%/90.7%/91.5% (P/R/F) and 71.5%/65.2%/68.2% (P/R/F) on identifying the matched and the discrepant medications, respectively. The error analysis on algorithm outputs identified challenges to be addressed in order to improve medication discrepancy detection.\n\n\nCONCLUSION\nBy leveraging ML and NLP technologies, an end-to-end, computerized algorithm achieves promising outcome in reconciling medications between clinical notes and discharge prescriptions."
},
{
"pmid": "29605561",
"title": "Comparison of Natural Language Processing Rules-based and Machine-learning Systems to Identify Lumbar Spine Imaging Findings Related to Low Back Pain.",
"abstract": "RATIONALE AND OBJECTIVES\nTo evaluate a natural language processing (NLP) system built with open-source tools for identification of lumbar spine imaging findings related to low back pain on magnetic resonance and x-ray radiology reports from four health systems.\n\n\nMATERIALS AND METHODS\nWe used a limited data set (de-identified except for dates) sampled from lumbar spine imaging reports of a prospectively assembled cohort of adults. From N = 178,333 reports, we randomly selected N = 871 to form a reference-standard dataset, consisting of N = 413 x-ray reports and N = 458 MR reports. Using standardized criteria, four spine experts annotated the presence of 26 findings, where 71 reports were annotated by all four experts and 800 were each annotated by two experts. We calculated inter-rater agreement and finding prevalence from annotated data. We randomly split the annotated data into development (80%) and testing (20%) sets. We developed an NLP system from both rule-based and machine-learned models. We validated the system using accuracy metrics such as sensitivity, specificity, and area under the receiver operating characteristic curve (AUC).\n\n\nRESULTS\nThe multirater annotated dataset achieved inter-rater agreement of Cohen's kappa > 0.60 (substantial agreement) for 25 of 26 findings, with finding prevalence ranging from 3% to 89%. In the testing sample, rule-based and machine-learned predictions both had comparable average specificity (0.97 and 0.95, respectively). The machine-learned approach had a higher average sensitivity (0.94, compared to 0.83 for rules-based), and a higher overall AUC (0.98, compared to 0.90 for rules-based).\n\n\nCONCLUSIONS\nOur NLP system performed well in identifying the 26 lumbar spine findings, as benchmarked by reference-standard annotation by medical experts. Machine-learned models provided substantial gains in model sensitivity with slight loss of specificity, and overall higher AUC."
},
{
"pmid": "26464024",
"title": "Learning probabilistic phenotypes from heterogeneous EHR data.",
"abstract": "We present the Unsupervised Phenome Model (UPhenome), a probabilistic graphical model for large-scale discovery of computational models of disease, or phenotypes. We tackle this challenge through the joint modeling of a large set of diseases and a large set of clinical observations. The observations are drawn directly from heterogeneous patient record data (notes, laboratory tests, medications, and diagnosis codes), and the diseases are modeled in an unsupervised fashion. We apply UPhenome to two qualitatively different mixtures of patients and diseases: records of extremely sick patients in the intensive care unit with constant monitoring, and records of outpatients regularly followed by care providers over multiple years. We demonstrate that the UPhenome model can learn from these different care settings, without any additional adaptation. Our experiments show that (i) the learned phenotypes combine the heterogeneous data types more coherently than baseline LDA-based phenotypes; (ii) they each represent single diseases rather than a mix of diseases more often than the baseline ones; and (iii) when applied to unseen patient records, they are correlated with the patients' ground-truth disorders. Code for training, inference, and quantitative evaluation is made available to the research community."
},
{
"pmid": "12123149",
"title": "A simple algorithm for identifying negated findings and diseases in discharge summaries.",
"abstract": "Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries."
},
{
"pmid": "25791500",
"title": "DEEPEN: A negation detection system for clinical text incorporating dependency relation into NegEx.",
"abstract": "In Electronic Health Records (EHRs), much of valuable information regarding patients' conditions is embedded in free text format. Natural language processing (NLP) techniques have been developed to extract clinical information from free text. One challenge faced in clinical NLP is that the meaning of clinical entities is heavily affected by modifiers such as negation. A negation detection algorithm, NegEx, applies a simplistic approach that has been shown to be powerful in clinical NLP. However, due to the failure to consider the contextual relationship between words within a sentence, NegEx fails to correctly capture the negation status of concepts in complex sentences. Incorrect negation assignment could cause inaccurate diagnosis of patients' condition or contaminated study cohorts. We developed a negation algorithm called DEEPEN to decrease NegEx's false positives by taking into account the dependency relationship between negation words and concepts within a sentence using Stanford dependency parser. The system was developed and tested using EHR data from Indiana University (IU) and it was further evaluated on Mayo Clinic dataset to assess its generalizability. The evaluation results demonstrate DEEPEN, which incorporates dependency parsing into NegEx, can reduce the number of incorrect negation assignment for patients with positive findings, and therefore improve the identification of patients with the target clinical findings in EHRs."
},
{
"pmid": "15802475",
"title": "Automated detection of adverse events using natural language processing of discharge summaries.",
"abstract": "OBJECTIVE\nTo determine whether natural language processing (NLP) can effectively detect adverse events defined in the New York Patient Occurrence Reporting and Tracking System (NYPORTS) using discharge summaries.\n\n\nDESIGN\nAn adverse event detection system for discharge summaries using the NLP system MedLEE was constructed to identify 45 NYPORTS event types. The system was first applied to a random sample of 1,000 manually reviewed charts. The system then processed all inpatient cases with electronic discharge summaries for two years. All system-identified events were reviewed, and performance was compared with traditional reporting.\n\n\nMEASUREMENTS\nSystem sensitivity, specificity, and predictive value, with manual review serving as the gold standard.\n\n\nRESULTS\nThe system correctly identified 16 of 65 events in 1,000 charts. Of 57,452 total electronic discharge summaries, the system identified 1,590 events in 1,461 cases, and manual review verified 704 events in 652 cases, resulting in an overall sensitivity of 0.28 (95% confidence interval [CI]: 0.17-0.42), specificity of 0.985 (CI: 0.984-0.986), and positive predictive value of 0.45 (CI: 0.42-0.47) for detecting cases with events and an average specificity of 0.9996 (CI: 0.9996-0.9997) per event type. Traditional event reporting detected 322 events during the period (sensitivity 0.09), of which the system identified 110 as well as 594 additional events missed by traditional methods.\n\n\nCONCLUSION\nNLP is an effective technique for detecting a broad range of adverse events in text documents and outperformed traditional and previous automated adverse event detection methods."
},
{
"pmid": "30032970",
"title": "Accuracy of using natural language processing methods for identifying healthcare-associated infections.",
"abstract": "OBJECTIVE\nThere is a growing interest in using natural language processing (NLP) for healthcare-associated infections (HAIs) monitoring. A French project consortium, SYNODOS, developed a NLP solution for detecting medical events in electronic medical records for epidemiological purposes. The objective of this study was to evaluate the performance of the SYNODOS data processing chain for detecting HAIs in clinical documents.\n\n\nMATERIALS AND METHODS\nThe collection of textual records in these hospitals was carried out between October 2009 and December 2010 in three French University hospitals (Lyon, Rouen and Nice). The following medical specialties were included in the study: digestive surgery, neurosurgery, orthopedic surgery, adult intensive-care units. Reference Standard surveillance was compared with the results of automatic detection using NLP. Sensitivity on 56 HAI cases and specificity on 57 non-HAI cases were calculated.\n\n\nRESULTS\nThe accuracy rate was 84% (n = 95/113). The overall sensitivity of automatic detection of HAIs was 83.9% (CI 95%: 71.7-92.4) and the specificity was 84.2% (CI 95%: 72.1-92.5). The sensitivity varies from one specialty to the other, from 69.2% (CI 95%: 38.6-90.9) for intensive care to 93.3% (CI 95%: 68.1-99.8) for orthopedic surgery. The manual review of classification errors showed that the most frequent cause was an inaccurate temporal labeling of medical events, which is an important factor for HAI detection.\n\n\nCONCLUSION\nThis study confirmed the feasibility of using NLP for the HAI detection in hospital facilities. Automatic HAI detection algorithms could offer better surveillance standardization for hospital comparisons."
},
{
"pmid": "26022228",
"title": "Natural Language Processing for Real-Time Catheter-Associated Urinary Tract Infection Surveillance: Results of a Pilot Implementation Trial.",
"abstract": "BACKGROUND\nIncidence of catheter-associated urinary tract infection (CAUTI) is a quality benchmark. To streamline conventional detection methods, an electronic surveillance system augmented with natural language processing (NLP), which gathers data recorded in clinical notes without manual review, was implemented for real-time surveillance.\n\n\nOBJECTIVE\nTo assess the utility of this algorithm for identifying indwelling urinary catheter days and CAUTI.\n\n\nSETTING\nLarge, urban tertiary care Veterans Affairs hospital.\n\n\nMETHODS\nAll patients admitted to the acute care units and the intensive care unit from March 1, 2013, through November 30, 2013, were included. Standard surveillance, which includes electronic and manual data extraction, was compared with the NLP-augmented algorithm.\n\n\nRESULTS\nThe NLP-augmented algorithm identified 27% more indwelling urinary catheter days in the acute care units and 28% fewer indwelling urinary catheter days in the intensive care unit. The algorithm flagged 24 CAUTI versus 20 CAUTI by standard surveillance methods; the CAUTI identified were overlapping but not the same. The overall positive predictive value was 54.2%, and overall sensitivity was 65% (90.9% in the acute care units but 33% in the intensive care unit). Dissimilarities in the operating characteristics of the algorithm between types of unit were due to differences in documentation practice. Development and implementation of the algorithm required substantial upfront effort of clinicians and programmers to determine current language patterns.\n\n\nCONCLUSIONS\nThe NLP algorithm was most useful for identifying simple clinical variables. Algorithm operating characteristics were specific to local documentation practices. The algorithm did not perform as well as standard surveillance methods."
},
{
"pmid": "22586067",
"title": "Feature engineering combined with machine learning and rule-based methods for structured information extraction from narrative clinical discharge summaries.",
"abstract": "OBJECTIVE\nA system that translates narrative text in the medical domain into structured representation is in great demand. The system performs three sub-tasks: concept extraction, assertion classification, and relation identification.\n\n\nDESIGN\nThe overall system consists of five steps: (1) pre-processing sentences, (2) marking noun phrases (NPs) and adjective phrases (APs), (3) extracting concepts that use a dosage-unit dictionary to dynamically switch two models based on Conditional Random Fields (CRF), (4) classifying assertions based on voting of five classifiers, and (5) identifying relations using normalized sentences with a set of effective discriminating features.\n\n\nMEASUREMENTS\nMacro-averaged and micro-averaged precision, recall and F-measure were used to evaluate results.\n\n\nRESULTS\nThe performance is competitive with the state-of-the-art systems with micro-averaged F-measure of 0.8489 for concept extraction, 0.9392 for assertion classification and 0.7326 for relation identification.\n\n\nCONCLUSIONS\nThe system exploits an array of common features and achieves state-of-the-art performance. Prudent feature engineering sets the foundation of our systems. In concept extraction, we demonstrated that switching models, one of which is especially designed for telegraphic sentences, improved extraction of the treatment concept significantly. In assertion classification, a set of features derived from a rule-based classifier were proven to be effective for the classes such as conditional and possible. These classes would suffer from data scarcity in conventional machine-learning methods. In relation identification, we use two-staged architecture, the second of which applies pairwise classifiers to possible candidate classes. This architecture significantly improves performance."
},
{
"pmid": "28096249",
"title": "Natural language processing to extract symptoms of severe mental illness from clinical text: the Clinical Record Interactive Search Comprehensive Data Extraction (CRIS-CODE) project.",
"abstract": "OBJECTIVES\nWe sought to use natural language processing to develop a suite of language models to capture key symptoms of severe mental illness (SMI) from clinical text, to facilitate the secondary use of mental healthcare data in research.\n\n\nDESIGN\nDevelopment and validation of information extraction applications for ascertaining symptoms of SMI in routine mental health records using the Clinical Record Interactive Search (CRIS) data resource; description of their distribution in a corpus of discharge summaries.\n\n\nSETTING\nElectronic records from a large mental healthcare provider serving a geographic catchment of 1.2 million residents in four boroughs of south London, UK.\n\n\nPARTICIPANTS\nThe distribution of derived symptoms was described in 23 128 discharge summaries from 7962 patients who had received an SMI diagnosis, and 13 496 discharge summaries from 7575 patients who had received a non-SMI diagnosis.\n\n\nOUTCOME MEASURES\nFifty SMI symptoms were identified by a team of psychiatrists for extraction based on salience and linguistic consistency in records, broadly categorised under positive, negative, disorganisation, manic and catatonic subgroups. Text models for each symptom were generated using the TextHunter tool and the CRIS database.\n\n\nRESULTS\nWe extracted data for 46 symptoms with a median F1 score of 0.88. Four symptom models performed poorly and were excluded. From the corpus of discharge summaries, it was possible to extract symptomatology in 87% of patients with SMI and 60% of patients with non-SMI diagnosis.\n\n\nCONCLUSIONS\nThis work demonstrates the possibility of automatically extracting a broad range of SMI symptoms from English text discharge summaries for patients with an SMI diagnosis. Descriptive data also indicated that most symptoms cut across diagnoses, rather than being restricted to particular groups."
},
{
"pmid": "26456569",
"title": "Using natural language processing to identify problem usage of prescription opioids.",
"abstract": "BACKGROUND\nAccurate and scalable surveillance methods are critical to understand widespread problems associated with misuse and abuse of prescription opioids and for implementing effective prevention and control measures. Traditional diagnostic coding incompletely documents problem use. Relevant information for each patient is often obscured in vast amounts of clinical text.\n\n\nOBJECTIVES\nWe developed and evaluated a method that combines natural language processing (NLP) and computer-assisted manual review of clinical notes to identify evidence of problem opioid use in electronic health records (EHRs).\n\n\nMETHODS\nWe used the EHR data and text of 22,142 patients receiving chronic opioid therapy (≥70 days' supply of opioids per calendar quarter) during 2006-2012 to develop and evaluate an NLP-based surveillance method and compare it to traditional methods based on International Classification of Disease, Ninth Edition (ICD-9) codes. We developed a 1288-term dictionary for clinician mentions of opioid addiction, abuse, misuse or overuse, and an NLP system to identify these mentions in unstructured text. The system distinguished affirmative mentions from those that were negated or otherwise qualified. We applied this system to 7336,445 electronic chart notes of the 22,142 patients. Trained abstractors using a custom computer-assisted software interface manually reviewed 7751 chart notes (from 3156 patients) selected by the NLP system and classified each note as to whether or not it contained textual evidence of problem opioid use.\n\n\nRESULTS\nTraditional diagnostic codes for problem opioid use were found for 2240 (10.1%) patients. NLP-assisted manual review identified an additional 728 (3.1%) patients with evidence of clinically diagnosed problem opioid use in clinical notes. Inter-rater reliability among pairs of abstractors reviewing notes was high, with kappa=0.86 and 97% agreement for one pair, and kappa=0.71 and 88% agreement for another pair.\n\n\nCONCLUSIONS\nScalable, semi-automated NLP methods can efficiently and accurately identify evidence of problem opioid use in vast amounts of EHR text. Incorporating such methods into surveillance efforts may increase prevalence estimates by as much as one-third relative to traditional methods."
},
{
"pmid": "16872495",
"title": "Extracting principal diagnosis, co-morbidity and smoking status for asthma research: evaluation of a natural language processing system.",
"abstract": "BACKGROUND\nThe text descriptions in electronic medical records are a rich source of information. We have developed a Health Information Text Extraction (HITEx) tool and used it to extract key findings for a research study on airways disease.\n\n\nMETHODS\nThe principal diagnosis, co-morbidity and smoking status extracted by HITEx from a set of 150 discharge summaries were compared to an expert-generated gold standard.\n\n\nRESULTS\nThe accuracy of HITEx was 82% for principal diagnosis, 87% for co-morbidity, and 90% for smoking status extraction, when cases labeled \"Insufficient Data\" by the gold standard were excluded.\n\n\nCONCLUSION\nWe consider the results promising, given the complexity of the discharge summaries and the extraction tasks."
},
{
"pmid": "26318122",
"title": "Adapting existing natural language processing resources for cardiovascular risk factors identification in clinical notes.",
"abstract": "The 2014 i2b2 natural language processing shared task focused on identifying cardiovascular risk factors such as high blood pressure, high cholesterol levels, obesity and smoking status among other factors found in health records of diabetic patients. In addition, the task involved detecting medications, and time information associated with the extracted data. This paper presents the development and evaluation of a natural language processing (NLP) application conceived for this i2b2 shared task. For increased efficiency, the application main components were adapted from two existing NLP tools implemented in the Apache UIMA framework: Textractor (for dictionary-based lookup) and cTAKES (for preprocessing and smoking status detection). The application achieved a final (micro-averaged) F1-measure of 87.5% on the final evaluation test set. Our attempt was mostly based on existing tools adapted with minimal changes and allowed for satisfying performance with limited development efforts."
},
{
"pmid": "20819864",
"title": "Textractor: a hybrid system for medications and reason for their prescription extraction from clinical text documents.",
"abstract": "UNLABELLED\nOBJECTIVE To describe a new medication information extraction system-Textractor-developed for the 'i2b2 medication extraction challenge'. The development, functionalities, and official evaluation of the system are detailed.\n\n\nDESIGN\nTextractor is based on the Apache Unstructured Information Management Architecture (UMIA) framework, and uses methods that are a hybrid between machine learning and pattern matching. Two modules in the system are based on machine learning algorithms, while other modules use regular expressions, rules, and dictionaries, and one module embeds MetaMap Transfer.\n\n\nMEASUREMENTS\nThe official evaluation was based on a reference standard of 251 discharge summaries annotated by all teams participating in the challenge. The metrics used were recall, precision, and the F(1)-measure. They were calculated with exact and inexact matches, and were averaged at the level of systems and documents.\n\n\nRESULTS\nThe reference metric for this challenge, the system-level overall F(1)-measure, reached about 77% for exact matches, with a recall of 72% and a precision of 83%. Performance was the best with route information (F(1)-measure about 86%), and was good for dosage and frequency information, with F(1)-measures of about 82-85%. Results were not as good for durations, with F(1)-measures of 36-39%, and for reasons, with F(1)-measures of 24-27%.\n\n\nCONCLUSION\nThe official evaluation of Textractor for the i2b2 medication extraction challenge demonstrated satisfactory performance. This system was among the 10 best performing systems in this challenge."
},
{
"pmid": "20819853",
"title": "Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications.",
"abstract": "We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at http://www.ohnlp.org. The cTAKES builds on existing open-source technologies-the Unstructured Information Management Architecture framework and OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text."
},
{
"pmid": "24296907",
"title": "Diagnosis code assignment: models and evaluation metrics.",
"abstract": "BACKGROUND AND OBJECTIVE\nThe volume of healthcare data is growing rapidly with the adoption of health information technology. We focus on automated ICD9 code assignment from discharge summary content and methods for evaluating such assignments.\n\n\nMETHODS\nWe study ICD9 diagnosis codes and discharge summaries from the publicly available Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) repository. We experiment with two coding approaches: one that treats each ICD9 code independently of each other (flat classifier), and one that leverages the hierarchical nature of ICD9 codes into its modeling (hierarchy-based classifier). We propose novel evaluation metrics, which reflect the distances among gold-standard and predicted codes and their locations in the ICD9 tree. Experimental setup, code for modeling, and evaluation scripts are made available to the research community.\n\n\nRESULTS\nThe hierarchy-based classifier outperforms the flat classifier with F-measures of 39.5% and 27.6%, respectively, when trained on 20,533 documents and tested on 2282 documents. While recall is improved at the expense of precision, our novel evaluation metrics show a more refined assessment: for instance, the hierarchy-based classifier identifies the correct sub-tree of gold-standard codes more often than the flat classifier. Error analysis reveals that gold-standard codes are not perfect, and as such the recall and precision are likely underestimated.\n\n\nCONCLUSIONS\nHierarchy-based classification yields better ICD9 coding than flat classification for MIMIC patients. Automated ICD9 coding is an example of a task for which data and tools can be shared and for which the research community can work together to build on shared models and advance the state of the art."
},
{
"pmid": "26054428",
"title": "An empirical evaluation of supervised learning approaches in assigning diagnosis codes to electronic medical records.",
"abstract": "BACKGROUND\nDiagnosis codes are assigned to medical records in healthcare facilities by trained coders by reviewing all physician authored documents associated with a patient's visit. This is a necessary and complex task involving coders adhering to coding guidelines and coding all assignable codes. With the popularity of electronic medical records (EMRs), computational approaches to code assignment have been proposed in the recent years. However, most efforts have focused on single and often short clinical narratives, while realistic scenarios warrant full EMR level analysis for code assignment.\n\n\nOBJECTIVE\nWe evaluate supervised learning approaches to automatically assign international classification of diseases (ninth revision) - clinical modification (ICD-9-CM) codes to EMRs by experimenting with a large realistic EMR dataset. The overall goal is to identify methods that offer superior performance in this task when considering such datasets.\n\n\nMETHODS\nWe use a dataset of 71,463 EMRs corresponding to in-patient visits with discharge date falling in a two year period (2011-2012) from the University of Kentucky (UKY) Medical Center. We curate a smaller subset of this dataset and also use a third gold standard dataset of radiology reports. We conduct experiments using different problem transformation approaches with feature and data selection components and employing suitable label calibration and ranking methods with novel features involving code co-occurrence frequencies and latent code associations.\n\n\nRESULTS\nOver all codes with at least 50 training examples we obtain a micro F-score of 0.48. On the set of codes that occur at least in 1% of the two year dataset, we achieve a micro F-score of 0.54. For the smaller radiology report dataset, the classifier chaining approach yields best results. For the smaller subset of the UKY dataset, feature selection, data selection, and label calibration offer best performance.\n\n\nCONCLUSIONS\nWe show that datasets at different scale (size of the EMRs, number of distinct codes) and with different characteristics warrant different learning approaches. For shorter narratives pertaining to a particular medical subdomain (e.g., radiology, pathology), classifier chaining is ideal given the codes are highly related with each other. For realistic in-patient full EMRs, feature and data selection methods offer high performance for smaller datasets. However, for large EMR datasets, we observe that the binary relevance approach with learning-to-rank based code reranking offers the best performance. Regardless of the training dataset size, for general EMRs, label calibration to select the optimal number of labels is an indispensable final step."
},
{
"pmid": "26911826",
"title": "A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.",
"abstract": "OBJECTIVE\nNatural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative.\n\n\nMETHODS\nWe propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given than another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes.\n\n\nRESULTS\nWe tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence.\n\n\nCONCLUSIONS\nThe promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding."
},
{
"pmid": "23605114",
"title": "Combining rules and machine learning for extraction of temporal expressions and events from clinical narratives.",
"abstract": "OBJECTIVE\nIdentification of clinical events (eg, problems, tests, treatments) and associated temporal expressions (eg, dates and times) are key tasks in extracting and managing data from electronic health records. As part of the i2b2 2012 Natural Language Processing for Clinical Data challenge, we developed and evaluated a system to automatically extract temporal expressions and events from clinical narratives. The extracted temporal expressions were additionally normalized by assigning type, value, and modifier.\n\n\nMATERIALS AND METHODS\nThe system combines rule-based and machine learning approaches that rely on morphological, lexical, syntactic, semantic, and domain-specific features. Rule-based components were designed to handle the recognition and normalization of temporal expressions, while conditional random fields models were trained for event and temporal recognition.\n\n\nRESULTS\nThe system achieved micro F scores of 90% for the extraction of temporal expressions and 87% for clinical event extraction. The normalization component for temporal expressions achieved accuracies of 84.73% (expression's type), 70.44% (value), and 82.75% (modifier).\n\n\nDISCUSSION\nCompared to the initial agreement between human annotators (87-89%), the system provided comparable performance for both event and temporal expression mining. While (lenient) identification of such mentions is achievable, finding the exact boundaries proved challenging.\n\n\nCONCLUSIONS\nThe system provides a state-of-the-art method that can be used to support automated identification of mentions of clinical events and temporal expressions in narratives either to support the manual review process or as a part of a large-scale processing of electronic health databases."
},
{
"pmid": "24212118",
"title": "Towards generating a patient's timeline: extracting temporal relationships from clinical notes.",
"abstract": "Clinical records include both coded and free-text fields that interact to reflect complicated patient stories. The information often covers not only the present medical condition and events experienced by the patient, but also refers to relevant events in the past (such as signs, symptoms, tests or treatments). In order to automatically construct a timeline of these events, we first need to extract the temporal relations between pairs of events or time expressions presented in the clinical notes. We designed separate extraction components for different types of temporal relations, utilizing a novel hybrid system that combines machine learning with a graph-based inference mechanism to extract the temporal links. The temporal graph is a directed graph based on parse tree dependencies of the simplified sentences and frequent pattern clues. We generalized the sentences in order to discover patterns that, given the complexities of natural language, might not be directly discoverable in the original sentences. The proposed hybrid system performance reached an F-measure of 0.63, with precision at 0.76 and recall at 0.54 on the 2012 i2b2 Natural Language Processing corpus for the temporal relation (TLink) extraction task, achieving the highest precision and third highest f-measure among participating teams in the TLink track."
},
{
"pmid": "23954311",
"title": "Classifying temporal relations in clinical data: a hybrid, knowledge-rich approach.",
"abstract": "We address the TLINK track of the 2012 i2b2 challenge on temporal relations. Unlike other approaches to this task, we (1) employ sophisticated linguistic knowledge derived from semantic and discourse relations, rather than focus on morpho-syntactic knowledge; and (2) leverage a novel combination of rule-based and learning-based approaches, rather than rely solely on one or the other. Experiments show that our knowledge-rich, hybrid approach yields an F-score of 69.3, which is the best result reported to date on this dataset."
},
{
"pmid": "23911344",
"title": "MedTime: a temporal information extraction system for clinical narratives.",
"abstract": "Temporal information extraction from clinical narratives is of critical importance to many clinical applications. We participated in the EVENT/TIMEX3 track of the 2012 i2b2 clinical temporal relations challenge, and presented our temporal information extraction system, MedTime. MedTime comprises a cascade of rule-based and machine-learning pattern recognition procedures. It achieved a micro-averaged f-measure of 0.88 in both the recognitions of clinical events and temporal expressions. We proposed and evaluated three time normalization strategies to normalize relative time expressions in clinical texts. The accuracy was 0.68 in normalizing temporal expressions of dates, times, durations, and frequencies. This study demonstrates and evaluates the integration of rule-based and machine-learning-based approaches for high performance temporal information extraction from clinical narratives."
},
{
"pmid": "29025149",
"title": "Segment convolutional neural networks (Seg-CNNs) for classifying relations in clinical notes.",
"abstract": "We propose Segment Convolutional Neural Networks (Seg-CNNs) for classifying relations from clinical notes. Seg-CNNs use only word-embedding features without manual feature engineering. Unlike typical CNN models, relations between 2 concepts are identified by simultaneously learning separate representations for text segments in a sentence: preceding, concept1, middle, concept2, and succeeding. We evaluate Seg-CNN on the i2b2/VA relation classification challenge dataset. We show that Seg-CNN achieves a state-of-the-art micro-average F-measure of 0.742 for overall evaluation, 0.686 for classifying medical problem-treatment relations, 0.820 for medical problem-test relations, and 0.702 for medical problem-medical problem relations. We demonstrate the benefits of learning segment-level representations. We show that medical domain word embeddings help improve relation classification. Seg-CNNs can be trained quickly for the i2b2/VA dataset on a graphics processing unit (GPU) platform. These results support the use of CNNs computed over segments of text for classifying medical relations, as they show state-of-the-art performance while requiring no manual feature engineering."
},
{
"pmid": "9865037",
"title": "Desiderata for controlled medical vocabularies in the twenty-first century.",
"abstract": "Builders of medical informatics applications need controlled medical vocabularies to support their applications and it is to their advantage to use available standards. In order to do so, however, these standards need to address the requirements of their intended users. Over the past decade, medical informatics researchers have begun to articulate some of these requirements. This paper brings together some of the common themes which have been described, including: vocabulary content, concept orientation, concept permanence, nonsemantic concept identifiers, polyhierarchy, formal definitions, rejection of \"not elsewhere classified\" terms, multiple granularities, multiple consistent views, context representation, graceful evolution, and recognized redundancy. Standards developers are beginning to recognize and address these desiderata and adapt their offerings to meet them."
},
{
"pmid": "16386470",
"title": "In defense of the Desiderata.",
"abstract": "A 1998 paper that delineated desirable characteristics, or desiderata for controlled medical terminologies attempted to summarize emerging consensus regarding structural issues of such terminologies. Among the Desiderata was a call for terminologies to be \"concept oriented.\" Since then, research has trended toward the extension of terminologies into ontologies. A paper by Smith, entitled \"From Concepts to Clinical Reality: An Essay on the Benchmarking of Biomedical Terminologies\" urges a realist approach that seeks terminologies composed of universals, rather than concepts. The current paper addresses issues raised by Smith and attempts to extend the Desiderata, not away from concepts, but towards recognition that concepts and universals must both be embraced and can coexist peaceably in controlled terminologies. To that end, additional Desiderata are defined that deal with the purpose, rather than the structure, of controlled medical terminologies."
},
{
"pmid": "26851224",
"title": "Bridging semantics and syntax with graph algorithms-state-of-the-art of extracting biomedical relations.",
"abstract": "Research on extracting biomedical relations has received growing attention recently, with numerous biological and clinical applications including those in pharmacogenomics, clinical trial screening and adverse drug reaction detection. The ability to accurately capture both semantic and syntactic structures in text expressing these relations becomes increasingly critical to enable deep understanding of scientific papers and clinical narratives. Shared task challenges have been organized by both bioinformatics and clinical informatics communities to assess and advance the state-of-the-art research. Significant progress has been made in algorithm development and resource construction. In particular, graph-based approaches bridge semantics and syntax, often achieving the best performance in shared tasks. However, a number of problems at the frontiers of biomedical relation extraction continue to pose interesting challenges and present opportunities for great improvement and fruitful research. In this article, we place biomedical relation extraction against the backdrop of its versatile applications, present a gentle introduction to its general pipeline and shared resources, review the current state-of-the-art in methodology advancement, discuss limitations and point out several promising future directions."
},
{
"pmid": "14759819",
"title": "The interaction of domain knowledge and linguistic structure in natural language processing: interpreting hypernymic propositions in biomedical text.",
"abstract": "Interpretation of semantic propositions in free-text documents such as MEDLINE citations would provide valuable support for biomedical applications, and several approaches to semantic interpretation are being pursued in the biomedical informatics community. In this paper, we describe a methodology for interpreting linguistic structures that encode hypernymic propositions, in which a more specific concept is in a taxonomic relationship with a more general concept. In order to effectively process these constructions, we exploit underspecified syntactic analysis and structured domain knowledge from the Unified Medical Language System (UMLS). After introducing the syntactic processing on which our system depends, we focus on the UMLS knowledge that supports interpretation of hypernymic propositions. We first use semantic groups from the Semantic Network to ensure that the two concepts involved are compatible; hierarchical information in the Metathesaurus then determines which concept is more general and which more specific. A preliminary evaluation of a sample based on the semantic group Chemicals and Drugs provides 83% precision. An error analysis was conducted and potential solutions to the problems encountered are presented. The research discussed here serves as a paradigm for investigating the interaction between domain knowledge and linguistic structure in natural language processing, and could also make a contribution to research on automatic processing of discourse structure. Additional implications of the system we present include its integration in advanced semantic interpretation processors for biomedical text and its use for information extraction in specific domains. The approach has the potential to support a range of applications, including information retrieval and ontology engineering."
},
{
"pmid": "28494618",
"title": "Machine Learning Methods to Predict Diabetes Complications.",
"abstract": "One of the areas where Artificial Intelligence is having more impact is machine learning, which develops algorithms able to learn patterns and decision rules from data. Machine learning algorithms have been embedded into data mining pipelines, which can combine them with classical statistical strategies, to extract knowledge from data. Within the EU-funded MOSAIC project, a data mining pipeline has been used to derive a set of predictive models of type 2 diabetes mellitus (T2DM) complications based on electronic health record data of nearly one thousand patients. Such pipeline comprises clinical center profiling, predictive model targeting, predictive model construction and model validation. After having dealt with missing data by means of random forest (RF) and having applied suitable strategies to handle class imbalance, we have used Logistic Regression with stepwise feature selection to predict the onset of retinopathy, neuropathy, or nephropathy, at different time scenarios, at 3, 5, and 7 years from the first visit at the Hospital Center for Diabetes (not from the diagnosis). Considered variables are gender, age, time from diagnosis, body mass index (BMI), glycated hemoglobin (HbA1c), hypertension, and smoking habit. Final models, tailored in accordance with the complications, provided an accuracy up to 0.838. Different variables were selected for each complication and time scenario, leading to specialized models easy to translate to the clinical practice."
},
{
"pmid": "27521897",
"title": "Using recurrent neural network models for early detection of heart failure onset.",
"abstract": "Objective\nWe explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality.\n\n\nMaterials and Methods\nData were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches.\n\n\nResults\nUsing a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP).\n\n\nConclusion\nDeep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months."
},
{
"pmid": "28328520",
"title": "A Natural Language Processing Framework for Assessing Hospital Readmissions for Patients With COPD.",
"abstract": "With the passage of recent federal legislation, many medical institutions are now responsible for reaching target hospital readmission rates. Chronic diseases account for many hospital readmissions and chronic obstructive pulmonary disease has been recently added to the list of diseases for which the United States government penalizes hospitals incurring excessive readmissions. Though there have been efforts to statistically predict those most in danger of readmission, a few have focused primarily on unstructured clinical notes. We have proposed a framework, which uses natural language processing to analyze clinical notes and predict readmission. Many algorithms within the field of data mining and machine learning exist, so a framework for component selection is created to select the best components. Naïve Bayes using Chi-Squared feature selection offers an AUC of 0.690 while maintaining fast computational times."
},
{
"pmid": "29353160",
"title": "Prediction of venous thromboembolism using semantic and sentiment analyses of clinical narratives.",
"abstract": "Venous thromboembolism (VTE) is the third most common cardiovascular disorder. It affects people of both genders at ages as young as 20 years. The increased number of VTE cases with a high fatality rate of 25% at first occurrence makes preventive measures essential. Clinical narratives are a rich source of knowledge and should be included in the diagnosis and treatment processes, as they may contain critical information on risk factors. It is very important to make such narrative blocks of information usable for searching, health analytics, and decision-making. This paper proposes a Semantic Extraction and Sentiment Assessment of Risk Factors (SESARF) framework. Unlike traditional machine-learning approaches, SESARF, which consists of two main algorithms, namely, ExtractRiskFactor and FindSeverity, prepares a feature vector as the input to a support vector machine (SVM) classifier to make a diagnosis. SESARF matches and maps the concepts of VTE risk factors and finds adjectives and adverbs that reflect their levels of severity. SESARF uses a semantic- and sentiment-based approach to analyze clinical narratives of electronic health records (EHR) and then predict a diagnosis of VTE. We use a dataset of 150 clinical narratives, 80% of which are used to train our prediction classifier support vector machine, with the remaining 20% used for testing. Semantic extraction and sentiment analysis results yielded precisions of 81% and 70%, respectively. Using a support vector machine, prediction of patients with VTE yielded precision and recall values of 54.5% and 85.7%, respectively."
},
{
"pmid": "26302085",
"title": "Sentiment Measured in Hospital Discharge Notes Is Associated with Readmission and Mortality Risk: An Electronic Health Record Study.",
"abstract": "Natural language processing tools allow the characterization of sentiment--that is, terms expressing positive and negative emotion--in text. Applying such tools to electronic health records may provide insight into meaningful patient or clinician features not captured in coded data alone. We performed sentiment analysis on 2,484 hospital discharge notes for 2,010 individuals from a psychiatric inpatient unit, as well as 20,859 hospital discharges for 15,011 individuals from general medical units, in a large New England health system between January 2011 and 2014. The primary measures of sentiment captured intensity of subjective positive or negative sentiment expressed in the discharge notes. Mean scores were contrasted between sociodemographic and clinical groups in mixed effects regression models. Discharge note sentiment was then examined for association with risk for readmission in Cox regression models. Discharge notes for individuals with greater medical comorbidity were modestly but significantly lower in positive sentiment among both psychiatric and general medical cohorts (p<0.001 in each). Greater positive sentiment at discharge was associated with significantly decreased risk of hospital readmission in each cohort (~12% decrease per standard deviation above the mean). Automated characterization of discharge notes in terms of sentiment identifies differences between sociodemographic groups, as well as in clinical outcomes, and is not explained by differences in diagnosis. Clinician sentiment merits investigation to understand why and how it reflects or impacts outcomes."
},
{
"pmid": "29677975",
"title": "Minimal Important Difference in Outcome of Disc Degenerative Disease Treatment: The Patients' Perspective.",
"abstract": "Evaluation of treatments effectiveness in a context of value-based health care is based on outcomes, and in their assessment. The patient perspective is gaining renovated interest, as demonstrated by the increasing diffusion of Patient Reported Outcome Measure (PROMs) collection initiatives. The concept of Minimal Clinically Important Dif-ference (MID) is generally seen as the basis to estimate the actual effect perceived by the patient after a treatment, like a surgical intervention, but a universally recognized threshold has not yet been established. At the Orthopedic Institute Galeazzi (Milan, Italy) we began a digitized program of PROM collection in spine surgery by means of a digital platform, called Datareg. In this work we aim to investigate MID in the treatment of degenerated disc in terms of patients' perceptions as these are collected through the above electronic registry. We proposed a computation of MID on the basis of two PROM scores, and a critical comparison with a domain expert's proposal."
},
{
"pmid": "29888090",
"title": "The Data Gap in the EHR for Clinical Research Eligibility Screening.",
"abstract": "Much effort has been devoted to leverage EHR data for matching patients into clinical trials. However, EHRs may not contain all important data elements for clinical research eligibility screening. To better design research-friendly EHRs, an important step is to identify data elements frequently used for eligibility screening but not yet available in EHRs. This study fills this knowledge gap. Using the Alzheimer's disease domain as an example, we performed text mining on the eligibility criteria text in Clinicaltrials.gov to identify frequently used eligibility criteria concepts. We compared them to the EHR data elements of a cohort of Alzheimer's Disease patients to assess the data gap by usingthe OMOP Common Data Model to standardize the representations for both criteria concepts and EHR data elements. We identified the most common SNOMED CT concepts used in Alzheimer 's Disease trials, andfound 40% of common eligibility criteria concepts were not even defined in the concept space in the EHR dataset for a cohort of Alzheimer 'sDisease patients, indicating a significant data gap may impede EHR-based eligibility screening. The results of this study can be useful for designing targeted research data collection forms to help fill the data gap in the EHR."
},
{
"pmid": "24201027",
"title": "A review of approaches to identifying patient phenotype cohorts using electronic health records.",
"abstract": "OBJECTIVE\nTo summarize literature describing approaches aimed at automatically identifying patients with a common phenotype.\n\n\nMATERIALS AND METHODS\nWe performed a review of studies describing systems or reporting techniques developed for identifying cohorts of patients with specific phenotypes. Every full text article published in (1) Journal of American Medical Informatics Association, (2) Journal of Biomedical Informatics, (3) Proceedings of the Annual American Medical Informatics Association Symposium, and (4) Proceedings of Clinical Research Informatics Conference within the past 3 years was assessed for inclusion in the review. Only articles using automated techniques were included.\n\n\nRESULTS\nNinety-seven articles met our inclusion criteria. Forty-six used natural language processing (NLP)-based techniques, 24 described rule-based systems, 41 used statistical analyses, data mining, or machine learning techniques, while 22 described hybrid systems. Nine articles described the architecture of large-scale systems developed for determining cohort eligibility of patients.\n\n\nDISCUSSION\nWe observe that there is a rise in the number of studies associated with cohort identification using electronic medical records. Statistical analyses or machine learning, followed by NLP techniques, are gaining popularity over the years in comparison with rule-based systems.\n\n\nCONCLUSIONS\nThere are a variety of approaches for classifying patients into a particular phenotype. Different techniques and data sources are used, and good performance is reported on datasets at respective institutions. However, no system makes comprehensive use of electronic medical records addressing all of their known weaknesses."
},
{
"pmid": "22627647",
"title": "Automated identification of patients with pulmonary nodules in an integrated health system using administrative health plan data, radiology reports, and natural language processing.",
"abstract": "INTRODUCTION\nLung nodules are commonly encountered in clinical practice, yet little is known about their management in community settings. An automated method for identifying patients with lung nodules would greatly facilitate research in this area.\n\n\nMETHODS\nUsing members of a large, community-based health plan from 2006 to 2010, we developed a method to identify patients with lung nodules, by combining five diagnostic codes, four procedural codes, and a natural language processing algorithm that performed free text searches of radiology transcripts. An experienced pulmonologist reviewed a random sample of 116 radiology transcripts, providing a reference standard for the natural language processing algorithm.\n\n\nRESULTS\nWith the use of an automated method, we identified 7112 unique members as having one or more incident lung nodules. The mean age of the patients was 65 years (standard deviation 14 years). There were slightly more women (54%) than men, and Hispanics and non-whites comprised 45% of the lung nodule cohort. Thirty-six percent were never smokers whereas 11% were current smokers. Fourteen percent of the patients were subsequently diagnosed with lung cancer. The sensitivity and specificity of the natural language processing algorithm for identifying the presence of lung nodules were 96% and 86%, respectively, compared with clinician review. Among the true positive transcripts in the validation sample, only 35% were solitary and unaccompanied by one or more associated findings, and 56% measured 8 to 30 mm in diameter.\n\n\nCONCLUSIONS\nA combination of diagnostic codes, procedural codes, and a natural language processing algorithm for free text searching of radiology reports can accurately and efficiently identify patients with incident lung nodules, many of whom are subsequently diagnosed with lung cancer."
},
{
"pmid": "24108448",
"title": "Automated determination of metastases in unstructured radiology reports for eligibility screening in oncology clinical trials.",
"abstract": "Enrolling adequate numbers of patients that meet protocol eligibility criteria in a timely manner is critical, yet clinical trial accrual continues to be problematic. One approach to meet these accrual challenges is to utilize technology to automatically screen patients for clinical trial eligibility. This manuscript reports on the evaluation of different automated approaches to determine the metastatic status from unstructured radiology reports using the Clinical Trials Eligibility Database Integrated System (CTED). The study sample included all patients (N = 5,523) with radiologic diagnostic studies (N = 10,492) completed in a two-week period. Eight search algorithms (queries) within CTED were developed and applied to radiology reports. The performance of each algorithm was compared to a reference standard which consisted of a physician's review of the radiology reports. Sensitivity, specificity, positive, and negative predicted values were calculated for each algorithm. The number of patients identified by each algorithm varied from 187 to 330 and the number of true positive cases confirmed by physician review ranged from 171 to 199 across the algorithms. The best performing algorithm had sensitivity 94%, specificity 100%, positive predictive value 90%, negative predictive value 100%, and accuracy of 99%. Our evaluation process identified the optimal method for rapid identification of patients with metastatic disease through automated screening of unstructured radiology reports. The methods developed using the CTED system could be readily implemented at other institutions to enhance the efficiency of research staff in the clinical trials eligibility screening process."
},
{
"pmid": "24303276",
"title": "Identifying Abdominal Aortic Aneurysm Cases and Controls using Natural Language Processing of Radiology Reports.",
"abstract": "Prevalence of abdominal aortic aneurysm (AAA) is increasing due to longer life expectancy and implementation of screening programs. Patient-specific longitudinal measurements of AAA are important to understand pathophysiology of disease development and modifiers of abdominal aortic size. In this paper, we applied natural language processing (NLP) techniques to process radiology reports and developed a rule-based algorithm to identify AAA patients and also extract the corresponding aneurysm size with the examination date. AAA patient cohorts were determined by a hierarchical approach that: 1) selected potential AAA reports using keywords; 2) classified reports into AAA-case vs. non-case using rules; and 3) determined the AAA patient cohort based on a report-level classification. Our system was built in an Unstructured Information Management Architecture framework that allows efficient use of existing NLP components. Our system produced an F-score of 0.961 for AAA-case report classification with an accuracy of 0.984 for aneurysm size extraction."
},
{
"pmid": "23929403",
"title": "Validation of Case Finding Algorithms for Hepatocellular Cancer From Administrative Data and Electronic Health Records Using Natural Language Processing.",
"abstract": "BACKGROUND\nAccurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification.\n\n\nMETHODS\nWe identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated.\n\n\nRESULTS\nA total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68.\n\n\nCONCLUSIONS\nA combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data."
},
{
"pmid": "21807647",
"title": "EliXR: an approach to eligibility criteria extraction and representation.",
"abstract": "OBJECTIVE\nTo develop a semantic representation for clinical research eligibility criteria to automate semistructured information extraction from eligibility criteria text.\n\n\nMATERIALS AND METHODS\nAn analysis pipeline called eligibility criteria extraction and representation (EliXR) was developed that integrates syntactic parsing and tree pattern mining to discover common semantic patterns in 1000 eligibility criteria randomly selected from http://ClinicalTrials.gov. The semantic patterns were aggregated and enriched with unified medical language systems semantic knowledge to form a semantic representation for clinical research eligibility criteria.\n\n\nRESULTS\nThe authors arrived at 175 semantic patterns, which form 12 semantic role labels connected by their frequent semantic relations in a semantic network.\n\n\nEVALUATION\nThree raters independently annotated all the sentence segments (N=396) for 79 test eligibility criteria using the 12 top-level semantic role labels. Eight-six per cent (339) of the sentence segments were unanimously labelled correctly and 13.8% (55) were correctly labelled by two raters. The Fleiss' κ was 0.88, indicating a nearly perfect interrater agreement.\n\n\nCONCLUSION\nThis study present a semi-automated data-driven approach to developing a semantic network that aligns well with the top-level information structure in clinical research eligibility criteria text and demonstrates the feasibility of using the resulting semantic role labels to generate semistructured eligibility criteria with nearly perfect interrater reliability."
}
] |
International Journal of Biomedical Imaging | 31093268 | PMC6481128 | 10.1155/2019/7305832 | Brain Tumor Segmentation Based on Hybrid Clustering and Morphological Operations | Inference of tumor and edema areas from brain magnetic resonance imaging (MRI) data remains challenging owing to the complex structure of brain tumors, blurred boundaries, and external factors such as noise. To alleviate noise sensitivity and improve the stability of segmentation, an effective hybrid clustering algorithm combined with morphological operations is proposed for segmenting brain tumors in this paper. The main contributions of the paper are as follows: firstly, adaptive Wiener filtering is utilized for denoising, and morphological operations are used for removing nonbrain tissue, effectively reducing the method's sensitivity to noise. Secondly, K-means++ clustering is combined with the Gaussian kernel-based fuzzy C-means algorithm to segment images. This clustering not only improves the algorithm's stability, but also reduces the sensitivity of clustering parameters. Finally, the extracted tumor images are postprocessed using morphological operations and median filtering to obtain accurate representations of brain tumors. In addition, the proposed algorithm was compared with other current segmentation algorithms. The results show that the proposed algorithm performs better in terms of accuracy, sensitivity, specificity, and recall. | 2. Related WorkSegmentation of medical images is a very popular research topic, and many methods have been developed. Clustering algorithms for image segmentation are very popular among scholars, and many of these algorithms have been employed for image segmentation. Dhanalakshmi and Kanimozhi [10] proposed an algorithm for automatic segmentation of brain tumor images based on K-means clustering. During preprocessing, a median filter is used to remove artifacts and sharpen the image's edges. Seed points are randomly selected for K-means in this method. A binary mask is applied for identification of high-contrast categories. However, K-means clustering is more affected by abnormal points and is more sensitive to initialization.Kalaiselvi and Somasundaram [11] applied fuzzy C-means (FCM) to segmentation of brain tissue images, which is computationally more efficient owing to the initialization of seed points using the image histogram information. Yet, this method still does not address the sensitivity to noise and intensity inhomogeneity (IIH). Noreen et al. [12] introduced a hybrid MR segmentation method based on the discrete wavelet transform (DWT) and FCM for removal of inhomogeneity. This method applies the DWT to the input MR image, to obtain four subbands; then, the inverse discrete wavelet transform (IDWT) is applied to obtain a high-pass image. Finally, FCM clustering is performed to segment the image. Although this method addresses the sensitivity problem of intensity nonuniformity, it does not consider the uncertainty of the data space information. Christe et al. [13] combined K-means with fuzzy C-means. They defined the number of clusters, ambiguity, distance, and stopping criteria. Their method can handle overlapping intensities, but it cannot clearly define tissue boundaries. Wilson and Dhas [14] used K-means and FCM to detect iron in brain SWI, and compared the two algorithms. The experimental results showed that the FCM algorithm is better at detecting iron-containing regions, compared with K-means. Abdel-Maksoud et al. 
[15] reconsidered the advantages and disadvantages of K-means clustering and FCM clustering. They also proved that the K-means algorithm can detect brain tumors faster than the FCM algorithm, while the FCM algorithm can detect tumors that are not detected by K-means. They proposed to combine K-means clustering with FCM for segmentation. Their experimental results showed that the combination of the two algorithms is more advantageous than the individual algorithms. The disadvantage of this approach is that the two algorithms select their seed points in a random manner, which can easily result in overfitting.

Chuang et al. [16] proposed to add spatial information to the FCM algorithm and to update the membership function twice, which significantly improved the effect of FCM clustering. On this basis, Adhikari and Sing [17] introduced the conditional space fuzzy C-means (csFCM) clustering algorithm. The underlying idea is to apply an adjustment effect to the auxiliary variables corresponding to each pixel, which effectively reduces the algorithm's sensitivity to noise and intensity nonuniformity with respect to MRI data. Bai and Chen [18] proposed an improved FCM segmentation algorithm based on spatial information for infrared ship segmentation (sFCM), which introduced improvements in two respects: (1) addition of nonlocal spatial information based on the ship targets; (2) refinement of the local space constraints through the Markov random field using the spatial shape information of the ship's target contour. Ghosh and Mali [19] put forward a new FCM clustering application, which uses the firefly algorithm and a chaotic map to initialize the firefly population and adjusts the absorption coefficient to improve the mobility of the global search. The algorithm is called C-FAFCM. Al-Dmour and Al-Ani [20] proposed a fully automatic algorithm for brain tissue segmentation based on a clustering fusion methodology. They combined three clustering techniques (K-means, FCM, and self-organizing map (SOM)) with neural network models for training and testing. Classification was performed using a voting strategy, which significantly improved the algorithm's segmentation performance. Still, the stability of the algorithm remained unresolved.

Although current medical image segmentation algorithms reduce the sensitivity to noise to some extent, the stability of segmentation is still a major challenge. To alleviate the clustering algorithm's sensitivity to noise and to improve its stability, we propose the K++GKFCM algorithm, which benefits from the advantages of the two clustering algorithms. In addition, morphological operations are applied for preprocessing and postprocessing to further improve the accuracy of segmentation. Finally, the proposed method is compared with the K-means algorithm, the FCM algorithm, and improved clustering algorithms proposed in recent years. The results of this comparison show that the proposed algorithm performs better. | [
"27865153",
"29096577",
"16361080",
"26672055",
"24459099"
] | [
{
"pmid": "27865153",
"title": "Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation.",
"abstract": "We propose a dual pathway, 11-layers deep, three-dimensional Convolutional Neural Network for the challenging task of brain lesion segmentation. The devised architecture is the result of an in-depth analysis of the limitations of current networks proposed for similar applications. To overcome the computational burden of processing 3D medical scans, we have devised an efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data. Further, we analyze the development of deeper, thus more discriminative 3D CNNs. In order to incorporate both local and larger contextual information, we employ a dual pathway architecture that processes the input images at multiple scales simultaneously. For post-processing of the network's soft segmentation, we use a 3D fully connected Conditional Random Field which effectively removes false positives. Our pipeline is extensively evaluated on three challenging tasks of lesion segmentation in multi-channel MRI patient data with traumatic brain injuries, brain tumours, and ischemic stroke. We improve on the state-of-the-art for all three applications, with top ranking performance on the public benchmarks BRATS 2015 and ISLES 2015. Our method is computationally efficient, which allows its adoption in a variety of research and clinical settings. The source code of our implementation is made publicly available."
},
{
"pmid": "29096577",
"title": "Human brain atlasing: past, present and future.",
"abstract": "We have recently witnessed an explosion of large-scale initiatives and projects addressing mapping, modeling, simulation and atlasing of the human brain, including the BRAIN Initiative, the Human Brain Project, the Human Connectome Project (HCP), the Big Brain, the Blue Brain Project, the Allen Brain Atlas, the Brainnetome, among others. Besides these large and international initiatives, there are numerous mid-size and small brain atlas-related projects. My contribution to these global efforts has been to create adult human brain atlases in health and disease, and to develop atlas-based applications. For over two decades with my R&D lab I developed 35 brain atlases, licensed to 67 companies and made available in about 100 countries. This paper has two objectives. First, it provides an overview of the state of the art in brain atlasing. Second, as it is already 20 years from the release of our first brain atlas, I summarise my past and present efforts, share my experience in atlas creation, validation and commercialisation, compare with the state of the art, and propose future directions."
},
{
"pmid": "16361080",
"title": "Fuzzy c-means clustering with spatial information for image segmentation.",
"abstract": "A conventional FCM algorithm does not fully utilize the spatial information in the image. In this paper, we present a fuzzy c-means (FCM) algorithm that incorporates spatial information into the membership function for clustering. The spatial function is the summation of the membership function in the neighborhood of each pixel under consideration. The advantages of the new method are the following: (1) it yields regions more homogeneous than those of other methods, (2) it reduces the spurious blobs, (3) it removes noisy spots, and (4) it is less sensitive to noise than other techniques. This technique is a powerful method for noisy image segmentation and works for both single and multiple-feature data with spatial information."
},
{
"pmid": "26672055",
"title": "Infrared Ship Target Segmentation Based on Spatial Information Improved FCM.",
"abstract": "Segmentation of infrared (IR) ship images is always a challenging task, because of the intensity inhomogeneity and noise. The fuzzy C-means (FCM) clustering is a classical method widely used in image segmentation. However, it has some shortcomings, like not considering the spatial information or being sensitive to noise. In this paper, an improved FCM method based on the spatial information is proposed for IR ship target segmentation. The improvements include two parts: 1) adding the nonlocal spatial information based on the ship target and 2) using the spatial shape information of the contour of the ship target to refine the local spatial constraint by Markov random field. In addition, the results of K -means are used to initialize the improved FCM method. Experimental results show that the improved method is effective and performs better than the existing methods, including the existing FCM methods, for segmentation of the IR ship images."
},
{
"pmid": "24459099",
"title": "Comparison of 10 brain tissue segmentation methods using revisited IBSR annotations.",
"abstract": "PURPOSE\nGround-truth annotations from the well-known Internet Brain Segmentation Repository (IBSR) datasets consider Sulcal cerebrospinal fluid (SCSF) voxels as gray matter. This can lead to bias when evaluating the performance of tissue segmentation methods. In this work we compare the accuracy of 10 brain tissue segmentation methods analyzing the effects of SCSF ground-truth voxels on accuracy estimations.\n\n\nMATERIALS AND METHODS\nThe set of methods is composed by FAST, SPM5, SPM8, GAMIXTURE, ANN, FCM, KNN, SVPASEG, FANTASM, and PVC. Methods are evaluated using original IBSR ground-truth and ranked by means of their performance on pairwise comparisons using permutation tests. Afterward, the evaluation is repeated using IBSR ground-truth without considering SCSF.\n\n\nRESULTS\nThe Dice coefficient of all methods is affected by changes in SCSF annotations, especially on SPM5, SPM8 and FAST. When not considering SCSF voxels, SVPASEG (0.90 ± 0.01) and SPM8 (0.91 ± 0.01) are the methods from our study that appear more suitable for gray matter tissue segmentation, while FAST (0.89 ± 0.02) is the best tool for segmenting white matter tissue.\n\n\nCONCLUSION\nThe performance and the accuracy of methods on IBSR images vary notably when not considering SCSF voxels. The fact that three of the most common methods (FAST, SPM5, and SPM8) report an important change in their accuracy suggest to consider these differences in labeling for new comparative studies."
}
] |
BMC Medical Informatics and Decision Making | 30777059 | PMC6483150 | 10.1186/s12911-019-0747-6 | Importance of medical data preprocessing in predictive modeling and risk factor discovery for the frailty syndrome | BackgroundIncreasing life expectancy results in more elderly people struggling with age related diseases and functional conditions. This poses huge challenges towards establishing new approaches for maintaining health at a higher age. An important aspect for age related deterioration of the general patient condition is frailty. The frailty syndrome is associated with a high risk for falls, hospitalization, disability, and finally increased mortality. Using predictive data mining enables the discovery of potential risk factors and can be used as clinical decision support system, which provides the medical doctor with information on the probable clinical patient outcome. This enables the professional to react promptly and to avert likely adverse events in advance.MethodsMedical data of 474 study participants containing 284 health related parameters, including questionnaire answers, blood parameters and vital parameters from the Toledo Study for Healthy Aging (TSHA) was used. Binary classification models were built in order to distinguish between frail and non-frail study subjects.ResultsUsing the available TSHA data and the discovered potential predictors, it was possible to design, develop and evaluate a variety of different predictive models for the frailty syndrome. The best performing model was the support vector machine (SVM, 78.31%). Moreover, a methodology was developed, making it possible to explore and to use incomplete medical data and further identify potential predictors and enable interpretability.ConclusionsThis work demonstrates that it is feasible to use incomplete, imbalanced medical data for the development of a predictive model for the frailty syndrome. Moreover, potential predictive factors have been discovered, which were clinically approved by the clinicians. Future work will improve prediction accuracy, especially with regard to separating the group of frail patients into frail and pre-frail ones and analyze the differences among them.Electronic supplementary materialThe online version of this article (10.1186/s12911-019-0747-6) contains supplementary material, which is available to authorized users. | Related workThe main focus of this paper lies in building predictive models for the frailty syndrome and in discovering potential predictors. Consequently, it will be reviewed in what follows, the existing literature related to data mining in the medical domain and frailty.Data mining in the medical domainPredictive data mining is becoming an important analytical instrument for the scientific community and clinical practitioners in the field of medicine [4]. Secondary use of patient and clinical study data is able to enhance health care experiences for individuals. Further, it enables the expansion of knowledge about diseases and treatments and leads to an increase of efficiency and effectiveness of health care systems [7]. Moreover, molecular data holds the potential to offer insights on single patients, therefore changing decision-making strategies. 
Thus, it seems predictive data mining will be a strong ally for the transformation of medicine from population-based to personalized practice.

Medical data has already been used successfully for developing various clinical decision support systems (CDSSs), which significantly impact practitioners' performance and the health care process in a positive way and will continue to do so in the future [8, 9]. Nevertheless, there is still a lot of room for improvement, and the remaining issues have to be tackled.

Regarding the building of predictive models, the currently widely used neural networks (NN) [10] and deep learning approaches [11] are a very robust group of techniques that perform well and deliver very promising results, but they are very hard to interpret because of their complex inner workings. Simpler techniques such as the naive Bayes classifier (NB) [12], linear discriminant analysis (LDA) [13], support vector machines (SVM) [14] and tree-based approaches [15] produce results that are much easier to interpret. Consequently, we propose in this paper to use the latter kind of techniques.

An important characteristic of medical data is that the involvement of the medical professional is paramount in order to understand it. Interactive machine learning (iML) [16] approaches allow the physician to be inserted into the "loop" of learning, and that is what we have attempted to realize in this research.

Frailty

The frailty syndrome was defined by Fried et al. [5] as a syndrome in which three or more of the following criteria are present: unintentional weight loss (10 lbs/4.54 kg in the past year), self-reported exhaustion, weakness (measured via grip strength), slow walking speed, and low physical activity. Subjects with deficits in none of the criteria score 0, which means they are not frail. Those who have deficits in 1 or 2 criteria are called intermediate frail or pre-frail. All higher scores lead to the classification frail.

Frailty is considered highly prevalent in old age and is associated with an elevated risk for falls, disability, institutionalization, hospitalization, and mortality [5]. However, it should not be considered synonymous with disability or comorbidity. Fried et al. state that comorbidity should rather be treated as an etiologic risk factor for frailty and disability as an outcome. Disability cannot be reversed, but it is preceded, sometimes by several years, by the frailty syndrome, which can be reversed, and thus prevented from worsening and its progression monitored.

Even though we use this work [5] as a reference, other literature regarding frailty is also presented in this research. Apart from the index proposed by Fried et al., other indexes have emerged [17, 18]. Moreover, frailty is entangled with other concepts like disability and comorbidity, and some effort has already been made to separate them [19]. Frailty has also been used successfully as a predictor itself, for example for predicting postoperative outcomes [20], where one study [21] found that it is more useful than conventional methods. These findings affirm the potential of the syndrome definitions and available indexes to serve as a stable concept.

Frailty seems to be strongly connected to physical activity and exercise, which have been proven to be protective factors [22, 23]. Further, it seems that the syndrome is closely related to mental impairment and mental health, especially depression [24].
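As a concrete illustration of the Fried scoring rule described above, the following minimal Python sketch maps the five binary criteria to the three frailty categories. This is not code from the paper: the criterion names, the fried_category function, and the treatment of missing criteria are illustrative assumptions, while the thresholds (0 deficits = non-frail, 1-2 = pre-frail, 3 or more = frail) follow the definition by Fried et al. [5] quoted in the text.

```python
# Minimal sketch (not the authors' implementation): mapping Fried's five
# criteria [5] to the frailty categories used as class labels.

FRIED_CRITERIA = [
    "unintentional_weight_loss",  # >= 10 lbs / 4.54 kg in the past year
    "self_reported_exhaustion",
    "weakness",                   # low grip strength
    "slow_walking_speed",
    "low_physical_activity",
]

def fried_category(deficits: dict) -> str:
    """Return 'non-frail', 'pre-frail' or 'frail' from binary criterion flags.

    `deficits` maps each criterion name to True if the deficit is present.
    Missing criteria are treated as absent, which is a simplifying assumption.
    """
    score = sum(bool(deficits.get(c, False)) for c in FRIED_CRITERIA)
    if score == 0:
        return "non-frail"
    if score <= 2:
        return "pre-frail"
    return "frail"

# Example: two deficits present -> pre-frail
print(fried_category({"weakness": True, "slow_walking_speed": True}))
```

In practice, each criterion flag would itself be derived from measured variables (for example, grip strength cut-offs), which this sketch deliberately leaves out.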
Increased age and not having a daily consumption of vegetables and fruits were each associated with frailty or pre-frailty [25]. There is also a considerable gender aspect to this syndrome: women are more likely to become frail at a higher age, and frail women have a higher risk of developing disability, being hospitalized, and dying [26]. Moreover, some physiological blood parameters seem to be related to frailty and hold the potential to serve as markers and/or predictors. Studies found that this geriatric syndrome is also related to increased inflammation and elevated markers of blood clotting [27]. Baylis et al. (2013) [28] investigated the relationship between immune-endocrine parameters and frailty, as well as mortality after 10 years, in females and males aged between 65 and 70 years. Their findings were that higher baseline levels of white blood cell counts, lower levels of dehydroepiandrosterone sulfate (DHEAS), and a higher cortisol to DHEAS ratio could be related to a higher probability of frailty in the future. Additionally, it was found that the presence of diabetes is also a risk factor for the onset of the frailty syndrome [25]. In conclusion, a lot of suitable predictors (preventive and risk factors) have already been found and are used for frailty screening and also prediction.

From the previous review of the literature related to the frailty syndrome, the main conclusions are:
- Fried's frailty score [5] seems to be the one widely used by physicians
- In the research of Fried et al. the following factors are used to establish the frailty level (non-frail, pre-frail and frail):
  - unintentional weight loss (10 lbs in past year)
  - self-reported exhaustion
  - weakness (grip strength)
  - slow walking speed
  - low physical activity.

These variables are highly correlated with the variable presenting the frailty status. Thus, we propose in our research to use any other factors (variables) to predict frailty. | [
"17188928",
"25458730",
"17077452",
"21422100",
"22751758",
"25462637",
"27747607",
"18299493",
"20510798",
"24804971",
"8190152",
"24103860",
"12418947",
"22388931",
"22159772",
"6418786",
"1202204",
"8437031",
"17890752",
"24589914",
"17106103"
] | [
{
"pmid": "17188928",
"title": "Predictive data mining in clinical medicine: current issues and guidelines.",
"abstract": "BACKGROUND\nThe widespread availability of new computational methods and tools for data analysis and predictive modeling requires medical informatics researchers and practitioners to systematically select the most appropriate strategy to cope with clinical prediction problems. In particular, the collection of methods known as 'data mining' offers methodological and technical solutions to deal with the analysis of medical data and construction of prediction models. A large variety of these methods requires general and simple guidelines that may help practitioners in the appropriate selection of data mining tools, construction and validation of predictive models, along with the dissemination of predictive models within clinical environments.\n\n\nPURPOSE\nThe goal of this review is to discuss the extent and role of the research area of predictive data mining and to propose a framework to cope with the problems of constructing, assessing and exploiting data mining models in clinical medicine.\n\n\nMETHODS\nWe review the recent relevant work published in the area of predictive data mining in clinical medicine, highlighting critical issues and summarizing the approaches in a set of learned lessons.\n\n\nRESULTS\nThe paper provides a comprehensive review of the state of the art of predictive data mining in clinical medicine and gives guidelines to carry out data mining studies in this field.\n\n\nCONCLUSIONS\nPredictive data mining is becoming an essential instrument for researchers and clinical practitioners in medicine. Understanding the main issues underlying these methods and the application of agreed and standardized procedures is mandatory for their deployment and the dissemination of results. Thanks to the integration of molecular and clinical data taking place within genomic medicine, the area has recently not only gained a fresh impulse but also a new set of complex problems it needs to address."
},
{
"pmid": "17077452",
"title": "Toward a national framework for the secondary use of health data: an American Medical Informatics Association White Paper.",
"abstract": "Secondary use of health data applies personal health information (PHI) for uses outside of direct health care delivery. It includes such activities as analysis, research, quality and safety measurement, public health, payment, provider certification or accreditation, marketing, and other business applications, including strictly commercial activities. Secondary use of health data can enhance health care experiences for individuals, expand knowledge about disease and appropriate treatments, strengthen understanding about effectiveness and efficiency of health care systems, support public health and security goals, and aid businesses in meeting customers' needs. Yet, complex ethical, political, technical, and social issues surround the secondary use of health data. While not new, these issues play increasingly critical and complex roles given current public and private sector activities not only expanding health data volume, but also improving access to data. Lack of coherent policies and standard \"good practices\" for secondary use of health data impedes efforts to strengthen the U.S. health care system. The nation requires a framework for the secondary use of health data with a robust infrastructure of policies, standards, and best practices. Such a framework can guide and facilitate widespread collection, storage, aggregation, linkage, and transmission of health data. The framework will provide appropriate protections for legitimate secondary use."
},
{
"pmid": "21422100",
"title": "Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings.",
"abstract": "OBJECTIVE\nTo synthesize the literature on clinical decision-support systems' (CDSS) impact on healthcare practitioner performance and patient outcomes.\n\n\nDESIGN\nLiterature search on Medline, Embase, Inspec, Cinahl, Cochrane/Dare and analysis of high-quality systematic reviews (SRs) on CDSS in hospital settings. Two-stage inclusion procedure: (1) selection of publications on predefined inclusion criteria; (2) independent methodological assessment of preincluded SRs by the 11-item measurement tool, AMSTAR. Inclusion of SRs with AMSTAR score 9 or above. SRs were thereafter rated on level of evidence. Each stage was performed by two independent reviewers.\n\n\nRESULTS\n17 out of 35 preincluded SRs were of high methodological quality and further analyzed. Evidence that CDSS significantly impacted practitioner performance was found in 52 out of 91 unique studies of the 16 SRs examining this effect (57%). Only 25 out of 82 unique studies of the 16 SRs reported evidence that CDSS positively impacted patient outcomes (30%).\n\n\nCONCLUSIONS\nFew studies have found any benefits on patient outcomes, though many of these have been too small in sample size or too short in time to reveal clinically important effects. There is significant evidence that CDSS can positively impact healthcare providers' performance with drug ordering and preventive care reminder systems as most clear examples. These outcomes may be explained by the fact that these types of CDSS require a minimum of patient data that are largely available before the advice is (to be) generated: at the time clinicians make the decisions."
},
{
"pmid": "22751758",
"title": "Effect of clinical decision-support systems: a systematic review.",
"abstract": "BACKGROUND\nDespite increasing emphasis on the role of clinical decision-support systems (CDSSs) for improving care and reducing costs, evidence to support widespread use is lacking.\n\n\nPURPOSE\nTo evaluate the effect of CDSSs on clinical outcomes, health care processes, workload and efficiency, patient satisfaction, cost, and provider use and implementation.\n\n\nDATA SOURCES\nMEDLINE, CINAHL, PsycINFO, and Web of Science through January 2011.\n\n\nSTUDY SELECTION\nInvestigators independently screened reports to identify randomized trials published in English of electronic CDSSs that were implemented in clinical settings; used by providers to aid decision making at the point of care; and reported clinical, health care process, workload, relationship-centered, economic, or provider use outcomes.\n\n\nDATA EXTRACTION\nInvestigators extracted data about study design, participant characteristics, interventions, outcomes, and quality.\n\n\nDATA SYNTHESIS\n148 randomized, controlled trials were included. A total of 128 (86%) assessed health care process measures, 29 (20%) assessed clinical outcomes, and 22 (15%) measured costs. Both commercially and locally developed CDSSs improved health care process measures related to performing preventive services (n= 25; odds ratio [OR], 1.42 [95% CI, 1.27 to 1.58]), ordering clinical studies (n= 20; OR, 1.72 [CI, 1.47 to 2.00]), and prescribing therapies (n= 46; OR, 1.57 [CI, 1.35 to 1.82]). Few studies measured potential unintended consequences or adverse effects.\n\n\nLIMITATIONS\nStudies were heterogeneous in interventions, populations, settings, and outcomes. Publication bias and selective reporting cannot be excluded.\n\n\nCONCLUSION\nBoth commercially and locally developed CDSSs are effective at improving health care process measures across diverse settings, but evidence for clinical, economic, workload, and efficiency outcomes remains sparse. This review expands knowledge in the field by demonstrating the benefits of CDSSs outside of experienced academic centers.\n\n\nPRIMARY FUNDING SOURCE\nAgency for Healthcare Research and Quality."
},
{
"pmid": "25462637",
"title": "Deep learning in neural networks: an overview.",
"abstract": "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks."
},
{
"pmid": "27747607",
"title": "Interactive machine learning for health informatics: when do we need the human-in-the-loop?",
"abstract": "Machine learning (ML) is the fastest growing field in computer science, and health informatics is among the greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML-approaches suffer of insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet well used, so we define it as \"algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human.\" This \"human-in-the-loop\" can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem, reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase."
},
{
"pmid": "18299493",
"title": "Comparison of 2 frailty indexes for prediction of falls, disability, fractures, and death in older women.",
"abstract": "BACKGROUND\nFrailty, as defined by the index derived from the Cardiovascular Health Study (CHS index), predicts risk of adverse outcomes in older adults. Use of this index, however, is impractical in clinical practice.\n\n\nMETHODS\nWe conducted a prospective cohort study in 6701 women 69 years or older to compare the predictive validity of a simple frailty index with the components of weight loss, inability to rise from a chair 5 times without using arms, and reduced energy level (Study of Osteoporotic Fractures [SOF index]) with that of the CHS index with the components of unintentional weight loss, poor grip strength, reduced energy level, slow walking speed, and low level of physical activity. Women were classified as robust, of intermediate status, or frail using each index. Falls were reported every 4 months for 1 year. Disability (> or =1 new impairment in performing instrumental activities of daily living) was ascertained at 4(1/2) years, and fractures and deaths were ascertained during 9 years of follow-up. Area under the curve (AUC) statistics from receiver operating characteristic curve analysis and -2 log likelihood statistics were compared for models containing the CHS index vs the SOF index.\n\n\nRESULTS\nIncreasing evidence of frailty as defined by either the CHS index or the SOF index was similarly associated with an increased risk of adverse outcomes. Frail women had a higher age-adjusted risk of recurrent falls (odds ratio, 2.4), disability (odds ratio, 2.2-2.8), nonspine fracture (hazard ratio, 1.4-1.5), hip fracture (hazard ratio, 1.7-1.8), and death (hazard ratio, 2.4-2.7) (P < .001 for all models). The AUC comparisons revealed no differences between models with the CHS index vs the SOF index in discriminating falls (AUC = 0.61 for both models; P = .66), disability (AUC = 0.64; P = .23), nonspine fracture (AUC = 0.55; P = .80), hip fracture (AUC = 0.63; P = .64), or death (AUC = 0.72; P = .10). Results were similar when -2 log likelihood statistics were compared.\n\n\nCONCLUSION\nThe simple SOF index predicts risk of falls, disability, fracture, and death as well as the more complex CHS index and may provide a useful definition of frailty to identify older women at risk of adverse health outcomes in clinical practice."
},
{
"pmid": "20510798",
"title": "Frailty as a predictor of surgical outcomes in older patients.",
"abstract": "BACKGROUND\nPreoperative risk assessment is important yet inexact in older patients because physiologic reserves are difficult to measure. Frailty is thought to estimate physiologic reserves, although its use has not been evaluated in surgical patients. We designed a study to determine if frailty predicts surgical complications and enhances current perioperative risk models.\n\n\nSTUDY DESIGN\nWe prospectively measured frailty in 594 patients (age 65 years or older) presenting to a university hospital for elective surgery between July 2005 and July 2006. Frailty was classified using a validated scale (0 to 5) that included weakness, weight loss, exhaustion, low physical activity, and slowed walking speed. Patients scoring 4 to 5 were classified as frail, 2 to 3 were intermediately frail, and 0 to 1 were nonfrail. Main outcomes measures were 30-day surgical complications, length of stay, and discharge disposition. Multiple logistic regression (complications and discharge) and negative binomial regression (length of stay) were done to analyze frailty and postoperative outcomes associations.\n\n\nRESULTS\nPreoperative frailty was associated with an increased risk for postoperative complications (intermediately frail: odds ratio [OR] 2.06; 95% CI 1.18-3.60; frail: OR 2.54; 95% CI 1.12-5.77), length of stay (intermediately frail: incidence rate ratio 1.49; 95% CI 1.24-1.80; frail: incidence rate ratio 1.69; 95% CI 1.28-2.23), and discharge to a skilled or assisted-living facility after previously living at home (intermediately frail: OR 3.16; 95% CI 1.0-9.99; frail: OR 20.48; 95% CI 5.54-75.68). Frailty improved predictive power (p < 0.01) of each risk index (ie, American Society of Anesthesiologists, Lee, and Eagle scores).\n\n\nCONCLUSIONS\nFrailty independently predicts postoperative complications, length of stay, and discharge to a skilled or assisted-living facility in older surgical patients and enhances conventional risk models. Assessing frailty using a standardized definition can help patients and physicians make more informed decisions."
},
{
"pmid": "24804971",
"title": "Multidimensional frailty score for the prediction of postoperative mortality risk.",
"abstract": "IMPORTANCE\nThe number of geriatric patients who undergo surgery has been increasing, but there are insufficient tools to predict postoperative outcomes in the elderly.\n\n\nOBJECTIVE\nTo design a predictive model for adverse outcomes in older surgical patients.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nFrom October 19, 2011, to July 31, 2012, a single tertiary care center enrolled 275 consecutive elderly patients (aged ≥65 years) undergoing intermediate-risk or high-risk elective operations in the Department of Surgery.\n\n\nMAIN OUTCOMES AND MEASURES\nThe primary outcome was the 1-year all-cause mortality rate. The secondary outcomes were postoperative complications (eg, pneumonia, urinary tract infection, delirium, acute pulmonary thromboembolism, and unplanned intensive care unit admission), length of hospital stay, and discharge to nursing facility.\n\n\nRESULTS\nTwenty-five patients (9.1%) died during the follow-up period (median [interquartile range], 13.3 [11.5-16.1] months), including 4 in-hospital deaths after surgery. Twenty-nine patients (10.5%) experienced at least 1 complication after surgery and 24 (8.7%) were discharged to nursing facilities. Malignant disease and low serum albumin levels were more common in the patients who died. Among the geriatric assessment domains, Charlson Comorbidity Index, dependence in activities of daily living, dependence in instrumental activities of daily living, dementia, risk of delirium, short midarm circumference, and malnutrition were associated with increased mortality rates. A multidimensional frailty score model composed of the above items predicted all-cause mortality rates more accurately than the American Society of Anesthesiologists classification (area under the receiver operating characteristic curve, 0.821 vs 0.647; P = .01). The sensitivity and specificity for predicting all-cause mortality rates were 84.0% and 69.2%, respectively, according to the model's cutoff point (>5 vs ≤5). High-risk patients (multidimensional frailty score >5) showed increased postoperative mortality risk (hazard ratio, 9.01; 95% CI, 2.15-37.78; P = .003) and longer hospital stays after surgery (median [interquartile range], 9 [5-15] vs 6 [3-9] days; P < .001).\n\n\nCONCLUSIONS AND RELEVANCE\nThe multidimensional frailty score based on comprehensive geriatric assessment is more useful than conventional methods for predicting outcomes in geriatric patients undergoing surgery."
},
{
"pmid": "8190152",
"title": "Exercise training and nutritional supplementation for physical frailty in very elderly people.",
"abstract": "BACKGROUND\nAlthough disuse of skeletal muscle and undernutrition are often cited as potentially reversible causes of frailty in elderly people, the efficacy of interventions targeted specifically at these deficits has not been carefully studied.\n\n\nMETHODS\nWe conducted a randomized, placebo-controlled trial comparing progressive resistance exercise training, multinutrient supplementation, both interventions, and neither in 100 frail nursing home residents over a 10-week period.\n\n\nRESULTS\nThe mean (+/- SE) age of the 63 women and 37 men enrolled in the study was 87.1 +/- 0.6 years (range, 72 to 98); 94 percent of the subjects completed the study. Muscle strength increased by 113 +/- 8 percent in the subjects who underwent exercise training, as compared with 3 +/- 9 percent in the nonexercising subjects (P < 0.001). Gait velocity increased by 11.8 +/- 3.8 percent in the exercisers but declined by 1.0 +/- 3.8 percent in the nonexercisers (P = 0.02). Stair-climbing power also improved in the exercisers as compared with the nonexercisers (by 28.4 +/- 6.6 percent vs. 3.6 +/- 6.7 percent, P = 0.01), as did the level of spontaneous physical activity. Cross-sectional thigh-muscle area increased by 2.7 +/- 1.8 percent in the exercisers but declined by 1.8 +/- 2.0 percent in the nonexercisers (P = 0.11). The nutritional supplement had no effect on any primary outcome measure. Total energy intake was significantly increased only in the exercising subjects who also received nutritional supplementation.\n\n\nCONCLUSIONS\nHigh-intensity resistance exercise training is a feasible and effective means of counteracting muscle weakness and physical frailty in very elderly people. In contrast, multi-nutrient supplementation without concomitant exercise does not reduce muscle weakness or physical frailty."
},
{
"pmid": "24103860",
"title": "Diabetes risk factors, diabetes risk algorithms, and the prediction of future frailty: the Whitehall II prospective cohort study.",
"abstract": "OBJECTIVE\nTo examine whether established diabetes risk factors and diabetes risk algorithms are associated with future frailty.\n\n\nDESIGN\nProspective cohort study. Risk algorithms at baseline (1997-1999) were the Framingham Offspring, Cambridge, and Finnish diabetes risk scores.\n\n\nSETTING\nCivil service departments in London, United Kingdom.\n\n\nPARTICIPANTS\nThere were 2707 participants (72% men) aged 45 to 69 years at baseline assessment and free of diabetes.\n\n\nMEASUREMENTS\nRisk factors (age, sex, family history of diabetes, body mass index, waist circumference, systolic and diastolic blood pressure, antihypertensive and corticosteroid treatments, history of high blood glucose, smoking status, physical activity, consumption of fruits and vegetables, fasting glucose, HDL-cholesterol, and triglycerides) were used to construct the risk algorithms. Frailty, assessed during a resurvey in 2007-2009, was denoted by the presence of 3 or more of the following indicators: self-reported exhaustion, low physical activity, slow walking speed, low grip strength, and weight loss; \"prefrailty\" was defined as having 2 or fewer of these indicators.\n\n\nRESULTS\nAfter a mean follow-up of 10.5 years, 2.8% of the sample was classified as frail and 37.5% as prefrail. Increased age, being female, stopping smoking, low physical activity, and not having a daily consumption of fruits and vegetables were each associated with frailty or prefrailty. The Cambridge and Finnish diabetes risk scores were associated with frailty/prefrailty with odds ratios per 1 SD increase (disadvantage) in score of 1.18 (95% confidence interval: 1.09-1.27) and 1.27 (1.17-1.37), respectively.\n\n\nCONCLUSION\nSelected diabetes risk factors and risk scores are associated with subsequent frailty. Risk scores may have utility for frailty prediction in clinical practice."
},
{
"pmid": "12418947",
"title": "Frailty and activation of the inflammation and coagulation systems with and without clinical comorbidities: results from the Cardiovascular Health Study.",
"abstract": "BACKGROUND\nThe biological basis of frailty has been difficult to establish owing to the lack of a standard definition, its complexity, and its frequent coexistence with illness.\n\n\nOBJECTIVE\nTo establish the biological correlates of frailty in the presence and absence of concurrent cardiovascular disease and diabetes mellitus.\n\n\nMETHODS\nParticipants were 4735 community-dwelling adults 65 years and older. Frail, intermediate, and nonfrail subjects were identified by a validated screening tool and exclusion criteria. Bivariate relationships between frailty level and physiological measures were evaluated by Pearson chi2 tests for categorical variables and analysis of variance F tests for continuous variables. Multinomial logistic regression was performed to evaluate multivariable relationships between frailty status and physiological measures.\n\n\nRESULTS\nOf 4735 Cardiovascular Health Study participants, 299 (6.3%) were identified as frail, 2147 (45.3%) as intermediate, and 2289 (48.3%) as not frail. Frail vs nonfrail participants had increased mean +/- SD levels of C-reactive protein (5.5 +/- 9.8 vs 2.7 +/- 4.0 mg/L), factor VIII (13 790 +/- 4480 vs 11 860 +/- 3460 mg/dL), and, in a smaller subset, D dimer (647 +/- 1033 vs 224 +/- 258 ng/mL) (P< or =.001 for all, chi2 test for trend). These differences persisted when individuals with cardiovascular disease and diabetes were excluded and after adjustment for age, sex, and race.\n\n\nCONCLUSIONS\nThese findings support the hypothesis that there is a specific physiological basis to the geriatric syndrome of frailty that is characterized in part by increased inflammation and elevated markers of blood clotting and that these physiological differences persist when those with diabetes and cardiovascular disease are excluded."
},
{
"pmid": "22388931",
"title": "Immune-endocrine biomarkers as predictors of frailty and mortality: a 10-year longitudinal study in community-dwelling older people.",
"abstract": "Frailty is a multidimensional geriatric syndrome characterised by a state of increased vulnerability to disease. Its causes are unclear, limiting opportunities for intervention. Age-related changes to the immune-endocrine axis are implicated. This study investigated the associations between the immune-endocrine axis and frailty as well as mortality 10 years later among men and women aged 65 to 70 years. We studied 254 participants of the Hertfordshire Ageing Study at baseline and 10-year follow-up. At baseline, they completed a health questionnaire and had collection of blood samples for immune-endocrine analysis. At follow-up, Fried frailty was characterised and mortality ascertained. Higher baseline levels of differential white cell counts (WCC), lower levels of dehydroepiandosterone sulphate (DHEAS) and higher cortisol:DHEAS ratio were all significantly associated with increased odds of frailty at 10-year follow-up. Baseline WCC and cortisol:DHEAS clearly discriminated between individuals who went on to be frail at follow-up. We present the first evidence that immune-endocrine biomarkers are associated with the likelihood of frailty as well as mortality over a 10-year period. This augments our understanding of the aetiology of frailty, and suggests that a screening programme at ages 60-70 years could help to identify individuals who are at high risk of becoming frail and who would benefit from early, targeted intervention, for example with DHEA supplementation or anti-inflammatory strategies. Progress towards the prevention of frailty would bring major health and socio-economic benefits at the individual and the population level."
},
{
"pmid": "22159772",
"title": "The prevalence of frailty syndrome in an older population from Spain. The Toledo Study for Healthy Aging.",
"abstract": "OBJECTIVE\nTo assess the prevalence of the frailty syndrome and its associated variables among the older adult population in the province of Toledo (Spain).\n\n\nMETHODS\nData were taken from the Toledo Study for Healthy Aging, a population-based study conducted on 2,488 individuals aged 65 years and older. Study participants were selected by a two-stage random sampling from the municipal census of Toledo, covering both institutionalized and community dwelling persons from rural and urban settings. Data were collected from 2006 to 2009, and included information on social support, activities of daily living, comorbidity, physical activity, quality of life, depressive symptoms, and cognitive function. In addition, a nurse collected anthropometric data, conducted tests of physical performance (walk speed, upper and lower extremities strength, and the stand-and-sit from a chair test) and obtained a blood sample. The diagnosis of the frailty syndrome was based on the Fried criteria (weakness, low speed, low physical activity, exhaustion, and weight loss).\n\n\nRESULTS\nIn total, 41.8% (95% confidence interval [CI] 39.4-44.2%) of the study participants were prefrail, and 8.4% (95% CI 7.1-9.8%) were frail. There were no differences in the prevalence of frailty by sex, level of education, occupation, marital status, or place of residence. The frequency of the frailty syndrome increased with age, and was higher in those with disability, depression, hip fracture and other comorbidity, such as cardiovascular disease and disorders of the central nervous system.\n\n\nCONCLUSIONS\nThe prevalence of the frailty syndrome in older Spanish adults is high and similar to that reported in other populations in the Mediterranean basin."
},
{
"pmid": "6418786",
"title": "Assessing self-maintenance: activities of daily living, mobility, and instrumental activities of daily living.",
"abstract": "The aging of the population of the United States and a concern for the well-being of older people have hastened the emergence of measures of functional health. Among these, measures of basic activities of daily living, mobility, and instrumental activities of daily living have been particularly useful and are now widely available. Many are defined in similar terms and are built into available comprehensive instruments. Although studies of reliability and validity continue to be needed, especially of predictive validity, there is documented evidence that these measures of self-maintaining function can be reliably used in clinical evaluations as well as in program evaluations and in planning. Current scientific evidence indicates that evaluation by these measures helps to identify problems that require treatment or care. Such evaluation also produces useful information about prognosis and is important in monitoring the health and illness of elderly people."
},
{
"pmid": "8437031",
"title": "The Physical Activity Scale for the Elderly (PASE): development and evaluation.",
"abstract": "A Physical Activity Scale for the Elderly (PASE) was evaluated in a sample of community-dwelling, older adults. Respondents were randomly assigned to complete the PASE by mail or telephone before or after a home visit assessment. Item weights for the PASE were derived by regressing a physical activity principal component score on responses to the PASE. The component score was based on 3-day motion sensor counts, a 3-day physical activity dairy and a global activity self-assessment. Test-retest reliability, assessed over a 3-7 week interval, was 0.75 (95% CI = 0.69-0.80). Reliability for mail administration (r = 0.84) was higher than for telephone administration (r = 0.68). Construct validity was established by correlating PASE scores with health status and physiologic measures. As hypothesized, PASE scores were positively associated with grip strength (r = 0.37), static balance (r = +0.33), leg strength (r = 0.25) and negatively correlated with resting heart rate (r = -0.13), age (r = -0.34) and perceived health status (r = -0.34); and overall Sickness Impact Profile score (r = -0.42). The PASE is a brief, easily scored, reliable and valid instrument for the assessment of physical activity in epidemiologic studies of older people."
},
{
"pmid": "17890752",
"title": "Adolphe Quetelet (1796-1874)--the average man and indices of obesity.",
"abstract": "The quest for a practical index of relative body weight that began shortly after actuaries reported the increased mortality of their overweight policyholders culminated after World War II, when the relationship between weight and cardiovascular disease became the subject of epidemiological studies. It became evident then that the best index was the ratio of the weight in kilograms divided by the square of the height in meters, or the Quetelet Index described in 1832. Adolphe Quetelet (1796-1874) was a Belgian mathematician, astronomer and statistician, who developed a passionate interest in probability calculus that he applied to study human physical characteristics and social aptitudes. His pioneering cross-sectional studies of human growth led him to conclude that other than the spurts of growth after birth and during puberty, 'the weight increases as the square of the height', known as the Quetelet Index until it was termed the Body Mass Index in 1972 by Ancel Keys (1904-2004). For his application of comparative statistics to social conditions and moral issues, Quetelet is considered a founder of the social sciences. His principal work, 'A Treatise of Man and the development of his faculties' published in 1835 is considered 'one of the greatest books of the 19th century'. A tireless promoter of statistical data collection based on standard methods and definitions, Quetelet organized in 1853 the first International Statistical Congress, which launched the development of 'a uniform nomenclature of the causes of death applicable to all countries', progenitor of the current International Classification of Diseases."
},
{
"pmid": "24589914",
"title": "Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study.",
"abstract": "Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The \"true\" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made \"missing at random,\" and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data."
},
{
"pmid": "17106103",
"title": "Analysis of reproductive performance of lactating cows on large dairy farms using machine learning algorithms.",
"abstract": "The fertility of lactating dairy cows is economically important, but the mean reproductive performance of Holstein cows has declined during the past 3 decades. Traits such as first-service conception rate and pregnancy status at 150 d in milk (DIM) are influenced by numerous explanatory factors common to specific farms or individual cows on these farms. Machine learning algorithms offer great flexibility with regard to problems of multicollinearity, missing values, or complex interactions among variables. The objective of this study was to use machine learning algorithms to identify factors affecting the reproductive performance of lactating Holstein cows on large dairy farms. This study used data from farms in the Alta Genetics Advantage progeny-testing program. Production and reproductive records from 153 farms were obtained from on-farm DHI-Plus, Dairy Comp 305, or PCDART herd management software. A survey regarding management, facilities, labor, nutrition, reproduction, genetic selection, climate, and milk production was completed by managers of 103 farms; body condition scores were measured by a single evaluator on 63 farms; and temperature data were obtained from nearby weather stations. The edited data consisted of 31,076 lactation records, 14,804 cows, and 317 explanatory variables for first-service conception rate and 17,587 lactation records, 9,516 cows, and 341 explanatory variables for pregnancy status at 150 DIM. An alternating decision tree algorithm for first-service conception rate classified 75.6% of records correctly and identified the frequency of hoof trimming maintenance, type of bedding in the dry cow pen, type of cow restraint system, and duration of the voluntary waiting period as key explanatory variables. An alternating decision tree algorithm for pregnancy status at 150 DIM classified 71.4% of records correctly and identified bunk space per cow, temperature for thawing semen, percentage of cows with low body condition scores, number of cows in the maternity pen, strategy for using a clean-up bull, and milk yield at first service as key factors."
}
] |
PLoS Computational Biology | 30986219 | PMC6483269 | 10.1371/journal.pcbi.1006866 | Competing evolutionary paths in growing populations with applications to multidrug resistance | Investigating the emergence of a particular cell type is a recurring theme in models of growing cellular populations. The evolution of resistance to therapy is a classic example. Common questions are: when does the cell type first occur, and via which sequence of steps is it most likely to emerge? For growing populations, these questions can be formulated in a general framework of branching processes spreading through a graph from a root to a target vertex. Cells have a particular fitness value on each vertex and can transition along edges at specific rates. Vertices represent cell states, say genotypes or physical locations, while possible transitions are acquiring a mutation or cell migration. We focus on the setting where cells at the root vertex have the highest fitness and transition rates are small. Simple formulas are derived for the time to reach the target vertex and for the probability that it is reached along a given path in the graph. We demonstrate our results on several scenarios relevant to the emergence of drug resistance, including: the orderings of resistance-conferring mutations in bacteria and the impact of imperfect drug penetration in cancer. | Related workMany previous studies have considered the probability of particular cell type emerging in a growing population, typically by a fixed time after the process starts or when the total population reaches a given size. The majority deal with the target population being one or two transitions away from the initial population (the root vertex in this paper) [16, 18, 23, 38, 55–58]. In particular we single out the pioneering work of Luria and Delbrück [55], which demonstrated the spontaneous nature of mutations by combining an appropriate mathematical model with bacterial experiments on phage resistance. The original model of [55], and its various incarnations [59, 60] have been extensively studied [16, 57, 61–65]. Its fully stochastic formulation, which is identical to our model for two vertices, is due to Bartlett [60]. This full model admits an explicit solution [57], and the model’s asymptotic behavior has been recently explored [16, 64]. One of the simplest quantities of interest is the probability of no resistant cells at a fixed time, often called p0. It is a closely related quantity to our target hitting time described in (5), that is no resistant cell arises by a fixed time. Understanding p0 provides a method to infer the per cell-rate at which resistance-conferring mutations are acquired, often termed the p0-method [66].Some notable exceptions which consider the target population being greater than two transitions away are [25, 31, 54, 67–70] (ref. [31] is discussed above in the Imperfect drug penetration: combination therapy section). In [25], the same model as that presented here is numerically explored when all vertices have the same fitness (α(x) = α, β(x) = β for all vertices x), and the implications on multidrug therapy failure is emphasised. An efficient numerical method to compute the distribution of target hitting time and path probabilities via the iteration of generating functions is given in [68] with a focus on cancer initiation. Both [54, 67] are motivated by the accumulation of driver mutations in cancer, and so each transition leads to a fitness increase. 
In [67], the mean time of the kth driver mutation is derived and compared to genetic data. The distribution of the time until the kth driver is sought in [54], whose methods are the closest to those used in this paper. There the authors employed an appealing approximation of the model studied in this paper in the path graph case: the seeding rate into vertex x + 1 from vertex x, which is exactly ν(x, x + 1)Z_x(t), is approximated by ν(x, x + 1)W_x^* exp(λ(x)t), for some sensibly chosen random variables W_x^*. Notably, again for small transition rates, the functional form of the distribution of the target hitting time is the same as that given in Theorem 1, however with a different median. The altered median derived in [54] demonstrates that when transitions lead to a fitness increase, the growth rates associated with the intermediate vertex populations have a far greater effect on the target hitting time than in the setting considered in this paper. The same type of approximate model was further used in [69] to investigate the target hitting time when transitions bring a fitness increase that is itself a random variable. Target hitting times on a path graph for a branching process with a general offspring distribution were also discussed in the very recent paper [70], but their main explicit results exclude cell death and hold when successive transition rates along the path are assumed to become (infinitely) stronger.

The model and questions considered in this paper also arise frequently in the evolutionary emergence or evolutionary escape literature [71–75], with the notable distinction that in the evolutionary escape setting the root and intermediate vertex populations are destined to go extinct (λ(x) < 0 for 1 ≤ x ≤ N − 1). This scenario is of interest when, for example, a homogeneous pathogenic population (all residing at the root vertex in the language used in the present study) is treated with a therapy and must acquire sufficiently many mutations away from the root so as to become resistant (populate the target vertex). Since, in the setup of the present paper, the target hitting time is strongly controlled by the growth of the root population, Z_1(t), which has positive growth rate λ > 0, the target hitting times in the evolutionary escape setting are distinct from those given in the Time until target vertex is populated section. However, for the escape probability (the probability of reaching the target), if there are multiple paths from the root to the target, the contribution of each path to the escape probability (termed the path value in [71, 72]) has an expression strikingly similar to the path weights discussed here for small transition rates (compare (2) with Eqs 6a-c in [72]). We might conjecture that, for a specific path, the path value as given in [72] is the unnormalised probability of reaching the target via the specified path, as we have demonstrated is the case with the path weights in Theorem 2 (this is implied in [72] Sec 2.5). Further connections surrounding the path distribution in these differing regimes are an interesting avenue for future work.
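As a rough illustration of the kind of quantity discussed above, the following Monte Carlo sketch (not taken from the paper) estimates the probability that no cell has transitioned out of the root vertex by a fixed time in the simplest two-vertex case, i.e. a Luria-Delbrück-type p0. All parameter values, the function name, and the convention that a transitioning cell seeds the next vertex while itself remaining at the root are illustrative assumptions.

```python
import random

def estimate_p0(alpha, beta, nu, t_max, n_runs=10_000, n_root=1):
    """Estimate p0: the probability that no cell has been seeded beyond
    the root vertex by time t_max. Root cells divide at rate alpha, die
    at rate beta and seed the next vertex at rate nu per cell."""
    no_transition = 0
    for _ in range(n_runs):
        n, t, seeded = n_root, 0.0, False
        while n > 0 and not seeded:
            # Exponential waiting time until the next event among n cells.
            t += random.expovariate(n * (alpha + beta + nu))
            if t >= t_max:
                break
            u = random.uniform(0.0, alpha + beta + nu)
            if u < alpha:            # division at the root
                n += 1
            elif u < alpha + beta:   # death at the root
                n -= 1
            else:                    # first seeding of the target vertex
                seeded = True
        if not seeded:
            no_transition += 1
    return no_transition / n_runs

# Illustrative parameters: supercritical root growth, small transition rate.
print(estimate_p0(alpha=1.0, beta=0.1, nu=1e-2, t_max=10.0))
```

Sweeping nu and comparing such estimates against observed outcomes is, loosely, the spirit of the p0-method mentioned above; the sketch is intended only to make the simulated quantity explicit, not to reproduce the paper's analytical results.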
"16601193",
"19285994",
"6612632",
"19394348",
"29895935",
"21231560",
"26780609",
"27526321",
"18073428",
"22265421",
"27766475",
"30190408",
"23805382",
"25183790",
"23749189",
"19269300",
"23002776",
"19725789",
"16361566",
"27023283",
"22763634",
"2687881",
"15988530",
"21102425",
"11739200",
"21220359",
"19298493",
"19896491",
"16636113",
"8589544",
"24536673",
"10610800",
"25939695",
"21406679",
"14728779",
"14643190",
"17350060"
] | [
{
"pmid": "16601193",
"title": "Darwinian evolution can follow only very few mutational paths to fitter proteins.",
"abstract": "Five point mutations in a particular beta-lactamase allele jointly increase bacterial resistance to a clinically important antibiotic by a factor of approximately 100,000. In principle, evolution to this high-resistance beta-lactamase might follow any of the 120 mutational trajectories linking these alleles. However, we demonstrate that 102 trajectories are inaccessible to Darwinian selection and that many of the remaining trajectories have negligible probabilities of realization, because four of these five mutations fail to increase drug resistance in some combinations. Pervasive biophysical pleiotropy within the beta-lactamase seems to be responsible, and because such pleiotropy appears to be a general property of missense mutations, we conclude that much protein evolution will be similarly constrained. This implies that the protein tape of life may be largely reproducible and even predictable."
},
{
"pmid": "19285994",
"title": "The rate at which asexual populations cross fitness valleys.",
"abstract": "Complex traits often involve interactions between different genetic loci. This can lead to sign epistasis, whereby mutations that are individually deleterious or neutral combine to confer a fitness benefit. In order to acquire the beneficial genotype, an asexual population must cross a fitness valley or plateau by first acquiring the deleterious or neutral intermediates. Here, we present a complete, intuitive theoretical description of the valley-crossing process across the full spectrum of possible parameter regimes. We calculate the rate at which a population crosses a fitness valley or plateau of arbitrary width, as a function of the mutation rates, the population size, and the fitnesses of the intermediates. We find that when intermediates are close to neutral, a large population can cross even wide fitness valleys remarkably quickly, so that valley-crossing dynamics may be common even when mutations that directly increase fitness are also possible. Thus the evolutionary dynamics of large populations can be sensitive to the structure of an extended region of the fitness landscape - the population may not take directly uphill paths in favor of paths across valleys and plateaus that lead eventually to fitter genotypes. In smaller populations, we find that below a threshold size, which depends on the width of the fitness valley and the strength of selection against intermediate genotypes, valley-crossing is much less likely and hence the evolutionary dynamics are less influenced by distant regions of the fitness landscape."
},
{
"pmid": "6612632",
"title": "A simple stochastic gene substitution model.",
"abstract": "If the fitnesses of n haploid alleles in a finite population are assigned at random and if the alleles can mutate to one another, and if the population is initially fixed for the kth most fit allele, then the mean number of substitutions that will occur before the most fit allele is fixed is shown to be (formula; see text) when selection is strong and mutation is weak. This result is independent of the parameters that went into the model. The result is used to provide a partial explanation for the large variance observed in the rates of molecular evolution."
},
{
"pmid": "19394348",
"title": "The pace of evolution across fitness valleys.",
"abstract": "How fast does a population evolve from one fitness peak to another? We study the dynamics of evolving, asexually reproducing populations in which a certain number of mutations jointly confer a fitness advantage. We consider the time until a population has evolved from one fitness peak to another one with a higher fitness. The order of mutations can either be fixed or random. If the order of mutations is fixed, then the population follows a metaphorical ridge, a single path. If the order of mutations is arbitrary, then there are many ways to evolve to the higher fitness state. We address the time required for fixation in such scenarios and study how it is affected by the order of mutations, the population size, the fitness values and the mutation rate."
},
{
"pmid": "29895935",
"title": "Phenotypic Switching Can Speed up Microbial Evolution.",
"abstract": "Stochastic phenotype switching has been suggested to play a beneficial role in microbial populations by leading to the division of labour among cells, or ensuring that at least some of the population survives an unexpected change in environmental conditions. Here we use a computational model to investigate an alternative possible function of stochastic phenotype switching: as a way to adapt more quickly even in a static environment. We show that when a genetic mutation causes a population to become less fit, switching to an alternative phenotype with higher fitness (growth rate) may give the population enough time to develop compensatory mutations that increase the fitness again. The possibility of switching phenotypes can reduce the time to adaptation by orders of magnitude if the \"fitness valley\" caused by the deleterious mutation is deep enough. Our work has important implications for the emergence of antibiotic-resistant bacteria. In line with recent experimental findings, we hypothesise that switching to a slower growing - but less sensitive - phenotype helps bacteria to develop resistance by providing alternative, faster evolutionary routes to resistance."
},
{
"pmid": "21231560",
"title": "Sources and sinks: a stochastic model of evolution in heterogeneous environments.",
"abstract": "We study evolution driven by spatial heterogeneity in a stochastic model of source-sink ecologies. A sink is a habitat where mortality exceeds reproduction so that a local population persists only due to immigration from a source. Immigrants can, however, adapt to conditions in the sink by mutation. To characterize the adaptation rate, we derive expressions for the first arrival time of adapted mutants. The joint effects of migration, mutation, birth, and death result in two distinct parameter regimes. These results may pertain to the rapid evolution of drug-resistant pathogens and insects."
},
{
"pmid": "26780609",
"title": "Identification of neutral tumor evolution across cancer types.",
"abstract": "Despite extraordinary efforts to profile cancer genomes, interpreting the vast amount of genomic data in the light of cancer evolution remains challenging. Here we demonstrate that neutral tumor evolution results in a power-law distribution of the mutant allele frequencies reported by next-generation sequencing of tumor bulk samples. We find that the neutral power law fits with high precision 323 of 904 cancers from 14 types and from different cohorts. In malignancies identified as evolving neutrally, all clonal selection seemingly occurred before the onset of cancer growth and not in later-arising subclones, resulting in numerous passenger mutations that are responsible for intratumoral heterogeneity. Reanalyzing cancer sequencing data within the neutral framework allowed the measurement, in each patient, of both the in vivo mutation rate and the order and timing of mutations. This result provides a new way to interpret existing cancer genomic data and to discriminate between functional and non-functional intratumoral heterogeneity."
},
{
"pmid": "27526321",
"title": "Punctuated copy number evolution and clonal stasis in triple-negative breast cancer.",
"abstract": "Aneuploidy is a hallmark of breast cancer; however, knowledge of how these complex genomic rearrangements evolve during tumorigenesis is limited. In this study, we developed a highly multiplexed single-nucleus sequencing method to investigate copy number evolution in patients with triple-negative breast cancer. We sequenced 1,000 single cells from tumors in 12 patients and identified 1-3 major clonal subpopulations in each tumor that shared a common evolutionary lineage. For each tumor, we also identified a minor subpopulation of non-clonal cells that were classified as metastable, pseudodiploid or chromazemic. Phylogenetic analysis and mathematical modeling suggest that these data are unlikely to be explained by the gradual accumulation of copy number events over time. In contrast, our data challenge the paradigm of gradual evolution, showing that the majority of copy number aberrations are acquired at the earliest stages of tumor evolution, in short punctuated bursts, followed by stable clonal expansions that form the tumor mass."
},
{
"pmid": "18073428",
"title": "The evolution of two mutations during clonal expansion.",
"abstract": "Knudson's two-hit hypothesis proposes that two genetic changes in the RB1 gene are the rate-limiting steps of retinoblastoma. In the inherited form of this childhood eye cancer, only one mutation emerges during somatic cell divisions while in sporadic cases, both alleles of RB1 are inactivated in the growing retina. Sporadic retinoblastoma serves as an example of a situation in which two mutations are accumulated during clonal expansion of a cell population. Other examples include evolution of resistance against anticancer combination therapy and inactivation of both alleles of a metastasis-suppressor gene during tumor growth. In this article, we consider an exponentially growing population of cells that must evolve two mutations to (i) evade treatment, (ii) make a step toward (invasive) cancer, or (iii) display a disease phenotype. We calculate the probability that the population has evolved both mutations before it reaches a certain size. This probability depends on the rates at which the two mutations arise; the growth and death rates of cells carrying none, one, or both mutations; and the size the cell population reaches. Further, we develop a formula for the expected number of cells carrying both mutations when the final population size is reached. Our theory establishes an understanding of the dynamics of two mutations during clonal expansion."
},
{
"pmid": "22265421",
"title": "Computational modeling of pancreatic cancer reveals kinetics of metastasis suggesting optimum treatment strategies.",
"abstract": "Pancreatic cancer is a leading cause of cancer-related death, largely due to metastatic dissemination. We investigated pancreatic cancer progression by utilizing a mathematical framework of metastasis formation together with comprehensive data of 228 patients, 101 of whom had autopsies. We found that pancreatic cancer growth is initially exponential. After estimating the rates of pancreatic cancer growth and dissemination, we determined that patients likely harbor metastases at diagnosis and predicted the number and size distribution of metastases as well as patient survival. These findings were validated in an independent database. Finally, we analyzed the effects of different treatment modalities, finding that therapies that efficiently reduce the growth rate of cells earlier in the course of treatment appear to be superior to upfront tumor resection. These predictions can be validated in the clinic. Our interdisciplinary approach provides insights into the dynamics of pancreatic cancer metastasis and identifies optimum therapeutic interventions."
},
{
"pmid": "27766475",
"title": "Universal Asymptotic Clone Size Distribution for General Population Growth.",
"abstract": "Deterministically growing (wild-type) populations which seed stochastically developing mutant clones have found an expanding number of applications from microbial populations to cancer. The special case of exponential wild-type population growth, usually termed the Luria-Delbrück or Lea-Coulson model, is often assumed but seldom realistic. In this article, we generalise this model to different types of wild-type population growth, with mutants evolving as a birth-death branching process. Our focus is on the size distribution of clones-that is the number of progeny of a founder mutant-which can be mapped to the total number of mutants. Exact expressions are derived for exponential, power-law and logistic population growth. Additionally, for a large class of population growth, we prove that the long-time limit of the clone size distribution has a general two-parameter form, whose tail decays as a power-law. Considering metastases in cancer as the mutant clones, upon analysing a data-set of their size distribution, we indeed find that a power-law tail is more likely than an exponential one."
},
{
"pmid": "30190408",
"title": "Minimal functional driver gene heterogeneity among untreated metastases.",
"abstract": "Metastases are responsible for the majority of cancer-related deaths. Although genomic heterogeneity within primary tumors is associated with relapse, heterogeneity among treatment-naïve metastases has not been comprehensively assessed. We analyzed sequencing data for 76 untreated metastases from 20 patients and inferred cancer phylogenies for breast, colorectal, endometrial, gastric, lung, melanoma, pancreatic, and prostate cancers. We found that within individual patients, a large majority of driver gene mutations are common to all metastases. Further analysis revealed that the driver gene mutations that were not shared by all metastases are unlikely to have functional consequences. A mathematical model of tumor evolution and metastasis formation provides an explanation for the observed driver gene homogeneity. Thus, single biopsies capture most of the functionally important mutations in metastases and therefore provide essential information for therapeutic decision-making."
},
{
"pmid": "23805382",
"title": "Evolutionary dynamics of cancer in response to targeted combination therapy.",
"abstract": "In solid tumors, targeted treatments can lead to dramatic regressions, but responses are often short-lived because resistant cancer cells arise. The major strategy proposed for overcoming resistance is combination therapy. We present a mathematical model describing the evolutionary dynamics of lesions in response to treatment. We first studied 20 melanoma patients receiving vemurafenib. We then applied our model to an independent set of pancreatic, colorectal, and melanoma cancer patients with metastatic disease. We find that dual therapy results in long-term disease control for most patients, if there are no single mutations that cause cross-resistance to both drugs; in patients with large disease burden, triple therapy is needed. We also find that simultaneous therapy with two drugs is much more effective than sequential therapy. Our results provide realistic expectations for the efficacy of new drug combinations and inform the design of trials for new cancer therapeutics. DOI:http://dx.doi.org/10.7554/eLife.00747.001."
},
{
"pmid": "25183790",
"title": "Resistance to chemotherapy: patient variability and cellular heterogeneity.",
"abstract": "The issue of resistance to targeted drug therapy is of pressing concern, as it constitutes a major barrier to progress in managing cancer. One important aspect is the role of stochasticity in determining the nature of the patient response. We examine two particular experiments. The first measured the maximal response of melanoma to targeted therapy before the resistance causes the tumor to progress. We analyze the data in the context of a Delbruck-Luria type scheme, wherein the continued growth of preexistent resistant cells are responsible for progression. We show that, aside from a finite fraction of resistant cell-free patients, the maximal response in such a scenario would be quite uniform. To achieve the measured variability, one is necessarily led to assume a wide variation from patient to patient of the sensitive cells' response to the therapy. The second experiment is an in vitro system of multiple myeloma cells. When subject to a spatial gradient of a chemotherapeutic agent, the cells in the middle of the system acquire resistance on a rapid (two-week) timescale. This finding points to the potential important role of cell-to-cell differences, due to differing local environments, in addition to the patient-to-patient differences encountered in the first part. See all articles in this Cancer Research section, \"Physics in Cancer Research.\""
},
{
"pmid": "23749189",
"title": "Mycobacterium tuberculosis mutation rate estimates from different lineages predict substantial differences in the emergence of drug-resistant tuberculosis.",
"abstract": "A key question in tuberculosis control is why some strains of M. tuberculosis are preferentially associated with resistance to multiple drugs. We demonstrate that M. tuberculosis strains from lineage 2 (East Asian lineage and Beijing sublineage) acquire drug resistances in vitro more rapidly than M. tuberculosis strains from lineage 4 (Euro-American lineage) and that this higher rate can be attributed to a higher mutation rate. Moreover, the in vitro mutation rate correlates well with the bacterial mutation rate in humans as determined by whole-genome sequencing of clinical isolates. Finally, using a stochastic mathematical model, we demonstrate that the observed differences in mutation rate predict a substantially higher probability that patients infected with a drug-susceptible lineage 2 strain will harbor multidrug-resistant bacteria at the time of diagnosis. These data suggest that interventions to prevent the emergence of drug-resistant tuberculosis should target bacterial as well as treatment-related risk factors."
},
{
"pmid": "19269300",
"title": "Estimating primate divergence times by using conditioned birth-and-death processes.",
"abstract": "The fossil record provides a lower bound on the primate divergence time of 54.8 million years ago, but does not provide an explicit estimate for the divergence time itself. We show how the pattern of diversification through the Cenozoic can be combined with a model for speciation to give a distribution for the age of the primates. The primate fossil record, the number of extant primate species, and information about the structure of the primate phylogenetic tree are combined to provide an estimate for the joint distribution of the primate and anthropoid divergence times. To take this information into account, we derive the structure of the birth-and-death process conditioned to have a subtree originate at a particular point in time. This process has a size-biased law and has an immortal line running from the root of the tree to the root of the subtree, with species on the spine having modified offspring and length distributions. We conclude that it is not possible, with this model, to rule out a Cretaceous origin for the primates."
},
{
"pmid": "23002776",
"title": "Mutational pathway determines whether drug gradients accelerate evolution of drug-resistant cells.",
"abstract": "Drug gradients are believed to play an important role in the evolution of bacteria resistant to antibiotics and tumors resistant to anticancer drugs. We use a statistical physics model to study the evolution of a population of malignant cells exposed to drug gradients, where drug resistance emerges via a mutational pathway involving multiple mutations. We show that a nonuniform drug distribution has the potential to accelerate the emergence of resistance when the mutational pathway involves a long sequence of mutants with increasing resistance, but if the pathway is short or crosses a fitness valley, the evolution of resistance may actually be slowed down by drug gradients. These predictions can be verified experimentally, and may help to improve strategies for combating the emergence of resistance."
},
{
"pmid": "19725789",
"title": "Vancomycin in combination with other antibiotics for the treatment of serious methicillin-resistant Staphylococcus aureus infections.",
"abstract": "Vancomycin is often combined with a second antibiotic, most often rifampin or gentamicin, for the treatment of serious methicillin-resistant Staphylococcus aureus infections. Published data from experiments evaluating these and other vancomycin-based combinations, both in vitro and in animal models of infection, often yield inconsistent results, however. More importantly, no data are available from randomized clinical trials to support their use, and some regimens are known to have potential toxicities. Clinicians should carefully reconsider the use of vancomycin-based combination therapies for the treatment of infection due to methicillin-resistant S. aureus."
},
{
"pmid": "16361566",
"title": "The distribution of the anticancer drug Doxorubicin in relation to blood vessels in solid tumors.",
"abstract": "PURPOSE\nAnticancer drugs gain access to solid tumors via the circulatory system and must penetrate the tissue to kill cancer cells. Here, we study the distribution of doxorubicin in relation to blood vessels and regions of hypoxia in solid tumors of mice.\n\n\nEXPERIMENTAL DESIGN\nThe distribution of doxorubicin was quantified by immunofluorescence in relation to blood vessels (recognized by CD31) of murine 16C and EMT6 tumors and human prostate cancer PC-3 xenografts. Hypoxic regions were identified by injection of EF5.\n\n\nRESULTS\nThe concentration of doxorubicin decreases exponentially with distance from tumor blood vessels, decreasing to half its perivascular concentration at a distance of about 40 to 50 mum, The mean distance from blood vessels to regions of hypoxia is 90 to 140 microm in these tumors. Many viable tumor cells are not exposed to detectable concentrations of drug following a single injection.\n\n\nCONCLUSIONS\nLimited distribution of doxorubicin in solid tumors is an important and neglected cause of clinical resistance that is amenable to modification. The technique described here can be adapted to studying the distribution of other drugs within solid tumors and the effect of strategies to modify their distribution."
},
{
"pmid": "27023283",
"title": "Current status and prospects of HIV treatment.",
"abstract": "Current antiviral treatments can reduce HIV-associated morbidity, prolong survival, and prevent HIV transmission. Combination antiretroviral therapy (cART) containing preferably three active drugs from two or more classes is required for durable virologic suppression. Regimen selection is based on virologic efficacy, potential for adverse effects, pill burden and dosing frequency, drug-drug interaction potential, resistance test results, comorbid conditions, social status, and cost. With prolonged virologic suppression, improved clinical outcomes, and longer survival, patients will be exposed to antiretroviral agents for decades. Therefore, maximizing the safety and tolerability of cART is a high priority. Emergence of resistance and/or lack of tolerability in individual patients require availability of a range of treatment options. Development of new drugs is focused on improving safety (e.g. tenofovir alafenamide) and/or resistance profile (e.g. doravirine) within the existing drug classes, combination therapies with improved adherence (e.g. single-tablet regimens), novel mechanisms of action (e.g. attachment inhibitors, maturation inhibitors, broadly neutralizing antibodies), and treatment simplification with infrequent dosing (e.g. long-acting injectables). In parallel with cART innovations, research and development efforts focused on agents that target persistent HIV reservoirs may lead to prolonged drug-free remission and HIV cure."
},
{
"pmid": "22763634",
"title": "Combination therapy for treatment of infections with gram-negative bacteria.",
"abstract": "Combination antibiotic therapy for invasive infections with Gram-negative bacteria is employed in many health care facilities, especially for certain subgroups of patients, including those with neutropenia, those with infections caused by Pseudomonas aeruginosa, those with ventilator-associated pneumonia, and the severely ill. An argument can be made for empiric combination therapy, as we are witnessing a rise in infections caused by multidrug-resistant Gram-negative organisms. The wisdom of continued combination therapy after an organism is isolated and antimicrobial susceptibility data are known, however, is more controversial. The available evidence suggests that the greatest benefit of combination antibiotic therapy stems from the increased likelihood of choosing an effective agent during empiric therapy, rather than exploitation of in vitro synergy or the prevention of resistance during definitive treatment. In this review, we summarize the available data comparing monotherapy versus combination antimicrobial therapy for the treatment of infections with Gram-negative bacteria."
},
{
"pmid": "2687881",
"title": "Differences in the rates of gene amplification in nontumorigenic and tumorigenic cell lines as measured by Luria-Delbrück fluctuation analysis.",
"abstract": "It has been hypothesized that genomic fluidity is an important component of tumorigenesis. Previous studies described the relationship between tumorigenicity and one marker for genomic fluidity, gene amplification. In this report, these studies are extended with the rat liver epithelial cell lines to show that: (i) the amplification in these cells arises in a spontaneous fashion in the population (i.e., the variants detected are not preexisting in the population), and (ii) the rate of spontaneous amplification (mutation), as measured by Luria-Delbrück fluctuation analysis, is significantly lower in the nontumorigenic cells than in the tumorigenic cells. The rate was estimated by using the Po method and the method of means. The rate of spontaneous amplification of the gene encoding the multifunctional protein CAD (containing the enzymatic activities carbamoyl-phosphate synthase, aspartate transcarbamylase, and dihydroorotase) in the highly tumorigenic cells was significantly greater than that for the nontumorigenic cells, reaching almost 1 x 10(-4) events per cell per generation. The rate of this mutagenic event is high compared to the rate of point mutations usually reported in mammalian cells, and its potential contribution to the tumorigenic process will be discussed."
},
{
"pmid": "15988530",
"title": "Dynamics of chronic myeloid leukaemia.",
"abstract": "The clinical success of the ABL tyrosine kinase inhibitor imatinib in chronic myeloid leukaemia (CML) serves as a model for molecularly targeted therapy of cancer, but at least two critical questions remain. Can imatinib eradicate leukaemic stem cells? What are the dynamics of relapse due to imatinib resistance, which is caused by mutations in the ABL kinase domain? The precise understanding of how imatinib exerts its therapeutic effect in CML and the ability to measure disease burden by quantitative polymerase chain reaction provide an opportunity to develop a mathematical approach. We find that a four-compartment model, based on the known biology of haematopoietic differentiation, can explain the kinetics of the molecular response to imatinib in a 169-patient data set. Successful therapy leads to a biphasic exponential decline of leukaemic cells. The first slope of 0.05 per day represents the turnover rate of differentiated leukaemic cells, while the second slope of 0.008 per day represents the turnover rate of leukaemic progenitors. The model suggests that imatinib is a potent inhibitor of the production of differentiated leukaemic cells, but does not deplete leukaemic stem cells. We calculate the probability of developing imatinib resistance mutations and estimate the time until detection of resistance. Our model provides the first quantitative insights into the in vivo kinetics of a human cancer."
},
{
"pmid": "21102425",
"title": "Seeking the causes and solutions to imatinib-resistance in chronic myeloid leukemia.",
"abstract": "Although only 5000 new cases of chronic myeloid leukemia (CML) were seen in the United States in 2009, this neoplasm continues to make scientific headlines year-after-year. Advances in understanding the molecular pathogenesis coupled with exciting developments in both drug design and development, targeting the initiating tyrosine kinase, have kept CML in the scientific limelight for more than a decade. Indeed, imatinib, a small-molecule inhibitor of the leukemia-initiating Bcr-Abl tyrosine kinase, has quickly become the therapeutic standard for newly diagnosed chronic phase-CML (CP-CML) patients. Yet, nearly one-third of patients will still have an inferior response to imatinib, either failing to respond to primary therapy or demonstrating progression after an initial response. Significant efforts geared toward understanding the molecular mechanisms of imatinib resistance have yielded valuable insights into the cellular biology of drug trafficking, enzyme structure and function, and the rational design of novel small molecule enzyme inhibitors. Indeed, new classes of kinase inhibitors have recently been investigated in imatinib-resistant CML. Understanding the pathogenesis of tyrosine kinase inhibitor resistance and the molecular rationale for the development of second and now third generation therapies for patients with CML will be keys to further disease control over the next 10 years."
},
{
"pmid": "11739200",
"title": "Restoration of sensitivity to STI571 in STI571-resistant chronic myeloid leukemia cells.",
"abstract": "STI571 induces sustained hematologic remission in patients with chronic myeloid leukemia (CML) in chronic phase. However, in advanced phases, especially blast crisis, the leukemia usually becomes resistant within months. It has been investigated whether resistance to STI571 is stable and immutable or whether it can be reversed in selected CML cell lines. Withdrawal of STI571 for varying lengths of time from cultures of 3 resistant lines (K562-r, KCL22-r, and Baf/BCR-ABL-r1) did not restore sensitivity to the inhibitor. In contrast, LAMA84-resistant cells experienced a sharp reduction in survival and proliferation during the first week of STI571 withdrawal but recovered thereafter. Moreover, when left off the inhibitor for 2 months or longer, this cell line reacquired sensitivity to STI571. It is hypothesized, therefore, that patients who have become resistant to the drug may respond again if STI571 therapy is temporarily interrupted."
},
{
"pmid": "21220359",
"title": "The fitness cost of rifampicin resistance in Pseudomonas aeruginosa depends on demand for RNA polymerase.",
"abstract": "Bacterial resistance to antibiotics usually incurs a fitness cost in the absence of selecting drugs, and this cost of resistance plays a key role in the spread of antibiotic resistance in pathogen populations. Costs of resistance have been shown to vary with environmental conditions, but the causes of this variability remain obscure. In this article, we show that the average cost of rifampicin resistance in the pathogenic bacterium Pseudomonas aeruginosa is reduced by the addition of ribosome inhibitors (chloramphenicol or streptomycin) that indirectly constrain transcription rate and therefore reduce demand for RNA polymerase activity. This effect is consistent with predictions from metabolic control theory. We also tested the alternative hypothesis that the observed trend was due to a general effect of environmental quality on the cost of resistance. To do this we measured the fitness of resistant mutants in the presence of other antibiotics (ciprofloxacin and carbenicillin) that have similar effects on bacterial growth rate but bind to different target enzymes (DNA gyrase and penicillin-binding proteins, respectively) and in 41 single-carbon source environments of varying quality. We find no consistent effect of environmental quality on the average cost of resistance in these treatments. These results show that the cost of rifampicin resistance varies with demand for the mutated target enzyme, rather than as a simple function of bacterial growth rate or stress."
},
{
"pmid": "19298493",
"title": "The cost of multiple drug resistance in Pseudomonas aeruginosa.",
"abstract": "The spread of bacterial antibiotic resistance mutations is thought to be constrained by their pleiotropic fitness costs. Here we investigate the fitness costs of resistance in the context of the evolution of multiple drug resistance (MDR), by measuring the cost of acquiring streptomycin resistance mutations (StrepR) in independent strains of the bacterium Pseudomonas aeruginosa carrying different rifampicin resistance (RifR) mutations. In the absence of antibiotics, StrepR mutations are associated with similar fitness costs in different RifR genetic backgrounds. The cost of StrepR mutations is greater in a rifampicin-sensitive (RifS) background, directly demonstrating antagonistic epistasis between resistance mutations. In the presence of rifampicin, StrepR mutations have contrasting effects in different RifR backgrounds: StrepR mutations have no detectable costs in some RifR backgrounds and massive fitness costs in others. Our results clearly demonstrate the importance of epistasis and genotype-by-environment interactions for the evolution of MDR."
},
{
"pmid": "19896491",
"title": "Evolution of resistance and progression to disease during clonal expansion of cancer.",
"abstract": "Inspired by previous work of Iwasa et al. (2006) and Haeno et al. (2007), we consider an exponentially growing population of cancerous cells that will evolve resistance to treatment after one mutation or display a disease phenotype after two or more mutations. We prove results about the distribution of the first time when k mutations have accumulated in some cell, and about the growth of the number of type-k cells. We show that our results can be used to derive the previous results about a tumor grown to a fixed size."
},
{
"pmid": "16636113",
"title": "Evolution of resistance during clonal expansion.",
"abstract": "Acquired drug resistance is a major limitation for cancer therapy. Often, one genetic alteration suffices to confer resistance to an otherwise successful therapy. However, little is known about the dynamics of the emergence of resistant tumor cells. In this article, we consider an exponentially growing population starting from one cancer cell that is sensitive to therapy. Sensitive cancer cells can mutate into resistant ones, which have relative fitness alpha prior to therapy. In the special case of no cell death, our model converges to the one investigated by Luria and Delbrück. We calculate the probability of resistance and the mean number of resistant cells once the cancer has reached detection size M. The probability of resistance is an increasing function of the detection size M times the mutation rate u. If Mu << 1, then the expected number of resistant cells in cancers with resistance is independent of the mutation rate u and increases with M in proportion to M(1-1/alpha) for advantageous mutants with relative fitness alpha>1, to l nM for neutral mutants (alpha = 1), but converges to an upper limit for deleterious mutants (alpha<1). Further, the probability of resistance and the average number of resistant cells increase with the number of cell divisions in the history of the tumor. Hence a tumor subject to high rates of apoptosis will show a higher incidence of resistance than expected on its detection size only."
},
{
"pmid": "8589544",
"title": "An exact representation for the generating function for the Moolgavkar-Venzon-Knudson two-stage model of carcinogenesis with stochastic stem cell growth.",
"abstract": "The two-stage clonal expansion model of carcinogenesis provides a convenient biologically based framework for the quantitative description of carcinogenesis data. Under this stochastic model, a cancer cell arises following the occurrence of two critical mutations in a normal stem cell. Both normal cells and initiated cells that have sustained the first mutation undergo birth-and-death processes responsible for tissue growth. In this article, a new expression for the probability generating function (pgf) for the two-stage model of carcinogenesis is derived. This characterization is obtained by solving a partial differential equation (pde) satisfied by the pgf derived from the corresponding Kolmogorov forward equation. This pde can be reduced to the hypergeometric differential equation of Gauss, which leads to a closed-form expression for the pgf requiring only the evaluation of hypergeometric functions. This result facilitates computation of the exact hazard function for the two-stage model. Several approximations that are simpler to compute are also given. Numerical examples are provided to illustrate the accuracy of these approximations."
},
{
"pmid": "10610800",
"title": "Determining mutation rates in bacterial populations.",
"abstract": "When properly determined, spontaneous mutation rates are a more accurate and biologically meaningful reflection of underlying mutagenic mechanisms than are mutant frequencies. Because bacteria grow exponentially and mutations arise stochastically, methods to estimate mutation rates depend on theoretical models that describe the distribution of mutant numbers among parallel cultures, as in the original Luria-Delbr]uck fluctuation analysis. An accurate determination of mutation rate depends on understanding the strengths and limitations of these methods, and how to design fluctuation assays to optimize a given method. In this paper we describe a number of methods to estimate mutation rates, give brief accounts of their derivations, and discuss how they behave under various experimental conditions."
},
{
"pmid": "25939695",
"title": "Repeatability of evolution on epistatic landscapes.",
"abstract": "Evolution is a dynamic process. The two classical forces of evolution are mutation and selection. Assuming small mutation rates, evolution can be predicted based solely on the fitness differences between phenotypes. Predicting an evolutionary process under varying mutation rates as well as varying fitness is still an open question. Experimental procedures, however, do include these complexities along with fluctuating population sizes and stochastic events such as extinctions. We investigate the mutational path probabilities of systems having epistatic effects on both fitness and mutation rates using a theoretical and computational framework. In contrast to previous models, we do not limit ourselves to the typical strong selection, weak mutation (SSWM)-regime or to fixed population sizes. Rather we allow epistatic interactions to also affect mutation rates. This can lead to qualitatively non-trivial dynamics. Pathways, that are negligible in the SSWM-regime, can overcome fitness valleys and become accessible. This finding has the potential to extend the traditional predictions based on the SSWM foundation and bring us closer to what is observed in experimental systems."
},
{
"pmid": "21406679",
"title": "Intratumor heterogeneity in evolutionary models of tumor progression.",
"abstract": "With rare exceptions, human tumors arise from single cells that have accumulated the necessary number and types of heritable alterations. Each such cell leads to dysregulated growth and eventually the formation of a tumor. Despite their monoclonal origin, at the time of diagnosis most tumors show a striking amount of intratumor heterogeneity in all measurable phenotypes; such heterogeneity has implications for diagnosis, treatment efficacy, and the identification of drug targets. An understanding of the extent and evolution of intratumor heterogeneity is therefore of direct clinical importance. In this article, we investigate the evolutionary dynamics of heterogeneity arising during exponential expansion of a tumor cell population, in which heritable alterations confer random fitness changes to cells. We obtain analytical estimates for the extent of heterogeneity and quantify the effects of system parameters on this tumor trait. Our work contributes to a mathematical understanding of intratumor heterogeneity and is also applicable to organisms like bacteria, agricultural pests, and other microbes."
},
{
"pmid": "14728779",
"title": "Evolutionary dynamics of escape from biomedical intervention.",
"abstract": "Viruses, bacteria, eukaryotic parasites, cancer cells, agricultural pests and other inconvenient animates have an unfortunate tendency to escape from selection pressures that are meant to control them. Chemotherapy, anti-viral drugs or antibiotics fail because their targets do not hold still, but evolve resistance. A major problem in developing vaccines is that microbes evolve and escape from immune responses. The fundamental question is the following: if a genetically diverse population of replicating organisms is challenged with a selection pressure that has the potential to eradicate it, what is the probability that this population will produce escape mutants? Here, we use multi-type branching processes to describe the accumulation of mutants in independent lineages. We calculate escape dynamics for arbitrary mutation networks and fitness landscapes. Our theory shows how to estimate the probability of success or failure of biomedical intervention, such as drug treatment and vaccination, against rapidly evolving organisms."
},
{
"pmid": "14643190",
"title": "Evolutionary dynamics of invasion and escape.",
"abstract": "Whenever life wants to invade a new habitat or escape from a lethal selection pressure, some mutations may be necessary to yield sustainable replication. We imagine situations like (i) a parasite infecting a new host, (ii) a species trying to invade a new ecological niche, (iii) cancer cells escaping from chemotherapy, (iv) viruses or microbes evading anti-microbial therapy, and also (v) the repeated attempts of combinatorial chemistry in the very beginning of life to produce self-replicating molecules. All such seemingly unrelated situations have a common structure in terms of Darwinian dynamics: a replicator with a basic reproductive ratio less than one attempts to find some mutations that allow indefinite survival. We develop a general theory, based on multitype branching processes, to describe the evolutionary dynamics of invasion and escape."
},
{
"pmid": "17350060",
"title": "Dynamics of escape mutants.",
"abstract": "We use multi-type Galton-Watson branching processes to model the evolution of populations that, due to a small reproductive ratio of the individuals, are doomed to extinction. Yet, mutations occurring during the reproduction process, may lead to the appearance of new types of individuals that are able to escape extinction. We provide examples of such populations in medical, biological and environmental contexts and give results on (i) the probability of escape/extinction, (ii) the distribution of the waiting time to produce the first individual whose lineage does not get extinct and (iii) the distribution of the time it takes for the number of mutants to reach a high level. Special attention is dedicated to the case where the probability of mutation is very small and approximations for (i)-(iii) are derived."
}
] |
Scientific Reports | 31024057 | PMC6484004 | 10.1038/s41598-019-43073-1 | Monte Carlo investigation of the characteristics of radioactive beams for heavy ion therapy | This work presents a simulation study evaluating relative biological effectiveness at 10% survival fraction (RBE10) of several different positron-emitting radionuclides in heavy ion treatment systems, and comparing these to the RBE10s of their non-radioactive counterparts. RBE10 is evaluated as a function of depth for three positron-emitting radioactive ion beams (10C, 11C and 15O) and two stable ion beams (12C and 16O) using the modified microdosimetric kinetic model (MKM) in a heterogeneous skull phantom subject to a rectangular 50 mm × 50 mm × 60 mm spread out Bragg peak. We demonstrate that the RBE10 of the positron-emitting radioactive beams is almost identical to the corresponding stable isotopes. The potential improvement in PET quality assurance image quality which is obtained when using radioactive beams is evaluated by comparing the signal to background ratios of positron annihilations at different intra- and post-irradiation time points. Finally, the incidental dose to the patient resulting from the use of radioactive beams is also quantified and shown to be negligible. | The use of positron-emitting radioisotopes for heavy ion therapy has been investigated by a number of authors. In 2001, Urakabe et al. demonstrated that a positron-emitting 11C scanned spot beam could be directly used as the therapeutic agent29. However, the estimate of RBE10 used to obtain a flat biological dose was based on an extrapolation of previously-reported results for 12C in water, which was assumed to extend to human tissue30. Iseki et al. at NIRS used low-intensity monoenergetic 10C probe beams with between 10^4 and 10^5 particles per spill to estimate the depth of the therapeutic 12C beam's Bragg peak, while keeping the dose received during the range measurement under 100 mGyE (a few percent of therapeutic dose)31. RBE of the radioactive beam was estimated via simulation using the one-dimensional HIBRAC beam transportation code from Sihver et al. combined with Kanai's RBE model30,32,33. However, this work only considered monoenergetic 11C ion beams, and ignored the effects of low-LET fragmentation products, which resulted in an overestimation of the RBE for 11C. Augusto et al. used the FLUKA Monte Carlo toolkit to investigate the use of 11C beams either alone or in conjunction with 12C34. It was found that for beams with equivalent energy per nucleon incident on the same water phantom, 11C and 12C beams produce very similar fragmentation products, with the main differences being the relative yield of helium ions and several boron isotopes. While this study demonstrated the potential of using 11C in heavy ion therapy, it only considered monoenergetic beams of 11C at a fixed depth (100 mm) in a homogeneous water phantom. The composition of the phantom, the isotope and the specific beam energy are important factors affecting the fragmentation process and the spatial distribution of positron-emitting nuclei which results35,36. These works demonstrate the potential for using positron-emitting beams both for radiotherapy and for range verification.
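As an aside for readers who want to follow the RBE10 calculation described in the remainder of this section, the short Python sketch below works through equations (1)–(3) numerically. It is a minimal illustration only: the HSG cell parameters (α0 = 0.13 Gy−1, β0 = 0.05 Gy−2, rd = 0.42 μm, ρ = 1 g/cm3, D10,X-ray = 5 Gy) are those quoted in the text below, while the saturation parameter y0 = 150 keV/μm, the toy lineal-energy spectrum, the unit-conversion steps and all function names are assumptions introduced here for illustration and are not taken from the paper.

```python
# Minimal sketch of an RBE10 calculation with the modified microdosimetric
# kinetic model (equations (1)-(3) in the text). Illustrative only: y0 and the
# toy spectrum are assumed values, not taken from the paper.
import numpy as np

# Parameters quoted in the text for HSG tumour cells
ALPHA_0 = 0.13      # Gy^-1
BETA = 0.05         # Gy^-2 (beta = beta_0 in the modified MKM)
R_D = 0.42e-6       # m, radius of the sub-cellular domain (0.42 um)
RHO = 1.0e3         # kg/m^3, domain density (1 g/cm^3)
D10_XRAY = 5.0      # Gy, 200 kVp X-ray dose giving 10% survival
# Assumed saturation parameter (a commonly used value, not quoted in this text)
Y0 = 150.0          # keV/um

KEV_PER_UM_TO_J_PER_M = 1.602e-16 / 1.0e-6  # converts keV/um to J/m


def y_star(y, f):
    """Saturation-corrected dose-mean lineal energy y*, eq. (1), in keV/um.

    y : lineal-energy grid in keV/um (uniform spacing assumed, so the bin
        width cancels in the ratio of the two integrals); f : spectrum f(y).
    """
    num = np.sum((1.0 - np.exp(-(y / Y0) ** 2)) * f)
    den = np.sum(y * f)
    return Y0 ** 2 * num / den


def alpha_mkm(y, f):
    """Radiation sensitivity coefficient alpha, eq. (2), in Gy^-1."""
    # Specific energy z = y* / (rho * pi * r_d^2), converted from keV/um to Gy
    z = y_star(y, f) * KEV_PER_UM_TO_J_PER_M / (RHO * np.pi * R_D ** 2)
    return ALPHA_0 + BETA * z


def rbe10(y, f):
    """RBE at 10% survival, eq. (3)."""
    a = alpha_mkm(y, f)
    # Ion dose giving 10% survival, from the linear-quadratic survival model
    d10_ion = (np.sqrt(a ** 2 - 4.0 * BETA * np.log(0.1)) - a) / (2.0 * BETA)
    return D10_XRAY / d10_ion


if __name__ == "__main__":
    # Toy spectrum: narrow Gaussian centred on 50 keV/um (purely illustrative)
    y_grid = np.linspace(1.0, 200.0, 400)
    f_y = np.exp(-0.5 * ((y_grid - 50.0) / 5.0) ** 2)
    print(f"RBE10 ~ {rbe10(y_grid, f_y):.2f}")
```

Under these assumptions the script prints an RBE10 of roughly 2 for a spectrum centred near 50 keV/μm; the number is meant only to show the mechanics of equations (1)–(3), not to reproduce the results reported in the paper.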
However, in order to conclusively establish their clinical utility, it is necessary to quantify their RBE and evaluate the quality of the resulting PET image in a clinically relevant configuration, through the use of heterogeneous tissue-equivalent phantoms and polyenergetic ion beams.

Relative biological effectiveness (RBE) is an empirically-derived ratio which can be used to predict the physical dose of a specific type of radiation which will result in the same cellular survival fraction as a reference dose (typically a 200 kVp X-ray beam)37,38. The complex dependencies of RBE on the energy and type of radiation, as well as the location of the target and the specific tissue types present, require the use of biophysical methods for accurate theoretical estimation of RBE39–41. The Microdosimetric Kinetic Model (MKM), proposed by Hawkins et al., is a widely-used method for estimating RBE in which the microdosimetric spectrum f(y) is measured through the use of a tissue-equivalent proportional counter (TEPC)24. It was subsequently extended by Kase et al. to relate the saturation-corrected dose-mean lineal energy ȳ* to the radiation sensitivity coefficient α of the linear quadratic model (LQM; α and β are measured in units of Gy−1 and Gy−2, respectively), such that the method can be applied to therapeutic heavy ion beams25,26,42. This modified MKM has been extensively validated for carbon ion therapy, and also extended to proton and helium ion therapy25,26,42–44.

The RBE10 for an ion beam, defined as the ratio of the physical dose from a 200 kVp X-ray beam required to achieve a cellular survival fraction of 10% (D(10,R)) to the ion beam dose resulting in the same cell survival fraction, can be derived from the microdosimetric spectrum f(y) using (1), (2) and (3):

(1) $$y^{*} = y_{0}^{2}\,\frac{\int \left(1 - e^{-(y/y_{0})^{2}}\right) f(y)\,dy}{\int y\,f(y)\,dy}$$

(2) $$\alpha = \alpha_{0} + \frac{\beta_{0}}{\rho \pi r_{d}^{2}}\,y^{*}$$

(3) $$RBE_{10} = \frac{2\beta D_{10,\text{X-ray}}}{\sqrt{\alpha^{2} - 4\beta \log(0.1)} - \alpha}$$

For human salivary gland (HSG) tumour cells, the dose resulting in a survival fraction of 10%, D(10,R), is 5 Gy for 200 kVp X-rays; the LQM radiation sensitivity coefficients are α0 = 0.13 Gy−1 and β0 = 0.05 Gy−2. ρ and rd are the density and radius of the sub-cellular domain, assumed to be 1 g/cm3 and 0.42 μm, respectively25.

In this work, RBE10 is estimated using an extension to the modified MKM proposed by Bolst et al., whereby the mean path length ⟨l_path⟩ of the charged particles that cross the sensitive volume is introduced to account for the directionality of the radiation field when deriving the microdosimetric spectrum f(y) in a non-spherical sensitive volume, as opposed to the average chord length used in isotropic fields27,28.

Although estimates of the RBE10 for radioactive beams have been reported previously, these have been calculated using simplified analytic models with parameters interpolated/extrapolated from limited experimental data from beams of stable isotopes in homogeneous targets45,46. The assumption that the RBE of a radioactive ion species can be estimated from that of its stable analog has not been previously demonstrated in the literature. | [
"19949433",
"12518985",
"17544003",
"19810487",
"22471947",
"26863938",
"26932062",
"12816524",
"17007551",
"23179376",
"28151733",
"8989373",
"15357191",
"15656273",
"19060357",
"10597910",
"15798313",
"10219815",
"25533747"
] | [
{
"pmid": "19949433",
"title": "Charged particles in radiation oncology.",
"abstract": "Radiotherapy is one of the most common and effective therapies for cancer. Generally, patients are treated with X-rays produced by electron accelerators. Many years ago, researchers proposed that high-energy charged particles could be used for this purpose, owing to their physical and radiobiological advantages compared with X-rays. Particle therapy is an emerging technique in radiotherapy. Protons and carbon ions have been used for treating many different solid cancers, and several new centers with large accelerators are under construction. Debate continues on the cost:benefit ratio of this technique, that is, on whether the high costs of accelerators and beam delivery in particle therapy are justified by a clear clinical advantage. This Review considers the present clinical results in the field, and identifies and discusses the research questions that have resulted with this technique."
},
{
"pmid": "12518985",
"title": "Relative biological effectiveness of 290 MeV/u carbon ions for the growth delay of a radioresistant murine fibrosarcoma.",
"abstract": "The relative biological effectiveness (RBE) for animal tumors treated with fractionated doses of 290 MeV/u carbon ions was studied. The growth delay of NFSa fibrosarcoma in mice was investigated following various daily doses given with carbon ions or those given with cesium gamma-rays, and the RBE was determined. Animal tumors were irradiated with carbon ions of various LET (linear energy transfer) in a 6-cm SOBP (spread-out Bragg peak), and the isoeffect doses; i.e. the dose necessary to induce a tumor growth delay of 15 days were studied. The iso-effect dose for carbon ions of 14 and 20 keV/microm increased with an increase in the number of fractions up to 4 fractions. The increase in the isoeffect dose with the fraction number was small for carbon ions of 44 keV/microm, and was not observed for 74 keV/microm. The alpha and beta values of the linear-quadratic model for the radiation dose-cell survival relationship were calculated by the Fe-plot analysis method. The alpha values increased linearly with an increase in the LET, while the beta values were independent of the LET. The alpha/beta ratio was 129 +/- 10 Gy for gamma-rays, and increased with an increase in the LET, reaching 475 +/- 168 Gy for 74 keV/microm carbon ions. The RBE for carbon ions relative to Cs-137 gamma-rays increased with the LET. The RBE values for 14 and 20 keV/microm carbon ions were 1.4 and independent of the number of fractions, while those for 44 and 74 keV/microm increased from 1.8 to 2.3 and from 2.4 to 3.0, respectively, when the number of fractions increased from 1 to 4. Increasing the number of fractions further from 4 to 6 was not associated with an increase in the RBE. These results together with our earlier study on the skin reaction support the use of an RBE of 3.0 in clinical trials of 80 keV/microm carbon beams. The RBE values for low doses of carbon beams were also considered."
},
{
"pmid": "17544003",
"title": "Patient study of in vivo verification of beam delivery and range, using positron emission tomography and computed tomography imaging after proton therapy.",
"abstract": "PURPOSE\nTo investigate the feasibility and value of positron emission tomography and computed tomography (PET/CT) for treatment verification after proton radiotherapy.\n\n\nMETHODS AND MATERIALS\nThis study included 9 patients with tumors in the cranial base, spine, orbit, and eye. Total doses of 1.8-3 GyE and 10 GyE (for an ocular melanoma) per fraction were delivered in 1 or 2 fields. Imaging was performed with a commercial PET/CT scanner for 30 min, starting within 20 min after treatment. The same treatment immobilization device was used during imaging for all but 2 patients. Measured PET/CT images were coregistered to the planning CT and compared with the corresponding PET expectation, obtained from CT-based Monte Carlo calculations complemented by functional information. For the ocular case, treatment position was approximately replicated, and spatial correlation was deduced from reference clips visible in both the planning radiographs and imaging CT. Here, the expected PET image was obtained from an analytical model.\n\n\nRESULTS\nGood spatial correlation and quantitative agreement within 30% were found between the measured and expected activity. For head-and-neck patients, the beam range could be verified with an accuracy of 1-2 mm in well-coregistered bony structures. Low spine and eye sites indicated the need for better fixation and coregistration methods. An analysis of activity decay revealed as tissue-effective half-lives of 800-1,150 s.\n\n\nCONCLUSIONS\nThis study demonstrates the feasibility of postradiation PET/CT for in vivo treatment verification. It also indicates some technological and methodological improvements needed for optimal clinical application."
},
{
"pmid": "19810487",
"title": "In vivo verification of proton beam path by using post-treatment PET/CT imaging.",
"abstract": "PURPOSE\nThe purpose of this study is to establish the in vivo verification of proton beam path by using proton-activated positron emission distributions.\n\n\nMETHODS\nA total of 50 PET/CT imaging studies were performed on ten prostate cancer patients immediately after daily proton therapy treatment through a single lateral portal. The PET/CT and planning CT were registered by matching the pelvic bones, and the beam path of delivered protons was defined in vivo by the positron emission distribution seen only within the pelvic bones, referred to as the PET-defined beam path. Because of the patient position correction at each fraction, the marker-defined beam path, determined by the centroid of implanted markers seen in the posttreatment (post-Tx) CT, is used for the planned beam path. The angular variation and discordance between the PET- and marker-defined paths were derived to investigate the intrafraction prostate motion. For studies with large discordance, the relative location between the centroid and pelvic bones seen in the post-Tx CT was examined. The PET/CT studies are categorized for distinguishing the prostate motion that occurred before or after beam delivery. The post-PET CT was acquired after PET imaging to investigate prostate motion due to physiological changes during the extended PET acquisition.\n\n\nRESULTS\nThe less than 2 degrees of angular variation indicates that the patient roll was minimal within the immobilization device. Thirty of the 50 studies with small discordance, referred as good cases, show a consistent alignment between the field edges and the positron emission distributions from the entrance to the distal edge. For those good cases, average displacements are 0.6 and 1.3 mm along the anterior-posterior (D(AP)) and superior-inferior (D(SI)) directions, respectively, with 1.6 mm standard deviations in both directions. For the remaining 20 studies demonstrating a large discordance (more than 6 mm in either D(AP) or D(SI)), 13 studies, referred as motion-after-Tx cases, also show large misalignment between the field edge and the positron emission distribution in lipomatous tissues around the prostate. These motion-after-Tx cases correspond to patients with large changes in volume of rectal gas between the post-Tx and the post-PET CTs. The standard deviations for D(AP) and D(SI) are 5.0 and 3.0 mm, respectively, for these motion-after-Tx cases. The final seven studies, referred to as position-error cases, which had a large discordance but no misalignment, were found to have deviations of 4.6 and 3.6 mm in D(AP) and D(SI), respectively. The position-error cases correspond to a large discrepancy on the relative location between the centroid and pelvic bones seen in post-Tx CT and recorded x-ray radiographs.\n\n\nCONCLUSIONS\nSystematic analyses of proton-activated positron emission distributions provide patient-specific information on prostate motion (sigmaM) and patient position variability (sigmap) during daily proton beam delivery. The less than 2 mm of displacement variations in the good cases indicates that population-based values of sigmap and sigmaM, used in margin algorithms for treatment planning at the authors' institution are valid for the majority of cases. However, a small fraction of PET/CT studies (approximately 14%) with -4 mm displacement variations may require different margins. Such data are useful in establishing patient-specific planning target volume margins."
},
{
"pmid": "22471947",
"title": "Monitoring of patients treated with particle therapy using positron-emission-tomography (PET): the MIRANDA study.",
"abstract": "BACKGROUND\nThe purpose of this clinical study is to investigate the clinical feasibility and effectiveness of offline Positron-Emission-Tomography (PET) quality assurance for promoting the accuracy of proton and carbon ion beam therapy.\n\n\nMETHODS/DESIGN\nA total of 240 patients will be recruited, evenly sampled among different analysis groups including tumors of the brain, skull base, head and neck region, upper gastrointestinal tract including the liver, lower gastrointestinal tract, prostate and pelvic region. From the comparison of the measured activity with the planned dose and its corresponding simulated activity distribution, conclusions on the delivered treatment will be inferred and, in case of significant deviations, correction strategies will be elaborated.\n\n\nDISCUSSION\nThe investigated patients are expected to benefit from this study, since in case of detected deviations between planned and actual treatment delivery a proper intervention (e.g., correction) could be performed in a subsequent irradiation fraction. In this way, an overall better treatment could be achieved than without any in-vivo verification. Moreover, site-specific patient-population information on the precision of the ion range at HIT might enable improvement of the CT-range calibration curve as well as safe reduction of the treatment margins to promote enhanced treatment plan conformality and dose escalation for full clinical exploitation of the promises of ion beam therapy.\n\n\nTRIAL REGISTRATION\nNCT01528670."
},
{
"pmid": "26863938",
"title": "Washout rate in rat brain irradiated by a (11)C beam after acetazolamide loading using a small single-ring OpenPET prototype.",
"abstract": "In dose verification techniques of particle therapies based on in-beam positron emission tomography (PET), the causes of washout of positron emitters by physiological effects should be clarified to correct washout for accurate verification. As well, the quantitative washout rate has a potential usefulness as a diagnostic index which should be explored. Therefore, we measured washout rates of rat brain after vasodilator acetazolamide loading to investigate the possible effects of blood flow on washout. Six rat brains were irradiated by a radioisotope (11)C beam and time activity curves on the whole brains were obtained with a small single-ring OpenPET prototype. Then, washout rates were calculated with the Mizuno model, where two washout rates (k 2m and k 2s ) were assumed, and a two-compartment model including efflux from tissue to blood (k 2) and influx (k 3) and efflux (k 4) between the two tissue compartments. Before the irradiations, we used laser-Doppler flowmetry to confirm that acetazolamide increased cerebral blood flow (CBF) of a rat. We compared means of k 2m , k 2s and k 2, k 3 and k 4 without acetazolamide loading (Rest) and with acetazolamide loading (ACZ). For all k values, ACZ values were lower than Rest values. In other words, though CBF increased, washout rates were decreased. This may be attributed to the implanted (11)C reacting to form (11)CO2. Because acetazolamide increased the concentration of CO2 in brain, suppressed diffusion of (11)CO2 and decomposition of (11)CO2 into ions were prevented."
},
{
"pmid": "26932062",
"title": "A singly charged ion source for radioactive ¹¹C ion acceleration.",
"abstract": "A new singly charged ion source using electron impact ionization has been developed to realize an isotope separation on-line system for simultaneous positron emission tomography imaging and heavy-ion cancer therapy using radioactive (11)C ion beams. Low-energy electron beams are used in the electron impact ion source to produce singly charged ions. Ionization efficiency was calculated in order to decide the geometric parameters of the ion source and to determine the required electron emission current for obtaining high ionization efficiency. Based on these considerations, the singly charged ion source was designed and fabricated. In testing, the fabricated ion source was found to have favorable performance as a singly charged ion source."
},
{
"pmid": "12816524",
"title": "A microdosimetric-kinetic model for the effect of non-Poisson distribution of lethal lesions on the variation of RBE with LET.",
"abstract": "The microdosimetric-kinetic (MK) model for cell killing by ionizing radiation is summarized. An equation based on the MK model is presented which gives the dependence of the relative biological effectiveness in the limit of zero dose (RBE1) on the linear energy transfer (LET). The relationship coincides with the linear relationship of RBE1 and LET observed for low LET, which is characteristic of a Poisson distribution of lethal lesions among the irradiated cells. It incorporates the effect of deviation from the Poisson distribution at higher LET. This causes RBE1 to be less than indicated by extrapolation of the linear relationship to higher LET, and to pass through a maximum in the range of LET of 50 to 200 keV per micrometer. The relationship is compared with several experimental studies from the literature. It is shown to approximately fit their results with a reasonable choice for the value of a cross-sectional area related to the morphology and ultrastructure of the cell nucleus. The model and the experiments examined indicate that the more sensitive cells are to radiation at low LET, the lower will be the maximum in RBE they attain as LET increases. An equation that portrays the ratio of the sensitivity of a pair of cell types as a function of LET is presented. Implications for radiotherapy with high-LET radiation are discussed."
},
{
"pmid": "17007551",
"title": "Microdosimetric measurements and estimation of human cell survival for heavy-ion beams.",
"abstract": "The microdosimetric spectra for high-energy beams of photons and proton, helium, carbon, neon, silicon and iron ions (LET = 0.5-880 keV/microm) were measured with a spherical-walled tissue-equivalent proportional counter at various depths in a plastic phantom. Survival curves for human tumor cells were also obtained under the same conditions. Then the survival curves were compared with those estimated by a microdosimetric model based on the spectra and the biological parameters for each cell line. The estimated alpha terms of the liner-quadratic model with a fixed beta value reproduced the experimental results for cell irradiation for ion beams with LETs of less than 450 keV/microm, except in the region near the distal peak."
},
{
"pmid": "23179376",
"title": "Microdosimetric calculation of relative biological effectiveness for design of therapeutic proton beams.",
"abstract": "The authors attempt to establish the relative biological effectiveness (RBE) calculation for designing therapeutic proton beams on the basis of microdosimetry. The tissue-equivalent proportional counter (TEPC) was used to measure microdosimetric lineal energy spectra for proton beams at various depths in a water phantom. An RBE-weighted absorbed dose is defined as an absorbed dose multiplied by an RBE for cell death of human salivary gland (HSG) tumor cells in this study. The RBE values were calculated by a modified microdosimetric kinetic model using the biological parameters for HSG tumor cells. The calculated RBE distributions showed a gradual increase to about 1cm short of a beam range and a steep increase around the beam range for both the mono-energetic and spread-out Bragg peak (SOBP) proton beams. The calculated RBE values were partially compared with a biological experiment in which the HSG tumor cells were irradiated by the SOBP beam except around the distal end. The RBE-weighted absorbed dose distribution for the SOBP beam was derived from the measured spectra for the mono-energetic beam by a mixing calculation, and it was confirmed that it agreed well with that directly derived from the microdosimetric spectra measured in the SOBP beam. The absorbed dose distributions to planarize the RBE-weighted absorbed dose were calculated in consideration of the RBE dependence on the prescribed absorbed dose and cellular radio-sensitivity. The results show that the microdosimetric measurement for the mono-energetic proton beam is also useful for designing RBE-weighted absorbed dose distributions for range-modulated proton beams."
},
{
"pmid": "28151733",
"title": "Correction factors to convert microdosimetry measurements in silicon to tissue in 12C ion therapy.",
"abstract": "Silicon microdosimetry is a promising technology for heavy ion therapy (HIT) quality assurance, because of its sub-mm spatial resolution and capability to determine radiation effects at a cellular level in a mixed radiation field. A drawback of silicon is not being tissue-equivalent, thus the need to convert the detector response obtained in silicon to tissue. This paper presents a method for converting silicon microdosimetric spectra to tissue for a therapeutic 12C beam, based on Monte Carlo simulations. The energy deposition spectra in a 10 μm sized silicon cylindrical sensitive volume (SV) were found to be equivalent to those measured in a tissue SV, with the same shape, but with dimensions scaled by a factor κ equal to 0.57 and 0.54 for muscle and water, respectively. A low energy correction factor was determined to account for the enhanced response in silicon at low energy depositions, produced by electrons. The concept of the mean path length [Formula: see text] to calculate the lineal energy was introduced as an alternative to the mean chord length [Formula: see text] because it was found that adopting Cauchy's formula for the [Formula: see text] was not appropriate for the radiation field typical of HIT as it is very directional. [Formula: see text] can be determined based on the peak of the lineal energy distribution produced by the incident carbon beam. Furthermore it was demonstrated that the thickness of the SV along the direction of the incident 12C ion beam can be adopted as [Formula: see text]. The tissue equivalence conversion method and [Formula: see text] were adopted to determine the RBE10, calculated using a modified microdosimetric kinetic model, applied to the microdosimetric spectra resulting from the simulation study. Comparison of the RBE10 along the Bragg peak to experimental TEPC measurements at HIMAC, NIRS, showed good agreement. Such agreement demonstrates the validity of the developed tissue equivalence correction factors and of the determination of [Formula: see text]."
},
{
"pmid": "8989373",
"title": "Irradiation of mixed beam and design of spread-out Bragg peak for heavy-ion radiotherapy.",
"abstract": "Data on cellular inactivation resulting from mixed irradiation with charged-particle beams of different linear energy transfer (LET) are needed to design a spread-out Bragg peak (SOBP) for heavy-ion radiotherapy. The present study was designed to study the relationship between the physical (LET) and biological (cell killing) properties by using different monoenergetic beams of 3He, 4He and 12C ions (12 and 18.5 MeV/nucleon) and to attempt to apply the experimental data in the design of the SOBP (3 cm width) with a 135 MeV/nucleon carbon beam. Experimental studies of the physical and biological measurements using sequentially combined irradiation were carried out to establish a close relationship between LET and cell inactivation. The results indicated that the dose-cell survival relationship for the combined high- and low-LET beams could be described by a linear-quadratic (LQ) model, in which new coefficients alpha and beta for the combined irradiation were obtained in terms of dose-averaged alpha and square root of beta for the single irradiation with monoenergetic beams. Based on the relationship obtained, the actual SOBP designed for giving a uniform biological effect at 3 cm depth was tested with the 135 MeV/nucleon carbon beam. The results of measurements of both physical (LET) and biological (90% level of cell killing, etc.) properties clearly demonstrated that the SOBP successfully and satisfactorily retained its high dose localization and uniform depth distribution of the biological effect. Based on the application of these results, more useful refinement and development can be expected for the heavy-ion radiotherapy currently under way at the National Institute of Radiological Sciences, Japan."
},
{
"pmid": "15357191",
"title": "Range verification system using positron emitting beams for heavy-ion radiotherapy.",
"abstract": "It is desirable to reduce range ambiguities in treatment planning for making full use of the major advantage of heavy-ion radiotherapy, that is, good dose localization. A range verification system using positron emitting beams has been developed to verify the ranges in patients directly. The performance of the system was evaluated in beam experiments to confirm the designed properties. It was shown that a 10C beam could be used as a probing beam for range verification when measuring beam properties. Parametric measurements indicated the beam size and the momentum acceptance and the target volume did not influence range verification significantly. It was found that the range could be measured within an analysis uncertainty of +/-0.3 mm under the condition of 2.7 x 10(5) particle irradiation, corresponding to a peak dose of 96 mGyE (gray-equivalent dose), in a 150 mm diameter spherical polymethyl methacrylate phantom which simulated a human head."
},
{
"pmid": "15656273",
"title": "The modelling of positron emitter production and PET imaging during carbon ion therapy.",
"abstract": "At the carbon ion therapy facility of GSI Darmstadt in-beam positron emission tomography (PET) is used for imaging the beta+-activity distributions which are produced via nuclear fragmentation reactions between the carbon ions and the atomic nuclei of the irradiated tissue. On the basis of these PET images the quality of the irradiation, i.e. the position of the field, the particle range in vivo and even local deviations between the planned and the applied dose distribution, can be evaluated. However, for such an evaluation the measured beta+-activity distributions have to be compared with those predicted from the treatment plan. The predictions are calculated as follows: a Monte Carlo event generator produces list mode data files of the same format as the PET scanner in order to be processed like the measured ones for tomographic reconstruction. The event generator models the whole chain from the interaction of the projectiles with the target, i.e. their stopping and nuclear reactions, the production and the decay of positron emitters, the motion of the positrons as well as the propagation and the detection of the annihilation photons. The steps of the modelling, the experimental validation and clinical implementation are presented."
},
{
"pmid": "19060357",
"title": "Nd:YAG surgical laser effects in canine prostate tissue: temperature and damage distribution.",
"abstract": "An in vitro model was used to predict short-term, laser-induced, thermal damage in canine prostate tissue. Canine prostate tissue samples were equipped with thermocouple probes to measure tissue temperature at 3, 6, 9 and 12 mm depths. The tissue surface was irradiated with a Nd:YAG laser in contact or non-contact mode for up to 20 s, using powers from 5 to 20 W. Prediction of thermal damage using Arrhenius theory was discussed and compared to the in vitro damage threshold, determined by histological evaluation. The threshold temperature for acute thermal tissue damage was 69 +/- 6 degrees C (means +/- SD), irrespective of exposure time. Contact mode laser application caused vaporization of tissue, leaving a crater underneath the fiber tip. The mean extent of tissue damage underneath the vaporization crater floor was 0.9 +/- 0.6 mm after 5, 10 or 20 s of contact mode laser irradiation at 10 W, whereas 20 W non-contact exposure up to 20 s causes up to 4.7 +/- 0.2 mm coagulation necrosis. It was concluded that short-term acute thermal tissue damage can be comprehensively described by a single threshold temperature."
},
{
"pmid": "10597910",
"title": "RBE for carbon track-segment irradiation in cell lines of differing repair capacity.",
"abstract": "PURPOSE\nThe LET position of the RBE maximum and its dependence on the cellular repair capacity was determined for carbon ions. Hamster cell lines of differing repair capacity were irradiated with monoenergetic carbon ions. RBE values for cell inactivation at different survival levels were determined and the differences in the RBE-LET patterns were compared with the individual sensitivity to photon irradiation of the different cell lines.\n\n\nMATERIAL AND METHODS\nThree hamster cell lines, the wild-type cell lines V79 and CHO-K1 and the radiosensitive CHO mutant xrs5, were irradiated with carbon ions of different energies (2.4-266.4 MeV/u) and LET values (13.7-482.7 keV/microm) and inactivation data were measured in comparison to 250 kV x-rays.\n\n\nRESULTS\nFor the repair-proficient cell lines a RBE maximum was found at LET values between 150 and 200 keV/microm. For the repair-deficient cell line the RBE failed to show a maximum and decreased continuously for LET values above 100 keV/microm.\n\n\nCONCLUSIONS\nThe carbon RBE LET relationship for inactivation is shifted to higher LET values compared with protons and alpha-particles. RBE correlated with the repair capacity of the cells."
},
{
"pmid": "15798313",
"title": "Quantitative comparison of suitability of various beams for range monitoring with induced beta+ activity in hadron therapy.",
"abstract": "In radiation therapy with hadron beams, it is important to evaluate the range of incident ions and the deposited dose distribution in a patient body for the effective utilization of such properties as the dose concentration and the biological effect around the Bragg peak. However, there is some ambiguity in determining this range because of a conversion error from the x-ray CT number to the charged particle range. This is because the CT number is related to x-ray absorption coefficients, while the ion range is determined by the electron density of the substance. Using positron emitters produced in the patient body through fragmentation reactions during the irradiation has been proposed to overcome this problem. The activity distribution in the patient body can be deduced by detecting pairs of annihilation gamma rays emitted from the positron emitters, and information about the range of incident ions can be obtained. In this paper, we propose a quantitative comparison method to evaluate the mean range of incident ions and monitor the activity distribution related to the deposited dose distribution. The effectiveness of the method was demonstrated by evaluating the range of incident ions using the maximum likelihood estimation (MLE) method and Fisher's information was calculated under realistic conditions for irradiations with several kinds of ions. From the calculated Fisher's information, we compared the relative advantages of initial beams to determine the range of incident ions. The (16)O irradiation gave the most information among the stable heavy ions when we measured the induced activity for 500 s and 60 s just after the irradiation. Therefore, under these conditions, we concluded that the (16)O beam was the optimum beam to monitor the activity distribution and to evaluate the range. On the other hand, if the positron emitters were injected directly as a therapeutic beam, the (15)O irradiation gave the most information. Although the relative advantages of initial beams as well as the measured activity distributions slightly varied according to the measurement conditions, comparisons could be made for different conditions by using Fisher's information."
},
{
"pmid": "10219815",
"title": "Biophysical characteristics of HIMAC clinical irradiation system for heavy-ion radiation therapy.",
"abstract": "PURPOSE\nThe irradiation system and biophysical characteristics of carbon beams are examined regarding radiation therapy.\n\n\nMETHODS AND MATERIALS\nAn irradiation system was developed for heavy-ion radiotherapy. Wobbler magnets and a scatterer were used for flattening the radiation field. A patient-positioning system using X ray and image intensifiers was also installed in the irradiation system. The depth-dose distributions of the carbon beams were modified to make a spread-out Bragg peak, which was designed based on the biophysical characteristics of monoenergetic beams. A dosimetry system for heavy-ion radiotherapy was established to deliver heavy-ion doses safely to the patients according to the treatment planning. A carbon beam of 80 keV/microm in the spread-out Bragg peak was found to be equivalent in biological responses to the neutron beam that is produced at cyclotron facility in National Institute Radiological Sciences (NIRS) by bombarding 30-MeV deuteron beam on beryllium target. The fractionation schedule of the NIRS neutron therapy was adapted for the first clinical trials using carbon beams.\n\n\nRESULTS\nCarbon beams, 290, 350, and 400 MeV/u, were used for a clinical trial from June of 1994. Over 300 patients have already been treated by this irradiation system by the end of 1997."
},
{
"pmid": "25533747",
"title": "The contrast-to-noise ratio for image quality evaluation in scanning electron microscopy.",
"abstract": "The contrast-to-noise ratio (CNR) is presented and characterized as a tool for quantitative noise measurement of scanning electron microscope (SEM) images. Analogies as well as differences between the CNR and the widely used signal-to-noise ratio (SNR) are analytically and experimentally investigated. With respect to practical SEM image evaluation using the contrast-to-noise ratio, a standard specimen and an evaluation program are presented."
}
] |
JMIR Medical Informatics | 30977733 | PMC6484263 | 10.2196/12172 | Patient-Sharing Relations in the Treatment of Diabetes and Their Implications for Health Information Exchange: Claims-Based Analysis | Background
Health information exchange (HIE) among care providers who cooperate in the treatment of patients with diabetes mellitus (DM) has been rated as an important aspect of successful care. Patient-sharing relations among care providers permit inferences about corresponding information-sharing relations.
Objectives
This study aimed to obtain information for an effective HIE platform design to be used in DM care by analyzing patient-sharing relations among various types of care providers (ToCPs), such as hospitals, pharmacies, and different outpatient specialists, within a nationwide claims dataset of Austrian DM patients. We focus on 2 parameters derived from patient-sharing networks: (1) the principal HIE partners of the different ToCPs involved in the treatment of DM and (2) the required participation rate of ToCPs in HIE platforms for the purpose of effective communication.
Methods
The claims data of 7.9 million Austrian patients from 2006 to 2007 served as our data source. DM patients were identified by their medication. We established metrics for the quantification of our 2 parameters of interest. The principal HIE partners were derived from the portions of a care provider’s patient-sharing relations with different ToCPs. For the required participation rate of ToCPs in an HIE platform, we determine the concentration of patient-sharing relations among ToCPs. Our corresponding metrics are derived in analogy from existing work for the quantification of the continuity of care.
Results
We identified 324,703 DM patients treated by 12,226 care providers; the latter were members of 16 ToCPs. On the basis of their score for 2 of our parameters, we categorized the ToCPs into low, medium, and high. For the most important HIE partner parameter, pharmacies, general practitioners (GPs), and laboratories were the representatives of the top group, that is, our care providers shared the highest numbers of DM patients with these ToCPs. For the required participation rate of type of care provider (ToCP) in HIE platform parameter, the concentration of DM patient-sharing relations with a ToCP tended to be inversely related to the ToCP’s member count.
Conclusions
We conclude that GPs, pharmacies, and laboratories should be core members of any HIE platform that supports DM care, as they are the most important DM patient-sharing partners. We further conclude that, for implementing HIE with ToCPs who have many members (in Austria, particularly GPs and pharmacies), an HIE solution with high participation rates from these ToCPs (ideally a nationwide HIE platform with obligatory participation of the concerned ToCPs) seems essential. This will raise the probability of HIE being achieved with any care provider of these ToCPs. As chronic diseases are rising because of aging societies, we believe that our quantification of HIE requirements in the treatment of DM can provide valuable insights for many industrial countries. | Related Work
In the context of diabetes-specific HIE, several authors concentrated on the patients’ role in information sharing [21,22].
HIE between DM patients and care providers was examined with a focus on sharing medication data [23], email communication [24], and patient preferences [25].
Koopman and coworkers name a set of data elements that are relevant for outpatient family physicians and general internal medicine physicians in the treatment of DM patients [26]. However, they address neither how these data elements were identified nor which ToCPs should deliver the corresponding values.
Huebner-Bloder and coworkers identified 446 relevant data elements in the treatment of DM and grouped these in 9 categories [27]. They used a triangulation design that was mainly based on documentation analysis in 3 DM outpatient clinics and interviews with 6 internists specialized in DM. The identified data elements originate from GPs, internal medicine physicians, ophthalmologists, nephrologists, neurologists, gynecologists, psychiatrists, dermatologists, hospitals, laboratories, and from the patient’s self-monitoring. The ToCPs identified by them as being relevant in the treatment of DM thus constitute a subset of our ToCPs, except for nephrologists (who are a part of the ToCP internal medicine in our claims data) and patient-reported data (not considered in our claims data).
According to Joshy and Simmons, HIE between systems of GPs and hospitals is a crucial factor for the success of DM information systems [2]. They further state that “pharmacy data, lab measurements, retinal screening, and home blood glucose monitoring data are increasingly being linked into diabetes information systems.” This fits with this study’s results insofar as we identified GPs, pharmacies, and laboratories as high-priority HIE partners in the treatment of DM, as well as hospitals and ophthalmologists as middle-priority HIE partners. Patient-reported data were not considered in this study.
Existing HIE platforms only partly cover the information needs of care providers. According to a recent study, only 58% of the analyzed DM information systems provided HIE with hospitals, 22% provided HIE with primary care, and only 3% provided HIE with hospitals and primary care [28]. In their review of regional HIE platforms, Mäenpää et al. conclude that the latter provide inadequate access to patient-relevant clinical data [29]. Nationwide EHR systems, which are operated as national HIE platforms in 59% of the European World Health Organization member states [30], are typically restricted to the exchange of patient summaries or selected document types [8].
"28851681",
"17037973",
"23937325",
"27318070",
"22874275",
"29726443",
"20442146",
"21521213",
"29181504",
"28000151",
"26476734",
"21892946",
"859364",
"1159765",
"16595410",
"26682218",
"28059696",
"24916569",
"29588269",
"26237200",
"30341048",
"21911758",
"21893775",
"19656719"
] | [
{
"pmid": "28851681",
"title": "Is There Evidence of Cost Benefits of Electronic Medical Records, Standards, or Interoperability in Hospital Information Systems? Overview of Systematic Reviews.",
"abstract": "BACKGROUND\nElectronic health (eHealth) interventions may improve the quality of care by providing timely, accessible information about one patient or an entire population. Electronic patient care information forms the nucleus of computerized health information systems. However, interoperability among systems depends on the adoption of information standards. Additionally, investing in technology systems requires cost-effectiveness studies to ensure the sustainability of processes for stakeholders.\n\n\nOBJECTIVE\nThe objective of this study was to assess cost-effectiveness of the use of electronically available inpatient data systems, health information exchange, or standards to support interoperability among systems.\n\n\nMETHODS\nAn overview of systematic reviews was conducted, assessing the MEDLINE, Cochrane Library, LILACS, and IEEE Library databases to identify relevant studies published through February 2016. The search was supplemented by citations from the selected papers. The primary outcome sought the cost-effectiveness, and the secondary outcome was the impact on quality of care. Independent reviewers selected studies, and disagreement was resolved by consensus. The quality of the included studies was evaluated using a measurement tool to assess systematic reviews (AMSTAR).\n\n\nRESULTS\nThe primary search identified 286 papers, and two papers were manually included. A total of 211 were systematic reviews. From the 20 studies that were selected after screening the title and abstract, 14 were deemed ineligible, and six met the inclusion criteria. The interventions did not show a measurable effect on cost-effectiveness. Despite the limited number of studies, the heterogeneity of electronic systems reported, and the types of intervention in hospital routines, it was possible to identify some preliminary benefits in quality of care. Hospital information systems, along with information sharing, had the potential to improve clinical practice by reducing staff errors or incidents, improving automated harm detection, monitoring infections more effectively, and enhancing the continuity of care during physician handoffs.\n\n\nCONCLUSIONS\nThis review identified some benefits in the quality of care but did not provide evidence that the implementation of eHealth interventions had a measurable impact on cost-effectiveness in hospital settings. However, further evidence is needed to infer the impact of standards adoption or interoperability in cost benefits of health care; this in turn requires further research."
},
{
"pmid": "17037973",
"title": "Diabetes information systems: a rapidly emerging support for diabetes surveillance and care.",
"abstract": "BACKGROUND\nWith the rapid advances in information technology in the last decade, various diabetes information systems have evolved in different parts of the world. Availability of new technologies and information systems for monitoring and treating diabetes is critical to achieving recommended metabolic control, including glycosylated hemoglobin levels. The first step is to develop a registry, including a patient identifier that can link multiple data sources, which can then serve as a springboard to electronic mechanisms for practitioners to gain information on performance and results.\n\n\nOBJECTIVE\nThe aim is to review the provisions for diabetes surveillance in different parts of the world. This is a systematic review of national and regional information systems for diabetes surveillance.\n\n\nLITERATURE REVIEW\nA comprehensive review was undertaken using Medline literature review, internet search using the Google search engine, and e-mail consultation with opinion leaders. TOPICS REVIEW: National/regional-level diabetes surveillance systems in Europe, the United States, Australia/New Zealand, and Asia have been reviewed. State-of-the-art diabetes information systems linking multiple data sources, with extensive audit and feedback capabilities, have also been looked at.\n\n\nRESULTS\nNational/regional-level audit databases have been tabulated. Diabetes information systems linking multiple data sources have been described. Most of the developed countries have now implemented systems such as diabetes registers and audits for diabetes surveillance in at least some regions, if not nationally. Developing nations are beginning to recognize the need for chronic disease management.\n\n\nCONCLUSIONS\nWith the advancements in information technology, the diabetes registers have the potential to rise beyond their traditional functions with dynamic data integration, decision support, and data access, as demonstrated by some diabetes information systems. With the rapid pace of development in electronic health records and health information systems, countries that are beginning to build their health information technology infrastructure could benefit from planning and funding along these lines."
},
{
"pmid": "23937325",
"title": "Perceived facilitators and barriers in diabetes care: a qualitative study among health care professionals in the Netherlands.",
"abstract": "BACKGROUND\nThe need to understand barriers to the implementation of health care innovations in daily practice has been widely documented, but perceived facilitators and barriers in diabetes care by Dutch health care professionals remain unknown. The aim of this study was to investigate these factors among health care professionals (HCPs) using a qualitative research design.\n\n\nMETHODS\nData were collected from 18 semi-structured interviews with HCPs from all professions relevant to diabetes care. The interviews were recorded and transcribed verbatim and the data were analyzed using NVivo 8.0.\n\n\nRESULTS\nMajor facilitators were the more prominent role of the practice nurses and diabetes nurses in diabetes care, benchmarking, the Care Standard (CS) of the Netherlands Diabetes federation and multidisciplinary collaboration, although collaboration with certain professional groups (i.e. dieticians, physical therapists and pharmacists), as well as the collaboration between primary and secondary care, could still be improved. The bundled payment system for the funding of diabetes care and the role of the health insurers were perceived as major barriers within the health care system. Other important barriers were reported to be the lack of motivation among patients and the lack of awareness of lifestyle programs and prevention initiatives for diabetes patients among professionals.\n\n\nCONCLUSIONS\nOrganizational changes in diabetes care, as a result of the increased attention given to management continuity of care, have led to an increased need for multidisciplinary collaboration within and between health care sectors (e.g. public health, primary care and secondary care). To date, daily routines for shared care are still sub-optimal and improvements in facilities, such as registration systems, should be implemented to further optimize communication and exchange of information."
},
{
"pmid": "27318070",
"title": "Improving the informational continuity of care in diabetes mellitus treatment with a nationwide Shared EHR system: Estimates from Austrian claims data.",
"abstract": "PURPOSE\nShared Electronic Health Record (EHR) systems, which provide a health information exchange (HIE) within a community of care, were found to be a key enabler of informational continuity of diabetes mellitus (DM) care. Quantitative analyses of the actual contribution of Shared EHR systems to informational continuity of care are rare. The goal of this study was to quantitatively analyze (i) the degree of fragmentation of DM care in Austria as an indicator for the need for HIE, and (ii) the quantity of information (i.e. number of documents) from Austrian DM patients that would be made available by a nationwide Shared EHR system for HIE.\n\n\nMETHODS\nOur analyses are based on social security claims data of 7.9 million Austrians from 2006 and 2007. DM patients were identified through medication data and inpatient diagnoses. The degree of fragmentation was determined by the number of different healthcare providers per patient. The amount of information that would be made available by a nationwide Shared EHR system was estimated by the number of documents that would have been available to a healthcare provider if he had access to information on the patient's visits to any of the other healthcare providers. As a reference value we determined the number of locally available documents that would have originated from the patient's visits to the healthcare provider himself. We performed our analysis for two types of systems: (i) a \"comprehensive\" Shared EHR system (SEHRS), where each visit of a patient results in a single document (progress note), and (ii) the Austrian ELGA system, which allows four specific document types to be shared.\n\n\nRESULTS\n391,630 DM patients were identified, corresponding to 4.7% of the Austrian population. More than 90% of the patients received health services from more than one healthcare provider in one year. Both, the SEHRS as well as ELGA would have multiplied the available information during a patient visit in comparison to an isolated local EHR system; the median ratio of external to local medical documents was between 1:1 for a typical visit at a primary care provider (SEHRS as well as ELGA) and 39:1 (SEHRS) respectively 28:1 (ELGA) for a typical visit at a hospital.\n\n\nCONCLUSIONS\nDue to the high degree of care fragmentation, there is an obvious need for HIE for Austrian DM patients. Both, the SEHRS as well as ELGA could provide a substantial contribution to informational continuity of care in Austrian DM treatment. Hospitals and specialists would have gained the most amount of external information, primary care providers and pharmacies would have at least doubled their available information. Despite being the most important potential feeders of a national Shared EHR system according to our analysis, primary care providers will not tap their full corresponding potential under the current implementation scenario of ELGA."
},
{
"pmid": "22874275",
"title": "Fragmentation of diabetes treatment in Austria - an indicator for the need for shared electronic health record systems.",
"abstract": "Shared electronic health record (EHR) systems aim to support continuity of care within the joint treatment of a patient by a community of cooperating care providers. By analyzing the fragmentation of care of Austrian diabetes patients, we aim to find evidence whether there is actually a need for shared EHR systems in this context. Our results show that almost three quarters of the observed diabetes patients visit two or more different care providers during their diabetes-related visits. Overall, our findings strongly support the demand for shared EHR systems for the treatment of diabetes patients."
},
{
"pmid": "29726443",
"title": "Towards Designing a Secure Exchange Platform for Diabetes Monitoring and Therapy.",
"abstract": "BACKGROUND\nDiabetes mellitus is one of the most prominent examples of chronic conditions that requires an active patient self-management and a network of specialists.\n\n\nOBJECTIVES\nThe aim of this study was to analyze the user and legal requirements and develop a rough technology concept for a secure and patient-centered exchange platform.\n\n\nMETHODS\nTo this end, 14 experts representing different stakeholders were interviewed and took part in group discussions at three workshops, the pertinent literature and legal texts were analyzed.\n\n\nRESULTS\nThe user requirements embraced a comprehensive set of use cases and the demand for \"one platform for all\" which is underlined by the right for data portability according to new regulations. In order to meet these requirements a distributed ledger technology was proposed.\n\n\nCONCLUSION\nWe will therefore focus on a patient-centered application that showcases self-management and exchange with health specialists."
},
{
"pmid": "20442146",
"title": "Health information exchange: persistent challenges and new strategies.",
"abstract": "Recent federal policies and actions support the adoption of health information exchange (HIE) in order to improve healthcare by addressing fragmented personal health information. However, concerted efforts at facilitating HIE have existed for over two decades in this country. The lessons of these experiences include a recurrence of barriers and challenges beyond those associated with technology. Without new strategies, the current support and methods of facilitating HIE may not address these barriers."
},
{
"pmid": "21521213",
"title": "Mapping physician networks with self-reported and administrative data.",
"abstract": "OBJECTIVE\nTo assess whether connections between physicians based on shared patients in administrative data correspond with professional relationships between physicians.\n\n\nDATA SOURCES/STUDY SETTING\nSurvey of physicians affiliated with a large academic and community physicians' organization and 2006 Medicare data from a 100 percent sample of patients in the Boston Hospital referral region.\n\n\nSTUDY DESIGN/DATA COLLECTION\nWe administered a web-based survey to 616 physicians (response rate: 63 percent) about referral and advice relationships with physician colleagues. Relationships measured by this questionnaire were compared with relationships assessed by patient sharing, measured using 2006 Medicare data. Each physician was presented with an individualized roster of physicians' names with whom they did and did not share patients based on the Medicare data.\n\n\nPRINCIPAL FINDINGS\nThe probability of two physicians having a recognized professional relationship increased with the number of Medicare patients shared, with up to 82 percent of relationships recognized with nine shared patients, overall representing a diagnostic test with an area under the receiver-operating characteristic curve of 0.73 (95 percent CI: 0.70-0.75). Primary care physicians were more likely to recognize relationships than medical or surgical specialists (p<.001).\n\n\nCONCLUSIONS\nPatient sharing identified using administrative data is an informative \"diagnostic test\" for predicting the existence of relationships between physicians. This finding validates a method that can be used for future research to map networks of physicians."
},
{
"pmid": "29181504",
"title": "Patient-Sharing Networks of Physicians and Health Care Utilization and Spending Among Medicare Beneficiaries.",
"abstract": "Importance\nPhysicians are embedded in informal networks in which they share patients, information, and behaviors.\n\n\nObjective\nWe examined the association between physician network properties and health care spending, utilization, and quality of care among Medicare beneficiaries.\n\n\nDesign, Setting, and Participants\nIn this cross-sectional study, we applied methods from social network analysis to Medicare administrative data from 2006 to 2010 for an average of 3 761 223 Medicare beneficiaries per year seen by 40 241 physicians practicing in 51 hospital referral regions (HRRs) to identify networks of physicians linked by shared patients. We improved on prior methods by restricting links to physicians who shared patients for distinct episodes of care, thereby excluding potentially spurious linkages between physicians treating common patients but for unrelated reasons. We also identified naturally occurring communities of more tightly linked physicians in each region. We examined the relationship between network properties measured in the prior year and outcomes in the subsequent year using regression models.\n\n\nMain Outcomes and Measures\nSpending on total medical services, hospital, physician, and other services, use of services, and quality of care.\n\n\nResults\nThe mean patient age across the 5 years of study was 72.3 years and 58.5% of the participants were women. The mean age across communities of included physicians was 49 years and approximately 78% were men. Mean total annual spending per patient was $10 051. Total spending was higher for patients of physicians with more connections to other physicians ($1009 for a 1-standard deviation increase, P < .001) and more shared care outside of their community ($172, P < .001). Spending on inpatient care was slightly lower for patients of physicians whose communities had higher proportions of primary care physicians (-$38, P < .001). Patients cared for by physicians linked to more physicians also had more hospital admissions and days (0.02 and 0.18, respectively; both P < .001 for a 1-standard deviation increase in the number of connected physicians), more emergency visits (0.02, P < .001), more visits to specialists (0.37, P < .001), and more primary care visits (0.11, P < .001). Patients whose physicians' networks had more primary care physicians had more primary care visits (0.44, P < .001) and fewer specialist and emergency visits (-0.33 [P < .001] and -0.008 [P = .008], respectively). The various measures of quality were inconsistently related to the network measures.\n\n\nConclusions and Relevance\nCharacteristics of physicians' networks and the position of physicians in the network were associated with overall spending and utilization of services for Medicare beneficiaries."
},
{
"pmid": "28000151",
"title": "The Impact of Provider Networks on the Co-Prescriptions of Interacting Drugs: A Claims-Based Analysis.",
"abstract": "INTRODUCTION\nMultiple provider prescribing of interacting drugs is a preventable cause of morbidity and mortality, and fragmented care is a major contributing factor. We applied social network analysis to examine the impact of provider patient-sharing networks on the risk of multiple provider prescribing of interacting drugs.\n\n\nMETHODS\nWe performed a retrospective analysis of commercial healthcare claims (years 2008-2011), including all non-elderly adult beneficiaries (n = 88,494) and their constellation of care providers. Patient-sharing networks were derived based on shared patients, and care constellation cohesion was quantified using care density, defined as the ratio between the total number of patients shared by provider pairs and the total number of provider pairs within the care constellation around each patient.\n\n\nRESULTS\nIn our study, 2% (n = 1796) of patients were co-prescribed interacting drugs by multiple providers. Multiple provider prescribing of interacting drugs was associated with care density (odds ratio per unit increase in the natural logarithm of the value for care density 0.78; 95% confidence interval 0.74-0.83; p < 0.0001). The effect of care density was more pronounced with increasing constellation size: when constellation size exceeded ten providers, the risk of multiple provider prescribing of interacting drugs decreased by nearly 37% with each unit increase in the natural logarithm of care density (p < 0.0001). Other predictors included increasing age of patients, increasing number of providers, and greater morbidity.\n\n\nCONCLUSION\nImproved care cohesion may mitigate unsafe prescribing practices, especially in larger care constellations. There is further potential to leverage network analytics to implement large-scale surveillance applications for monitoring prescribing safety."
},
{
"pmid": "26476734",
"title": "Formal Professional Relationships Between General Practitioners and Specialists in Shared Care: Possible Associations with Patient Health and Pharmacy Costs.",
"abstract": "BACKGROUND\nShared care in chronic disease management aims at improving service delivery and patient outcomes, and reducing healthcare costs. The introduction of shared-care models is coupled with mixed evidence in relation to both patient health status and cost of care. Professional interactions among health providers are critical to a successful and efficient shared-care model.\n\n\nOBJECTIVE\nThis article investigates whether the strength of formal professional relationships between general practitioners (GPs) and specialists (SPs) in shared care affects either the health status of patients or their pharmacy costs. In strong GP-SP relationships, the patient health status is expected to be high, due to efficient care coordination, and the pharmacy costs low, due to effective use of resources.\n\n\nMETHODS\nThis article measures the strength of formal professional relationships between GPs and SPs through the number of shared patients and proxies the patient health status by the number of comorbidities diagnosed and treated. To test the hypotheses and compare the characteristics of the strongest GP-SP connections with those of the weakest, this article concentrates on diabetes-a chronic condition where patient care coordination is likely important. Diabetes generates the largest shared patient cohort in Hungary, with the highest frequency of specialist medication prescriptions.\n\n\nRESULTS\nThis article finds that stronger ties result in lower pharmacy costs, but not in higher patient health status.\n\n\nCONCLUSION\nOverall drug expenditure may be reduced by lowering patient care fragmentation through channelling a GP's patients to a small number of SPs."
},
{
"pmid": "21892946",
"title": "Can we use the pharmacy data to estimate the prevalence of chronic conditions? a comparison of multiple data sources.",
"abstract": "BACKGROUND\nThe estimate of the prevalence of the most common chronic conditions (CCs) is calculated using direct methods such as prevalence surveys but also indirect methods using health administrative databases.The aim of this study is to provide estimates prevalence of CCs in Lazio region of Italy (including Rome), using the drug prescription's database and to compare these estimates with those obtained using other health administrative databases.\n\n\nMETHODS\nPrevalence of CCs was estimated using pharmacy data (PD) using the Anathomical Therapeutic Chemical Classification System (ATC).Prevalences estimate were compared with those estimated by hospital information system (HIS) using list of ICD9-CM diagnosis coding, registry of exempt patients from health care cost for pathology (REP) and national health survey performed by the Italian bureau of census (ISTAT).\n\n\nRESULTS\nFrom the PD we identified 20 CCs. About one fourth of the population received a drug for treating a cardiovascular disease, 9% for treating a rheumatologic conditions.The estimated prevalences using the PD were usually higher that those obtained with one of the other sources. Regarding the comparison with the ISTAT survey there was a good agreement for cardiovascular disease, diabetes and thyroid disorder whereas for rheumatologic conditions, chronic respiratory illnesses, migraine and Alzheimer's disease, the prevalence estimates were lower than those estimated by ISTAT survey. Estimates of prevalences derived by the HIS and by the REP were usually lower than those of the PD (but malignancies, chronic renal diseases).\n\n\nCONCLUSION\nOur study showed that PD can be used to provide reliable prevalence estimates of several CCs in the general population."
},
{
"pmid": "1159765",
"title": "Continuity of care in a university-based practice.",
"abstract": "Effects of changes in a pediatric practice--expansion of the number of pediatricians and incorporation into a university hospital setting--on continuity of care and utilization were examined by means of a longitudinal study of a sample of 63 families. Continuity of care was measured by the following index: the number of visits with own physician divided by the total number of pediatric visits per year. Although continuity of well-child visits remained unchanged at the university setting, the continuity of sick visits declined markedly. An increased use of doctor visits for illness care was observed; its relationship with the decline in continuity is analyzed and discussed. While continuity is inherent in a small partnership practice, it is not so in a larger medical organization, particularly when involvement in patient care is part time. In such an organization, deliberate arrangements that enable patients with acute needs to receive care from their own doctors are needed."
},
{
"pmid": "16595410",
"title": "Indices for continuity of care: a systematic review of the literature.",
"abstract": "This article systematically reviews published literature on different continuity of care (COC) indices that assess the physician-patient relationship and the applicability of such indices to pediatric and chronic-disease patient populations. Frequency and visit type may vary for pediatric and chronically ill patients versus healthy adult patients. Two investigators independently examined 5,070 candidate articles and identified 246 articles related to COC. Forty-four articles were identified that include 32 different indices used to measure COC. Indices were classified into those that calculated COC primarily based on duration of provider relationship (n=2), density of visits (n=17), dispersion of providers (n=8), sequence of providers (n=1), or subjective estimates (n=4). The diversity of COC indices reflect differences in how this measure is conceptualized. No index takes into account the visit type. A unique index that reflects continuity in the physician patient relationship for pediatric and chronic disease populations is needed."
},
{
"pmid": "26682218",
"title": "Effects of Shared Electronic Health Record Systems on Drug-Drug Interaction and Duplication Warning Detection.",
"abstract": "Shared electronic health records (EHRs) systems can offer a complete medication overview of the prescriptions of different health care providers. We use health claims data of more than 1 million Austrians in 2006 and 2007 with 27 million prescriptions to estimate the effect of shared EHR systems on drug-drug interaction (DDI) and duplication warnings detection and prevention. The Austria Codex and the ATC/DDD information were used as a knowledge base to detect possible DDIs. DDIs are categorized as severe, moderate, and minor interactions. In comparison to the current situation where only DDIs between drugs issued by a single health care provider can be checked, the number of warnings increases significantly if all drugs of a patient are checked: severe DDI warnings would be detected for 20% more persons, and the number of severe DDI warnings and duplication warnings would increase by 17%. We show that not only do shared EHR systems help to detect more patients with warnings but DDIs are also detected more frequently. Patient safety can be increased using shared EHR systems."
},
{
"pmid": "28059696",
"title": "Can I help you? Information sharing in online discussion forums by people living with a long-term condition.",
"abstract": "BACKGROUND\nPeer-to-peer health care is increasing, especially amongst people living with a long-term condition. How information is shared is, however, sometimes of concern to health care professionals.\n\n\nOBJECTIVE\nThis study explored what information is being shared on health-related discussion boards and identified the approaches people used to signpost their peers to information.\n\n\nMETHODS\nThis study was conducted using a qualitative content analysis methodology to explore information shared on discussion boards for people living with diabetes. Whilst there is debate about the best ethical lens to view research carried out on data posted on online discussion boards, the researchers chose to adopt the stance of treating this type of information as \"personal health text\", a specific type of research data in its own right.\n\n\nRESULTS\nQualitative content analysis and basic descriptive statistics were used to analyse the selected posts. Two major themes were identified: 'Information Sharing from Experience' and 'Signposting Other Sources of Information'.Conclusions People were actively engaging in information sharing in online discussion forums, mainly through direct signposting. The quality of the information shared was important, with reasons for recommendations being given. Much of the information sharing was based on experience, which also brought in information from external sources such as health care professionals and other acknowledged experts in the field.With the rise in peer-to-peer support networks, the nature of health knowledge and expertise needs to be redefined. People online are combining external information with their own personal experiences and sharing that for others to take and develop as they wish."
},
{
"pmid": "24916569",
"title": "Information and decision support needs in patients with type 2 diabetes.",
"abstract": "Diabetes and its sequelae cause a growing burden of morbidity and mortality. For many patients living with diabetes, the Internet is an important source of health information and support. In the course of the development of an Interactive Health Communication Application, combining evidence-based information with behavior change and decision support, we assessed the characteristics, information, and decision support needs of patients with type 2 diabetes.The needs assessment was performed in two steps. First, we conducted semi-structured interviews with 10 patients and seven physicians. In the second step, we developed a self-assessment questionnaire based on the results of the interviews and administered it to a new and larger sample of diabetes patients (N = 178). The questionnaire comprised four main sections: (1) Internet use and Internet experience, (2) diabetes knowledge, (3) relevant decisions and decision preferences, and (4) online health information needs. Descriptive data analyses were performed.In the questionnaire study, the patient sample was heterogeneous in terms of age, time since diagnosis, and glycemic control. (1) Most participants (61.7%) have searched the web for health information at least once. The majority (62%) of those who have used the web use it at least once per month. (2) Diabetes knowledge was scarce: Only a small percentage (1.9%) of the respondents answered all items of the knowledge questionnaire correctly. (3) The most relevant treatment decisions concerned glycemic control, oral medication, and acute complications. The most difficult treatment decision was whether to start insulin treatment. Of the respondents, 69.4 percent thought that medical decisions should be made by them and their doctor together. (4) The most important information needs concerned sequelae of diabetes, blood glucose control, and basic diabetes information.The Internet seems to be a feasible way to reach people with type 2 diabetes. The heterogeneity of the sample, especially with respect to diabetes knowledge, makes it clear that the projected Interactive Health Communication Application should tailor the content to the individual user, taking account of individual characteristics and preferences. A wide range of topics should be covered. Special attention should be paid to the advantages and disadvantages of insulin treatment and the fears and hopes associated with it. These results were taken into account when developing the Interactive Health Communication Application that is currently being evaluated in a randomized controlled trial (International Clinical Trials Registry DRKS00003322)."
},
{
"pmid": "29588269",
"title": "Developing a Shared Patient-Centered, Web-Based Medication Platform for Type 2 Diabetes Patients and Their Health Care Providers: Qualitative Study on User Requirements.",
"abstract": "BACKGROUND\nInformation technology tools such as shared patient-centered, Web-based medication platforms hold promise to support safe medication use by strengthening patient participation, enhancing patients' knowledge, helping patients to improve self-management of their medications, and improving communication on medications among patients and health care professionals (HCPs). However, the uptake of such platforms remains a challenge also due to inadequate user involvement in the development process. Employing a user-centered design (UCD) approach is therefore critical to ensure that user' adoption is optimal.\n\n\nOBJECTIVE\nThe purpose of this study was to identify what patients with type 2 diabetes mellitus (T2DM) and their HCPs regard necessary requirements in terms of functionalities and usability of a shared patient-centered, Web-based medication platform for patients with T2DM.\n\n\nMETHODS\nThis qualitative study included focus groups with purposeful samples of patients with T2DM (n=25), general practitioners (n=13), and health care assistants (n=10) recruited from regional health care settings in southwestern Germany. In total, 8 semistructured focus groups were conducted. Sessions were audio- and video-recorded, transcribed verbatim, and subjected to a computer-aided qualitative content analysis.\n\n\nRESULTS\nAppropriate security and access methods, supported data entry, printing, and sending information electronically, and tracking medication history were perceived as the essential functionalities. Although patients wanted automatic interaction checks and safety alerts, HCPs on the contrary were concerned that unspecific alerts confuse patients and lead to nonadherence. Furthermore, HCPs were opposed to patients' ability to withhold or restrict access to information in the platform. To optimize usability, there was consensus among participants to display information in a structured, chronological format, to provide information in lay language, to use visual aids and customize information content, and align the platform to users' workflow.\n\n\nCONCLUSIONS\nBy employing a UCD, this study provides insight into the desired functionalities and usability of patients and HCPs regarding a shared patient-centered, Web-based medication platform, thus increasing the likelihood to achieve a functional and useful system. Substantial and ongoing engagement by all intended user groups is necessary to reconcile differences in requirements of patients and HCPs, especially regarding medication safety alerts and access control. Moreover, effective training of patients and HCPs on medication self-management (support) and optimal use of the tool will be a prerequisite to unfold the platform's full potential."
},
{
"pmid": "26237200",
"title": "Use of an Online Patient Portal and Glucose Control in Primary Care Patients with Diabetes.",
"abstract": "The objective was to assess the effect of online use of a patient portal on improvement of glycohemoglobin (HbA1c) in patients with type 2 diabetes presenting to primary care clinics. This retrospective cohort design used data from a primary care patient data registry that captured all ambulatory visits to the academic medical center's primary care clinics. A total of 1510 patients with diabetes were included because they had at least 1 visit with a documented HbA1c value between January 1, 2010, and June 30, 2013. Degree of patient portal use was defined as no use, read only, and read and write. Linear regression models were computed to measure the association between degree of patient portal use and HbA1c control before and after adjusting for demographics, comorbidity, and volume of health care use. Patients who were nonusers of the patient portal's e-mail function had consistently higher average HbA1c values than patients who read and wrote e-mails. After adjusting for demographics, health services utilization, and comorbid conditions, patients who read and wrote e-mails still had significantly (P<0.001) lower average HbA1c values compared to nonusers (ß=-0.455; 95% confidence interval [CI]:-.632-.277). In adjusted analysis, patients who only read e-mail also had significantly (P<0.05) lower mean HbA1c values compared to nonusers (ß=-0.311, 95%CI:-.61--0.012). Patients with more active e-mail communication via a patient portal appeared to have the greatest likelihood of HbA1c control. Patients should be encouraged to use this resource as a means of communication with providers and not merely a passive source of information. (Population Health Management 2016;19:125-131)."
},
{
"pmid": "30341048",
"title": "Preferences for Health Information Technologies Among US Adults: Analysis of the Health Information National Trends Survey.",
"abstract": "BACKGROUND\nEmerging health technologies are increasingly being used in health care for communication, data collection, patient monitoring, education, and facilitating adherence to chronic disease management. However, there is a lack of studies on differences in the preference for using information exchange technologies between patients with chronic and nonchronic diseases and factors affecting these differences.\n\n\nOBJECTIVE\nThe purpose of this paper is to understand the preferences and use of information technology for information exchange among a nationally representative sample of adults with and without 3 chronic disease conditions (ie, cardiovascular disease [CVD], diabetes, and hypertension) and to assess whether these preferences differ according to varying demographic variables.\n\n\nMETHODS\nWe utilized data from the 2012 and 2014 iteration of the Health Information National Trends Survey (N=7307). We used multiple logistic regressions, adjusting for relevant demographic covariates, to identify the independent factors associated with lower odds of using health information technology (HIT), thus, identifying targets for awareness. Analyses were weighted for the US population and adjusted for the sociodemographic variables of age, gender, race, and US census region.\n\n\nRESULTS\nOf 7307 participants, 3529 reported CVD, diabetes, or hypertension. In the unadjusted models, individuals with diabetes, CVD, or hypertension were more likely to report using email to exchange medical information with their provider and less likely to not use any of the technology in health information exchange, as well as more likely to say it was not important for them to access personal medical information electronically. In the unadjusted model, additional significant odds ratio (OR) values were observed. However, after adjustment, most relationships regarding the use and interest in exchanging information with the provider were no longer significant. In the adjusted model, individuals with CVD, diabetes, or hypertension were more likely to access Web-based personal health information through a website or app. Furthermore, we assessed adjusted ORs for demographic variables. Those aged >65 years and Hispanic people were more likely to report no use of email to exchange medical information with their provider. Minorities (Hispanic, non-Hispanic black, and Asian people) were less likely to indicate they had no interest in exchanging general health tips with a provider electronically.\n\n\nCONCLUSIONS\nThe analysis did not show any significant association among those with comorbidities and their proclivity toward health information, possibly implying that HIT-related interventions, particularly design of information technologies, should focus more on demographic factors, including race, age, and region, than on comorbidities or chronic disease status to increase the likelihood of use. Future research is needed to understand and explore more patient-friendly use and design of information technologies, which can be utilized by diverse age, race, and education or health literacy groups efficiently to further bridge the patient-provider communication gap."
},
{
"pmid": "21911758",
"title": "A diabetes dashboard and physician efficiency and accuracy in accessing data needed for high-quality diabetes care.",
"abstract": "PURPOSE\nWe compared use of a new diabetes dashboard screen with use of a conventional approach of viewing multiple electronic health record (EHR) screens to find data needed for ambulatory diabetes care.\n\n\nMETHODS\nWe performed a usability study, including a quantitative time study and qualitative analysis of information-seeking behaviors. While being recorded with Morae Recorder software and \"think-aloud\" interview methods, 10 primary care physicians first searched their EHR for 10 diabetes data elements using a conventional approach for a simulated patient, and then using a new diabetes dashboard for another. We measured time, number of mouse clicks, and accuracy. Two coders analyzed think-aloud and interview data using grounded theory methodology.\n\n\nRESULTS\nThe mean time needed to find all data elements was 5.5 minutes using the conventional approach vs 1.3 minutes using the diabetes dashboard (P <.001). Physicians correctly identified 94% of the data requested using the conventional method, vs 100% with the dashboard (P <.01). The mean number of mouse clicks was 60 for conventional searching vs 3 clicks with the diabetes dashboard (P <.001). A common theme was that in everyday practice, if physicians had to spend too much time searching for data, they would either continue without it or order a test again.\n\n\nCONCLUSIONS\nUsing a patient-specific diabetes dashboard improves both the efficiency and accuracy of acquiring data needed for high-quality diabetes care. Usability analysis tools can provide important insights into the value of optimizing physician use of health information technologies."
},
{
"pmid": "21893775",
"title": "Clinical situations and information needs of physicians during treatment of diabetes mellitus patients: a triangulation study.",
"abstract": "Physicians should have access to the information they need to provide the most effective health care. Medical knowledge and patient-oriented information is dynamic and expanding rapidly so there is a rising risk of information overload. We investigated the information needs of physicians during treatment of Diabetes mellitus patients, using a combination of interviews, observations, literature research and analysis of recorded medical information in hospitals as part of a methodical triangulation. 446 information items were identified, structured in a set of 9 main categories each, as well as 6 time windows, 10 clinical situations and 68 brief queries. The physician's information needs as identified in this study will now be used to develop sophisticated query tools to efficiently support finding of information in an electronic health record."
},
{
"pmid": "19656719",
"title": "The outcomes of regional healthcare information systems in health care: a review of the research literature.",
"abstract": "The resulting regional healthcare information systems were expected to have effects and impacts on health care procedures, work practices and treatment outcomes. The aim is to find out how health information systems have been investigated, what has been investigated and what are the outcomes. A systematic review was carried out of the research on the regional health information systems or organizations. The literature search was conducted on four electronic Cinahl Medline, Medline/PubMed and Cochrane. The common type of study design was the survey research and case study, and the data collection was carried out via different methodologies. They found out different types of regional health information systems (RHIS). The systems were heterogeneous and were in different phases of these developments. The RHIS outcomes focused on the five main areas: flow of information, collaboration, process redesign, system usability and organization culture. The RHIS improved the clinical data access, timely information, and clinical data exchange and improvement in communication and coordination within a region between professionals but also there was inadequate access to patient relevant clinical data. There were differences in organization culture, vision and expectations of leadership and consistency of strategic plan. Nevertheless, there were widespread participation by both healthcare providers and patients."
}
] |
BMC Medical Informatics and Decision Making | 31023322 | PMC6485069 | 10.1186/s12911-019-0809-9 | “OPTImAL”: an ontology for patient adherence modeling in physical activity domain | BackgroundMaintaining physical fitness is a crucial component of the therapeutic process for patients with cardiovascular disease (CVD). Despite the known importance of being physically active, patient adherence to exercise, both in daily life and during cardiac rehabilitation (CR), is low. Patient adherence is frequently composed of numerous determinants associated with different patient aspects (e.g., psychological, clinical, etc.). Understanding the influence of such determinants is a central component of developing personalized interventions to improve or maintain patient adherence. Medical research produced evidence regarding factors affecting patients’ adherence to physical activity regimen. However, the heterogeneity of the available data is a significant challenge for knowledge reusability. Ontologies constitute one of the methods applied for efficient knowledge sharing and reuse. In this paper, we are proposing an ontology called OPTImAL, focusing on CVD patient adherence to physical activity and exercise training.MethodsOPTImAL was developed following the Ontology Development 101 methodology and refined based on the NeOn framework. First, we defined the ontology specification (i.e., purpose, scope, target users, etc.). Then, we elicited domain knowledge based on the published studies. Further, the model was conceptualized, formalized and implemented, while the developed ontology was validated for its consistency. An independent cardiologist and three CR trainers evaluated the ontology for its appropriateness and usefulness.ResultsWe developed a formal model that includes 142 classes, ten object properties, and 371 individuals, that describes the relations of different factors of CVD patient profile to adherence and adherence quality, as well as the associated types and dimensions of physical activity and exercise. 2637 logical axioms were constructed to comprise the overall concepts that the ontology defines. The ontology was successfully validated for its consistency and preliminary evaluated for its appropriateness and usefulness in medical practice.ConclusionsOPTImAL describes relations of 320 factors originated from 60 multidimensional aspects (e.g., social, clinical, psychological, etc.) affecting CVD patient adherence to physical activity and exercise. The formal model is evidence-based and can serve as a knowledge tool in the practice of cardiac rehabilitation experts, supporting the process of activity regimen recommendation for better patient adherence.Electronic supplementary materialThe online version of this article (10.1186/s12911-019-0809-9) contains supplementary material, which is available to authorized users. | Research data reusability through ontologies and related workKnowledge regarding patient adherence to physical activity may support the design of successful strategies for recommendations and interventions [3]. However, the heterogeneity of data available from the conducted studies regarding patient populations, study contexts, and description of study results is a significant challenge for researchers in reusability of available data results [12]. Ontologies constitute one of the methods applied for efficient knowledge sharing and reuse [13]. 
An ontology allows combining conceptual knowledge with quantitative and qualitative data, supporting interoperability and flexibility of its underlying model [14]. To provide an overview of existing formal models related to the domain of our interest, we explored scientific databases and ontology repositories for existing models using the keywords “adherence ontology,” “physical activity adherence ontology,” and “exercise adherence ontology.” In particular, we searched MEDLINE and IEEE Xplore [15]. The ontology repositories searched included BioPortal [16], the Open Biological and Biomedical Ontology (OBO) Foundry [17], and the Ontology Lookup Service (OLS) [18]. We analyzed two additional ontological resources found separately [19, 20]. We then reviewed the purpose and domain of the ontologies to consider their relevance to our work and potential reuse. The identified related ontological resources are listed in Table 1.
Table 1. Review of physical activity and exercise-related ontologies (Resource; Purpose of the ontological resource; Domain)
Kostopoulos et al. [21], 2011; To support personalized exercise prescription; Exercise in cardiac rehabilitation
Faiz et al. [22], 2014; To recommend diet and exercise based on the user profile; Diet and exercise in diabetes patients
Foust [23], 2013; To provide a reference for describing an exercise in terms of functional movements, engaged musculoskeletal system parts, related equipment or monitoring devices, and intended health outcomes; Anatomy of exercise and health outcomes
Bickmore & Schulman [19], 2013; To describe health behavior change interventions (exercise and diet promotion); Health behavior change (exercise and diet)
Colantonio et al. [20], 2007; To model the domain knowledge base and represent formalism, knowledge sharing, and reuse; Heart failure patient clinical profile
Of particular interest was the ontology-based framework developed by Kostopoulos et al. [21] for personalized exercise prescription for patients with heart disease. The framework combines medical domain-related knowledge and inference logic to propose exercise plans for each patient as a decision support tool for healthcare professionals. The personalization of the framework is based on the patient's preferences for exercise types (e.g., cycling, jogging), the time of the day for planned activity, and lifestyle aspects (e.g., a previously sedentary lifestyle). The study associated improved adherence with a positive attitude toward, and acceptance of, the prescribed plan by the patient. Beyond these aspects, no further factors influencing patient adherence were included in the personalization inference. Two other ontologies supported personalized exercise recommendation based on the patient's profile [22, 23]; however, the concept of adherence to physical fitness was not addressed. The ontology promoting health behavior change in the domain of exercise did not include concepts related to adherence behavior [19].
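As a rough illustration of the kind of knowledge such formal models encode, the sketch below represents factor-to-adherence relations as simple subject-predicate-object triples and queries them. All class, relation, and factor names here are hypothetical placeholders chosen for illustration; they are not the actual vocabulary of OPTImAL or of the reviewed ontologies.

```python
# Minimal sketch of factor-to-adherence relations as subject-predicate-object
# triples; all names are hypothetical, not the OPTImAL vocabulary.
TRIPLES = [
    ("Depression",         "negativelyAffects", "ExerciseAdherence"),
    ("SocialSupport",      "positivelyAffects", "ExerciseAdherence"),
    ("PriorSedentaryLife", "negativelyAffects", "ExerciseAdherence"),
]

def factors_affecting(outcome, relation=None):
    """Return profile factors linked to an adherence outcome, optionally filtered by relation."""
    return [s for (s, p, o) in TRIPLES
            if o == outcome and (relation is None or p == relation)]

print(factors_affecting("ExerciseAdherence", "negativelyAffects"))
# -> ['Depression', 'PriorSedentaryLife']
```

A full ontology would add class hierarchies, axioms, and reasoning on top of such relations, but the triple view captures the basic idea of linking a multidimensional patient profile to adherence.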
The ontological resource in the heart failure domain was found to be a comprehensive representation of the clinical patient-related perspective, but it lacked relevance to patient adherence in the physical activity domain [20]. Although all the reviewed ontologies helped us to develop our original approach to the ontology design, none of them supported the combination of a multidimensional patient profile (e.g., personal, environmental, clinical, and other factors) with patient adherence to physical activity-related behavior, or other concepts that could support the interpretation of the relationships between adherence and patient-related factors. Therefore, we found it necessary to develop a new ontology that incorporates the patient profile and its association with adherence to a physical activity regimen, covering the domain of CVD patient adherence to physical activity and exercise. | [
"26139859",
"22499542",
"20161356",
"26076950",
"28867025",
"8505815",
"28629436",
"24944781",
"22254621",
"18360559",
"27345947"
] | [
{
"pmid": "26139859",
"title": "Exercise and the cardiovascular system: clinical science and cardiovascular outcomes.",
"abstract": "Substantial evidence has established the value of high levels of physical activity, exercise training (ET), and overall cardiorespiratory fitness in the prevention and treatment of cardiovascular diseases. This article reviews some basics of exercise physiology and the acute and chronic responses of ET, as well as the effect of physical activity and cardiorespiratory fitness on cardiovascular diseases. This review also surveys data from epidemiological and ET studies in the primary and secondary prevention of cardiovascular diseases, particularly coronary heart disease and heart failure. These data strongly support the routine prescription of ET to all patients and referrals for patients with cardiovascular diseases, especially coronary heart disease and heart failure, to specific cardiac rehabilitation and ET programs."
},
{
"pmid": "22499542",
"title": "Adherence of heart failure patients to exercise: barriers and possible solutions: a position statement of the Study Group on Exercise Training in Heart Failure of the Heart Failure Association of the European Society of Cardiology.",
"abstract": "The practical management of heart failure remains a challenge. Not only are heart failure patients expected to adhere to a complicated pharmacological regimen, they are also asked to follow salt and fluid restriction, and to cope with various procedures and devices. Furthermore, physical training, whose benefits have been demonstrated, is highly recommended by the recent guidelines issued by the European Society of Cardiology, but it is still severely underutilized in this particular patient population. This position paper addresses the problem of non-adherence, currently recognized as a main obstacle to a wide implementation of physical training. Since the management of chronic heart failure and, even more, of training programmes is a multidisciplinary effort, the current manuscript intends to reach cardiologists, nurses, physiotherapists, as well as psychologists working in the field."
},
{
"pmid": "20161356",
"title": "The Effectiveness of Lifestyle Physical Activity Interventions to Reduce Cardiovascular Disease.",
"abstract": "Lifestyle interventions have evolved from proof of concept pilot studies to efficacy and effectiveness studies and have now moved toward translation and dissemination studies because of their demonstrated ability to improve cardiovascular diseases (CVD) outcomes including blood pressure. When combined with diet, they also have demonstrated the ability to normalize blood glucose and help to regulate weight. This review highlights the converging lines of evidence that led to lifestyle physical activity interventions beginning with early epidemiology studies and provides evidence for the efficacy and effectiveness of lifestyle interventions. However, if lifestyle interventions are to play a role in preventing CVD and improving CVD outcomes, their use must be more widespread. This will require translational and dissemination research in order to understand how to move into real world settings. Successful examples of translational studies will be highlighted and issues related to theoretical and practical issues as well as capacity building will be discussed. Building bridges between research and practice must be done if lifestyle interventions are to deliver on their public health promise."
},
{
"pmid": "26076950",
"title": "Contributions of risk factors and medical care to cardiovascular mortality trends.",
"abstract": "Ischaemic heart disease, stroke, and other cardiovascular diseases (CVDs) lead to 17.5 million deaths worldwide per year. Taking into account population ageing, CVD death rates are decreasing steadily both in regions with reliable trend data and globally. The declines in high-income countries and some Latin American countries have been ongoing for decades without slowing. These positive trends have broadly coincided with, and benefited from, declines in smoking and physiological risk factors, such as blood pressure and serum cholesterol levels. These declines have also coincided with, and benefited from, improvements in medical care, including primary prevention, diagnosis, and treatment of acute CVDs, as well as post-hospital care, especially in the past 40 years. These variables, however, explain neither why the decline began when it did, nor the similarities and differences in the start time and rate of the decline between countries and sexes. In Russia and some other former Soviet countries, changes in volume and patterns of alcohol consumption have caused sharp rises in CVD mortality since the early 1990s. An important challenge in reaching firm conclusions about the drivers of these remarkable international trends is the paucity of time-trend data on CVD incidence, risk factors throughout the life-course, and clinical care."
},
{
"pmid": "8505815",
"title": "Use of MEDLINE by physicians for clinical problem solving.",
"abstract": "OBJECTIVE\nTo understand the ways in which computer-mediated searching of the biomedical literature affects patient care and other professional activities. Undertaken to determine the ways in which on-line access to the biomedical literature via the National Library of Medicine's MEDLINE database \"makes a difference\" in what physicians do when confronted with a medical problem requiring new or additional information.\n\n\nDESIGN\nAn adaptation of the Critical Incident Technique used to gather detailed reports of MEDLINE search results that were especially helpful (or not helpful) in carrying out the individual's professional activities. The individual physician was the source of the patient care incident reports. One thousand one hundred fifty-eight reports were systematically analyzed from three different perspectives: (1) why the information was sought; (2) the effect of having (or not having) the needed information on professional decisions and actions; and (3) the outcome of the search.\n\n\nPARTICIPANTS AND SETTING\nTelephone interviews were carried out with a purposive sample of 552 physicians, scientists, and other professionals working in a variety of clinical care and other settings. Of these, 65% were direct users of MEDLINE throughout the United States, and 35% had MEDLINE searches conducted for them either at a major health sciences center or in community hospitals.\n\n\nRESULTS\nThree comprehensive and detailed inventories that describe the motivation for the searches, how search results affected the actions and decisions of the individual who initiated the search, and how they affected the outcome of the situation that motivated the search.\n\n\nCONCLUSIONS\nMEDLINE searches are being carried out by and for physicians to meet a wide diversity of clinical information needs. Physicians report that in situations involving individual patients, rapid access to the biomedical literature via MEDLINE is at times critical to sound patient care and favorably influences patient outcomes."
},
{
"pmid": "28629436",
"title": "Disease Compass- a navigation system for disease knowledge based on ontology and linked data techniques.",
"abstract": "BACKGROUND\nMedical ontologies are expected to contribute to the effective use of medical information resources that store considerable amount of data. In this study, we focused on disease ontology because the complicated mechanisms of diseases are related to concepts across various medical domains. The authors developed a River Flow Model (RFM) of diseases, which captures diseases as the causal chains of abnormal states. It represents causes of diseases, disease progression, and downstream consequences of diseases, which is compliant with the intuition of medical experts. In this paper, we discuss a fact repository for causal chains of disease based on the disease ontology. It could be a valuable knowledge base for advanced medical information systems.\n\n\nMETHODS\nWe developed the fact repository for causal chains of diseases based on our disease ontology and abnormality ontology. This section summarizes these two ontologies. It is developed as linked data so that information scientists can access it using SPARQL queries through an Resource Description Framework (RDF) model for causal chain of diseases.\n\n\nRESULTS\nWe designed the RDF model as an implementation of the RFM for the fact repository based on the ontological definitions of the RFM. 1554 diseases and 7080 abnormal states in six major clinical areas, which are extracted from the disease ontology, are published as linked data (RDF) with SPARQL endpoint (accessible API). Furthermore, the authors developed Disease Compass, a navigation system for disease knowledge. Disease Compass can browse the causal chains of a disease and obtain related information, including abnormal states, through two web services that provide general information from linked data, such as DBpedia, and 3D anatomical images.\n\n\nCONCLUSIONS\nDisease Compass can provide a complete picture of disease-associated processes in such a way that fits with a clinician's understanding of diseases. Therefore, it supports user exploration of disease knowledge with access to pertinent information from a variety of sources."
},
{
"pmid": "24944781",
"title": "An ontological modeling approach for abnormal states and its application in the medical domain.",
"abstract": "BACKGROUND\nRecently, exchanging data and information has become a significant challenge in medicine. Such data include abnormal states. Establishing a unified representation framework of abnormal states can be a difficult task because of the diverse and heterogeneous nature of these states. Furthermore, in the definition of diseases found in several textbooks or dictionaries, abnormal states are not directly associated with the corresponding quantitative values of clinical test data, making the processing of such data by computers difficult.\n\n\nRESULTS\nWe focused on abnormal states in the definition of diseases and proposed a unified form to describe an abnormal state as a \"property,\" which can be decomposed into an \"attribute\" and a \"value\" in a qualitative representation. We have developed a three-layer ontological model of abnormal states from the generic to disease-specific level. By developing an is-a hierarchy and combining causal chains of diseases, 21,000 abnormal states from 6000 diseases have been captured as generic causal relations and commonalities have been found among diseases across 13 medical departments.\n\n\nCONCLUSIONS\nOur results showed that our representation framework promotes interoperability and flexibility of the quantitative raw data, qualitative information, and generic/conceptual knowledge of abnormal states. In addition, the results showed that our ontological model have found commonalities in abnormal states among diseases across 13 medical departments."
},
{
"pmid": "22254621",
"title": "An ontology-based framework aiming to support personalized exercise prescription: application in cardiac rehabilitation.",
"abstract": "Exercise constitutes an important intervention aiming to improve health and quality of life for several categories of patients. Personalized exercise prescription is a rather complicated issue, requiring several aspects to be taken into account, e.g. patient's medical history and response to exercise, medication treatment, personal preferences, etc. The present work proposes an ontology-based framework designed to facilitate healthcare professionals in personalized exercise prescription. The framework encapsulates the necessary domain knowledge and the appropriate inference logic, so as to generate exercise plan suggestions based on patient's profile. It also supports readjustments of a prescribed plan according to the patient's response with respect to goal achievement and changes in physical-medical status. An instantiation of the proposed framework for cardiac rehabilitation illustrates the virtue and the applicability of this work."
},
{
"pmid": "18360559",
"title": "The challenge of patient adherence.",
"abstract": "Quality healthcare outcomes depend upon patients' adherence to recommended treatment regimens. Patient nonadherence can be a pervasive threat to health and wellbeing and carry an appreciable economic burden as well. In some disease conditions, more than 40% of patients sustain significant risks by misunderstanding, forgetting, or ignoring healthcare advice. While no single intervention strategy can improve the adherence of all patients, decades of research studies agree that successful attempts to improve patient adherence depend upon a set of key factors. These include realistic assessment of patients' knowledge and understanding of the regimen, clear and effective communication between health professionals and their patients, and the nurturance of trust in the therapeutic relationship. Patients must be given the opportunity to tell the story of their unique illness experiences. Knowing the patient as a person allows the health professional to understand elements that are crucial to the patient's adherence: beliefs, attitudes, subjective norms, cultural context, social supports, and emotional health challenges, particularly depression. Physician-patient partnerships are essential when choosing amongst various therapeutic options to maximize adherence. Mutual collaboration fosters greater patient satisfaction, reduces the risks of nonadherence, and improves patients' healthcare outcomes."
},
{
"pmid": "27345947",
"title": "A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies.",
"abstract": "Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving \"live partial-area taxonomies\" is demonstrated."
}
] |
BMC Medical Informatics and Decision Making | 31023325 | PMC6485152 | 10.1186/s12911-019-0807-y | Using the distance between sets of hierarchical taxonomic clinical concepts to measure patient similarity | BackgroundMany clinical concepts are standardized under a categorical and hierarchical taxonomy such as ICD-10, ATC, etc. These taxonomic clinical concepts provide insight into semantic meaning and similarity among clinical concepts and have been applied to patient similarity measures. However, the effects of diverse set sizes of taxonomic clinical concepts contributing to similarity at the patient level have not been well studied.MethodsIn this paper the most widely used taxonomic clinical concepts system, ICD-10, was studied as a representative taxonomy. The distance between ICD-10-coded diagnosis sets is an integrated estimation of the information content of each concept, the similarity between each pairwise concepts and the similarity between the sets of concepts. We proposed a novel method at the set-level similarity to calculate the distance between sets of hierarchical taxonomic clinical concepts to measure patient similarity. A real-world clinical dataset with ICD-10 coded diagnoses and hospital length of stay (HLOS) information was used to evaluate the performance of various algorithms and their combinations in predicting whether a patient need long-term hospitalization or not. Four subpopulation prototypes that were defined based on age and HLOS with different diagnoses set sizes were used as the target for similarity analysis. The F-score was used to evaluate the performance of different algorithms by controlling other factors. We also evaluated the effect of prototype set size on prediction precision.ResultsThe results identified the strengths and weaknesses of different algorithms to compute information content, code-level similarity and set-level similarity under different contexts, such as set size and concept set background. The minimum weighted bipartite matching approach, which has not been fully recognized previously showed unique advantages in measuring the concepts-based patient similarity.ConclusionsThis study provides a systematic benchmark evaluation of previous algorithms and novel algorithms used in taxonomic concepts-based patient similarity, and it provides the basis for selecting appropriate methods under different clinical scenarios.Electronic supplementary materialThe online version of this article (10.1186/s12911-019-0807-y) contains supplementary material, which is available to authorized users. | Related workTaxonomic concepts imply semantic relationships and distances. As shown in Fig. 1a, taxonomic concepts are usually organized hierarchically. Intuitively, concepts under the same branch will be more similar than concepts from different branches. Generally, the semantic similarity [11, 12] between two taxonomic concepts can be measured by two approaches [13]: the probabilistic approach and the information-theoretic approach. Probabilistic approaches are traditional data-driven methods proposed for categorical data and they address the frequency distribution of the concept in the patient set. Information-theoretic approaches consider the information content (IC) of concepts. The IC of a concept is a fundamental dimension stating the amount of embedded information in computational linguistics [14, 15]. Concrete and specialized entities in a discourse are generally considered to present more IC than general and abstract ones. 
Boriah [13] proved that the information-theoretic approach performs better than the probabilistic approach when explaining observed groups in clinical data. In this paper, we restrict our discussion to the information-theoretic approaches.
Fig 1. Taxonomic clinical concepts and patient similarity. a Taxonomic concepts and concept semantic similarity. b Patient similarity based on concept set-level similarity.
There are many approaches to calculate the IC of a taxonomic concept. A simple way is to assign different IC values to different levels of concepts (as shown in Table 1, IC #1 Formula). Therefore, a specific concept has a higher IC value than a general concept. Considering ICD-10 as an example, the IC of the virtual root is 1, the IC of an ICD chapter is 2, and so on, so that the IC of the full range of ICD expansion nodes is 5 [16]. A more complicated ontology-based IC computation model was proposed by Sanchez [14]. As shown in Table 1, IC #2 Formula, this method calculates the IC of a concept depending on the count of taxonomic leaves of the concept's hyponym tree (|leaves(a)|) and the number of taxonomic subsumers (|subsumers(a)|).
Table 1. The formulas used in taxonomic concept-based patient similarity (#; Formula; Reference)
Information Content (IC)
IC #1: $levels(a \rightarrow r)$ ; [13]
IC #2: $-\log\left(\frac{|leaves(a)|/|subsumers(a)|+1}{|leaves(r)|+1}\right)$ ; [14]
Code-level Similarity (CS)
CS #1: $\begin{cases}0, & \text{if } a=b\\ 1, & \text{otherwise}\end{cases}$ ; –
CS #2: $1-\frac{2\,IC(c)}{IC(a)+IC(b)}$ ; [16, 23]
CS #3: $1-e^{\alpha\left(IC(a)+IC(b)-2\,IC(c)\right)}\cdot\frac{e^{\beta IC(c)}-e^{-\beta IC(c)}}{e^{\beta IC(c)}+e^{-\beta IC(c)}}$ ; [17]
CS #4: $\frac{IC(l)-IC(c)}{IC(l)}$ ; –
Set-level Similarity (SS)
SS #1 (Dice): $1-\frac{2|A\cap B|}{|A|+|B|}$ ; –
SS #2 (Jaccard): $1-\frac{|A\cap B|}{|A\cup B|}$ ; –
SS #3 (Cosine): $1-\frac{|A\cap B|}{\sqrt{|A|\cdot|B|}}$ ; –
SS #4 (Overlap): $1-\frac{|A\cap B|}{\min\{|A|,|B|\}}$ ; –
SS #5: $\frac{1}{|A|+|B|}\left(\sum_{a\in A}\min_{b\in B}CS(a,b)+\sum_{b\in B}\min_{a\in A}CS(b,a)\right)$ ; [17]
SS #6: $\frac{1}{|A\cup B|}\left(\sum_{a\in A\setminus B}\frac{1}{|B|}\sum_{b\in B}CS(a,b)+\sum_{b\in B\setminus A}\frac{1}{|A|}\sum_{a\in A}CS(b,a)\right)$ ; [18]
With the IC of concepts, there are several ways to measure the similarity of two concepts. Four representative code-level similarity (CS) formulas are listed in Table 1. For the sake of notation, a and b are the two concepts whose similarity is to be measured, as shown in Fig. 1a; c is defined as the least common ancestor (LCA) of a and b in the taxonomy; and r and l represent the root and the total number of levels in the taxonomy, respectively. The CS #1 Formula, a binary similarity judgment, is efficient and simple to implement but cannot provide enough discrimination power in many applications. The CS #2 Formula is based on the information-theoretic definition of similarity proposed by Wu [16]. The CS #3 Formula by Li [17] introduced two parameters to scale the contributions of the IC of the LCA and the IC of the two concepts; on a benchmark data set, the author obtained the optimal parameter settings α = 0.2 and β = 0.6. The CS #4 Formula is a simplified form of the CS #2 Formula when a and b are at the deepest level, but it is not suitable when a and b are in other positions.
A patient usually suffers from multiple health problems and is diagnosed with a group of ICD codes, i.e., an ICD-10 set (as shown in Fig. 1b). The patient similarity is then measured by the resemblance of two concept sets. Let A and B be two sets of taxonomic concepts, where a is one of the concepts in A and b belongs to B. Six formulas to calculate set-level similarity (SS) are listed in Table 1. For the binary code-level similarity, classical methods such as Dice, Jaccard, Cosine, and Overlap can be used to calculate set-level similarity. The other two formulas measure the resemblance of two concept sets through different approaches: the SS #5 Formula represents the set-level similarity by the average distance of the best-matching concept pairs, whereas the SS #6 Formula averages the similarity over all concept pairs (a toy implementation of these measures is sketched below).
ICD is a widely used taxonomy in clinical classification systems, and several patient similarity measures that refer to the ICD codes of diagnoses have been developed in the past few years. Gottlieb [7] used discharge ICD codes of past and current hospitalizations to construct a patient medical history profile to compute the similarity of patients. In Zhang's research [6], patient similarity was evaluated by the Tanimoto coefficient of co-occurring ICD-9 diagnosis codes. A novel distance measurement method for categorical values such as ICD-10, which takes the path distance between concepts in a hierarchy into account, was proposed in Girardi's research [18]. In Rivault's research [19], diagnoses (ICD-10), drugs (ATC), and medical acts (CCAM) are used to reconstruct care trajectories, and the longest similar subsequence, which accounts for the semantic similarity between events, is proposed to compare medical episodes. However, all of these algorithms still lack a systematic evaluation, and the strengths and weaknesses of various combinations of these algorithms under different clinical applications are not clear.
In different clinical scenarios, the taxonomic concept set sizes differ. According to observations of clinical data in a one-year EMR dataset, the average number of distinct drugs used per patient visit is approximately 13, and the average number of distinct diagnoses is approximately 8; however, the procedures during a patient visit may vary from tens to hundreds.
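To make the formulas in Table 1 concrete, the sketch below computes a Sanchez-style IC (IC #2), the Wu-based code-level distance (CS #2), the best-match set-level distance (SS #5), and a minimum-weight bipartite matching set distance of the kind mentioned in the abstract, over a made-up ICD-10-like fragment. It is an illustrative toy, not the authors' implementation: the hierarchy, the function names, and the exact matching-based definition are assumptions.

```python
import math
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy ICD-10-like hierarchy: child -> parent ("ROOT" is a virtual root).
PARENT = {
    "A00-B99": "ROOT", "E00-E90": "ROOT",
    "A00-A09": "A00-B99", "E10-E14": "E00-E90",
    "A00": "A00-A09", "A01": "A00-A09", "E10": "E10-E14", "E11": "E10-E14",
}
CHILDREN = {}
for child, parent in PARENT.items():
    CHILDREN.setdefault(parent, []).append(child)

def subsumers(code):
    """The code itself plus all of its ancestors up to the root."""
    out = [code]
    while code != "ROOT":
        code = PARENT[code]
        out.append(code)
    return out

def leaves(code):
    """All leaf descendants of a code (a leaf counts as its own descendant)."""
    kids = CHILDREN.get(code, [])
    if not kids:
        return [code]
    return [leaf for k in kids for leaf in leaves(k)]

def ic(code):
    """IC #2: -log((|leaves(a)|/|subsumers(a)| + 1) / (|leaves(root)| + 1))."""
    return -math.log((len(leaves(code)) / len(subsumers(code)) + 1)
                     / (len(leaves("ROOT")) + 1))

def cs2(a, b):
    """CS #2 distance: 1 - 2*IC(lca) / (IC(a) + IC(b))."""
    lca = next(c for c in subsumers(a) if c in set(subsumers(b)))
    return 1 - 2 * ic(lca) / (ic(a) + ic(b))

def ss5(A, B):
    """SS #5: symmetric average of best-match code-level distances."""
    total = (sum(min(cs2(a, b) for b in B) for a in A)
             + sum(min(cs2(b, a) for a in A) for b in B))
    return total / (len(A) + len(B))

def ss_matching(A, B):
    """Minimum-weight bipartite matching distance (one plausible reading)."""
    cost = np.array([[cs2(a, b) for b in B] for a in A])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

p1, p2 = ["A00", "E10"], ["A01", "E11", "A00-A09"]
print(round(ss5(p1, p2), 3), round(ss_matching(p1, p2), 3))
```

Note that the matching-based distance averages only over matched pairs, so codes left unmatched in the larger set are ignored, whereas SS #5 lets every code contribute; how such set-size imbalances are handled is exactly the kind of design choice the set-level comparisons in this paper probe.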
Even when addressing the same taxonomy, there are different approaches to performing the patient similarity analysis. Patient-patient diagnosis similarity analysis addresses a relatively small set size; however, for patient-subpopulation or subpopulation-subpopulation diagnosis similarity analysis, the method may need to cope with scenarios in which the concept set sizes are relatively large and unbalanced. Choosing appropriate formulas to measure the distance between concept sets for patient similarity analysis under different scenarios therefore remains a challenge. In this study, we create a more complicated clinical scenario for patient similarity analysis. The study cohort data were collected from a single nephrology department, covering patients with broadly similar underlying conditions but different complications. We systematically evaluated the previous algorithms and two new set-level similarity algorithms with different evaluation approaches, such as data visualization and the F-score measure of a specific prediction task. | [
"25910264",
"26306255",
"24004670",
"26262174",
"25991168",
"23225256",
"24269894",
"27477837",
"10928714"
] | [
{
"pmid": "25910264",
"title": "PSF: A Unified Patient Similarity Evaluation Framework Through Metric Learning With Weak Supervision.",
"abstract": "Patient similarity is an important analytic operation in healthcare applications. At the core, patient similarity takes an index patient as the input and retrieves a ranked list of similar patients that are relevant in a specific clinical context. It takes patient information such as their electronic health records as input and computes the distance between a pair of patients based on those information. To construct a clinically valid similarity measure, physician input often needs to be incorporated. However, obtaining physicians' input is difficult and expensive. As a result, typically only limited physician feedbacks can be obtained on a small portion of patients. How to leverage all unlabeled patient data and limited supervision information from physicians to construct a clinically meaningful distance metric? In this paper, we present a patient similarity framework (PSF) that unifies and significantly extends existing supervised patient similarity metric learning methods. PSF is a general framework that can learn an appropriate distance metric through supervised and unsupervised information. Within PSF framework, we propose a novel patient similarity algorithm that uses local spline regression to capture the unsupervised information. To speedup the incorporation of physician feedback or newly available clinical information, we introduce a general online update algorithm for an existing PSF distance metric."
},
{
"pmid": "26306255",
"title": "Personalized Predictive Modeling and Risk Factor Identification using Patient Similarity.",
"abstract": "Personalized predictive models are customized for an individual patient and trained using information from similar patients. Compared to global models trained on all patients, they have the potential to produce more accurate risk scores and capture more relevant risk factors for individual patients. This paper presents an approach for building personalized predictive models and generating personalized risk factor profiles. A locally supervised metric learning (LSML) similarity measure is trained for diabetes onset and used to find clinically similar patients. Personalized risk profiles are created by analyzing the parameters of the trained personalized logistic regression models. A 15,000 patient data set, derived from electronic health records, is used to evaluate the approach. The predictive results show that the personalized models can outperform the global model. Cluster analysis of the risk profiles show groups of patients with similar risk factors, differences in the top risk factors for different groups of patients and differences between the individual and global risk factors."
},
{
"pmid": "24004670",
"title": "A method for inferring medical diagnoses from patient similarities.",
"abstract": "BACKGROUND\nClinical decision support systems assist physicians in interpreting complex patient data. However, they typically operate on a per-patient basis and do not exploit the extensive latent medical knowledge in electronic health records (EHRs). The emergence of large EHR systems offers the opportunity to integrate population information actively into these tools.\n\n\nMETHODS\nHere, we assess the ability of a large corpus of electronic records to predict individual discharge diagnoses. We present a method that exploits similarities between patients along multiple dimensions to predict the eventual discharge diagnoses.\n\n\nRESULTS\nUsing demographic, initial blood and electrocardiography measurements, as well as medical history of hospitalized patients from two independent hospitals, we obtained high performance in cross-validation (area under the curve >0.88) and correctly predicted at least one diagnosis among the top ten predictions for more than 84% of the patients tested. Importantly, our method provides accurate predictions (>0.86 precision in cross validation) for major disease categories, including infectious and parasitic diseases, endocrine and metabolic diseases and diseases of the circulatory systems. Our performance applies to both chronic and acute diagnoses.\n\n\nCONCLUSIONS\nOur results suggest that one can harness the wealth of population-based information embedded in electronic health records for patient-specific predictive tasks."
},
{
"pmid": "26262174",
"title": "A Hybrid Approach Using Case-Based Reasoning and Rule-Based Reasoning to Support Cancer Diagnosis: A Pilot Study.",
"abstract": "Recently there has been an increasing interest in applying information technology to support the diagnosis of diseases such as cancer. In this paper, we present a hybrid approach using case-based reasoning (CBR) and rule-based reasoning (RBR) to support cancer diagnosis. We used symptoms, signs, and personal information from patients as inputs to our model. To form specialized diagnoses, we used rules to define the input factors' importance according to the patient's characteristics. The model's output presents the probability of the patient having a type of cancer. To carry out this research, we had the approval of the ethics committee at Napoleão Laureano Hospital, in João Pessoa, Brazil. To define our model's cases, we collected real patient data at Napoleão Laureano Hospital. To define our model's rules and weights, we researched specialized literature and interviewed health professional. To validate our model, we used K-fold cross validation with the data collected at Napoleão Laureano Hospital. The results showed that our approach is an effective CBR system to diagnose cancer."
},
{
"pmid": "25991168",
"title": "Using EHRs for Heart Failure Therapy Recommendation Using Multidimensional Patient Similarity Analytics.",
"abstract": "Electronic Health Records (EHRs) contain a wealth of information about an individual patient's diagnosis, treatment and health outcomes. This information can be leveraged effectively to identify patients who are similar to each for disease diagnosis and prognosis. In recent years, several machine learning methods have been proposed to assessing patient similarity, although the techniques have primarily focused on the use of patient diagnoses data from EHRs for the learning task. In this study, we develop a multidimensional patient similarity assessment technique that leverages multiple types of information from the EHR and predicts a medication plan for each new patient based on prior knowledge and data from similar patients. In our algorithm, patients have been clustered into different groups using a hierarchical clustering approach and subsequently have been assigned a medication plan based on the similarity index to the overall patient population. We evaluated the performance of our approach on a cohort of heart failure patients (N=1386) identified from EHR data at Mayo Clinic and achieved an AUC of 0.74. Our results suggest that it is feasible to harness population-based information from EHRs for an individual patient-specific assessment."
},
{
"pmid": "23225256",
"title": "Verbal and physical aggression directed at nursing home staff by residents.",
"abstract": "CONTEXT\nLittle research has been conducted on aggression directed at staff by nursing home residents.\n\n\nOBJECTIVE\nTo estimate the prevalence of resident-to-staff aggression (RSA) over a 2-week period.\n\n\nDESIGN\nPrevalent cohort study.\n\n\nSETTING\nLarge urban nursing homes.\n\n\nPARTICIPANTS\nPopulation-based sample of 1,552 residents (80 % of eligible residents) and 282 certified nursing assistants.\n\n\nMAIN OUTCOME MEASURES\nMeasures of resident characteristics and staff reports of physical, verbal, or sexual behaviors directed at staff by residents.\n\n\nRESULTS\nThe staff response rate was 89 %. Staff reported that 15.6 % of residents directed aggressive behaviors toward them (2.8 % physical, 7.5 % verbal, 0.5 % sexual, and 4.8 % both verbal and physical). The most commonly reported type was verbal (12.4 %), particularly screaming at the certified nursing assistant (9.0 % of residents). Overall, physical aggression toward staff was reported for 7.6 % of residents, the most common being hitting (3.9 % of residents). Aggressive behaviors occurred most commonly in resident rooms (77.2 %) and in the morning (84.3 %), typically during the provision of morning care. In a logistic regression model, three clinical factors were significantly associated with resident-to-staff aggression: greater disordered behavior (OR = 6.48, 95 % CI: 4.55, 9.21), affective disturbance (OR = 2.29, 95 % CI: 1.68, 3.13), and need for activities of daily living morning assistance (OR = 2.16, 95 % CI: 1.53, 3.05). Hispanic (as contrasted with White) residents were less likely to be identified as aggressors toward staff (OR = 0.57, 95 % CI: 0.36, 0.91).\n\n\nCONCLUSION\nResident-to-staff aggression in nursing homes is common, particularly during morning care. A variety of demographic and clinical factors was associated with resident-to-staff aggression; this could serve as the basis for evidence-based interventions. Because RSA may negatively affect the quality of care, resident and staff safety, and staff job satisfaction and turnover, further research is needed to understand its causes and consequences and to develop interventions to mitigate its potential impact."
},
{
"pmid": "24269894",
"title": "A framework for unifying ontology-based semantic similarity measures: a study in the biomedical domain.",
"abstract": "Ontologies are widely adopted in the biomedical domain to characterize various resources (e.g. diseases, drugs, scientific publications) with non-ambiguous meanings. By exploiting the structured knowledge that ontologies provide, a plethora of ad hoc and domain-specific semantic similarity measures have been defined over the last years. Nevertheless, some critical questions remain: which measure should be defined/chosen for a concrete application? Are some of the, a priori different, measures indeed equivalent? In order to bring some light to these questions, we perform an in-depth analysis of existing ontology-based measures to identify the core elements of semantic similarity assessment. As a result, this paper presents a unifying framework that aims to improve the understanding of semantic measures, to highlight their equivalences and to propose bridges between their theoretical bases. By demonstrating that groups of measures are just particular instantiations of parameterized functions, we unify a large number of state-of-the-art semantic similarity measures through common expressions. The application of the proposed framework and its practical usefulness is underlined by an empirical analysis of hundreds of semantic measures in a biomedical context."
},
{
"pmid": "27477837",
"title": "Using concept hierarchies to improve calculation of patient similarity.",
"abstract": "OBJECTIVE\nWe introduce a new distance measure that is better suited than traditional methods at detecting similarities in patient records by referring to a concept hierarchy.\n\n\nMATERIALS AND METHODS\nThe new distance measure improves on distance measures for categorical values by taking the path distance between concepts in a hierarchy into account. We evaluate and compare the new measure on a data set of 836 patients.\n\n\nRESULTS\nThe new measure shows marked improvements over the standard measures, both qualitatively and quantitatively. Using the new measure for clustering patient data reveals structure that is otherwise not visible. Statistical comparisons of distances within patient groups with similar diagnoses shows that the new measure is significantly better at detecting these similarities than the standard measures.\n\n\nCONCLUSION\nThe new distance measure is an improvement over the current standard whenever a hierarchical arrangement of categorical values is available."
}
] |
PLoS Computational Biology | 30995214 | PMC6488101 | 10.1371/journal.pcbi.1006713 | Learning and forgetting using reinforced Bayesian change detection | Agents living in volatile environments must be able to detect changes in contingencies while refraining to adapt to unexpected events that are caused by noise. In Reinforcement Learning (RL) frameworks, this requires learning rates that adapt to past reliability of the model. The observation that behavioural flexibility in animals tends to decrease following prolonged training in stable environment provides experimental evidence for such adaptive learning rates. However, in classical RL models, learning rate is either fixed or scheduled and can thus not adapt dynamically to environmental changes. Here, we propose a new Bayesian learning model, using variational inference, that achieves adaptive change detection by the use of Stabilized Forgetting, updating its current belief based on a mixture of fixed, initial priors and previous posterior beliefs. The weight given to these two sources is optimized alongside the other parameters, allowing the model to adapt dynamically to changes in environmental volatility and to unexpected observations. This approach is used to implement the “critic” of an actor-critic RL model, while the actor samples the resulting value distributions to choose which action to undertake. We show that our model can emulate different adaptation strategies to contingency changes, depending on its prior assumptions of environmental stability, and that model parameters can be fit to real data with high accuracy. The model also exhibits trade-offs between flexibility and computational costs that mirror those observed in real data. Overall, the proposed method provides a general framework to study learning flexibility and decision making in RL contexts. | Related workThe adaptation of learning to contingency changes and noise has numerous connections to various scientific fields from cognitive psychology to machine learning. A classical finding in behavioural neuroscience is that instrumental behaviours tend to be less and less flexible as subjects repeatedly receive positive reinforcement after selecting a certain action in a certain context, both in animals [5–8] and humans [9–13]. This suggests that biological agents indeed adapt their learning rate to inferred environmental stability: when the environment appears stable (e.g. after prolonged experience of a rewarded stimulus-response association), they show increased tendency to maintain their model of the environment unchanged despite reception of unexpected data.Most studies on such automatization of behaviour have focused on action selection. However, weighting new evidence against previous belief is also a fundamental problem for perception and cognition [14–16]. Predictive coding [17–22] provides a rich, global, framework that has the potential to tackle this problem, but an explicit formulation of cognitive flexibility is still lacking. For example, whereas [22] provides an elegant Kalman-like Bayesian filter that learns the current state of the environment based on its past observations and predicts the effect of its actions, it assumes a stable environment and cannot, therefore, adapt dynamically to contingency changes. 
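As a concrete, deliberately simplified illustration of the stabilized-forgetting idea summarized in the abstract above, the toy sketch below tracks a Gaussian mean whose prior at each trial is a mixture of the previous posterior and a fixed naive prior, and it lowers the mixture weight when a new observation is surprising. The weight-update heuristic, the constants, and the function names are assumptions chosen for illustration; the paper's actual model is variational and hierarchical.

```python
import numpy as np
from scipy.stats import norm

def sf_filter(y, mu0=0.0, var0=10.0, obs_var=1.0, w=0.9):
    """Toy stabilized-forgetting estimate of a Gaussian mean (illustration only)."""
    mu, var = mu0, var0
    estimates = []
    for obs in y:
        # Mixture prior: previous posterior (weight w) blended with the fixed naive prior.
        prior_mu = w * mu + (1 - w) * mu0
        prior_var = w * var + (1 - w) * var0 + w * (1 - w) * (mu - mu0) ** 2
        # Lower the weight (i.e. forget more) when the observation is surprising.
        surprise = -norm.logpdf(obs, prior_mu, np.sqrt(prior_var + obs_var))
        w = float(np.clip(1.0 / (1.0 + 0.1 * surprise), 0.05, 0.99))
        # Standard conjugate Gaussian update of the mean.
        gain = prior_var / (prior_var + obs_var)
        mu = prior_mu + gain * (obs - prior_mu)
        var = (1 - gain) * prior_var
        estimates.append(mu)
    return np.array(estimates)

# Example: the hidden mean jumps from 0 to 3 halfway through the session.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
est = sf_filter(y)
print(est[99], est[199])  # estimate just before and well after the change
```

In this toy version, surprise is mapped to the forgetting weight by an arbitrary squashing function; in the model discussed below, the weight is itself a latent variable with a Beta prior that is learned alongside the other parameters.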
The Hierarchical Gaussian Filter (HGF) proposed by Mathys and colleagues [23, 24] provides a mathematical framework that implements learning of a sensory input in a hierarchical manner, and that can account for the emergence of inflexibility in various situations. This model deals with the problem of flexibility (framed as expected “volatility”) by building a hierarchy of random variables: each of these variables is distributed with a Gaussian distribution whose mean equals the value of this variable at the previous trial and whose variance equals a non-linear transform of the variable at the superior level. Each level thus encodes the distribution of the volatility of the level below. Although it has shown its efficiency in numerous applications [25–30], a major limitation of this model, within the context of our present concern, is that, while the HGF accommodates a dynamically varying volatility, it assumes that the precision of the likelihood at the lowest level is static. To understand why this is the case, one should first observe that in the HGF the variance at each level is the product of two factors: a first “tonic” component, which is constant throughout the experiment, and a “phasic” component that is time-varying and controlled by the level above. These terms recall the concepts of “expected” and “unexpected” uncertainty [31, 32], and in the present paper, we will refer to these as variance (of the observation) and volatility (of the contingency). Now consider an experiment with two distinct successive signals, one with a low variance and one with a high variance. When fitted to this dataset, the HGF will consider the lower variance as the tonic component, and all the extra variance in the second part of the signal will be assigned to the “phasic” part of the volatility, thus wrongfully treating noise in the signal as a change of contingency (see Fig 1). In summary, the HGF will have difficulty accounting for changes in the variance of the observations. Moreover, the HGF model cannot forget past experience after changes of contingency; it can only adapt its learning to the current contingency. This contrasts with the approach we propose, where the assessment of a change of contingency is made with the use of a reference, naive prior that plays the role of a “null hypothesis”. This way of making the learning process gravitate around a naive prior allows the model to actively forget past events and to eventually come back to a stable learning state even after very surprising events. These caveats limit the applicability of the HGF to a certain class of datasets, in which contingency changes affect the mean rather than the variance of observations and in which the training set contains all possible future changes that the model may encounter at testing.
Fig 1. Fitting of the HGF model on a dataset with changing variance. Two signals with a low (0.1) and a high (1) variance were successively simulated for 200 trials each. A two-level HGF and the HAFVF were fitted to this simple dataset. A. The HGF considered the lower variance component as a “tonic” factor, whereas all the additional variance of the second part of the signal was assigned to the “phasic” (time-varying) volatility component. This corresponded to a high second-level activation during the second phase of the experiment (B.), reflecting a low estimate of signal stability.
As will be shown in detail below, in the model proposed in the present paper, volatility is not only a function of the variance of the observations: if a new observation falls close enough to previous estimates, the agent refines its posterior estimate of the variance and decreases its forgetting factor (i.e. moves its prior away from the fixed initial prior and closer to the learned posterior from the previous trial); if, on the contrary, the new observation is unlikely given this posterior estimate, the forgetting factor increases (i.e. the prior moves closer to the fixed initial prior) and the model tends to update to a novel state (because of the low precision of the initial prior). In the results of this manuscript, we show that our model outperforms the HGF in such situations.

In Machine Learning and in Statistics, too, the question of whether new, unexpected data should be classified as an outlier or as an environmental change is important [33]. This problem of “denoising” or “filtering” the data is ubiquitous in science, and usually relies on arbitrary assumptions about environmental stability. In signal processing and system identification, adaptive forgetting is a broad field where optimality is highly context (and prior)-dependent [2]. Bayesian Filtering (BF) [34], and in particular the Kalman Filter [35], often lacks the flexibility needed to model real-life signals that are, by nature, changing. Two approaches to this problem can be distinguished: whereas Particle Filtering (PF) [36–38] is computationally expensive, the SF family of algorithms [2, 39], of which our model is a special case, usually achieves greater accuracy for a given amount of resources [36] (for more information, we refer to [35], where SF is reviewed). Most previous approaches in SF have used a truncated exponential prior [40, 41] or a fixed, linear mixture prior to account for the stability of the process [37]. Our approach is innovative in this field in two ways: first, we use a Beta prior on the mixing coefficient (unusual but not unique [42]) and adapt the posterior of this forgetting factor on the basis of past observations, the prior of this parameter and its own adaptive forgetting factor; second, we introduce a hierarchy of forgetting that stabilizes learning when the training length is long.

We therefore intend to focus our present research on the very question of flexibility. We will show how flexibility can be implemented in a Bayesian framework using an adaptive forgetting factor, and what predictions this framework makes when applied to learning and decision making in Model-Free paradigms. | [
"16286932",
"16715055",
"21909324",
"22487034",
"26673945",
"2034749",
"23663408",
"19528002",
"26360579",
"24474914",
"25688217",
"25767445",
"21283556",
"24474914",
"24139048",
"25411501",
"25142296",
"25187943",
"26564686",
"15944135",
"17676057",
"5010404",
"24487030",
"24659960",
"23267662",
"24446502",
"15990243",
"26542975",
"22487035",
"21637741",
"10973778",
"17055746",
"12412886",
"19362448",
"24139036",
"16536645",
"26301468",
"21316475",
"8832893",
"12417672",
"20010823",
"22855817",
"27870610",
"25589744",
"27966103",
"28653668",
"28581478",
"28175922",
"21946325",
"22396408",
"24478635",
"25459409",
"22959354",
"21435563",
"22884326",
"24474945",
"28731839",
"20510862"
] | [
{
"pmid": "16286932",
"title": "Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control.",
"abstract": "A broad range of neural and behavioral data suggests that the brain contains multiple systems for behavioral choice, including one associated with prefrontal cortex and another with dorsolateral striatum. However, such a surfeit of control raises an additional choice problem: how to arbitrate between the systems when they disagree. Here, we consider dual-action choice systems from a normative perspective, using the computational theory of reinforcement learning. We identify a key trade-off pitting computational simplicity against the flexible and statistically efficient use of experience. The trade-off is realized in a competition between the dorsolateral striatal and prefrontal systems. We suggest a Bayesian principle of arbitration between them according to uncertainty, so each controller is deployed when it should be most accurate. This provides a unifying account of a wealth of experimental evidence about the factors favoring dominance by either system."
},
{
"pmid": "16715055",
"title": "The role of the basal ganglia in habit formation.",
"abstract": "Many organisms, especially humans, are characterized by their capacity for intentional, goal-directed actions. However, similar behaviours often proceed automatically, as habitual responses to antecedent stimuli. How are goal-directed actions transformed into habitual responses? Recent work combining modern behavioural assays and neurobiological analysis of the basal ganglia has begun to yield insights into the neural basis of habit formation."
},
{
"pmid": "21909324",
"title": "A critical review of habit learning and the Basal Ganglia.",
"abstract": "The current paper briefly outlines the historical development of the concept of habit learning and discusses its relationship to the basal ganglia. Habit learning has been studied in many different fields of neuroscience using different species, tasks, and methodologies, and as a result it has taken on a wide range of definitions from these various perspectives. We identify five common but not universal, definitional features of habit learning: that it is inflexible, slow or incremental, unconscious, automatic, and insensitive to reinforcer devaluation. We critically evaluate for each of these how it has been defined, its utility for research in both humans and non-human animals, and the evidence that it serves as an accurate description of basal ganglia function. In conclusion, we propose a multi-faceted approach to habit learning and its relationship to the basal ganglia, emphasizing the need for formal definitions that will provide directions for future research."
},
{
"pmid": "22487034",
"title": "Habits, action sequences and reinforcement learning.",
"abstract": "It is now widely accepted that instrumental actions can be either goal-directed or habitual; whereas the former are rapidly acquired and regulated by their outcome, the latter are reflexive, elicited by antecedent stimuli rather than their consequences. Model-based reinforcement learning (RL) provides an elegant description of goal-directed action. Through exposure to states, actions and rewards, the agent rapidly constructs a model of the world and can choose an appropriate action based on quite abstract changes in environmental and evaluative demands. This model is powerful but has a problem explaining the development of habitual actions. To account for habits, theorists have argued that another action controller is required, called model-free RL, that does not form a model of the world but rather caches action values within states allowing a state to select an action based on its reward history rather than its consequences. Nevertheless, there are persistent problems with important predictions from the model; most notably the failure of model-free RL correctly to predict the insensitivity of habitual actions to changes in the action-reward contingency. Here, we suggest that introducing model-free RL in instrumental conditioning is unnecessary, and demonstrate that reconceptualizing habits as action sequences allows model-based RL to be applied to both goal-directed and habitual actions in a manner consistent with what real animals do. This approach has significant implications for the way habits are currently investigated and generates new experimental predictions."
},
{
"pmid": "26673945",
"title": "Fronto-striatal organization: Defining functional and microstructural substrates of behavioural flexibility.",
"abstract": "Discrete yet overlapping frontal-striatal circuits mediate broadly dissociable cognitive and behavioural processes. Using a recently developed multi-echo resting-state functional MRI (magnetic resonance imaging) sequence with greatly enhanced signal compared to noise ratios, we map frontal cortical functional projections to the striatum and striatal projections through the direct and indirect basal ganglia circuit. We demonstrate distinct limbic (ventromedial prefrontal regions, ventral striatum - VS, ventral tegmental area - VTA), motor (supplementary motor areas - SMAs, putamen, substantia nigra) and cognitive (lateral prefrontal and caudate) functional connectivity. We confirm the functional nature of the cortico-striatal connections, demonstrating correlates of well-established goal-directed behaviour (involving medial orbitofrontal cortex - mOFC and VS), probabilistic reversal learning (lateral orbitofrontal cortex - lOFC and VS) and attentional shifting (dorsolateral prefrontal cortex - dlPFC and VS) while assessing habitual model-free (SMA and putamen) behaviours on an exploratory basis. We further use neurite orientation dispersion and density imaging (NODDI) to show that more goal-directed model-based learning (MBc) is also associated with higher mOFC neurite density and habitual model-free learning (MFc) implicates neurite complexity in the putamen. This data highlights similarities between a computational account of MFc and conventional measures of habit learning. We highlight the intrinsic functional and structural architecture of parallel systems of behavioural control."
},
{
"pmid": "23663408",
"title": "Whatever next? Predictive brains, situated agents, and the future of cognitive science.",
"abstract": "Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this \"hierarchical prediction machine\" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency."
},
{
"pmid": "19528002",
"title": "Predictive coding under the free-energy principle.",
"abstract": "This paper considers prediction and perceptual categorization as an inference problem that is solved by the brain. We assume that the brain models the world as a hierarchy or cascade of dynamical systems that encode causal structure in the sensorium. Perception is equated with the optimization or inversion of these internal models, to explain sensory data. Given a model of how sensory data are generated, we can invoke a generic approach to model inversion, based on a free energy bound on the model's evidence. The ensuing free-energy formulation furnishes equations that prescribe the process of recognition, i.e. the dynamics of neuronal activity that represent the causes of sensory input. Here, we focus on a very general model, whose hierarchical and dynamical structure enables simulated brains to recognize and predict trajectories or sequences of sensory states. We first review hierarchical dynamical models and their inversion. We then show that the brain has the necessary infrastructure to implement this inversion and illustrate this point using synthetic birds that can recognize and categorize birdsongs."
},
{
"pmid": "26360579",
"title": "Computational psychiatry: the brain as a phantastic organ.",
"abstract": "In this Review, we discuss advances in computational neuroscience that relate to psychiatry. We review computational psychiatry in terms of the ambitions of investigators, emerging domains of application, and future work. Our focus is on theoretical formulations of brain function that put subjective beliefs and behaviour within formal (computational) frameworks-frameworks that can be grounded in neurophysiology down to the level of synaptic mechanisms. Understanding the principles that underlie the brain's functional architecture might be essential for an informed phenotyping of psychopathology in terms of its pathophysiological underpinnings. We focus on active (Bayesian) inference and predictive coding. Specifically, we show how basic principles of neuronal computation can be used to explain psychopathology, ranging from impoverished theory of mind in autism to abnormalities of smooth pursuit eye movements in schizophrenia."
},
{
"pmid": "24474914",
"title": "Negative learning bias is associated with risk aversion in a genetic animal model of depression.",
"abstract": "The lateral habenula (LHb) is activated by aversive stimuli and the omission of reward, inhibited by rewarding stimuli and is hyperactive in helpless rats-an animal model of depression. Here we test the hypothesis that congenital learned helpless (cLH) rats are more sensitive to decreases in reward size and/or less sensitive to increases in reward than wild-type (WT) control rats. Consistent with the hypothesis, we found that cLH rats were slower to switch preference between two responses after a small upshift in reward size on one of the responses but faster to switch their preference after a small downshift in reward size. cLH rats were also more risk-averse than WT rats-they chose a response delivering a constant amount of reward (\"safe\" response) more often than a response delivering a variable amount of reward (\"risky\" response) compared to WT rats. Interestingly, the level of bias toward negative events was associated with the rat's level of risk aversion when compared across individual rats. cLH rats also showed impaired appetitive Pavlovian conditioning but more accurate responding in a two-choice sensory discrimination task. These results are consistent with a negative learning bias and risk aversion in cLH rats, suggesting abnormal processing of rewarding and aversive events in the LHb of cLH rats."
},
{
"pmid": "25688217",
"title": "Two routes to actorhood: lexicalized potency to act and identification of the actor role.",
"abstract": "The inference of causality is a crucial cognitive ability and language processing is no exception: recent research suggests that, across different languages, the human language comprehension system attempts to identify the primary causer of the state of affairs described (the \"actor\") quickly and unambiguously (Bornkessel-Schlesewsky and Schlesewsky, 2009). This identification can take place verb-independently based on certain prominence cues (e.g., case, word order, animacy). Here, we present two experiments demonstrating that actor potential is also encoded at the level of individual nouns (a king is a better actor than a beggar). Experiment 1 collected ratings for 180 German nouns on 12 scales defined by adjective oppositions and deemed relevant for actorhood potential. By means of structural equation modeling, an actor potential (ACT) value was calculated for each noun. Experiment 2, an event-related potential study, embedded nouns from Experiment 1 in verb-final sentences, in which they were either actors or non-actors. N400 amplitude increased with decreasing ACT values and this modulation was larger for highly frequent nouns and for actor versus non-actor nouns. We argue that potency to act is lexically encoded for individual nouns and, since it modulates the N400 even for non-actor participants, it should be viewed as a property that modulates ease of lexical access (akin, for example, to lexical frequency). We conclude that two separate dimensions of actorhood computation are crucial to language comprehension: an experience-based, lexically encoded (bottom-up) representation of actorhood potential, and a prominence-based, computational mechanism for calculating goodness-of-fit to the actor role in a particular (top-down) sentence context."
},
{
"pmid": "25767445",
"title": "State-dependencies of learning across brain scales.",
"abstract": "Learning is a complex brain function operating on different time scales, from milliseconds to years, which induces enduring changes in brain dynamics. The brain also undergoes continuous \"spontaneous\" shifts in states, which, amongst others, are characterized by rhythmic activity of various frequencies. Besides the most obvious distinct modes of waking and sleep, wake-associated brain states comprise modulations of vigilance and attention. Recent findings show that certain brain states, particularly during sleep, are essential for learning and memory consolidation. Oscillatory activity plays a crucial role on several spatial scales, for example in plasticity at a synaptic level or in communication across brain areas. However, the underlying mechanisms and computational rules linking brain states and rhythms to learning, though relevant for our understanding of brain function and therapeutic approaches in brain disease, have not yet been elucidated. Here we review known mechanisms of how brain states mediate and modulate learning by their characteristic rhythmic signatures. To understand the critical interplay between brain states, brain rhythms, and learning processes, a wide range of experimental and theoretical work in animal models and human subjects from the single synapse to the large-scale cortical level needs to be integrated. By discussing results from experiments and theoretical approaches, we illuminate new avenues for utilizing neuronal learning mechanisms in developing tools and therapies, e.g., for stroke patients and to devise memory enhancement strategies for the elderly."
},
{
"pmid": "21283556",
"title": "Risk-sensitivity in sensorimotor control.",
"abstract": "Recent advances in theoretical neuroscience suggest that motor control can be considered as a continuous decision-making process in which uncertainty plays a key role. Decision-makers can be risk-sensitive with respect to this uncertainty in that they may not only consider the average payoff of an outcome, but also consider the variability of the payoffs. Although such risk-sensitivity is a well-established phenomenon in psychology and economics, it has been much less studied in motor control. In fact, leading theories of motor control, such as optimal feedback control, assume that motor behaviors can be explained as the optimization of a given expected payoff or cost. Here we review evidence that humans exhibit risk-sensitivity in their motor behaviors, thereby demonstrating sensitivity to the variability of \"motor costs.\" Furthermore, we discuss how risk-sensitivity can be incorporated into optimal feedback control models of motor control. We conclude that risk-sensitivity is an important concept in understanding individual motor behavior under uncertainty."
},
{
"pmid": "24474914",
"title": "Negative learning bias is associated with risk aversion in a genetic animal model of depression.",
"abstract": "The lateral habenula (LHb) is activated by aversive stimuli and the omission of reward, inhibited by rewarding stimuli and is hyperactive in helpless rats-an animal model of depression. Here we test the hypothesis that congenital learned helpless (cLH) rats are more sensitive to decreases in reward size and/or less sensitive to increases in reward than wild-type (WT) control rats. Consistent with the hypothesis, we found that cLH rats were slower to switch preference between two responses after a small upshift in reward size on one of the responses but faster to switch their preference after a small downshift in reward size. cLH rats were also more risk-averse than WT rats-they chose a response delivering a constant amount of reward (\"safe\" response) more often than a response delivering a variable amount of reward (\"risky\" response) compared to WT rats. Interestingly, the level of bias toward negative events was associated with the rat's level of risk aversion when compared across individual rats. cLH rats also showed impaired appetitive Pavlovian conditioning but more accurate responding in a two-choice sensory discrimination task. These results are consistent with a negative learning bias and risk aversion in cLH rats, suggesting abnormal processing of rewarding and aversive events in the LHb of cLH rats."
},
{
"pmid": "24139048",
"title": "Hierarchical prediction errors in midbrain and basal forebrain during sensory learning.",
"abstract": "In Bayesian brain theories, hierarchically related prediction errors (PEs) play a central role for predicting sensory inputs and inferring their underlying causes, e.g., the probabilistic structure of the environment and its volatility. Notably, PEs at different hierarchical levels may be encoded by different neuromodulatory transmitters. Here, we tested this possibility in computational fMRI studies of audio-visual learning. Using a hierarchical Bayesian model, we found that low-level PEs about visual stimulus outcome were reflected by widespread activity in visual and supramodal areas but also in the midbrain. In contrast, high-level PEs about stimulus probabilities were encoded by the basal forebrain. These findings were replicated in two groups of healthy volunteers. While our fMRI measures do not reveal the exact neuron types activated in midbrain and basal forebrain, they suggest a dichotomy between neuromodulatory systems, linking dopamine to low-level PEs about stimulus outcome and acetylcholine to more abstract PEs about stimulus probabilities."
},
{
"pmid": "25411501",
"title": "Cholinergic stimulation enhances Bayesian belief updating in the deployment of spatial attention.",
"abstract": "The exact mechanisms whereby the cholinergic neurotransmitter system contributes to attentional processing remain poorly understood. Here, we applied computational modeling to psychophysical data (obtained from a spatial attention task) under a psychopharmacological challenge with the cholinesterase inhibitor galantamine (Reminyl). This allowed us to characterize the cholinergic modulation of selective attention formally, in terms of hierarchical Bayesian inference. In a placebo-controlled, within-subject, crossover design, 16 healthy human subjects performed a modified version of Posner's location-cueing task in which the proportion of validly and invalidly cued targets (percentage of cue validity, % CV) changed over time. Saccadic response speeds were used to estimate the parameters of a hierarchical Bayesian model to test whether cholinergic stimulation affected the trial-wise updating of probabilistic beliefs that underlie the allocation of attention or whether galantamine changed the mapping from those beliefs to subsequent eye movements. Behaviorally, galantamine led to a greater influence of probabilistic context (% CV) on response speed than placebo. Crucially, computational modeling suggested this effect was due to an increase in the rate of belief updating about cue validity (as opposed to the increased sensitivity of behavioral responses to those beliefs). We discuss these findings with respect to cholinergic effects on hierarchical cortical processing and in relation to the encoding of expected uncertainty or precision."
},
{
"pmid": "25142296",
"title": "Role of the medial prefrontal cortex in impaired decision making in juvenile attention-deficit/hyperactivity disorder.",
"abstract": "IMPORTANCE\nAttention-deficit/hyperactivity disorder (ADHD) has been associated with deficient decision making and learning. Models of ADHD have suggested that these deficits could be caused by impaired reward prediction errors (RPEs). Reward prediction errors are signals that indicate violations of expectations and are known to be encoded by the dopaminergic system. However, the precise learning and decision-making deficits and their neurobiological correlates in ADHD are not well known.\n\n\nOBJECTIVE\nTo determine the impaired decision-making and learning mechanisms in juvenile ADHD using advanced computational models, as well as the related neural RPE processes using multimodal neuroimaging.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nTwenty adolescents with ADHD and 20 healthy adolescents serving as controls (aged 12-16 years) were examined using a probabilistic reversal learning task while simultaneous functional magnetic resonance imaging and electroencephalogram were recorded.\n\n\nMAIN OUTCOMES AND MEASURES\nLearning and decision making were investigated by contrasting a hierarchical Bayesian model with an advanced reinforcement learning model and by comparing the model parameters. The neural correlates of RPEs were studied in functional magnetic resonance imaging and electroencephalogram.\n\n\nRESULTS\nAdolescents with ADHD showed more simplistic learning as reflected by the reinforcement learning model (exceedance probability, Px = .92) and had increased exploratory behavior compared with healthy controls (mean [SD] decision steepness parameter β: ADHD, 4.83 [2.97]; controls, 6.04 [2.53]; P = .02). The functional magnetic resonance imaging analysis revealed impaired RPE processing in the medial prefrontal cortex during cue as well as during outcome presentation (P < .05, family-wise error correction). The outcome-related impairment in the medial prefrontal cortex could be attributed to deficient processing at 200 to 400 milliseconds after feedback presentation as reflected by reduced feedback-related negativity (ADHD, 0.61 [3.90] μV; controls, -1.68 [2.52] μV; P = .04).\n\n\nCONCLUSIONS AND RELEVANCE\nThe combination of computational modeling of behavior and multimodal neuroimaging revealed that impaired decision making and learning mechanisms in adolescents with ADHD are driven by impaired RPE processing in the medial prefrontal cortex. This novel, combined approach furthers the understanding of the pathomechanisms in ADHD and may advance treatment strategies."
},
{
"pmid": "25187943",
"title": "Inferring on the intentions of others by hierarchical Bayesian learning.",
"abstract": "Inferring on others' (potentially time-varying) intentions is a fundamental problem during many social transactions. To investigate the underlying mechanisms, we applied computational modeling to behavioral data from an economic game in which 16 pairs of volunteers (randomly assigned to \"player\" or \"adviser\" roles) interacted. The player performed a probabilistic reinforcement learning task, receiving information about a binary lottery from a visual pie chart. The adviser, who received more predictive information, issued an additional recommendation. Critically, the game was structured such that the adviser's incentives to provide helpful or misleading information varied in time. Using a meta-Bayesian modeling framework, we found that the players' behavior was best explained by the deployment of hierarchical learning: they inferred upon the volatility of the advisers' intentions in order to optimize their predictions about the validity of their advice. Beyond learning, volatility estimates also affected the trial-by-trial variability of decisions: participants were more likely to rely on their estimates of advice accuracy for making choices when they believed that the adviser's intentions were presently stable. Finally, our model of the players' inference predicted the players' interpersonal reactivity index (IRI) scores, explicit ratings of the advisers' helpfulness and the advisers' self-reports on their chosen strategy. Overall, our results suggest that humans (i) employ hierarchical generative models to infer on the changing intentions of others, (ii) use volatility estimates to inform decision-making in social interactions, and (iii) integrate estimates of advice accuracy with non-social sources of information. The Bayesian framework presented here can quantify individual differences in these mechanisms from simple behavioral readouts and may prove useful in future clinical studies of maladaptive social cognition."
},
{
"pmid": "26564686",
"title": "Evidence for surprise minimization over value maximization in choice behavior.",
"abstract": "Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents' to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus 'keep their options open'. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations."
},
{
"pmid": "15944135",
"title": "Uncertainty, neuromodulation, and attention.",
"abstract": "Uncertainty in various forms plagues our interactions with the environment. In a Bayesian statistical framework, optimal inference and prediction, based on unreliable observations in changing contexts, require the representation and manipulation of different forms of uncertainty. We propose that the neuromodulators acetylcholine and norepinephrine play a major role in the brain's implementation of these uncertainty computations. Acetylcholine signals expected uncertainty, coming from known unreliability of predictive cues within a context. Norepinephrine signals unexpected uncertainty, as when unsignaled context switches produce strongly unexpected observations. These uncertainty signals interact to enable optimal inference and learning in noisy and changeable environments. This formulation is consistent with a wealth of physiological, pharmacological, and behavioral data implicating acetylcholine and norepinephrine in specific aspects of a range of cognitive processes. Moreover, the model suggests a class of attentional cueing tasks that involve both neuromodulators and shows how their interactions may be part-antagonistic, part-synergistic."
},
{
"pmid": "17676057",
"title": "Learning the value of information in an uncertain world.",
"abstract": "Our decisions are guided by outcomes that are associated with decisions made in the past. However, the amount of influence each past outcome has on our next decision remains unclear. To ensure optimal decision-making, the weight given to decision outcomes should reflect their salience in predicting future outcomes, and this salience should be modulated by the volatility of the reward environment. We show that human subjects assess volatility in an optimal manner and adjust decision-making accordingly. This optimal estimate of volatility is reflected in the fMRI signal in the anterior cingulate cortex (ACC) when each trial outcome is observed. When a new piece of information is witnessed, activity levels reflect its salience for predicting future outcomes. Furthermore, variations in this ACC signal across the population predict variations in subject learning rates. Our results provide a formal account of how we weigh our different experiences in guiding our future actions."
},
{
"pmid": "24487030",
"title": "On several factors that control rates of discounting.",
"abstract": "Discounting occurs when the subjective value of an outcome decreases because its delivery is either delayed or uncertain. Discounting has been widely studied because of its ubiquitous nature. Research from our laboratory has demonstrated that rates of discounting are systematically altered by several different factors. This paper outlines how the type of data-collection method (i.e., multiple choice vs. fill in the blank), how one frames the outcome being discounted (i.e., won vs. owed), and the type of outcome (i.e., money vs. medical treatment) by magnitude of the outcome (i.e., small vs. large) by type of discounting (i.e., delay vs. probability) interaction can potentially control observed rates of discounting. Such findings should not only be of interest to individuals who study the quantitative analyses of discounting, but also to researchers and theoreticians trying to understand and generalize findings from studies on discounting."
},
{
"pmid": "24659960",
"title": "Does temporal discounting explain unhealthy behavior? A systematic review and reinforcement learning perspective.",
"abstract": "The tendency to make unhealthy choices is hypothesized to be related to an individual's temporal discount rate, the theoretical rate at which they devalue delayed rewards. Furthermore, a particular form of temporal discounting, hyperbolic discounting, has been proposed to explain why unhealthy behavior can occur despite healthy intentions. We examine these two hypotheses in turn. We first systematically review studies which investigate whether discount rates can predict unhealthy behavior. These studies reveal that high discount rates for money (and in some instances food or drug rewards) are associated with several unhealthy behaviors and markers of health status, establishing discounting as a promising predictive measure. We secondly examine whether intention-incongruent unhealthy actions are consistent with hyperbolic discounting. We conclude that intention-incongruent actions are often triggered by environmental cues or changes in motivational state, whose effects are not parameterized by hyperbolic discounting. We propose a framework for understanding these state-based effects in terms of the interplay of two distinct reinforcement learning mechanisms: a \"model-based\" (or goal-directed) system and a \"model-free\" (or habitual) system. Under this framework, while discounting of delayed health may contribute to the initiation of unhealthy behavior, with repetition, many unhealthy behaviors become habitual; if health goals then change, habitual behavior can still arise in response to environmental cues. We propose that the burgeoning development of computational models of these processes will permit further identification of health decision-making phenotypes."
},
{
"pmid": "23267662",
"title": "Updating dopamine reward signals.",
"abstract": "Recent work has advanced our knowledge of phasic dopamine reward prediction error signals. The error signal is bidirectional, reflects well the higher order prediction error described by temporal difference learning models, is compatible with model-free and model-based reinforcement learning, reports the subjective rather than physical reward value during temporal discounting and reflects subjective stimulus perception rather than physical stimulus aspects. Dopamine activations are primarily driven by reward, and to some extent risk, whereas punishment and salience have only limited activating effects when appropriate controls are respected. The signal is homogeneous in terms of time course but heterogeneous in many other aspects. It is essential for synaptic plasticity and a range of behavioural learning situations."
},
{
"pmid": "24446502",
"title": "Timing in reward and decision processes.",
"abstract": "Sensitivity to time, including the time of reward, guides the behaviour of all organisms. Recent research suggests that all major reward structures of the brain process the time of reward occurrence, including midbrain dopamine neurons, striatum, frontal cortex and amygdala. Neuronal reward responses in dopamine neurons, striatum and frontal cortex show temporal discounting of reward value. The prediction error signal of dopamine neurons includes the predicted time of rewards. Neurons in the striatum, frontal cortex and amygdala show responses to reward delivery and activities anticipating rewards that are sensitive to the predicted time of reward and the instantaneous reward probability. Together these data suggest that internal timing processes have several well characterized effects on neuronal reward processing."
},
{
"pmid": "15990243",
"title": "Loss of self-control in intertemporal choice may be attributable to logarithmic time-perception.",
"abstract": "Impulsivity and loss of self-control in drug-dependent patients have been associated with the manner in which they discount delayed rewards. Although drugs of abuse have been shown to modify perceived time-duration, little is known regarding the relationship between impulsive decision-making in intertemporal choice and estimation of time-duration. In classical economic theory, it has been hypothesized that people discount future reward value exponentially. In exponential discounting, a temporal discounting rate is constant over time, which has been referred to as dynamic consistency. However, accumulating empirical evidence in biology, psychopharmacology, behavioral neuroscience, and neuroeconomics does not support the hypothesis. Rather, dynamically inconsistent manners of discounting delayed rewards, e.g., hyperbolic discounting, have been repeatedly observed in humans and non-human animals. In spite of recent advances in neuroimaging and neuropsychopharmacological study, the reason why humans and animals discount delayed rewards hyperbolically is unknown. In this study, we hypothesized that empirically-observed dynamical inconsistency in intertemporal choice may result from errors in the perception of time-duration. It is proposed that perception of temporal duration following Weber's law might explain the dynamical inconsistency. Possible future study directions for elucidating neural mechanisms underlying inconsistent intertemporal choice are discussed."
},
{
"pmid": "26542975",
"title": "Hierarchical Bayesian estimation and hypothesis testing for delay discounting tasks.",
"abstract": "A state-of-the-art data analysis procedure is presented to conduct hierarchical Bayesian inference and hypothesis testing on delay discounting data. The delay discounting task is a key experimental paradigm used across a wide range of disciplines from economics, cognitive science, and neuroscience, all of which seek to understand how humans or animals trade off the immediacy verses the magnitude of a reward. Bayesian estimation allows rich inferences to be drawn, along with measures of confidence, based upon limited and noisy behavioural data. Hierarchical modelling allows more precise inferences to be made, thus using sometimes expensive or difficult to obtain data in the most efficient way. The proposed probabilistic generative model describes how participants compare the present subjective value of reward choices on a trial-to-trial basis, estimates participant- and group-level parameters. We infer discount rate as a function of reward size, allowing the magnitude effect to be measured. Demonstrations are provided to show how this analysis approach can aid hypothesis testing. The analysis is demonstrated on data from the popular 27-item monetary choice questionnaire (Kirby, Psychonomic Bulletin & Review, 16(3), 457-462 2009), but will accept data from a range of protocols, including adaptive procedures. The software is made freely available to researchers."
},
{
"pmid": "22487035",
"title": "A theoretical account of cognitive effects in delay discounting.",
"abstract": "Although delay discounting, the attenuation of the value of future rewards, is a robust finding, the mechanism of discounting is not known. We propose a potential mechanism for delay discounting such that discounting emerges from a search process that is trying to determine what rewards will be available in the future. In this theory, the delay dependence of the discounting of future expected rewards arises from three assumptions. First, that the evaluation of outcomes involves a search process. Second, that the value is assigned to an outcome proportionally to how easy it is to find. Third, that outcomes that are less delayed are typically easier for the search process to find. By relaxing this third assumption (e.g. by assuming that episodically-cued outcomes are easier to find), our model suggests that it is possible to dissociate discounting from delay. Our theory thereby explains the empirical result that discounting is slower to episodically-imagined outcomes, because these outcomes are easier for the search process to find. Additionally, the theory explains why improving cognitive resources such as working memory slows discounting, by improving searches and thereby making rewards easier to find. The three assumptions outlined here are likely to be instantiated during deliberative decision-making, but are unlikely in habitual decision-making. We model two simple implementations of this theory and show that they unify empirical results about the role of cognitive function in delay discounting, and make new neural, behavioral, and pharmacological predictions."
},
{
"pmid": "21637741",
"title": "Speed/accuracy trade-off between the habitual and the goal-directed processes.",
"abstract": "Instrumental responses are hypothesized to be of two kinds: habitual and goal-directed, mediated by the sensorimotor and the associative cortico-basal ganglia circuits, respectively. The existence of the two heterogeneous associative learning mechanisms can be hypothesized to arise from the comparative advantages that they have at different stages of learning. In this paper, we assume that the goal-directed system is behaviourally flexible, but slow in choice selection. The habitual system, in contrast, is fast in responding, but inflexible in adapting its behavioural strategy to new conditions. Based on these assumptions and using the computational theory of reinforcement learning, we propose a normative model for arbitration between the two processes that makes an approximately optimal balance between search-time and accuracy in decision making. Behaviourally, the model can explain experimental evidence on behavioural sensitivity to outcome at the early stages of learning, but insensitivity at the later stages. It also explains that when two choices with equal incentive values are available concurrently, the behaviour remains outcome-sensitive, even after extensive training. Moreover, the model can explain choice reaction time variations during the course of learning, as well as the experimental observation that as the number of choices increases, the reaction time also increases. Neurobiologically, by assuming that phasic and tonic activities of midbrain dopamine neurons carry the reward prediction error and the average reward signals used by the model, respectively, the model predicts that whereas phasic dopamine indirectly affects behaviour through reinforcing stimulus-response associations, tonic dopamine can directly affect behaviour through manipulating the competition between the habitual and the goal-directed systems and thus, affect reaction time."
},
{
"pmid": "10973778",
"title": "Stochastic Dynamic Models of Response Time and Accuracy: A Foundational Primer.",
"abstract": "A large class of statistical decision models for performance in simple information processing tasks can be described by linear, first-order, stochastic differential equations (SDEs), whose solutions are diffusion processes. In such models, the first passage time for the diffusion process through a response criterion determines the time at which an observer makes a decision about the identity of a stimulus. Because the assumptions of many cognitive models lead to SDEs that are time inhomogeneous, classical methods for solving such first passage time problems are usually inapplicable. In contrast, recent integral equation methods often yield solutions to both the one-sided and the two-sided first passage time problems, even in the presence of time inhomogeneity. These methods, which are of particular relevance to the cognitive modeler, are described in detail, together with illustrative applications. Copyright 2000 Academic Press."
},
{
"pmid": "17055746",
"title": "Variational free energy and the Laplace approximation.",
"abstract": "This note derives the variational free energy under the Laplace approximation, with a focus on accounting for additional model complexity induced by increasing the number of model parameters. This is relevant when using the free energy as an approximation to the log-evidence in Bayesian model averaging and selection. By setting restricted maximum likelihood (ReML) in the larger context of variational learning and expectation maximisation (EM), we show how the ReML objective function can be adjusted to provide an approximation to the log-evidence for a particular model. This means ReML can be used for model selection, specifically to select or compare models with different covariance components. This is useful in the context of hierarchical models because it enables a principled selection of priors that, under simple hyperpriors, can be used for automatic model selection and relevance determination (ARD). Deriving the ReML objective function, from basic variational principles, discloses the simple relationships among Variational Bayes, EM and ReML. Furthermore, we show that EM is formally identical to a full variational treatment when the precisions are linear in the hyperparameters. Finally, we also consider, briefly, dynamic models and how these inform the regularisation of free energy ascent schemes, like EM and ReML."
},
{
"pmid": "12412886",
"title": "Estimating parameters of the diffusion model: approaches to dealing with contaminant reaction times and parameter variability.",
"abstract": "Three methods for fitting the diffusion model (Ratcliff, 1978) to experimental data are examined. Sets of simulated data were generated with known parameter values, and from fits of the model, we found that the maximum likelihood method was better than the chi-square and weighted least squares methods by criteria of bias in the parameters relative to the parameter values used to generate the data and standard deviations in the parameter estimates. The standard deviations in the parameter values can be used as measures of the variability in parameter estimates from fits to experimental data. We introduced contaminant reaction times and variability into the other components of processing besides the decision process and found that the maximum likelihood and chi-square methods failed, sometimes dramatically. But the weighted least squares method was robust to these two factors. We then present results from modifications of the maximum likelihood and chi-square methods, in which these factors are explicitly modeled, and show that the parameter values of the diffusion model are recovered well. We argue that explicit modeling is an important method for addressing contaminants and variability in nondecision processes and that it can be applied in any theoretical approach to modeling reaction time."
},
{
"pmid": "19362448",
"title": "Goal-directed control and its antipodes.",
"abstract": "In instrumental conditioning, there is a rather precise definition of goal-directed control, and therefore an acute boundary between it and the somewhat more amorphous category comprising its opposites. Here, we review this division in terms of the various distinctions that accompany it in the fields of reinforcement learning and cognitive architectures, considering issues such as declarative and procedural control, the effect of prior distributions over environments, the neural substrates involved, and the differing views about the relative rationality of the various forms of control. Our overall aim is to reconnect some presently far-flung relations."
},
{
"pmid": "24139036",
"title": "Goals and habits in the brain.",
"abstract": "An enduring and richly elaborated dichotomy in cognitive neuroscience is that of reflective versus reflexive decision making and choice. Other literatures refer to the two ends of what is likely to be a spectrum with terms such as goal-directed versus habitual, model-based versus model-free or prospective versus retrospective. One of the most rigorous traditions of experimental work in the field started with studies in rodents and graduated via human versions and enrichments of those experiments to a current state in which new paradigms are probing and challenging the very heart of the distinction. We review four generations of work in this tradition and provide pointers to the forefront of the field's fifth generation."
},
{
"pmid": "16536645",
"title": "Automaticity: a theoretical and conceptual analysis.",
"abstract": "Several theoretical views of automaticity are discussed. Most of these suggest that automaticity should be diagnosed by looking at the presence of features such as unintentional, uncontrolled/uncontrollable, goal independent, autonomous, purely stimulus driven, unconscious, efficient, and fast. Contemporary views further suggest that these features should be investigated separately. The authors examine whether features of automaticity can be disentangled on a conceptual level, because only then is the separate investigation of them worth the effort. They conclude that the conceptual analysis of features is to a large extent feasible. Not all researchers agree with this position, however. The authors show that assumptions of overlap among features are determined by the other researchers' views of automaticity and by the models they endorse for information processing in general."
},
{
"pmid": "26301468",
"title": "Automaticity and multiple memory systems.",
"abstract": "A large number of criteria have been proposed for determining when a behavior has become automatic. Almost all of these were developed before the widespread acceptance of multiple memory systems. Consequently, popular frameworks for studying automaticity often neglect qualitative differences in how different memory systems guide initial learning. Unfortunately, evidence suggests that automaticity criteria derived from these frameworks consistently misclassify certain sets of initial behaviors as automatic. Specifically, criteria derived from cognitive science mislabel much behavior still under the control of procedural memory as automatic, and criteria derived from animal learning mislabel some behaviors under the control of declarative memory as automatic. Even so, neither set of criteria make the opposite error-that is, both sets correctly identify any automatic behavior as automatic. In fact, evidence suggests that although there are multiple memory systems and therefore multiple routes to automaticity, there might nevertheless be only one common representation for automatic behaviors. A number of possible cognitive and cognitive neuroscience models of this single automaticity system are reviewed. WIREs Cogn Sci 2012, 3:363-376. doi: 10.1002/wcs.1172 For further resources related to this article, please visit the WIREs website."
},
{
"pmid": "21316475",
"title": "Cortical and striatal contributions to automaticity in information-integration categorization.",
"abstract": "In information-integration categorization, accuracy is maximized only if information from two or more stimulus components is integrated at some pre-decisional stage. In many cases the optimal strategy is difficult or impossible to describe verbally. Evidence suggests that success in information-integration tasks depends on procedural learning that is mediated largely within the striatum. Although many studies have examined initial information-integration learning, little is known about how automaticity develops in information-integration tasks. To address this issue, each of ten human participants received feedback training on the same information-integration categories for more than 11,000 trials spread over 20 different training sessions. Sessions 2, 4, 10, and 20 were performed inside an MRI scanner. The following results stood out. 1) Automaticity developed between sessions 10 and 20. 2) Pre-automatic performance depended on the putamen, but not on the body and tail of the caudate nucleus. 3) Automatic performance depended only on cortical regions, particularly the supplementary and pre-supplementary motor areas. 4) Feedback processing was mainly associated with deactivations in motor and premotor regions of cortex, and in the ventral lateral prefrontal cortex. 5) The overall effects of practice were consistent with the existing literature on the development of automaticity."
},
{
"pmid": "8832893",
"title": "Neural control of voluntary movement initiation.",
"abstract": "When humans respond to sensory stimulation, their reaction times tend to be long and variable relative to neural transduction and transmission times. The neural processes responsible for the duration and variability of reaction times are not understood. Single-cell recordings in a motor area of the cerebral cortex in behaving rhesus monkeys (Macaca mulatta) were used to evaluate two alternative mathematical models of the processes that underlie reaction times. Movements were initiated if and only if the neural activity reached a specific and constant threshold activation level. Stochastic variability in the rate at which neural activity grew toward that threshold resulted in the distribution of reaction times. This finding elucidates a specific link between motor behavior and activation of neurons in the cerebral cortex."
},
{
"pmid": "12417672",
"title": "Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task.",
"abstract": "Decisions about the visual world can take time to form, especially when information is unreliable. We studied the neural correlate of gradual decision formation by recording activity from the lateral intraparietal cortex (area LIP) of rhesus monkeys during a combined motion-discrimination reaction-time task. Monkeys reported the direction of random-dot motion by making an eye movement to one of two peripheral choice targets, one of which was within the response field of the neuron. We varied the difficulty of the task and measured both the accuracy of direction discrimination and the time required to reach a decision. Both the accuracy and speed of decisions increased as a function of motion strength. During the period of decision formation, the epoch between onset of visual motion and the initiation of the eye movement response, LIP neurons underwent ramp-like changes in their discharge rate that predicted the monkey's decision. A steeper rise in spike rate was associated with stronger stimulus motion and shorter reaction times. The observations suggest that neurons in LIP integrate time-varying signals that originate in the extrastriate visual cortex, accumulating evidence for or against a specific behavioral response. A threshold level of LIP activity appears to mark the completion of the decision process and to govern the tradeoff between accuracy and speed of perception."
},
{
"pmid": "20010823",
"title": "Synaptic computation underlying probabilistic inference.",
"abstract": "We propose that synapses may be the workhorse of the neuronal computations that underlie probabilistic reasoning. We built a neural circuit model for probabilistic inference in which information provided by different sensory cues must be integrated and the predictive powers of individual cues about an outcome are deduced through experience. We found that bounded synapses naturally compute, through reward-dependent plasticity, the posterior probability that a choice alternative is correct given that a cue is presented. Furthermore, a decision circuit endowed with such synapses makes choices on the basis of the summed log posterior odds and performs near-optimal cue combination. The model was validated by reproducing salient observations of, and provides insights into, a monkey experiment using a categorization task. Our model thus suggests a biophysical instantiation of the Bayesian decision rule, while predicting important deviations from it similar to the 'base-rate neglect' observed in human studies when alternatives have unequal prior probabilities."
},
{
"pmid": "22855817",
"title": "Deciding when to decide: time-variant sequential sampling models explain the emergence of value-based decisions in the human brain.",
"abstract": "The cognitive and neuronal mechanisms of perceptual decision making have been successfully linked to sequential sampling models. These models describe the decision process as a gradual accumulation of sensory evidence over time. The temporal evolution of economic choices, however, remains largely unexplored. We tested whether sequential sampling models help to understand the formation of value-based decisions in terms of behavior and brain responses. We used functional magnetic resonance imaging (fMRI) to measure brain activity while human participants performed a buying task in which they freely decided upon how and when to choose. Behavior was accurately predicted by a time-variant sequential sampling model that uses a decreasing rather than fixed decision threshold to estimate the time point of the decision. Presupplementary motor area, caudate nucleus, and anterior insula activation was associated with the accumulation of evidence over time. Furthermore, at the beginning of the decision process the fMRI signal in these regions accounted for trial-by-trial deviations from behavioral model predictions: relatively high activation preceded relatively early responses. The updating of value information was correlated with signals in the ventromedial prefrontal cortex, left and right orbitofrontal cortex, and ventral striatum but also in the primary motor cortex well before the response itself. Our results support a view of value-based decisions as emerging from sequential sampling of evidence and suggest a close link between the accumulation process and activity in the motor system when people are free to respond at any time."
},
{
"pmid": "27870610",
"title": "Neural Circuits Trained with Standard Reinforcement Learning Can Accumulate Probabilistic Information during Decision Making.",
"abstract": "Much experimental evidence suggests that during decision making, neural circuits accumulate evidence supporting alternative options. A computational model well describing this accumulation for choices between two options assumes that the brain integrates the log ratios of the likelihoods of the sensory inputs given the two options. Several models have been proposed for how neural circuits can learn these log-likelihood ratios from experience, but all of these models introduced novel and specially dedicated synaptic plasticity rules. Here we show that for a certain wide class of tasks, the log-likelihood ratios are approximately linearly proportional to the expected rewards for selecting actions. Therefore, a simple model based on standard reinforcement learning rules is able to estimate the log-likelihood ratios from experience and on each trial accumulate the log-likelihood ratios associated with presented stimuli while selecting an action. The simulations of the model replicate experimental data on both behavior and neural activity in tasks requiring accumulation of probabilistic cues. Our results suggest that there is no need for the brain to support dedicated plasticity rules, as the standard mechanisms proposed to describe reinforcement learning can enable the neural circuits to perform efficient probabilistic inference."
},
{
"pmid": "25589744",
"title": "fMRI and EEG predictors of dynamic decision parameters during human reinforcement learning.",
"abstract": "What are the neural dynamics of choice processes during reinforcement learning? Two largely separate literatures have examined dynamics of reinforcement learning (RL) as a function of experience but assuming a static choice process, or conversely, the dynamics of choice processes in decision making but based on static decision values. Here we show that human choice processes during RL are well described by a drift diffusion model (DDM) of decision making in which the learned trial-by-trial reward values are sequentially sampled, with a choice made when the value signal crosses a decision threshold. Moreover, simultaneous fMRI and EEG recordings revealed that this decision threshold is not fixed across trials but varies as a function of activity in the subthalamic nucleus (STN) and is further modulated by trial-by-trial measures of decision conflict and activity in the dorsomedial frontal cortex (pre-SMA BOLD and mediofrontal theta in EEG). These findings provide converging multimodal evidence for a model in which decision threshold in reward-based tasks is adjusted as a function of communication from pre-SMA to STN when choices differ subtly in reward values, allowing more time to choose the statistically more rewarding option."
},
{
"pmid": "27966103",
"title": "The drift diffusion model as the choice rule in reinforcement learning.",
"abstract": "Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups."
},
{
"pmid": "28653668",
"title": "Reminders of past choices bias decisions for reward in humans.",
"abstract": "We provide evidence that decisions are made by consulting memories for individual past experiences, and that this process can be biased in favour of past choices using incidental reminders. First, in a standard rewarded choice task, we show that a model that estimates value at decision-time using individual samples of past outcomes fits choices and decision-related neural activity better than a canonical incremental learning model. In a second experiment, we bias this sampling process by incidentally reminding participants of individual past decisions. The next decision after a reminder shows a strong influence of the action taken and value received on the reminded trial. These results provide new empirical support for a decision architecture that relies on samples of individual past choice episodes rather than incrementally averaged rewards in evaluating options and has suggestive implications for the underlying cognitive and neural mechanisms."
},
{
"pmid": "28581478",
"title": "Reinstated episodic context guides sampling-based decisions for reward.",
"abstract": "How does experience inform decisions? In episodic sampling, decisions are guided by a few episodic memories of past choices. This process can yield choice patterns similar to model-free reinforcement learning; however, samples can vary from trial to trial, causing decisions to vary. Here we show that context retrieved during episodic sampling can cause choice behavior to deviate sharply from the predictions of reinforcement learning. Specifically, we show that, when a given memory is sampled, choices (in the present) are influenced by the properties of other decisions made in the same context as the sampled event. This effect is mediated by fMRI measures of context retrieval on each trial, suggesting a mechanism whereby cues trigger retrieval of context, which then triggers retrieval of other decisions from that context. This result establishes a new avenue by which experience can guide choice and, as such, has broad implications for the study of decisions."
},
{
"pmid": "28175922",
"title": "Fundamentals and Recent Developments in Approximate Bayesian Computation.",
"abstract": "Bayesian inference plays an important role in phylogenetics, evolutionary biology, and in many other branches of science. It provides a principled framework for dealing with uncertainty and quantifying how it changes in the light of new evidence. For many complex models and inference problems, however, only approximate quantitative answers are obtainable. Approximate Bayesian computation (ABC) refers to a family of algorithms for approximate inference that makes a minimal set of assumptions by only requiring that sampling from a model is possible. We explain here the fundamentals of ABC, review the classical algorithms, and highlight recent developments. [ABC; approximate Bayesian computation; Bayesian inference; likelihood-free inference; phylogenetics; simulator-based models; stochastic simulation models; tree-based models.]"
},
{
"pmid": "21946325",
"title": "Subthalamic nucleus stimulation reverses mediofrontal influence over decision threshold.",
"abstract": "It takes effort and time to tame one's impulses. Although medial prefrontal cortex (mPFC) is broadly implicated in effortful control over behavior, the subthalamic nucleus (STN) is specifically thought to contribute by acting as a brake on cortico-striatal function during decision conflict, buying time until the right decision can be made. Using the drift diffusion model of decision making, we found that trial-to-trial increases in mPFC activity (EEG theta power, 4-8 Hz) were related to an increased threshold for evidence accumulation (decision threshold) as a function of conflict. Deep brain stimulation of the STN in individuals with Parkinson's disease reversed this relationship, resulting in impulsive choice. In addition, intracranial recordings of the STN area revealed increased activity (2.5-5 Hz) during these same high-conflict decisions. Activity in these slow frequency bands may reflect a neural substrate for cortico-basal ganglia communication regulating decision processes."
},
{
"pmid": "22396408",
"title": "Bias in the brain: a diffusion model analysis of prior probability and potential payoff.",
"abstract": "In perceptual decision-making, advance knowledge biases people toward choice alternatives that are more likely to be correct and more likely to be profitable. Accumulation-to-bound models provide two possible explanations for these effects: prior knowledge about the relative attractiveness of the alternatives at hand changes either the starting point of the decision process, or the rate of evidence accumulation. Here, we used model-based functional MRI to investigate whether these effects are similar for different types of prior knowledge, and whether there is a common neural substrate underlying bias in simple perceptual choices. We used two versions of the random-dot motion paradigm in which we manipulated bias by: (1) changing the prior likelihood of occurrence for two alternatives (\"prior probability\") and (2) assigning a larger reward to one of two alternatives (\"potential payoff\"). Human subjects performed the task inside and outside a 3T MRI scanner. For each manipulation, bias was quantified by fitting the drift diffusion model to the behavioral data. Individual measurements of bias were then used in the imaging analyses to identify regions involved in biasing choice behavior. Behavioral results showed that subjects tended to make more and faster choices toward the alternative that was most probable or had the largest payoff. This effect was primarily due to a change in the starting point of the accumulation process. Imaging results showed that, at cue level, regions of the frontoparietal network are involved in changing the starting points in both manipulations, suggesting a common mechanism underlying the biasing effects of prior knowledge."
},
{
"pmid": "24478635",
"title": "N2A: a computational tool for modeling from neurons to algorithms.",
"abstract": "The exponential increase in available neural data has combined with the exponential growth in computing (\"Moore's law\") to create new opportunities to understand neural systems at large scale and high detail. The ability to produce large and sophisticated simulations has introduced unique challenges to neuroscientists. Computational models in neuroscience are increasingly broad efforts, often involving the collaboration of experts in different domains. Furthermore, the size and detail of models have grown to levels for which understanding the implications of variability and assumptions is no longer trivial. Here, we introduce the model design platform N2A which aims to facilitate the design and validation of biologically realistic models. N2A uses a hierarchical representation of neural information to enable the integration of models from different users. N2A streamlines computational validation of a model by natively implementing standard tools in sensitivity analysis and uncertainty quantification. The part-relationship representation allows both network-level analysis and dynamical simulations. We will demonstrate how N2A can be used in a range of examples, including a simple Hodgkin-Huxley cable model, basic parameter sensitivity of an 80/20 network, and the expression of the structural plasticity of a growing dendrite and stem cell proliferation and differentiation."
},
{
"pmid": "25459409",
"title": "Functionally dissociable influences on learning rate in a dynamic environment.",
"abstract": "Maintaining accurate beliefs in a changing environment requires dynamically adapting the rate at which one learns from new experiences. Beliefs should be stable in the face of noisy data but malleable in periods of change or uncertainty. Here we used computational modeling, psychophysics, and fMRI to show that adaptive learning is not a unitary phenomenon in the brain. Rather, it can be decomposed into three computationally and neuroanatomically distinct factors that were evident in human subjects performing a spatial-prediction task: (1) surprise-driven belief updating, related to BOLD activity in visual cortex; (2) uncertainty-driven belief updating, related to anterior prefrontal and parietal activity; and (3) reward-driven belief updating, a context-inappropriate behavioral tendency related to activity in ventral striatum. These distinct factors converged in a core system governing adaptive learning. This system, which included dorsomedial frontal cortex, responded to all three factors and predicted belief updating both across trials and across individuals."
},
{
"pmid": "22959354",
"title": "The ubiquity of model-based reinforcement learning.",
"abstract": "The reward prediction error (RPE) theory of dopamine (DA) function has enjoyed great success in the neuroscience of learning and decision-making. This theory is derived from model-free reinforcement learning (RL), in which choices are made simply on the basis of previously realized rewards. Recently, attention has turned to correlates of more flexible, albeit computationally complex, model-based methods in the brain. These methods are distinguished from model-free learning by their evaluation of candidate actions using expected future outcomes according to a world model. Puzzlingly, signatures from these computations seem to be pervasive in the very same regions previously thought to support model-free learning. Here, we review recent behavioral and neural evidence about these two systems, in attempt to reconcile their enigmatic cohabitation in the brain."
},
{
"pmid": "21435563",
"title": "Model-based influences on humans' choices and striatal prediction errors.",
"abstract": "The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making."
},
{
"pmid": "22884326",
"title": "Dopamine enhances model-based over model-free choice behavior.",
"abstract": "Decision making is often considered to arise out of contributions from a model-free habitual system and a model-based goal-directed system. Here, we investigated the effect of a dopamine manipulation on the degree to which either system contributes to instrumental behavior in a two-stage Markov decision task, which has been shown to discriminate model-free from model-based control. We found increased dopamine levels promote model-based over model-free choice."
},
{
"pmid": "24474945",
"title": "Quality-space theory in olfaction.",
"abstract": "Quality-space theory (QST) explains the nature of the mental qualities distinctive of perceptual states by appeal to their role in perceiving. QST is typically described in terms of the mental qualities that pertain to color. Here we apply QST to the olfactory modalities. Olfaction is in various respects more complex than vision, and so provides a useful test case for QST. To determine whether QST can deal with the challenges olfaction presents, we show how a quality space (QS) could be constructed relying on olfactory perceptible properties and the olfactory mental qualities then defined by appeal to that QS of olfactory perceptible properties. We also consider how to delimit the olfactory QS from other modalities. We further apply QST to the role that experience plays in refining our olfactory discriminative abilities and the occurrence of olfactory mental qualities in non-conscious olfactory states. QST is shown to be fully applicable to and useful for understanding the complex domain of olfaction."
},
{
"pmid": "28731839",
"title": "Cost-Benefit Arbitration Between Multiple Reinforcement-Learning Systems.",
"abstract": "Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system's task-specific costs and benefits. To investigate this proposal, we conducted two experiments showing that people increase model-based control when it achieves greater accuracy than model-free control, and especially when the rewards of accurate performance are amplified. In contrast, they are insensitive to reward amplification when model-based and model-free control yield equivalent accuracy. This suggests that humans adaptively balance habitual and planned action through on-line cost-benefit analysis."
},
{
"pmid": "20510862",
"title": "States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning.",
"abstract": "Reinforcement learning (RL) uses sequential experience with situations (\"states\") and outcomes to assess actions. Whereas model-free RL uses this experience directly, in the form of a reward prediction error (RPE), model-based RL uses it indirectly, building a model of the state transition and outcome structure of the environment, and evaluating actions by searching this model. A state prediction error (SPE) plays a central role, reporting discrepancies between the current model and the observed state transitions. Using functional magnetic resonance imaging in humans solving a probabilistic Markov decision task, we found the neural signature of an SPE in the intraparietal sulcus and lateral prefrontal cortex, in addition to the previously well-characterized RPE in the ventral striatum. This finding supports the existence of two unique forms of learning signal in humans, which may form the basis of distinct computational strategies for guiding behavior."
}
] |
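Note: the reference abstracts above repeatedly describe decisions as noisy evidence accumulating to a fixed threshold (ramping activity, drift-diffusion choice rules, sequential sampling). Purely as an illustrative aid, the Python sketch below simulates such an accumulator; the function name `simulate_ddm`, the parameter values, and the defaults are assumptions of this example and are not taken from any of the cited papers.

```python
# Minimal drift-diffusion (accumulate-to-bound) simulation; all parameter values are arbitrary assumptions.
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise_sd=1.0, dt=0.001,
                 non_decision_time=0.3, start_point=0.0, max_time=5.0, rng=None):
    """Simulate one trial: evidence x drifts with rate `drift` plus Gaussian noise
    until it crosses +threshold (choice 1) or -threshold (choice 0).
    Returns (choice, reaction_time); choice is None if no bound is reached."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = start_point, 0.0
    while t < max_time:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return 1, t + non_decision_time
        if x <= -threshold:
            return 0, t + non_decision_time
    return None, None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stronger evidence (larger drift) gives faster and more accurate decisions,
    # the qualitative pattern described in the abstracts above.
    for drift in (0.2, 0.8, 2.0):
        trials = [simulate_ddm(drift, rng=rng) for _ in range(2000)]
        trials = [(c, rt) for c, rt in trials if c is not None]
        acc = np.mean([c for c, _ in trials])
        mean_rt = np.mean([rt for _, rt in trials])
        print(f"drift={drift:.1f}  accuracy={acc:.2f}  mean RT={mean_rt:.2f} s")
```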
Scientific Reports | 31043650 | PMC6494868 | 10.1038/s41598-019-43250-2 | Dense Quantum Measurement Theory | Quantum measurement is a fundamental cornerstone of experimental quantum computations. The main issues in current quantum measurement strategies are the high number of measurement rounds to determine a global optimal measurement output and the low success probability of finding a global optimal measurement output. Each measurement round requires preparing the quantum system and applying quantum operations and measurements with high-precision control in the physical layer. These issues result in extremely high-cost measurements with a low probability of success at the end of the measurement rounds. Here, we define a novel measurement for quantum computations called dense quantum measurement. The dense measurement strategy aims at fixing the main drawbacks of standard quantum measurements by achieving a significant reduction in the number of necessary measurement rounds and by radically improving the success probabilities of finding global optimal outputs. We provide application scenarios for quantum circuits with arbitrary unitary sequences, and prove that dense measurement theory provides an experimentally implementable solution for gate-model quantum computer architectures. | Related WorksThe related works on quantum measurement theory, gate-model quantum computers and compressed sensing are summarized as follows.Quantum Measurement TheoryQuantum measurement has a fundamental role in quantum mechanics with several different theoretical interpretations32–44. The measurement of a quantum system collapses of the quantum system into an eigenstate of the operator corresponding to the measurement. The measurement of a quantum system produces a measurement result, the expected values of measurement are associated with a particular probability distribution.In quantum mechanics several different measurement techniques exist. In a projective measurement32–39, the measurement of the quantum system is mathematically interpreted by projectors that project any initial quantum state onto one of the basis states. The projective measurement is also known as von Neumann measurement32. In our manuscript the projective measurement with no post-processing on the measurement results is referred to as standard measurement (It is motivated by the fact, that in a gate-model quantum computer environment the output quantum system is measured with respect to a particular computational basis).The von Neumann measurements are a special case of a more general measurement, the POVM measurement35,40–44. Without loss of generality, the POVM is a generalized measurement that can be interpreted as a von Neumann measurement that utilizes an additional quantum system (called ancilla). The POVM measurement is mathematically described by a set of positive operators such that their sum is the identity operator51–53. 
The POVM measurements therefore can be expressed in terms of projective measurements (see also Neumark’s dilation theorem54–56).Another subject connected to quantum measurement theory is quantum-state discrimination57–61 that covers the distinguishability of quantum states, and the problem of differentiation between non-orthogonal quantum states.Gate-Model Quantum ComputersThe theoretical background of the gate-model quantum computer environment utilized in our manuscript can be found in12 and13.In13, the authors studied the subject of objective function evaluation of computational problems fed into a gate-model quantum computer environment. The work focuses on a qubit architectures with a fixed hardware structure in the physical layout. In the system model of a gate-model quantum computer, the quantum computer is modeled as a sequence of unitary operators (quantum gates). The quantum gates are associated with a particular control parameter called the gate parameter. The quantum gates can process one-qubit length and multi-qubit length quantum systems. The input quantum system (particularly a superposed quantum system) of the quantum circuit is transformed via a sequence of unitaries controlled via the gate parameters, and the output qubits are measured by a measurement array. The measurement in the model is realized by a projective measurement applied on a qubits that outputs a logical bit with value zero or one for each measured qubit. The result of the measurement is therefore a classical bitstring. The output bitstring is processed further to estimate the objective function of the quantum computer. The work also induces and opens several important optimization questions, such as the optimization of quantum circuits of gate-model quantum computers, optimization of objective function estimation, measurement optimization and optimization of post-processing in a gate-model quantum computer environment. In our particular work we are focusing on the optimization of the measurement phase.An optimization algorithm related to gate-model quantum computer architectures is defined in12. The optimization algorithm is called “Quantum Approximate Optimization Algorithm” (QAOA). The aim of the algorithm is to output approximate solutions for combinatorial optimization problems fed into the quantum computer. The algorithm is implementable via gate-model quantum computers such that the depth of the quantum circuit grows linearly with a particular control parameter. The work also proposed the performance of the algorithm at the utilization of different gate parameter values for the unitaries of the gate-model computer environment.In62, the authors studied some attributes of the QAOA algorithm. The authors showed that the output distribution provided by QAOA cannot be efficiently simulated on any classical device. A comparison with the “Quantum Adiabatic Algorithm” (QADI)63,64 is also proposed in the work. The work concluded that the QAOA can be implemented on near-term gate-model quantum computers for optimization problems.An application of the QAOA algorithm to a bounded occurrence constraint problem “Max E3LIN2” can be found in15. In the analyzed problem, the input is a set of linear equations each of which has three boolean variables, and each equation outputs whether the sum of the variables is 0 or is 1 in a mod 2 representation. 
The work is aimed to demonstrate the capabilities of the QAOA algorithm in a gate-model quantum computer environment.In65, the authors studied the objective function value distributions of the QAOA algorithm. The work concluded, at some particular setting and conditions the objective function values could become concentrated. A conclusion of the work, the number of running sequences of the quantum computer can be reduced.In66, the authors analyzed the experimental implementation of the QAOA algorithm on near-term gate-model quantum devices. The work also defined an optimization method for the QAOA, and studied the performance of QAOA. As the authors found, the QAOA can learn via optimization to utilize non-adiabatic mechanisms.In67, the authors studied the implementation of QAOA with parallelizable gates. The work introduced a scheme to parallelize the QAOA for arbitrary all-to-all connected problem graphs in a layout of qubits. The proposed method was defined by single qubit operations and the interactions were set by pair-wise CNOT gates among nearest neighbors. As the work concluded, this structure allows for a parallelizable implementation in quantum devices with a square lattice geometry.In14, the authors defined a gate-model quantum neural network. The gate-model quantum neural network describes a quantum neural network implemented on gate-model quantum computer. The work focuses on the architectural attributes of a gate-model quantum neural network, and studies the training methods. A particular problem studied in the work is the classification of classical data sets which consist of bitstrings with binary labels. In the architectural model of a gate-model quantum neural network, the weights are represented by the gate parameters of the unitaries of the network, and the training method acts these gate parameters. As the authors stated, the gate-model quantum neural networks represent a practically implementable solution for the realization of quantum neural networks on near-term gate-model quantum computer architectures.In68, the authors defined a quantum algorithm that is realized via a quantum Markov process. The analyzed process of the work was a quantum version of a classical probabilistic algorithm for k-SAT defined in69. The work also studied the performance of the proposed quantum algorithm and compared it with the classical algorithm.For a review on the noisy intermediate-scale quantum (NISQ) era and its technological effects and impacts on quantum computing, see1.The subject of quantum computational supremacy (tasks and problems that quantum computers can solve but are beyond the capability of any classical computer) and its practical implications are studied in2. For a work on the complexity-theoretic foundations of quantum supremacy, see3.A comprehensive survey on quantum channels can be found in23, while for a survey on quantum computing technology, see70.Compressed SensingIn traditional information processing, compressed sensing47 is a technique to reduce the sampling rate to recover a signal from fewer samples than it is stated by the Shannon-Nyquist sampling theorem (that states that the sampling rate of a continuous-time signal must be twice its highest frequency for the reconstruction)47–50. In the framework of compressed sensing, the signal reconstruction process exploits the sparsity of signals (in the context of compressed sensing, a signal is called sparse if most of its components are zero)50,71–75. 
Along with the sparsity, the restricted isometry property50,71,75 is also an important concept of compressed sensing, since, without loss of generality, this property makes it possible to yield unique outputs from the measurements of the sparse inputs. The restricted isometry property is also a well-studied problem in the field of compressed sensing76–80.A special technique within compressed sensing is the so-called “1-bit” compressed sensing81–83, where 1-bit measurements are applied that preserve only the sign information of the measurements.The application of compressed sensing covers the fields of traditional signal processing, image processing and several different fields of computational mathematics84–91.The dense quantum measurement theory proposed in our manuscript also utilizes the fundamental concepts of compressed sensing. However, in our framework the primary aims are the reduction of the measurement rounds required to determine a global optimal output at arbitrary unitaries, and the boosting of the success probability of finding a global optimal output at a particular measurement round. The results are illustrated through a gate-model quantum computer environment. | [
"28905912",
"27488798",
"24759412",
"27437573",
"12066177",
"28905917",
"26941315",
"9912632"
] | [
{
"pmid": "28905912",
"title": "Quantum computational supremacy.",
"abstract": "The field of quantum algorithms aims to find ways to speed up the solution of computational problems by using a quantum computer. A key milestone in this field will be when a universal quantum computer performs a computational task that is beyond the capability of any classical computer, an event known as quantum supremacy. This would be easier to achieve experimentally than full-scale quantum computing, but involves new theoretical challenges. Here we present the leading proposals to achieve quantum supremacy, and discuss how we can reliably compare the power of a classical computer to the power of a quantum computer."
},
{
"pmid": "27488798",
"title": "Demonstration of a small programmable quantum computer with atomic qubits.",
"abstract": "Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels."
},
{
"pmid": "24759412",
"title": "Superconducting quantum circuits at the surface code threshold for fault tolerance.",
"abstract": "A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits."
},
{
"pmid": "27437573",
"title": "Extending the lifetime of a quantum bit with error correction in superconducting circuits.",
"abstract": "Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The 'break-even' point of QEC--at which the lifetime of a qubit exceeds the lifetime of the constituents of the system--has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 longer than the lifetime of the best physical qubit (the |0〉f and |1〉f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system."
},
{
"pmid": "12066177",
"title": "Architecture for a large-scale ion-trap quantum computer.",
"abstract": "Among the numerous types of architecture being explored for quantum computers are systems utilizing ion traps, in which quantum bits (qubits) are formed from the electronic states of trapped ions and coupled through the Coulomb interaction. Although the elementary requirements for quantum computation have been demonstrated in this system, there exist theoretical and technical obstacles to scaling up the approach to large numbers of qubits. Therefore, recent efforts have been concentrated on using quantum communication to link a number of small ion-trap quantum systems. Developing the array-based approach, we show how to achieve massively parallel gate operation in a large-scale quantum computer, based on techniques already demonstrated for manipulating small quantum registers. The use of decoherence-free subspaces significantly reduces decoherence during ion transport, and removes the requirement of clock synchronization between the interaction regions."
},
{
"pmid": "28905917",
"title": "Quantum machine learning.",
"abstract": "Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable."
},
{
"pmid": "26941315",
"title": "Realization of a scalable Shor algorithm.",
"abstract": "Certain algorithms for quantum computers are able to outperform their classical counterparts. In 1994, Peter Shor came up with a quantum algorithm that calculates the prime factors of a large number vastly more efficiently than a classical computer. For general scalability of such algorithms, hardware, quantum error correction, and the algorithmic realization itself need to be extensible. Here we present the realization of a scalable Shor algorithm, as proposed by Kitaev. We factor the number 15 by effectively employing and controlling seven qubits and four \"cache qubits\" and by implementing generalized arithmetic operations, known as modular multipliers. This algorithm has been realized scalably within an ion-trap quantum computer and returns the correct factors with a confidence level exceeding 99%."
}
] |
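Note: the related-work text of the entry above leans on compressed sensing, i.e., a sparse signal can often be recovered from far fewer random measurements than the Nyquist rate suggests. Purely as an illustration of that classical idea (not of the paper's dense quantum measurement scheme), the sketch below recovers a sparse vector with orthogonal matching pursuit; the problem sizes, the Gaussian measurement matrix, and the helper name `omp` are assumptions of this example.

```python
import numpy as np

def omp(A, y, sparsity):
    """Recover a `sparsity`-sparse x from y = A @ x via orthogonal matching pursuit."""
    support, residual = [], y.copy()
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit restricted to the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m, s = 256, 64, 5                            # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian measurement matrix
    y = A @ x                                       # m << n measurements of a sparse signal
    x_hat = omp(A, y, s)
    # For these sizes recovery is typically near-exact, though not guaranteed in general.
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```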
Scientific Reports | 31043666 | PMC6494992 | 10.1038/s41598-019-42516-z | Electrocardiogram generation with a bidirectional LSTM-CNN generative adversarial network | Heart disease is a malignant threat to human health. Electrocardiogram (ECG) tests are used to help diagnose heart disease by recording the heart’s activity. However, automated medical-aided diagnosis with computers usually requires a large volume of labeled clinical data without patients' privacy to train the model, which is an empirical problem that still needs to be solved. To address this problem, we propose a generative adversarial network (GAN), which is composed of a bidirectional long short-term memory (LSTM) and convolutional neural network (CNN), referred to as BiLSTM-CNN, to generate synthetic ECG data that agree with existing clinical data so that the features of patients with heart disease can be retained. The model includes a generator and a discriminator, where the generator employs the two layers of the BiLSTM networks and the discriminator is based on convolutional neural networks. The 48 ECG records from individuals of the MIT-BIH database were used to train the model. We compared the performance of our model with two other generative models, the recurrent neural network autoencoder (RNN-AE) and the recurrent neural network variational autoencoder (RNN-VAE). The results showed that the loss function of our model converged to zero the fastest. We also evaluated the loss of the discriminator of GANs with different combinations of generator and discriminator. The results indicated that BiLSTM-CNN GAN could generate ECG data with high morphological similarity to real ECG recordings. | Related Work Generative Adversarial Network The GAN is a deep generative model that differs from other generative models such as autoencoder in terms of the methods employed for generating data and is mainly comprised of a generator and a discriminator. The generator produces data based on sampled noise data points that follow a Gaussian distribution and learns from the feedback given by the discriminator. The discriminator learns the probability distribution of the real data and gives a true-or-false value to judge whether the generated data are real ones. The two sub-models comprising the generator and discriminator reach a convergence state by playing a zero-sum game. Figure 1 illustrates the architecture of GAN (Figure 1: Architecture of the GAN). The solution obtained by GAN can be viewed as a min-max optimization process. The objective function is:
$$\min_G \max_D V(D,G) = E_{x\sim p_{data}(x)}[\log D(x)] + E_{z\sim p_{z}(z)}[\log(1-D(G(z)))], \qquad (1)$$
where D is the discriminator and G is the generator. When the distribution of the real data is equivalent to the distribution of the generated data, the output of the discriminator can be regarded as the optimal result. GAN has been successfully applied in several areas such as natural language processing16,17, latent space learning18, morphological studies19, and image-to-image translation20.
RNN
Recurrent neural network has been widely used to solve tasks of processing time series data21, speech recognition22, and image generation23. Recently, it has also been applied to ECG signal denoising and ECG classification for detecting obstructions in sleep apnea24. RNN typically includes an input layer, a hidden layer, and an output layer, where the hidden state at a certain time t is determined by the input at the current time as well as by the hidden state at a previous time:
$$h_t = f(W_{ih} x_t + W_{hh} h_{t-1} + b_h), \qquad (2)$$
$$o_t = g(W_{ho} h_t + b_o), \qquad (3)$$
where f and g are the activation functions, x_t and o_t are the input and output at time t, respectively, h_t is the hidden state at time t, W_{ih,hh,ho} represent the weight matrices that connect the input layer, hidden layer, and output layer, and b_{h,o} denote the biases of the hidden layer and output layer. RNN is highly suitable for short-term dependent problems but is ineffective in dealing with long-term dependent problems. The long short-term memory (LSTM)25 and gated recurrent unit (GRU)26 were introduced to overcome the shortcomings of RNN, including gradient expansion or gradient disappearance during training. The LSTM is a variation of an RNN and is suitable for processing and predicting important events with long intervals and delays in time series data by using an extra architecture called the memory cell to store previously captured information. LSTM has been applied to tasks based on time series data such as anomaly detection in ECG signals27. However, LSTM is not part of the generative models and no studies have employed LSTM to generate ECG data yet. The GRU is also a variation of an RNN, which combines the forget gate and input gate into an update gate to control the amount of information considered from previous time flows at the current time. The reset gate of the GRU is used to control how much information from previous times is ignored. GRUs have been applied in some areas in recent years, such as speech recognition28.
RNN-AE and RNN-VAE
The autoencoder and variational autoencoder (VAE) are generative models proposed before GAN. Besides being used for generating data29, they were also utilized for dimensionality reduction30,31. RNN-AE is an expansion of the autoencoder model where both the encoder and decoder employ RNNs. The encoder outputs a hidden latent code d, which is one of the input values for the decoder. In contrast to the encoder, the output and hidden state of the decoder at the current time depend on the output at the current time and the hidden state of the decoder at the previous time as well as on the latent code d. The goal of RNN-AE is to make the raw data and output for the decoder as similar as possible. Figure 2 illustrates the RNN-AE architecture14 (Figure 2: Illustration of the RNN-AE architecture). VAE is a variant of autoencoder where the decoder no longer outputs a hidden vector, but instead yields two vectors comprising the mean vector and variance vector. A skill called the re-parameterization trick32 is used to re-parameterize the random code z as a deterministic code, and the hidden latent code d is obtained by combining the mean vector and variance vector:
$$d = \mu + \sigma \odot \varepsilon, \qquad (4)$$
where μ is the mean vector, σ is the variance vector, and ε ~ N(0, 1). RNN-VAE is a variant of VAE where a single-layer RNN is used in both the encoder and decoder. This model is suitable for discrete tasks such as sequence-to-sequence learning and sentence generation.
Generation of Time Series Data
To the best of our knowledge, there is no reported study adopting the relevant techniques of deep learning to generate or synthesize ECG signals, but there are some related works on the generation of audio and classic music signals. Methods for generating raw audio waveforms were principally based on training autoregressive models, such as Wavenet33 and SampleRNN34, both of them using conditional probability models, which means that at time t each sample is generated according to all samples at previous time steps. However, autoregressive settings tend to result in slow generation because the output audio samples have to be fed back into the model once each time, while GAN is able to avoid this disadvantage through constant adversarial training that makes the distribution of the generated results and the real data as close as possible. Mogren et al. proposed a method called C-RNN-GAN35 and applied it on a set of classic music. In their work, tones are represented as quadruplets of frequency, length, intensity and timing. Both the generator and the discriminator use a deep LSTM layer and a fully connected layer. Inspired by their work, in our research, each point sampled from ECG is denoted by a one-dimensional vector of the time-step and leads. Donahue et al. applied WaveGANs36 from aspects of time and frequency to audio synthesis in an unsupervised background. WaveGAN uses a one-dimensional filter of length 25 and a great up-sampling factor. However, it is essential that these two operations have the same number of hyper parameters and numerical calculations. According to the above analysis, our architecture of GAN will adopt deep LSTM layers and CNNs to optimize generation of time series sequence. | [
"22187440",
"19273030",
"20703646",
"12669985",
"9377276"
] | [
{
"pmid": "22187440",
"title": "Computerized extraction of electrocardiograms from continuous 12-lead holter recordings reduces measurement variability in a thorough QT study.",
"abstract": "Continuous Holter recordings are often used in thorough QT studies (TQTS), with multiple 10-second electrocardiograms (ECGs) visually selected around predesignated time points. The authors hypothesized that computer-automated ECG selection would reduce within-subject variability, improve study data precision, and increase study power. Using the moxifloxacin and placebo arms of a Holter-based crossover TQTS, the authors compared interval duration measurements (IDMs) from manually selected to computer-selected ECGs. All IDMs were made with a fully automated computer algorithm. Moxifloxacin-induced changes in baseline- and placebo-subtracted QT intervals were similar for manual and computer ECG selection. Mean 90% confidence intervals were narrower, and within-subject variability by mixed-model covariance was lower for computer-selected than for manual-selected ECGs. Computer ECG selection reduced the number of subjects needed to achieve 80% power by 40% to 50% over manual. Computer ECG selection returns accurate ddQTcF values with less measurement variability than manual ECG selection by a variety of metrics. This results in increased study power and reduces the number of subjects needed to achieve desired power, which represents a significant potential source cost savings in clinical drug trials."
},
{
"pmid": "19273030",
"title": "Heartbeat time series classification with support vector machines.",
"abstract": "In this study, heartbeat time series are classified using support vector machines (SVMs). Statistical methods and signal analysis techniques are used to extract features from the signals. The SVM classifier is favorably compared to other neural network-based classification approaches by performing leave-one-out cross validation. The performance of the SVM with respect to other state-of-the-art classifiers is also confirmed by the classification of signals presenting very low signal-to-noise ratio. Finally, the influence of the number of features to the classification rate was also investigated for two real datasets. The first dataset consists of long-term ECG recordings of young and elderly healthy subjects. The second dataset consists of long-term ECG recordings of normal subjects and subjects suffering from coronary artery disease."
},
{
"pmid": "20703646",
"title": "Automatic classification of heartbeats using wavelet neural network.",
"abstract": "The electrocardiogram (ECG) signal is widely employed as one of the most important tools in clinical practice in order to assess the cardiac status of patients. The classification of the ECG into different pathologic disease categories is a complex pattern recognition task. In this paper, we propose a method for ECG heartbeat pattern recognition using wavelet neural network (WNN). To achieve this objective, an algorithm for QRS detection is first implemented, then a WNN Classifier is developed. The experimental results obtained by testing the proposed approach on ECG data from the MIT-BIH arrhythmia database demonstrate the efficiency of such an approach when compared with other methods existing in the literature."
},
{
"pmid": "12669985",
"title": "A dynamical model for generating synthetic electrocardiogram signals.",
"abstract": "A dynamical model based on three coupled ordinary differential equations is introduced which is capable of generating realistic synthetic electrocardiogram (ECG) signals. The operator can specify the mean and standard deviation of the heart rate, the morphology of the PQRST cycle, and the power spectrum of the RR tachogram. In particular, both respiratory sinus arrhythmia at the high frequencies (HFs) and Mayer waves at the low frequencies (LFs) together with the LF/HF ratio are incorporated in the model. Much of the beat-to-beat variation in morphology and timing of the human ECG, including QT dispersion and R-peak amplitude modulation are shown to result. This model may be employed to assess biomedical signal processing techniques which are used to compute clinical statistics from the ECG."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
}
] |
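Note: the related-work text of the ECG-GAN entry above states Eqs. (2)-(4), the vanilla RNN update and the VAE re-parameterization trick. The Python sketch below simply restates those equations as runnable code; it is not the paper's BiLSTM-CNN model, and the choice of tanh for both f and g, the toy sizes, and the random weights are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(2)

def rnn_step(x_t, h_prev, W_ih, W_hh, W_ho, b_h, b_o):
    """One vanilla-RNN step following Eqs. (2)-(3).
    Both f and g are taken to be tanh here; the text only says they are activation functions."""
    h_t = np.tanh(W_ih @ x_t + W_hh @ h_prev + b_h)   # Eq. (2)
    o_t = np.tanh(W_ho @ h_t + b_o)                   # Eq. (3)
    return h_t, o_t

def reparameterize(mu, log_var):
    """Re-parameterization trick of Eq. (4): d = mu + sigma * eps, with eps ~ N(0, 1)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

if __name__ == "__main__":
    n_in, n_hid, n_out = 1, 8, 1                       # arbitrary toy sizes
    W_ih = rng.standard_normal((n_hid, n_in)) * 0.1
    W_hh = rng.standard_normal((n_hid, n_hid)) * 0.1
    W_ho = rng.standard_normal((n_out, n_hid)) * 0.1
    b_h, b_o = np.zeros(n_hid), np.zeros(n_out)

    # Run the recurrence over a short toy waveform standing in for an ECG trace.
    h = np.zeros(n_hid)
    for x in np.sin(np.linspace(0, 2 * np.pi, 20)):
        h, o = rnn_step(np.array([x]), h, W_ih, W_hh, W_ho, b_h, b_o)
    print("last output:", o)

    # Sample a latent code the way an RNN-VAE encoder would, per Eq. (4).
    mu, log_var = np.zeros(4), np.zeros(4)
    print("latent sample:", reparameterize(mu, log_var))
```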
Frontiers in Neurorobotics | 31130854 | PMC6509616 | 10.3389/fnbot.2019.00018 | Supervised Learning in SNN via Reward-Modulated Spike-Timing-Dependent Plasticity for a Target Reaching Vehicle | Spiking neural networks (SNNs) offer many advantages over traditional artificial neural networks (ANNs) such as biological plausibility, fast information processing, and energy efficiency. Although SNNs have been used to solve a variety of control tasks using the Spike-Timing-Dependent Plasticity (STDP) learning rule, existing solutions usually involve hard-coded network architectures solving specific tasks rather than solving different kinds of tasks generally. This results in neglecting one of the biggest advantages of ANNs, i.e., being general-purpose and easy-to-use due to their simple network architecture, which usually consists of an input layer, one or multiple hidden layers and an output layer. This paper addresses the problem by introducing an end-to-end learning approach of spiking neural networks constructed with one hidden layer and reward-modulated Spike-Timing-Dependent Plasticity (R-STDP) synapses in an all-to-all fashion. We use the supervised reward-modulated Spike-Timing-Dependent-Plasticity learning rule to train two different SNN-based sub-controllers to replicate a desired obstacle avoiding and goal approaching behavior, provided by pre-generated datasets. Together they make up a target-reaching controller, which is used to control a simulated mobile robot to reach a target area while avoiding obstacles in its path. We demonstrate the performance and effectiveness of our trained SNNs to achieve target reaching tasks in different unknown scenarios. | 2. Related WorkFor many mobile robots, the ability to navigate in its environment is considered as the core function, which requires a robot to plan its path toward the goal location and avoid obstacles at the same time. In this study, performing navigation tasks on a mobile robot are used as a case study for evaluating our proposed SNN learning method.Various model-based control methods for robotic navigation tasks have been widely investigated few decades ago (DeSouza and Kak, 2002; Kruse et al., 2013). For example, Brooks (1986) proposed a robust layered control architecture for mobile robots based on task-achieving behaviors. Bicho et al. (1998) presented an attractor dynamic approach to path planning, which only used low-level distance sensors to implement autonomous vehicle motion. Huang et al. (2006) proposed a steering potential function for vision-guided navigation tasks by using a single camera without recovering depth. Friudenberg and Koziol (2018) presented a new guidance method, which can allow a mobile robot interceptor to guide to, and rendezvous with, a moving target while avoiding obstacles in its path.Meanwhile, the navigation behavior achieved by the biological intelligence in animal kingdoms exhibit excellent performance to avoid unpredictable obstacles agilely even in complex environments and outperform state-of-the-art robots in almost every aspects, such as agility, stability, and energy-efficiency.In order to achieve similar outstanding performances, SNN architectures are increasingly being implemented for solving robotic navigation tasks using different training algorithms or running on neuromorphic hardware, due to those aforementioned advantages of SNNs.Wang et al. 
(2008, 2014) constructed a single-layer SNN using a proximity sensor as the input and then trained it in tasks such as obstacle avoidance and target reaching. In this work, the propagation of the spikes through the network was precisely planned, such that the controlled robot car managed to avoid obstacles, and long-term plasticity was limited to only a few synapses through STDP. Beyeler et al. (2015) implemented a large-scale cortical neural network on a physical robot to achieve visually guided navigation tasks, which produced trajectories similar to human behavioral data. However, most of the neurons in their network were still used as refined planar representations of the visual field by manually setting all the synaptic weights rather than training them. In the work of Cyr and Boukadoum (2012), where classical conditioning was used to train a mobile robot to navigate through the environment, it was even stated that their architecture and initial synaptic weight matrix were intuitively hand-coded. In another example by Nichols et al. (2013), temporal difference learning was used to train a mobile robot in a self-organizing SNN for a wall-following task. However, each synaptic connection between neurons was formed when two specific neurons were active at the same time, which ultimately resulted in every single neuron in this multilayer structure having a specific predetermined function. Moeys et al. (2016) adopted a convolutional neural network (CNN) in the context of a predator/prey scenario. The events from an event-based vision sensor in each step are mapped into an image frame based on the scene activity, which is fed into the CNN as the input. The network was trained off-line on labeled data and directly outputs a simple left, right, or forward action. Milde et al. (2017) performed obstacle avoidance and target acquisition tasks with a robotic vehicle, on which an SNN takes an event-based vision sensor as the input and runs on neuromorphic hardware. It is worth mentioning that some fixed SNN architectures aim at solving a problem by imitating parts of the structures of natural neural networks found in living organisms, such as the withdrawal circuit of the Aplysia (a marine snail) in Alnajjar et al. (2008), olfactory learning observed in the fruit fly or honey bee in Helgadottir et al. (2013), or the cerebellum in Carrillo et al. (2008). There are other SNN-based control approaches that do not depend on a specific network architecture but have other drawbacks that limit their further utility. Bing et al. (2018) introduced an end-to-end learning approach of SNNs for a lane-keeping vehicle. Their SNN was constructed with R-STDP synapses in an all-to-all connection and trained by the R-STDP learning rule. Even though this end-to-end sensorimotor mapping drove the robot to follow lanes with different patterns, their network had a simple architecture with only an input layer and an output layer. Mahadevuni and Li (2017) solved a goal-approaching task by training an SNN using R-STDP. Shim and Li (2017) further proposed a multiplicative R-STDP by multiplying the current weight into the normal R-STDP update and assigned the global reward to all the synapses across two separate hidden layers in an SNN. In fact, most of the other approaches propose architectures that do not necessarily support hidden layers in their networks. In Vasilaki et al. (2009) and Frémaux et al.
(2013), a map was fed into the network in the form of cells which were directly connected to the output layer neurons in a feed-forward and all-to-all manner. Each output neuron represented a different movement direction. In other approaches, such as Helgadottir et al. (2013) or Spüler et al. (2015), only a limited number of synaptic connections employed synaptic plasticity while the majority of the synaptic strengths were fixed. Unfortunately, similar approaches only work for simple tasks rather than more complex tasks, which require precise tuning of many more degrees of freedom, e.g., one or more hidden layers, to solve the given task with satisfactory precision. In summary, it can be seen that state-of-the-art SNNs based on R-STDP are still far from being general-purpose and easy-to-use, let alone the complexity of designing proper rewards or tuning a group of learning parameters. To remove the burden of designing complicated SNN architectures, indirect approaches for training SNNs have been investigated. Foderaro et al. (2010) induced changes in synaptic efficacy through input spikes generated by a separate critic SNN. This external network was provided with control inputs as well as feedback signals and trained using a reward-based STDP learning rule. By minimizing the error between the control output and the optimal control law, it was able to learn adaptive control of an aircraft. This was then used to train a simulated flying insect robot to follow a flight trajectory in Clawson et al. (2016). Similar ideas were presented by Zhang et al. (2012, 2013), Hu et al. (2014), and Mazumder et al. (2016), who trained a simple, virtual insect in a target reaching and obstacle avoidance task. However, this method is not suited for training an SNN on multi-dimensional inputs since the reward is dependent on the sign of the difference between the desired and actual SNN output. This also reveals another defect of most current SNN-based control, which limits the use of SNNs only to one-dimensional output. To remove those aforementioned barriers, the architectures and learning rules used for SNNs should be able to operate on networks with hidden layer(s), multiple outputs, and continuous actions. These properties are also necessary in order for SNNs to extend and rival the concept of deep traditional ANNs using RL strategies, or simply to build a bridge between them. Therefore, we propose a novel SNN training approach based on the R-STDP learning rule and the supervised learning framework. Based on this method, an SNN-based controller for mobile robot applications can be quickly and easily built with the help of traditional control knowledge. | [
"26494281",
"30034334",
"18616974",
"25602766",
"23592970",
"17220510",
"28747883",
"28680387",
"22736650",
"22237491",
"11665765",
"19997492",
"20510579"
] | [
{
"pmid": "26494281",
"title": "A GPU-accelerated cortical neural network model for visually guided robot navigation.",
"abstract": "Humans and other terrestrial animals use vision to traverse novel cluttered environments with apparent ease. On one hand, although much is known about the behavioral dynamics of steering in humans, it remains unclear how relevant perceptual variables might be represented in the brain. On the other hand, although a wealth of data exists about the neural circuitry that is concerned with the perception of self-motion variables such as the current direction of travel, little research has been devoted to investigating how this neural circuitry may relate to active steering control. Here we present a cortical neural network model for visually guided navigation that has been embodied on a physical robot exploring a real-world environment. The model includes a rate based motion energy model for area V1, and a spiking neural network model for cortical area MT. The model generates a cortical representation of optic flow, determines the position of objects based on motion discontinuities, and combines these signals with the representation of a goal location to produce motor commands that successfully steer the robot around obstacles toward the goal. The model produces robot trajectories that closely match human behavioral data. This study demonstrates how neural signals in a model of cortical area MT might provide sufficient motion information to steer a physical robot on human-like paths around obstacles in a real-world environment, and exemplifies the importance of embodiment, as behavior is deeply coupled not only with the underlying model of brain function, but also with the anatomical constraints of the physical body it controls."
},
{
"pmid": "30034334",
"title": "A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks.",
"abstract": "Biological intelligence processes information using impulses or spikes, which makes those living creatures able to perceive and act in the real world exceptionally well and outperform state-of-the-art robots in almost every aspect of life. To make up the deficit, emerging hardware technologies and software knowledge in the fields of neuroscience, electronics, and computer science have made it possible to design biologically realistic robots controlled by spiking neural networks (SNNs), inspired by the mechanism of brains. However, a comprehensive review on controlling robots based on SNNs is still missing. In this paper, we survey the developments of the past decade in the field of spiking neural networks for control tasks, with particular focus on the fast emerging robotics-related applications. We first highlight the primary impetuses of SNN-based robotics tasks in terms of speed, energy efficiency, and computation capabilities. We then classify those SNN-based robotic applications according to different learning rules and explicate those learning rules with their corresponding robotic applications. We also briefly present some existing platforms that offer an interaction between SNNs and robotics simulations for exploration and exploitation. Finally, we conclude our survey with a forecast of future challenges and some associated potential research topics in terms of controlling robots based on SNNs."
},
{
"pmid": "18616974",
"title": "A real-time spiking cerebellum model for learning robot control.",
"abstract": "We describe a neural network model of the cerebellum based on integrate-and-fire spiking neurons with conductance-based synapses. The neuron characteristics are derived from our earlier detailed models of the different cerebellar neurons. We tested the cerebellum model in a real-time control application with a robotic platform. Delays were introduced in the different sensorimotor pathways according to the biological system. The main plasticity in the cerebellar model is a spike-timing dependent plasticity (STDP) at the parallel fiber to Purkinje cell connections. This STDP is driven by the inferior olive (IO) activity, which encodes an error signal using a novel probabilistic low frequency model. We demonstrate the cerebellar model in a robot control system using a target-reaching task. We test whether the system learns to reach different target positions in a non-destructive way, therefore abstracting a general dynamics model. To test the system's ability to self-adapt to different dynamical situations, we present results obtained after changing the dynamics of the robotic platform significantly (its friction and load). The experimental results show that the cerebellar-based system is able to adapt dynamically to different contexts."
},
{
"pmid": "25602766",
"title": "Two-trace model for spike-timing-dependent synaptic plasticity.",
"abstract": "We present an effective model for timing-dependent synaptic plasticity (STDP) in terms of two interacting traces, corresponding to the fraction of activated NMDA receptors and the [Formula: see text] concentration in the dendritic spine of the postsynaptic neuron. This model intends to bridge the worlds of existing simplistic phenomenological rules and highly detailed models, thus constituting a practical tool for the study of the interplay of neural activity and synaptic plasticity in extended spiking neural networks. For isolated pairs of pre- and postsynaptic spikes, the standard pairwise STDP rule is reproduced, with appropriate parameters determining the respective weights and timescales for the causal and the anticausal contributions. The model contains otherwise only three free parameters, which can be adjusted to reproduce triplet nonlinearities in hippocampal culture and cortical slices. We also investigate the transition from time-dependent to rate-dependent plasticity occurring for both correlated and uncorrelated spike patterns."
},
{
"pmid": "23592970",
"title": "Reinforcement learning using a continuous time actor-critic framework with spiking neurons.",
"abstract": "Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity."
},
{
"pmid": "17220510",
"title": "Solving the distal reward problem through linkage of STDP and dopamine signaling.",
"abstract": "In Pavlovian and instrumental conditioning, reward typically comes seconds after reward-triggering actions, creating an explanatory conundrum known as \"distal reward problem\": How does the brain know what firing patterns of what neurons are responsible for the reward if 1) the patterns are no longer there when the reward arrives and 2) all neurons and synapses are active during the waiting period to the reward? Here, we show how the conundrum is resolved by a model network of cortical spiking neurons with spike-timing-dependent plasticity (STDP) modulated by dopamine (DA). Although STDP is triggered by nearly coincident firing patterns on a millisecond timescale, slow kinetics of subsequent synaptic plasticity is sensitive to changes in the extracellular DA concentration during the critical period of a few seconds. Random firings during the waiting period to the reward do not affect STDP and hence make the network insensitive to the ongoing activity-the key feature that distinguishes our approach from previous theoretical studies, which implicitly assume that the network be quiet during the waiting period or that the patterns be preserved until the reward arrives. This study emphasizes the importance of precise firing patterns in brain dynamics and suggests how a global diffusive reinforcement signal in the form of extracellular DA can selectively influence the right synapses at the right time."
},
{
"pmid": "28747883",
"title": "Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System.",
"abstract": "Neuromorphic hardware emulates dynamics of biological neural networks in electronic circuits offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware allows to implement neural-network based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability, characteristic for analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle-avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, moving target, clutter, and poor light conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using a simple biologically-inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed signal analog/digital neuromorphic hardware."
},
{
"pmid": "28680387",
"title": "Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.",
"abstract": "An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning."
},
{
"pmid": "22736650",
"title": "Biologically Inspired SNN for Robot Control.",
"abstract": "This paper proposes a spiking-neural-network-based robot controller inspired by the control structures of biological systems. Information is routed through the network using facilitating dynamic synapses with short-term plasticity. Learning occurs through long-term synaptic plasticity which is implemented using the temporal difference learning rule to enable the robot to learn to associate the correct movement with the appropriate input conditions. The network self-organizes to provide memories of environments that the robot encounters. A Pioneer robot simulator with laser and sonar proximity sensors is used to verify the performance of the network with a wall-following task, and the results are presented."
},
{
"pmid": "22237491",
"title": "Introduction to spiking neural networks: Information processing, learning and applications.",
"abstract": "The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing."
},
{
"pmid": "11665765",
"title": "Spike-based strategies for rapid processing.",
"abstract": "Most experimental and theoretical studies of brain function assume that neurons transmit information as a rate code, but recent studies on the speed of visual processing impose temporal constraints that appear incompatible with such a coding scheme. Other coding schemes that use the pattern of spikes across a population a neurons may be much more efficient. For example, since strongly activated neurons tend to fire first, one can use the order of firing as a code. We argue that Rank Order Coding is not only very efficient, but also easy to implement in biological hardware: neurons can be made sensitive to the order of activation of their inputs by including a feed-forward shunting inhibition mechanism that progressively desensitizes the neuronal population during a wave of afferent activity. In such a case, maximum activation will only be produced when the afferent inputs are activated in the order of their synaptic weights."
},
{
"pmid": "19997492",
"title": "Spike-based reinforcement learning in continuous state and action space: when policy gradient methods fail.",
"abstract": "Changes of synaptic connections between neurons are thought to be the physiological basis of learning. These changes can be gated by neuromodulators that encode the presence of reward. We study a family of reward-modulated synaptic learning rules for spiking neurons on a learning task in continuous space inspired by the Morris Water maze. The synaptic update rule modifies the release probability of synaptic transmission and depends on the timing of presynaptic spike arrival, postsynaptic action potentials, as well as the membrane potential of the postsynaptic neuron. The family of learning rules includes an optimal rule derived from policy gradient methods as well as reward modulated Hebbian learning. The synaptic update rule is implemented in a population of spiking neurons using a network architecture that combines feedforward input with lateral connections. Actions are represented by a population of hypothetical action cells with strong mexican-hat connectivity and are read out at theta frequency. We show that in this architecture, a standard policy gradient rule fails to solve the Morris watermaze task, whereas a variant with a Hebbian bias can learn the task within 20 trials, consistent with experiments. This result does not depend on implementation details such as the size of the neuronal populations. Our theoretical approach shows how learning new behaviors can be linked to reward-modulated plasticity at the level of single synapses and makes predictions about the voltage and spike-timing dependence of synaptic plasticity and the influence of neuromodulators such as dopamine. It is an important step towards connecting formal theories of reinforcement learning with neuronal and synaptic properties."
},
{
"pmid": "20510579",
"title": "Evolving spiking neural networks for audiovisual information processing.",
"abstract": "This paper presents a new modular and integrative sensory information system inspired by the way the brain performs information processing, in particular, pattern recognition. Spiking neural networks are used to model human-like visual and auditory pathways. This bimodal system is trained to perform the specific task of person authentication. The two unimodal systems are individually tuned and trained to recognize faces and speech signals from spoken utterances, respectively. New learning procedures are designed to operate in an online evolvable and adaptive way. Several ways of modelling sensory integration using spiking neural network architectures are suggested and evaluated in computer experiments."
}
] |
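The related-work passage of the entry above, together with several of its cited abstracts (e.g., the distal-reward paper), revolves around reward-modulated STDP: pair-based STDP events are accumulated in a slowly decaying eligibility trace, and the synaptic weight only changes when a scalar reward signal arrives. The snippet below is a minimal, hypothetical sketch of that rule for a single synapse; the time constants, learning rates, and spike trains are invented for the example and are not taken from any of the cited papers.

```python
import numpy as np

def r_stdp_single_synapse(pre_spikes, post_spikes, reward, dt=1.0,
                          a_plus=0.01, a_minus=0.012,
                          tau_stdp=20.0, tau_c=200.0, w0=0.5):
    """Illustrative reward-modulated STDP for one synapse.

    Pair-based STDP events are accumulated in an eligibility trace c(t);
    the weight only changes when a reward r(t) is delivered:
        dc/dt = -c/tau_c + STDP(pre, post)
        dw/dt = r(t) * c(t)
    """
    x_pre, x_post = 0.0, 0.0        # low-pass traces of pre-/post-synaptic spikes
    c, w = 0.0, w0                  # eligibility trace and synaptic weight
    for t in range(len(pre_spikes)):
        x_pre += (-x_pre / tau_stdp) * dt + pre_spikes[t]
        x_post += (-x_post / tau_stdp) * dt + post_spikes[t]
        # LTP when the post neuron fires after recent pre activity, LTD in the reverse order
        stdp = a_plus * x_pre * post_spikes[t] - a_minus * x_post * pre_spikes[t]
        c += (-c / tau_c + stdp) * dt
        w += reward[t] * c * dt
    return w

# toy usage: pre fires 5 ms before post (potentiating order), reward only at the end
T = 500
pre, post, r = np.zeros(T), np.zeros(T), np.zeros(T)
pre[100::50] = 1.0
post[105::50] = 1.0
r[-1] = 1.0                         # delayed scalar reward
print(r_stdp_single_synapse(pre, post, r))
```

Delivering the reward only at the final step illustrates why the eligibility trace matters: the pre-before-post pairings are remembered by c long enough for the delayed reward to convert them into an actual weight change.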
Learning Health Systems | 31245557 | PMC6516719 | 10.1002/lrh2.10019 | Embedding data provenance into the Learning Health System to facilitate reproducible research | Abstract. Introduction: The learning health system (LHS) community has taken up the challenge of bringing the complex relationship between clinical research and practice into this brave new world. At the heart of the LHS vision is the notion of routine capture, transformation, and dissemination of data and knowledge, with various use cases, such as clinical studies, quality improvement initiatives, and decision support, constructed on top of specific routes that the data is taking through the system. In order to stop this increased data volume and analytical complexity from obfuscating the research process, it is essential to establish trust in the system through implementing reproducibility and auditability throughout the workflow. Methods: Data provenance technologies can automatically capture the trace of the research task and resulting data, thereby facilitating reproducible research. While some computational domains, such as bioinformatics, have embraced the technology through provenance-enabled execution middlewares, disciplines based on distributed, heterogeneous software, such as medical research, are only starting on the road to adoption, motivated by the institutional pressures to improve transparency and reproducibility. Results: Guided by the experiences of the TRANSFoRm project, we present the opportunities that data provenance offers to the LHS community. We illustrate how provenance can facilitate documenting 21 CFR Part 11 compliance for Food and Drug Administration submissions and provide auditability for decisions made by the decision support tools and discuss the transformational effect of routine provenance capture on data privacy, study reporting, and publishing medical research. Conclusions: If the scaling up of the LHS is to succeed, we have to embed mechanisms to verify trust in the system inside our research instruments. In the research world increasingly reliant on electronic tools, provenance gives us a lingua franca to achieve traceability, which we have shown to be essential to building these mechanisms. To realize the vision of making computable provenance a feasible approach to implementing reproducibility in the LHS, we have to provide viable mechanisms for adoption. These include defining meaningful provenance models for problem domains and also introducing provenance support to existing tools in a minimally invasive manner. | 5.1 Related work The full provenance architecture and the details of the template model used in TRANSFoRm are currently submitted for publication and are under review. The templates that the solution is based on are similar to the efforts of the team at University of Southampton,50 with the main difference being that their work is better suited to atomic instantiations, where each template is immediately instantiated in full, while the TRANSFoRm model allows for variable repetitions (eg, sequence of edits to a study protocol). The PRIME methodology51 covers the life cycle of provenance model design, from use case specification to identification of actors, processes, and information flows, but it stops short of defining the architecture for provenance capture, the joint work on which is underway.
Related to our use of ontologies for constraining provenance artifacts is the wider effort in the use of ontologies as part of the software engineering process,52 eg, through translations between ontologies and UML constructs.53 A broader overview of provenance implementation issues in biomedical research can be found in the work of Curcin et al.44
Recently, the DPROV initiative (http://wiki.siframework.org/Data+Provenance+Initiative) has been working on aligning data provenance with the HL7 and FHIR protocols, with the goal of identifying opportunities within CDA R2 where basic provenance information about clinical (and other care-related) information can be integrated, eg, who created it, when was it created, where was it created, how it was created, why it was created, and what action was taken to produce the information captured, thus enabling detailed audit of the data entry process. Deciding the level of granularity of provenance capture is a recognized problem in the field. Indeed, there are infrastructures that collect finely grained provenance, on the level of the operating system (Hi-Fi,54 SPADE,55 PASS,56 and PLUS57) or of individual programmatic scripts (noWorkflow58). In both cases, the scale of captured data and lack of semantics make the resulting provenance trails difficult to link to the underlying research domain. Our approach minimizes the disruption required to instrument existing code by interleaving provenance-specific elements into the code, in line with the principles of aspect-oriented programming.59 An alternative approach is to reconstruct provenance from separately maintained logs,60 but this comes at a cost to the level of confidence in the resulting provenance data. As part of the W3C PROV initiative, a comprehensive survey of available provenance implementations was assembled in 2013, which lists a wide range of provenance-related software tools at various levels of maturity.61 | [
"26842041",
"21892149",
"22460880",
"26315443",
"25552691",
"17032985",
"21720406",
"26797239",
"25383411",
"26789876",
"26808582",
"20876290",
"17947786",
"26440803",
"21803926",
"26450020",
"25342177",
"20064798",
"26539547",
"23571850",
"25648301"
] | [
{
"pmid": "26315443",
"title": "PSYCHOLOGY. Estimating the reproducibility of psychological science.",
"abstract": "Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams."
},
{
"pmid": "25552691",
"title": "Reproducibility in science: improving the standard for basic and preclinical research.",
"abstract": "Medical and scientific advances are predicated on new knowledge that is robust and reliable and that serves as a solid foundation on which further advances can be built. In biomedical research, we are in the midst of a revolution with the generation of new data and scientific publications at a previously unprecedented rate. However, unfortunately, there is compelling evidence that the majority of these discoveries will not stand the test of time. To a large extent, this reproducibility crisis in basic and preclinical research may be as a result of failure to adhere to good scientific practice and the desperation to publish or perish. This is a multifaceted, multistakeholder problem. No single party is solely responsible, and no single solution will suffice. Here we review the reproducibility problems in basic and preclinical biomedical research, highlight some of the complexities, and discuss potential solutions that may help improve research quality and reproducibility."
},
{
"pmid": "21720406",
"title": "Bridging the efficacy-effectiveness gap: a regulator's perspective on addressing variability of drug response.",
"abstract": "Drug regulatory agencies should ensure that the benefits of drugs outweigh their risks, but licensed medicines sometimes do not perform as expected in everyday clinical practice. Failure may relate to lower than anticipated efficacy or a higher than anticipated incidence or severity of adverse effects. Here we show that the problem of benefit-risk is to a considerable degree a problem of variability in drug response. We describe biological and behavioural sources of variability and how these contribute to the long-known efficacy-effectiveness gap. In this context, efficacy describes how a drug performs under conditions of clinical trials, whereas effectiveness describes how it performs under conditions of everyday clinical practice. We argue that a broad range of pre- and post-licensing technologies will need to be harnessed to bridge the efficacy-effectiveness gap. Successful approaches will not be limited to the current notion of pharmacogenomics-based personalized medicines, but will also entail the wider use of electronic health-care tools to improve drug prescribing and patient adherence."
},
{
"pmid": "26797239",
"title": "The \"Efficacy-Effectiveness Gap\": Historical Background and Current Conceptualization.",
"abstract": "BACKGROUND\nThe concept of the \"efficacy-effectiveness gap\" (EEG) has started to challenge confidence in decisions made for drugs when based on randomized controlled trials alone. Launched by the Innovative Medicines Initiative, the GetReal project aims to improve understanding of how to reconcile evidence to support efficacy and effectiveness and at proposing operational solutions.\n\n\nOBJECTIVES\nThe objectives of the present narrative review were 1) to understand the historical background in which the concept of the EEG has emerged and 2) to describe the conceptualization of EEG.\n\n\nMETHODS\nA focused literature review was conducted across the gray literature and articles published in English reporting insights on the EEG concept. The identification of different \"paradigms\" was performed by simple inductive analysis of the documents' content.\n\n\nRESULTS\nThe literature on the EEG falls into three major paradigms, in which EEG is related to 1) real-life characteristics of the health care system; 2) the method used to measure the drug's effect; and 3) a complex interaction between the drug's biological effect and contextual factors.\n\n\nCONCLUSIONS\nThe third paradigm provides an opportunity to look beyond any dichotomy between \"standardized\" versus \"real-life\" characteristics of the health care system and study designs. Namely, future research will determine whether the identification of these contextual factors can help to best design randomized controlled trials that provide better estimates of drugs' effectiveness."
},
{
"pmid": "26789876",
"title": "Data Sharing.",
"abstract": "The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick. However, many of us who have actually conducted clinical research, managed clinical studies and data collection and analysis, and curated data sets have . . ."
},
{
"pmid": "20876290",
"title": "Reproducible science.",
"abstract": "The reproducibility of an experimental result is a fundamental assumption in science. Yet, results that are merely confirmatory of previous findings are given low priority and can be difficult to publish. Furthermore, the complex and chaotic nature of biological systems imposes limitations on the replicability of scientific experiments. This essay explores the importance and limits of reproducibility in scientific manuscripts."
},
{
"pmid": "26440803",
"title": "The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statement.",
"abstract": "Routinely collected health data, obtained for administrative and clinical purposes without specific a priori research goals, are increasingly used for research. The rapid evolution and availability of these data have revealed issues not addressed by existing reporting guidelines, such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE). The REporting of studies Conducted using Observational Routinely collected health Data (RECORD) statement was created to fill these gaps. RECORD was created as an extension to the STROBE statement to address reporting items specific to observational studies using routinely collected health data. RECORD consists of a checklist of 13 items related to the title, abstract, introduction, methods, results, and discussion section of articles, and other information required for inclusion in such research reports. This document contains the checklist and explanatory and elaboration information to enhance the use of the checklist. Examples of good reporting for each RECORD checklist item are also included herein. This document, as well as the accompanying website and message board (http://www.record-statement.org), will enhance the implementation and understanding of RECORD. Through implementation of RECORD, authors, journals editors, and peer reviewers can encourage transparency of research reporting."
},
{
"pmid": "21803926",
"title": "Standards for reporting randomized controlled trials in medical informatics: a systematic review of CONSORT adherence in RCTs on clinical decision support.",
"abstract": "INTRODUCTION\nThe Consolidated Standards for Reporting Trials (CONSORT) were published to standardize reporting and improve the quality of clinical trials. The objective of this study is to assess CONSORT adherence in randomized clinical trials (RCT) of disease specific clinical decision support (CDS).\n\n\nMETHODS\nA systematic search was conducted of the Medline, EMBASE, and Cochrane databases. RCTs on CDS were assessed against CONSORT guidelines and the Jadad score.\n\n\nRESULT\n32 of 3784 papers identified in the primary search were included in the final review. 181 702 patients and 7315 physicians participated in the selected trials. Most trials were performed in primary care (22), including 897 general practitioner offices. RCTs assessing CDS for asthma (4), diabetes (4), and hyperlipidemia (3) were the most common. Thirteen CDS systems (40%) were implemented in electronic medical records, and 14 (43%) provided automatic alerts. CONSORT and Jadad scores were generally low; the mean CONSORT score was 30.75 (95% CI 27.0 to 34.5), median score 32, range 21-38. Fourteen trials (43%) did not clearly define the study objective, and 11 studies (34%) did not include a sample size calculation. Outcome measures were adequately identified and defined in 23 (71%) trials; adverse events or side effects were not reported in 20 trials (62%). Thirteen trials (40%) were of superior quality according to the Jadad score (≥3 points). Six trials (18%) reported on long-term implementation of CDS.\n\n\nCONCLUSION\nThe overall quality of reporting RCTs was low. There is a need to develop standards for reporting RCTs in medical informatics."
},
{
"pmid": "25342177",
"title": "Toward a science of learning systems: a research agenda for the high-functioning Learning Health System.",
"abstract": "OBJECTIVE\nThe capability to share data, and harness its potential to generate knowledge rapidly and inform decisions, can have transformative effects that improve health. The infrastructure to achieve this goal at scale--marrying technology, process, and policy--is commonly referred to as the Learning Health System (LHS). Achieving an LHS raises numerous scientific challenges.\n\n\nMATERIALS AND METHODS\nThe National Science Foundation convened an invitational workshop to identify the fundamental scientific and engineering research challenges to achieving a national-scale LHS. The workshop was planned by a 12-member committee and ultimately engaged 45 prominent researchers spanning multiple disciplines over 2 days in Washington, DC on 11-12 April 2013.\n\n\nRESULTS\nThe workshop participants collectively identified 106 research questions organized around four system-level requirements that a high-functioning LHS must satisfy. The workshop participants also identified a new cross-disciplinary integrative science of cyber-social ecosystems that will be required to address these challenges.\n\n\nCONCLUSIONS\nThe intellectual merit and potential broad impacts of the innovations that will be driven by investments in an LHS are of great potential significance. The specific research questions that emerged from the workshop, alongside the potential for diverse communities to assemble to address them through a 'new science of learning systems', create an important agenda for informatics and related disciplines."
},
{
"pmid": "20064798",
"title": "Computerized clinical decision support for prescribing: provision does not guarantee uptake.",
"abstract": "There is wide variability in the use and adoption of recommendations generated by computerized clinical decision support systems (CDSSs) despite the benefits they may bring to clinical practice. We conducted a systematic review to explore the barriers to, and facilitators of, CDSS uptake by physicians to guide prescribing decisions. We identified 58 studies by searching electronic databases (1990-2007). Factors impacting on CDSS use included: the availability of hardware, technical support and training; integration of the system into workflows; and the relevance and timeliness of the clinical messages. Further, systems that were endorsed by colleagues, minimized perceived threats to professional autonomy, and did not compromise doctor-patient interactions were accepted by users. Despite advances in technology and CDSS sophistication, most factors were consistently reported over time and across ambulatory and institutional settings. Such factors must be addressed when deploying CDSSs so that improvements in uptake, practice and patient outcomes may be achieved."
},
{
"pmid": "26539547",
"title": "Translational Medicine and Patient Safety in Europe: TRANSFoRm--Architecture for the Learning Health System in Europe.",
"abstract": "UNLABELLED\nThe Learning Health System (LHS) describes linking routine healthcare systems directly with both research translation and knowledge translation as an extension of the evidence-based medicine paradigm, taking advantage of the ubiquitous use of electronic health record (EHR) systems. TRANSFoRm is an EU FP7 project that seeks to develop an infrastructure for the LHS in European primary care.\n\n\nMETHODS\nThe project is based on three clinical use cases, a genotype-phenotype study in diabetes, a randomised controlled trial with gastroesophageal reflux disease, and a diagnostic decision support system for chest pain, abdominal pain, and shortness of breath.\n\n\nRESULTS\nFour models were developed (clinical research, clinical data, provenance, and diagnosis) that form the basis of the projects approach to interoperability. These models are maintained as ontologies with binding of terms to define precise data elements. CDISC ODM and SDM standards are extended using an archetype approach to enable a two-level model of individual data elements, representing both research content and clinical content. Separate configurations of the TRANSFoRm tools serve each use case.\n\n\nCONCLUSIONS\nThe project has been successful in using ontologies and archetypes to develop a highly flexible solution to the problem of heterogeneity of data sources presented by the LHS."
},
{
"pmid": "23571850",
"title": "A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm.",
"abstract": "OBJECTIVE\nBiomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method.\n\n\nMATERIALS AND METHODS\nWe developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures.\n\n\nRESULTS\nOur unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project.\n\n\nCONCLUSIONS\nWe present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration."
},
{
"pmid": "25648301",
"title": "A cluster randomised controlled trial evaluating the effectiveness of eHealth-supported patient recruitment in primary care research: the TRANSFoRm study protocol.",
"abstract": "BACKGROUND\nOpportunistic recruitment is a highly laborious and time-consuming process that is currently performed manually, increasing the workload of already busy practitioners and resulting in many studies failing to achieve their recruitment targets. The Translational Medicine and Patient Safety in Europe (TRANSFoRm) platform enables automated recruitment, data collection and follow-up of patients, potentially improving the efficiency, time and costs of clinical research. This study aims to assess the effectiveness of TRANSFoRm in improving patient recruitment and follow-up in primary care trials.\n\n\nMETHODS/DESIGN\nThis multi-centre, parallel-arm cluster randomised controlled trial will compare TRANSFoRm-supported with standard opportunistic recruitment. Participants will be general practitioners and patients with gastro-oesophageal reflux disease from 40 primary care centres in five European countries. Randomisation will take place at the care centre level. The intervention arm will use the TRANSFoRm tools for recruitment, baseline data collection and follow-up. The control arm will use web-based case report forms and paper self-completed questionnaires. The primary outcome will be the proportion of eligible patients successfully recruited at the end of the 16-week recruitment period. Secondary outcomes will include the proportion of recruited patients with complete baseline and follow-up data and the proportion of participants withdrawn or lost to follow-up. The study will also include an economic evaluation and measures of technology acceptance and user experience.\n\n\nDISCUSSION\nThe study should shed light on the use of eHealth to improve the effectiveness of recruitment and follow-up in primary care research and provide an evidence base for future eHealth-supported recruitment initiatives. Reporting of results is expected in October 2015.\n\n\nTRIAL REGISTRATION\nEudraCT: 2014-001314-25."
}
] |
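The related-work passage of the entry above contrasts provenance capture at different granularities, all of which ultimately emit W3C PROV-style records linking entities, activities, and agents. Purely to illustrate the shape of such a record, and not the TRANSFoRm template model or any particular library's API, the sketch below assembles a small PROV-JSON-like structure for one hypothetical analysis step; every identifier (ex:cohort_query, ex:ehr_extract_v1, and so on) is made up for the example.

```python
import json
from datetime import datetime, timezone

def provenance_record(activity_id, used_entity, generated_entity, agent_id):
    """Assemble a minimal PROV-JSON-like record (illustrative, not a full serializer)."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "prefix": {"ex": "http://example.org/", "prov": "http://www.w3.org/ns/prov#"},
        "entity": {used_entity: {}, generated_entity: {"prov:generatedAtTime": now}},
        "activity": {activity_id: {"prov:endTime": now}},
        "agent": {agent_id: {"prov:type": "prov:SoftwareAgent"}},
        # relation records: what the activity used, what it generated, who ran it
        "used": {"_:u1": {"prov:activity": activity_id, "prov:entity": used_entity}},
        "wasGeneratedBy": {"_:g1": {"prov:entity": generated_entity, "prov:activity": activity_id}},
        "wasAssociatedWith": {"_:a1": {"prov:activity": activity_id, "prov:agent": agent_id}},
    }

record = provenance_record("ex:cohort_query", "ex:ehr_extract_v1",
                           "ex:study_dataset_v1", "ex:query_workbench")
print(json.dumps(record, indent=2))
```

A template-based system such as the one discussed above would generate many records of this shape automatically, with the identifiers bound at run time rather than written by hand.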
Journal of Clinical Medicine | 30959798 | PMC6518303 | 10.3390/jcm8040462 | Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence | Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in the previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. Recently, a medical doctor usually refers to various types of imaging modalities all together such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound, etc of various organs in order for the diagnosis and treatment of specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for the CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance for a massive collection of multimodal databases. Although there are a few previous studies on the use of deep features for classification, the number of classes is very small. To solve this problem, we propose the classification-based retrieval system of the multimodal medical images from various types of imaging modalities by using the technique of artificial intelligence, named as an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1.score by our method are respectively 81.51% and 82.42% which are higher than those by the previous method of CBMIR (the accuracy of 69.71% and F1.score of 69.63%). | 2. Related Works The present era of digital technology has made a significant contribution to medical science. The number of medical imaging modalities is growing rapidly with improvements in biomedical sensors and high-throughput image acquisition technologies. These devices generate an enormous collection of heterogeneous medical images that make a significant contribution to disease analysis and treatment. A medical expert can make a better diagnosis by retrieving relevant cases related to a similar situation in the past from this enormous collection of medical images. Before the advent of machine learning (ML) and AI algorithms, it was considered a tedious task to explore the huge multimodal database for getting assistance related to any complex problem. Hence, it is important to develop an efficient medical image retrieval system (MIRS) that will support medical experts and thus improve diagnosis and treatment. Conventional text-based image retrieval systems use certain textual tags that images are often manually annotated with as search keywords. Due to the enormous collection of heterogeneous medical image databases, this manual annotation task is very tedious and time-consuming. In many hospitals, the PACS [11] is deployed to manage a very large collection of medical images that is compatible with the digital imaging and communications in medicine (DICOM) file format [12]. This framework utilizes the textual information stored in the DICOM header for image retrieval; the header contains a patient identifier (ID), name, date, modality, body parts examined, etc.
This header information is lost when a DICOM image is converted into another image format for efficient storage and communication such as tagged image file format (TIFF), joint photographic experts group (JPEG), portable network graphics (PNG), etc. To resolve this problem, CBMIR systems have been proposed by many researchers to assist medical experts. However, these systems are application-specific and can store or retrieve a specific type of medical image, e.g., a retrieval system for X-ray images of the chest as proposed in [13]. Although many researchers have studied CBMIR by using handcrafted features [14,15,16,17,18,19,20,21,22,23,24,25,26], the overall performance of the existing systems is still low due to the growing collections of heterogeneous, multiclass medical image databases and the limitations of conventional ML techniques. These techniques are unable to decrease the “semantic gap,” which is the information lost by converting an image (i.e., a high-level representation) into its visual features (i.e., a low-level representation) [27]. Recently, a significant breakthrough has occurred in the ML domain with the advent of the deep learning framework, which comprises many efficient ML algorithms that can show high-level abstractions in visual data with a minimum semantic gap [28]. Ultimately, the layers of such deep networks extract complex deep features from the input data in a fully systematic way. Finally, the deep network learns from these features without using other handcrafted features. In recent studies, significant breakthroughs in deep learning have been made in the medical domain, and these methods can be classified into two categories: single-modality-based [29,30,31,32,33,34,35,36] and multiple-modality-based [28] imaging methods. Among the single-modality-based methods, a two-stage CBMIR framework is presented for automatic retrieval of radiographic images [29]. In the first stage, the main class label is assigned by using CNN-based features, and in the second stage, outlier images are filtered out from the predicted class on the basis of low-level edge histogram features. Another CNN-based system is presented in [30] for categorization of interstitial lung disease (ILD) patterns by extraction of ILD features from the selected dataset. In [31], a convolutional classification restricted Boltzmann machine (RBM)-based framework is proposed for analyzing lung CT scans by combining both generative and discriminative representation learning. A CNN-based automatic classification of peri-fissural nodules (PFN) is presented in [32], which has high relevance in the context of lung cancer screening. In [33], a two-stage multi-instance deep learning framework is presented for the classification of different body organs. In the first stage, a CNN is trained on local patches to separate discriminative and non-informative patches from training data samples. The network is then fine-tuned on extracted discriminative patches for the classification task. A detailed analysis of deep learning in CAD is presented in [37]. Three main characteristics of CNNs (i.e., different CNN architectures, dataset scale, and transfer learning) are explored in this work. A deep CNN model pre-trained on a general dataset is then fine-tuned for a large collection of multimodal medical image databases. A fully automatic 3D CNN framework to detect cerebral microbleeds (CMBs) from MRI is proposed in [34]. CMBs are small hemorrhages near blood vessels whose detection provides deep insight into many cerebrovascular diseases and cognitive dysfunctions.
In [35], an efficient CNN training method is proposed that dynamically chooses negative (misclassified) samples during the training process, which shows better performance in hemorrhage detection within color fundus images. A multiview convolutional network (ConvNet)-based CAD system is proposed in [36] for detecting pulmonary nodules from lung CT scan images. Among the multiple-modality-based methods, a deep-learning-based framework for multiclass CBMIR that can classify multimodal medical images was recently proposed in [28]. In this framework, an intermodal dataset that contains twenty-four classes with five modalities (CT, MRI, fundus camera, PET, and OPT) is used to train the network. Handling a larger number of classes usually increases the usability of a CBMIR system in healthcare applications [28]. In addition, according to [38] and healthcare professionals, a large number of classes can help medical experts explore a specific class of disease within a huge collection of medical records. Nevertheless, in previous research, the maximum number of classes dealt with was limited to 31 [20,29], whereas we increase the number of classes to 50 in our research. For this purpose, we propose a deep-feature-based medical image classification and retrieval framework by using the enhanced residual network (ResNet) for CBMIR over a large number of classes with nine modalities (CT, MRI, fundus camera, PET, OPT, X-ray, ultrasound, endoscopy, and visible light camera). The strengths and weaknesses of our proposed and existing methods are summarized in Table 1. | [
"29843416",
"28778026",
"26978662",
"1734458",
"16223609",
"21118769",
"10843252",
"15888631",
"17249404",
"26259520",
"26955021",
"26886968",
"26458112",
"26863652",
"26886975",
"26886969",
"26955024",
"26886976",
"29507784",
"23884657",
"29760397"
] | [
{
"pmid": "29843416",
"title": "Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.",
"abstract": "Population aging has become a worldwide phenomenon, which causes many serious problems. The medical issues related to degenerative brain disease have gradually become a concern. Magnetic Resonance Imaging is one of the most advanced methods for medical imaging and is especially suitable for brain scans. From the literature, although the automatic segmentation method is less laborious and time-consuming, it is restricted in several specific types of images. In addition, hybrid techniques segmentation improves the shortcomings of the single segmentation method. Therefore, this study proposed a hybrid segmentation combined with rough set classifier and wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image process method to enhance accuracy of brain disease classification. In the first stage, this study used the proposed hybrid segmentation algorithms to segment the brain ROI (region of interest). In the second stage, wavelet packet was used to conduct the image decomposition and calculate the feature values. In the final stage, the rough set classifier was utilized to identify the degenerative brain disease. In verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and compare with the TV-seg (total variation segmentation) algorithm, Discrete Cosine Transform, and the listing classifiers. Overall, the results indicated that the proposed method outperforms the listing methods."
},
{
"pmid": "28778026",
"title": "A survey on deep learning in medical image analysis.",
"abstract": "Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research."
},
{
"pmid": "26978662",
"title": "Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?",
"abstract": "Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data."
},
{
"pmid": "1734458",
"title": "Picture archiving and communication systems: an overview.",
"abstract": "Organizational techniques that enable small departments to function efficiently often fail as departments become larger. With the recent growth in imaging technology, the capacity of film-based systems to meet the increasing needs of radiology departments has decreased. Electronic picture archiving and communication systems (PACS) have been developed in an attempt to provide economical storage, rapid retrieval of images, access to images acquired with multiple modalities, and simultaneous access at multiple sites. Input to a PACS may come from digital or analog sources (when the latter have been digitized). A PACS consists primarily of an image acquisition device (an electronic gateway to the system), data management system (a specialized computer system that controls the flow of information on the network), image storage devices (both short- and long-term archives), transmission network (which serves local or wide areas), display stations (which include a computer, text monitor, image monitors, and a user interface), and devices to produce hard-copy images (currently, a multiformat or laser camera). The goals of PACS are to improve operational efficiency while maintaining or improving diagnostic ability."
},
{
"pmid": "16223609",
"title": "DICOM demystified: a review of digital file formats and their use in radiological practice.",
"abstract": "Digital imaging and communications in medicine (DICOM) is the standard image file format used by radiological hardware devices. This article will provide an overview of DICOM and attempt to demystify the bewildering number of image formats that are commonly encountered. The characteristics and usefulness of different image file types will be explored and a variety of freely available web-based resources to aid viewing and manipulation of digital images will be reviewed. How best to harness DICOM technology before the introduction of picture archiving and communication systems (PACS) will also be described."
},
{
"pmid": "21118769",
"title": "X-ray categorization and retrieval on the organ and pathology level, using patch-based visual words.",
"abstract": "In this study we present an efficient image categorization and retrieval system applied to medical image databases, in particular large radiograph archives. The methodology is based on local patch representation of the image content, using a \"bag of visual words\" approach. We explore the effects of various parameters on system performance, and show best results using dense sampling of simple features with spatial content, and a nonlinear kernel-based support vector machine (SVM) classifier. In a recent international competition the system was ranked first in discriminating orientation and body regions in X-ray images. In addition to organ-level discrimination, we show an application to pathology-level categorization of chest X-ray data, the most popular examination in radiology. The system discriminates between healthy and pathological cases, and is also shown to successfully identify specific pathologies in a set of chest radiographs taken from a routine hospital examination. This is a first step towards similarity-based categorization, which has a major clinical implications for computer-assisted diagnostics."
},
{
"pmid": "10843252",
"title": "Content-based retrieval in picture archiving and communication systems.",
"abstract": "A COntent-Based Retrieval Architecture (COBRA) for picture archiving and communication systems (PACS) is introduced. COBRA improves the diagnosis, research, and training capabilities of PACS systems by adding retrieval by content features to those systems. COBRA is an open architecture based on widely used health care and technology standards. In addition to regular PACS components, COBRA includes additional components to handle representation, storage, and content-based similarity retrieval. Within COBRA, an anatomy classification algorithm is introduced to automatically classify PACS studies based on their anatomy. Such a classification allows the use of different segmentation and image-processing algorithms for different anatomies. COBRA uses primitive retrieval criteria such as color, texture, shape, and more complex criteria including object-based spatial relations and regions of interest. A prototype content-based retrieval system for MR brain images was developed to illustrate the concepts introduced in COBRA."
},
{
"pmid": "15888631",
"title": "Informatics in radiology (infoRAD): benefits of content-based visual data access in radiology.",
"abstract": "The field of medicine is often cited as an area for which content-based visual retrieval holds considerable promise. To date, very few visual image retrieval systems have been used in clinical practice; the first applications of image retrieval systems in medicine are currently being developed to complement conventional text-based searches. An image retrieval system was developed and integrated into a radiology teaching file system, and the performance of the retrieval system was evaluated, with use of query topics that represent the teaching database well, against a standard of reference generated by a radiologist. The results of this evaluation indicate that content-based image retrieval has the potential to become an important technology for the field of radiology, not only in research, but in teaching and diagnostics as well. However, acceptance of this technology in the clinical domain will require identification and implementation of clinical applications that use content-based access mechanisms, necessitating close cooperation between medical practitioners and medical computer scientists. Nevertheless, content-based image retrieval has the potential to become an important technology for radiology practice."
},
{
"pmid": "17249404",
"title": "A framework for medical image retrieval using machine learning and statistical similarity matching techniques with relevance feedback.",
"abstract": "A content-based image retrieval (CBIR) framework for diverse collection of medical images of different imaging modalities, anatomic regions with different orientations and biological systems is proposed. Organization of images in such a database (DB) is well defined with predefined semantic categories; hence, it can be useful for category-specific searching. The proposed framework consists of machine learning methods for image prefiltering, similarity matching using statistical distance measures, and a relevance feedback (RF) scheme. To narrow down the semantic gap and increase the retrieval efficiency, we investigate both supervised and unsupervised learning techniques to associate low-level global image features (e.g., color, texture, and edge) in the projected PCA-based eigenspace with their high-level semantic and visual categories. Specially, we explore the use of a probabilistic multiclass support vector machine (SVM) and fuzzy c-mean (FCM) clustering for categorization and prefiltering of images to reduce the search space. A category-specific statistical similarity matching is proposed in a finer level on the prefiltered images. To incorporate a better perception subjectivity, an RF mechanism is also added to update the query parameters dynamically and adjust the proposed matching functions. Experiments are based on a ground-truth DB consisting of 5000 diverse medical images of 20 predefined categories. Analysis of results based on cross-validation (CV) accuracy and precision-recall for image categorization and retrieval is reported. It demonstrates the improvement, effectiveness, and efficiency achieved by the proposed framework."
},
{
"pmid": "26259520",
"title": "Endowing a Content-Based Medical Image Retrieval System with Perceptual Similarity Using Ensemble Strategy.",
"abstract": "Content-based medical image retrieval (CBMIR) is a powerful resource to improve differential computer-aided diagnosis. The major problem with CBMIR applications is the semantic gap, a situation in which the system does not follow the users' sense of similarity. This gap can be bridged by the adequate modeling of similarity queries, which ultimately depends on the combination of feature extractor methods and distance functions. In this study, such combinations are referred to as perceptual parameters, as they impact on how images are compared. In a CBMIR, the perceptual parameters must be manually set by the users, which imposes a heavy burden on the specialists; otherwise, the system will follow a predefined sense of similarity. This paper presents a novel approach to endow a CBMIR with a proper sense of similarity, in which the system defines the perceptual parameter depending on the query element. The method employs ensemble strategy, where an extreme learning machine acts as a meta-learner and identifies the most suitable perceptual parameter according to a given query image. This parameter defines the search space for the similarity query that retrieves the most similar images. An instance-based learning classifier labels the query image following the query result set. As the concept implementation, we integrated the approach into a mammogram CBMIR. For each query image, the resulting tool provided a complete second opinion, including lesion class, system certainty degree, and set of most similar images. Extensive experiments on a large mammogram dataset showed that our proposal achieved a hit ratio up to 10% higher than the traditional CBMIR approach without requiring external parameters from the users. Our database-driven solution was also up to 25% faster than content retrieval traditional approaches."
},
{
"pmid": "26955021",
"title": "Lung Pattern Classification for Interstitial Lung Diseases Using a Deep Convolutional Neural Network.",
"abstract": "Automated tissue characterization is one of the most crucial components of a computer aided diagnosis (CAD) system for interstitial lung diseases (ILDs). Although much research has been conducted in this field, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they might be applied in other domains, such as medical image analysis. In this paper, we propose and evaluate a convolutional neural network (CNN), designed for the classification of ILD patterns. The proposed network consists of 5 convolutional layers with 2 × 2 kernels and LeakyReLU activations, followed by average pooling with size equal to the size of the final feature maps and three dense layers. The last dense layer has 7 outputs, equivalent to the classes considered: healthy, ground glass opacity (GGO), micronodules, consolidation, reticulation, honeycombing and a combination of GGO/reticulation. To train and evaluate the CNN, we used a dataset of 14696 image patches, derived by 120 CT scans from different scanners and hospitals. To the best of our knowledge, this is the first deep CNN designed for the specific problem. A comparative analysis proved the effectiveness of the proposed CNN against previous methods in a challenging dataset. The classification performance ( ~ 85.5%) demonstrated the potential of CNNs in analyzing lung patterns. Future work includes, extending the CNN to three-dimensional data provided by CT volume scans and integrating the proposed method into a CAD system that aims to provide differential diagnosis for ILDs as a supportive tool for radiologists."
},
{
"pmid": "26886968",
"title": "Combining Generative and Discriminative Representation Learning for Lung CT Analysis With Convolutional Restricted Boltzmann Machines.",
"abstract": "The choice of features greatly influences the performance of a tissue classification system. Despite this, many systems are built with standard, predefined filter banks that are not optimized for that particular application. Representation learning methods such as restricted Boltzmann machines may outperform these standard filter banks because they learn a feature description directly from the training data. Like many other representation learning methods, restricted Boltzmann machines are unsupervised and are trained with a generative learning objective; this allows them to learn representations from unlabeled data, but does not necessarily produce features that are optimal for classification. In this paper we propose the convolutional classification restricted Boltzmann machine, which combines a generative and a discriminative learning objective. This allows it to learn filters that are good both for describing the training data and for classification. We present experiments with feature learning for lung texture classification and airway detection in CT images. In both applications, a combination of learning objectives outperformed purely discriminative or generative learning, increasing, for instance, the lung tissue classification accuracy by 1 to 8 percentage points. This shows that discriminative learning can help an otherwise unsupervised feature learner to learn filters that are optimized for classification."
},
{
"pmid": "26458112",
"title": "Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2D views and a convolutional neural network out-of-the-box.",
"abstract": "In this paper, we tackle the problem of automatic classification of pulmonary peri-fissural nodules (PFNs). The classification problem is formulated as a machine learning approach, where detected nodule candidates are classified as PFNs or non-PFNs. Supervised learning is used, where a classifier is trained to label the detected nodule. The classification of the nodule in 3D is formulated as an ensemble of classifiers trained to recognize PFNs based on 2D views of the nodule. In order to describe nodule morphology in 2D views, we use the output of a pre-trained convolutional neural network known as OverFeat. We compare our approach with a recently presented descriptor of pulmonary nodule morphology, namely Bag of Frequencies, and illustrate the advantages offered by the two strategies, achieving performance of AUC = 0.868, which is close to the one of human experts."
},
{
"pmid": "26863652",
"title": "Multi-Instance Deep Learning: Discover Discriminative Local Anatomies for Bodypart Recognition.",
"abstract": "In general image recognition problems, discriminative information often lies in local image patches. For example, most human identity information exists in the image patches containing human faces. The same situation stays in medical images as well. \"Bodypart identity\" of a transversal slice-which bodypart the slice comes from-is often indicated by local image information, e.g., a cardiac slice and an aorta arch slice are only differentiated by the mediastinum region. In this work, we design a multi-stage deep learning framework for image classification and apply it on bodypart recognition. Specifically, the proposed framework aims at: 1) discover the local regions that are discriminative and non-informative to the image classification problem, and 2) learn a image-level classifier based on these local regions. We achieve these two tasks by the two stages of learning scheme, respectively. In the pre-train stage, a convolutional neural network (CNN) is learned in a multi-instance learning fashion to extract the most discriminative and and non-informative local patches from the training slices. In the boosting stage, the pre-learned CNN is further boosted by these local patches for image classification. The CNN learned by exploiting the discriminative local appearances becomes more accurate than those learned from global image context. The key hallmark of our method is that it automatically discovers the discriminative and non-informative local patches through multi-instance deep learning. Thus, no manual annotation is required. Our method is validated on a synthetic dataset and a large scale CT dataset. It achieves better performances than state-of-the-art approaches, including the standard deep CNN."
},
{
"pmid": "26886975",
"title": "Automatic Detection of Cerebral Microbleeds From MR Images via 3D Convolutional Neural Networks.",
"abstract": "Cerebral microbleeds (CMBs) are small haemorrhages nearby blood vessels. They have been recognized as important diagnostic biomarkers for many cerebrovascular diseases and cognitive dysfunctions. In current clinical routine, CMBs are manually labelled by radiologists but this procedure is laborious, time-consuming, and error prone. In this paper, we propose a novel automatic method to detect CMBs from magnetic resonance (MR) images by exploiting the 3D convolutional neural network (CNN). Compared with previous methods that employed either low-level hand-crafted descriptors or 2D CNNs, our method can take full advantage of spatial contextual information in MR volumes to extract more representative high-level features for CMBs, and hence achieve a much better detection accuracy. To further improve the detection performance while reducing the computational cost, we propose a cascaded framework under 3D CNNs for the task of CMB detection. We first exploit a 3D fully convolutional network (FCN) strategy to retrieve the candidates with high probabilities of being CMBs, and then apply a well-trained 3D CNN discrimination model to distinguish CMBs from hard mimics. Compared with traditional sliding window strategy, the proposed 3D FCN strategy can remove massive redundant computations and dramatically speed up the detection process. We constructed a large dataset with 320 volumetric MR scans and performed extensive experiments to validate the proposed method, which achieved a high sensitivity of 93.16% with an average number of 2.74 false positives per subject, outperforming previous methods using low-level descriptors or 2D CNNs by a significant margin. The proposed method, in principle, can be adapted to other biomarker detection tasks from volumetric medical data."
},
{
"pmid": "26886969",
"title": "Fast Convolutional Neural Network Training Using Selective Data Sampling: Application to Hemorrhage Detection in Color Fundus Images.",
"abstract": "Convolutional neural networks (CNNs) are deep learning network architectures that have pushed forward the state-of-the-art in a range of computer vision applications and are increasingly popular in medical image analysis. However, training of CNNs is time-consuming and challenging. In medical image analysis tasks, the majority of training examples are easy to classify and therefore contribute little to the CNN learning process. In this paper, we propose a method to improve and speed-up the CNN training for medical image analysis tasks by dynamically selecting misclassified negative samples during training. Training samples are heuristically sampled based on classification by the current status of the CNN. Weights are assigned to the training samples and informative samples are more likely to be included in the next CNN training iteration. We evaluated and compared our proposed method by training a CNN with (SeS) and without (NSeS) the selective sampling method. We focus on the detection of hemorrhages in color fundus images. A decreased training time from 170 epochs to 60 epochs with an increased performance-on par with two human experts-was achieved with areas under the receiver operating characteristics curve of 0.894 and 0.972 on two data sets. The SeS CNN statistically outperformed the NSeS CNN on an independent test set."
},
{
"pmid": "26955024",
"title": "Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks.",
"abstract": "We propose a novel Computer-Aided Detection (CAD) system for pulmonary nodules using multi-view convolutional networks (ConvNets), for which discriminative features are automatically learnt from the training data. The network is fed with nodule candidates obtained by combining three candidate detectors specifically designed for solid, subsolid, and large nodules. For each candidate, a set of 2-D patches from differently oriented planes is extracted. The proposed architecture comprises multiple streams of 2-D ConvNets, for which the outputs are combined using a dedicated fusion method to get the final classification. Data augmentation and dropout are applied to avoid overfitting. On 888 scans of the publicly available LIDC-IDRI dataset, our method reaches high detection sensitivities of 85.4% and 90.1% at 1 and 4 false positives per scan, respectively. An additional evaluation on independent datasets from the ANODE09 challenge and DLCST is performed. We showed that the proposed multi-view ConvNets is highly suited to be used for false positive reduction of a CAD system."
},
{
"pmid": "26886976",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.",
"abstract": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks."
},
{
"pmid": "29507784",
"title": "Artificial intelligence in healthcare: past, present and future.",
"abstract": "Artificial intelligence (AI) aims to mimic human cognitive functions. It is bringing a paradigm shift to healthcare, powered by increasing availability of healthcare data and rapid progress of analytics techniques. We survey the current status of AI applications in healthcare and discuss its future. AI can be applied to various types of healthcare data (structured and unstructured). Popular AI techniques include machine learning methods for structured data, such as the classical support vector machine and neural network, and the modern deep learning, as well as natural language processing for unstructured data. Major disease areas that use AI tools include cancer, neurology and cardiology. We then review in more details the AI applications in stroke, in the three major areas of early detection and diagnosis, treatment, as well as outcome prediction and prognosis evaluation. We conclude with discussion about pioneer AI systems, such as IBM Watson, and hurdles for real-life deployment of AI."
},
{
"pmid": "23884657",
"title": "The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository.",
"abstract": "The National Institutes of Health have placed significant emphasis on sharing of research data to support secondary research. Investigators have been encouraged to publish their clinical and imaging data as part of fulfilling their grant obligations. Realizing it was not sufficient to merely ask investigators to publish their collection of imaging and clinical data, the National Cancer Institute (NCI) created the open source National Biomedical Image Archive software package as a mechanism for centralized hosting of cancer related imaging. NCI has contracted with Washington University in Saint Louis to create The Cancer Imaging Archive (TCIA)-an open-source, open-access information resource to support research, development, and educational initiatives utilizing advanced medical imaging of cancer. In its first year of operation, TCIA accumulated 23 collections (3.3 million images). Operating and maintaining a high-availability image archive is a complex challenge involving varied archive-specific resources and driven by the needs of both image submitters and image consumers. Quality archives of any type (traditional library, PubMed, refereed journals) require management and customer service. This paper describes the management tasks and user support model for TCIA."
},
{
"pmid": "29760397",
"title": "Automatic anatomical classification of esophagogastroduodenoscopy images using deep convolutional neural networks.",
"abstract": "The use of convolutional neural networks (CNNs) has dramatically advanced our ability to recognize images with machine learning methods. We aimed to construct a CNN that could recognize the anatomical location of esophagogastroduodenoscopy (EGD) images in an appropriate manner. A CNN-based diagnostic program was constructed based on GoogLeNet architecture, and was trained with 27,335 EGD images that were categorized into four major anatomical locations (larynx, esophagus, stomach and duodenum) and three subsequent sub-classifications for stomach images (upper, middle, and lower regions). The performance of the CNN was evaluated in an independent validation set of 17,081 EGD images by drawing receiver operating characteristics (ROC) curves and calculating the area under the curves (AUCs). ROC curves showed high performance of the trained CNN to classify the anatomical location of EGD images with AUCs of 1.00 for larynx and esophagus images, and 0.99 for stomach and duodenum images. Furthermore, the trained CNN could recognize specific anatomical locations within the stomach, with AUCs of 0.99 for the upper, middle, and lower stomach. In conclusion, the trained CNN showed robust performance in its ability to recognize the anatomical location of EGD images, highlighting its significant potential for future application as a computer-aided EGD diagnostic system."
}
] |
Micromachines | 30959945 | PMC6523483 | 10.3390/mi10040236 | Shape Programming Using Triangular and Rectangular Soft Robot Primitives | This paper presents fabric-based soft robotic modules with primitive morphologies, which are analogous to basic geometrical polygons—trilateral and quadrilateral. The two modules are the inflatable beam (IB) and fabric-based rotary actuator (FRA). The FRA module is designed with origami-inspired V-shaped pleats, which creates a trilateral outline. Upon pressurization, the pleats unfold, which enables propagation of angular displacement of the FRA module. This allows the FRA module to be implemented as a mobility unit in the larger assembly of pneumatic structures. In the following, we examine various ways by which FRA modules can be connected to IB modules. We studied how different ranges of motion can be achieved by varying the design of the rotary joint of the assemblies. Using a state transition-based position control system, movement of the assembled modules could be controlled by regulating the pneumatic pressurization of the FRA module at the joint. These basic modules allow us to build different types of pneumatic structures. In this paper, using IB and FRA modules of various dimensions, we constructed a soft robotic limb with an end effector, which can be attached to wheelchairs to provide assistive grasping functions for users with disabilities. | 2. Related Works
Modular soft robots were previously discussed by Onal and Rus, who designed and fabricated FEA modules that can be arranged in serial, parallel, and hybrid configurations [15]. However, elastomer-based modules are fabricated by molding liquid silicone polymer, which solidifies into the desired form upon curing. To attach the modules, a similar technique was applied, which made the process irreversible, as the cured modules cannot be detached from one another. Furthermore, loss of mechanical energy, which would otherwise be used for force or torque generation, occurs due to deformation of the elastic material. There is also a limitation on the compatibility of the actuator, as the air chambers need to be designed and fabricated with a minimum thickness to ensure the correct form of actuation [16]. Lee et al. explored an alternative way of designing modular soft robots [14]. A comprehensive design collection of modules was presented, each of which serves a distinct function: motion generation, air distribution, or connection. By selecting and arranging modules with different functions, the user is able to develop soft robots of any desired shape or function, such as grippers or locomotion. The actuator modules were fabricated using either multimaterial 3D printing or molding. Each module is designed with a pneumatic chamber, and the wall thickness is varied to permit different actuation modes. Three mechanical connection mechanisms, namely screw thread, push fitting, and bi-stable junction, were presented and discussed. Each pneumatic module is fabricated with hollow connectors at either end, which allow the modules to be easily attached to and detached from each other by mechanical means; hence, upon connection, a central fluidic pathway is formed within the assembled modules.
However, the addition of protruding mechanical connectors compromises both the intrinsic soft nature of the actuators and the maximum engagement pressure [17].

In this paper, a simple modular design concept is adopted, whereby two shape primitives, triangular and rectangular, are introduced: the inflatable beam (IB) module and the fabric-based rotary actuator (FRA) module. These soft pneumatic actuator modules are fabricated from fabric. Fabric-based soft actuators have recently emerged for various applications, such as robotic grippers and wearable rehabilitation devices [18]. The thinness of fabric sheets allows us to create pneumatic actuators with walls thinner than those of silicone-based actuators, yet capable of comparable performance. In contrast to silicone actuators, whose fluidic chambers are fabricated using custom-shaped molds and liquid elastomeric polymers, the sheets of fabric-based actuators can be folded and sealed into pleats that serve as pneumatic chambers. The propagation of the actuating motion is determined by the characteristics of the pleats, namely their dimensions, location, and number. In addition, a fabric-based fabrication protocol enables scaling up of actuator designs without compromising the convenience of the fabrication protocol or the required fabrication time [19,20,21]. Compared to 3D-printed actuators [13], fabric-based actuators require a lower range of pneumatic pressure to function. The contributions of this paper are as follows: (1) a simple modular design concept using shape primitives inspired by basic geometrical polygons, and (2) the construction of larger pneumatic structures from fabric-based modules that can be assembled and disassembled. In the following sections, we discuss the design concept of the modules, identify and characterize configurations of the modules, implement a position control system for the assembled modules, and highlight a possible application of the modules.
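As a rough illustration of the state transition-based position control mentioned in the abstract above, the sketch below switches a single FRA joint between INFLATE, DEFLATE, and HOLD states according to the angle error. It is a hedged sketch only: read_joint_angle, open_inlet_valve, open_outlet_valve, and close_valves are hypothetical hardware-interface callbacks, and the tolerance and loop period are placeholder values, not parameters from the paper.

```python
import time

INFLATE, DEFLATE, HOLD = "INFLATE", "DEFLATE", "HOLD"

def control_joint(target_deg, read_joint_angle, open_inlet_valve,
                  open_outlet_valve, close_valves,
                  tolerance_deg=2.0, period_s=0.05):
    """Drive a pneumatic rotary joint toward target_deg via bang-bang state transitions."""
    while True:
        error = target_deg - read_joint_angle()
        if error > tolerance_deg:        # under-rotated: pressurize the FRA chamber
            state = INFLATE
            open_inlet_valve()
        elif error < -tolerance_deg:     # over-rotated: vent the FRA chamber
            state = DEFLATE
            open_outlet_valve()
        else:                            # within tolerance: seal the chamber and stop
            state = HOLD
            close_valves()
            return state
        time.sleep(period_s)             # wait before re-evaluating the state
```

A real controller would also need pressure limits and sensor filtering, but the three-state structure captures the control idea at a glance.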
"21328664",
"26017446",
"23524383",
"22123978",
"27007297",
"25080193"
] | [
{
"pmid": "26017446",
"title": "Design, fabrication and control of soft robots.",
"abstract": "Conventionally, engineers have employed rigid materials to fabricate precise, predictable robotic systems, which are easily modelled as rigid members connected at discrete joints. Natural systems, however, often match or exceed the performance of robotic systems with deformable bodies. Cephalopods, for example, achieve amazing feats of manipulation and locomotion without a skeleton; even vertebrates such as humans achieve dynamic gaits by storing elastic energy in their compliant bones and soft tissues. Inspired by nature, engineers have begun to explore the design and control of soft-bodied robots composed of compliant materials. This Review discusses recent developments in the emerging field of soft robotics."
},
{
"pmid": "23524383",
"title": "Autonomous undulatory serpentine locomotion utilizing body dynamics of a fluidic soft robot.",
"abstract": "Soft robotics offers the unique promise of creating inherently safe and adaptive systems. These systems bring man-made machines closer to the natural capabilities of biological systems. An important requirement to enable self-contained soft mobile robots is an on-board power source. In this paper, we present an approach to create a bio-inspired soft robotic snake that can undulate in a similar way to its biological counterpart using pressure for actuation power, without human intervention. With this approach, we develop an autonomous soft snake robot with on-board actuation, power, computation and control capabilities. The robot consists of four bidirectional fluidic elastomer actuators in series to create a traveling curvature wave from head to tail along its body. Passive wheels between segments generate the necessary frictional anisotropy for forward locomotion. It takes 14 h to build the soft robotic snake, which can attain an average locomotion speed of 19 mm s(-1)."
},
{
"pmid": "22123978",
"title": "Multigait soft robot.",
"abstract": "This manuscript describes a unique class of locomotive robot: A soft robot, composed exclusively of soft materials (elastomeric polymers), which is inspired by animals (e.g., squid, starfish, worms) that do not have hard internal skeletons. Soft lithography was used to fabricate a pneumatically actuated robot capable of sophisticated locomotion (e.g., fluid movement of limbs and multiple gaits). This robot is quadrupedal; it uses no sensors, only five actuators, and a simple pneumatic valving system that operates at low pressures (< 10 psi). A combination of crawling and undulation gaits allowed this robot to navigate a difficult obstacle. This demonstration illustrates an advantage of soft robotics: They are systems in which simple types of actuation produce complex motion."
},
{
"pmid": "27007297",
"title": "Characterisation and evaluation of soft elastomeric actuators for hand assistive and rehabilitation applications.",
"abstract": "Various hand exoskeletons have been proposed for the purposes of providing assistance in activities of daily living and rehabilitation exercises. However, traditional exoskeletons are made of rigid components that impede the natural movement of joints and cause discomfort to the user. This paper evaluated a soft wearable exoskeleton using soft elastomeric actuators. The actuators could generate the desired actuation of the finger joints with a simple design. The actuators were characterised in terms of their radius of curvature and force output during actuation. Additionally, the device was evaluated on five healthy subjects in terms of its assisted finger joint range of motion. Results demonstrated that the subjects were able to perform the grasping actions with the assistance of the device and the range of motion of individual finger joints varied from subject to subject. This work evaluated the performance of a soft wearable exoskeleton and highlighted the importance of customisability of the device. It demonstrated the possibility of replacing traditional rigid exoskeletons with soft exoskeletons that are more wearable and customisable."
},
{
"pmid": "25080193",
"title": "Using \"click-e-bricks\" to make 3D elastomeric structures.",
"abstract": "Soft, 3D elastomeric structures and composite structures are easy to fabricate using click-e-bricks, and the internal architecture of these structures together with the capabilities built into the bricks themselves provide mechanical, optical, electrical, and fluidic functions."
}
] |
Frontiers in Neuroscience | 31133772 | PMC6524701 | 10.3389/fnins.2019.00354 | One-Step, Three-Factor Passthought Authentication With Custom-Fit, In-Ear EEG | In-ear EEG offers a promising path toward usable, discreet brain-computer interfaces (BCIs) for both healthy individuals and persons with disabilities. To test the promise of this modality, we produced a brain-based authentication system using custom-fit EEG earpieces. In a sample of N = 7 participants, we demonstrated that our system has high accuracy, higher than prior work using non-custom earpieces. We demonstrated that both inherence and knowledge factors contribute to authentication accuracy, and performed a simulated attack to show our system's robustness against impersonation. From an authentication standpoint, our system provides three factors of authentication in a single step. From a usability standpoint, our system does not require a cumbersome, head-worn device. | 2. Related Work
2.1. In-Ear EEG
The concept of in-ear EEG was introduced in 2011 with a demonstration of the feasibility of recording brainwave signals from within the ear canal (Looney et al., 2011). The in-ear placement can produce signal-to-noise ratios comparable to those from conventional EEG electrode placements, is robust to common sources of artifacts, and can be used in a brain-computer interface (BCI) system based on auditory and visual evoked potentials (Kidmose et al., 2013). One previous study attempted to demonstrate user authentication using in-ear EEG, but was only able to attain an accuracy level of 80%, limited by the use of a consumer-grade device with a single generic-fit electrode (Curran et al., 2016). A follow-up study with a single, generic-fit electrode achieved an accuracy of 95.7% over multiple days (Nakamura et al., 2018).
2.2. Passthoughts and Behavioral Authentication
The use of EEG as a biometric signal for user authentication has a relatively short history. In 2005, Thorpe et al. motivated and outlined the design of a passthoughts system (Thorpe et al., 2005). Since 2002, a number of independent groups have achieved 99–100% authentication accuracy for small populations using research-grade and consumer-grade scalp-based EEG systems (Poulos et al., 2002; Marcel and Millan, 2007; Ashby et al., 2011; Chuang et al., 2013). Several recent works on brainwave biometrics have independently demonstrated individuals' EEG permanence over 1–6 months (Armstrong et al., 2015; Maiorana et al., 2016) or even over 1 year (Ruiz-Blondet et al., 2017).
2.2.1. Authentication Factors
Behavioral authentication methods such as keystroke dynamics and speaker authentication can be categorized as one-step two-factor authentication schemes. In both cases, the knowledge factor (password or passphrase) and inherence factor (typing rhythm or speaker's voice) are employed (Monrose and Rubin, 1997). In contrast, the Nymi band supports one-step two-factor authentication via the inherence factor (cardiac rhythm that is supposed to be unique to each individual) and the possession factor (the wearing of the band on the wrist) (Nymi, 2017). However, as far as we know, no one has proposed or demonstrated a one-step three-factor authentication scheme.
2.3. Usable Authentication
When proposing or evaluating authentication paradigms, robustness against imposters is often a first consideration, but the usability of these systems is of equal importance as they must conform to a person's needs and lifestyle to warrant adoption and prolonged use. Sasse et al.
describe usability issues with common knowledge-based systems like alphanumeric passwords, in particular that a breach in systems which require users to remember complex passwords that must be frequently changed is a failure on the part of the system's design, not the fault of the user (Sasse et al., 2001). Other research analyzed some of the complexities of applying human factors heuristics for interface design to authentication, and indicated the importance of social acceptability, learnability, and simplicity of authentication methods (Braz and Robert, 2006). Technologies worn on the head entail particular usability issues; in their analysis of user perceptions of head-worn devices, Genaro et al. identified design, usability, ease of use, and obtrusiveness among the top ten concerns of users, as well as qualitative comments around comfort and "looking weird" (Genaro Motti and Caine, 2014).
Mobile and wearable technologies' continuous proximity to the user's body provides favorable conditions for unobtrusively capturing biometrics for authentication. Many such uses that embrace usability have been proposed, including touch-based interactions (Holz and Knaust, 2015; Tartz and Gooding, 2015) and walking patterns (Lu et al., 2014) on mobile phones, as well as identification via head movements and blinking in head-worn devices (Rogers et al., 2015). However, these typically draw only on the inherence factor. Chen et al. proposed an inherence-and-knowledge two-factor method for multi-touch mobile devices based on a user's unique finger tapping of a song (Chen et al., 2015), though it may be vulnerable to "shoulder surfing": imposters observing and mimicking the behavior to gain access.
2.4. One-Step, Three-Factor Authentication
It is well appreciated by experts and end-users alike that strong authentication is critical to cybersecurity and privacy, now and into the future. Unfortunately, news reports of celebrity account hackings serve as regular reminders that the currently dominant method of authentication in consumer applications, single-factor authentication using passwords or other user-chosen secrets, faces many challenges. Many major online services have strongly encouraged their users to adopt two-factor authentication (2FA). However, submitting two different authenticators in two separate steps has frustrated wide adoption due to the additional hassle to users. Modern smartphones, for instance, already support device unlock using either a user-selected passcode or a fingerprint. These devices could very well support a two-step two-factor authentication scheme if desired. However, it is easy to understand why users would balk at having to enter a passcode and provide a fingerprint each time they want to unlock their phone.
"One-step two-factor authentication" has been proposed as a new approach to authentication that can provide the security benefits of two-factor authentication without incurring the hassle cost of two-step verification (Chuang, 2014). In this work we undertake, to the best of our knowledge, the first-ever study and design of one-step, three-factor authentication. In computer security, authenticators are classified into three types: knowledge factors (e.g., passwords and PINs), possession factors (e.g., physical tokens, ATM cards), and inherence factors (e.g., fingerprints and other biometrics).
By taking advantage of a physical token in the form of personalized earpieces, the uniqueness of an individual's brainwaves, and a choice of mental task to use as one's "passthought," we seek to achieve all three factors of authentication within a single step by the user.
In the system we propose here, we seek to incorporate recommendations from this research for improved usability while maintaining a highly secure system. The mental tasks we test are simple and personally relevant; instead of complex alphanumeric patterns like a traditional password, a mental activity like relaxed breathing or imagining a portion of one's favorite song is easy for a user to remember and perform, as shown by participant feedback in previous passthoughts research and in our own results later in this paper. These mental activities are largely invisible to "shoulder surfing" attempts by onlookers, and furthermore present a possible solution to "rubber-hose attacks" (forceful coercion to divulge a password): a thought has a particular expression unique to an individual, the specific performance of which cannot be described and thus cannot be coerced or forcibly obtained, unlike, for example, the combination to a padlock or a fingerprint. Finally, to combat the wearability and obtrusiveness issues of scalp-based EEG systems used in other brain-based authentication research, our system's form factor of earpieces with embedded electrodes is highly similar to earbud headphones or wireless headsets, which are already commonly worn and generally socially accepted technologies.
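As a toy illustration of how the three factors could be checked in a single step, the sketch below accepts a login only when EEG features recorded through the user's custom-fit earpiece (possession) are classified as the claimed user performing their enrolled mental task (inherence plus knowledge). This is an assumption-laden stand-in, not the paper's actual pipeline: the PassthoughtVerifier class, the 'user:task' label format, the scikit-learn SVM, and the 0.9 confidence threshold are all hypothetical choices.

```python
import numpy as np
from sklearn.svm import SVC

class PassthoughtVerifier:
    """Toy one-step, three-factor check over (user, task)-labeled EEG feature vectors."""

    def __init__(self):
        self.clf = SVC(probability=True)  # stand-in classifier for the inherence factor

    def enroll(self, features, user_task_labels):
        """features: (n_trials, n_features) EEG vectors; labels like 'alice:song'."""
        self.clf.fit(np.asarray(features), user_task_labels)

    def authenticate(self, features, claimed_user, enrolled_task, threshold=0.9):
        """Accept only if the claimed (user, task) pair wins with high confidence."""
        probs = self.clf.predict_proba(np.asarray(features).reshape(1, -1))[0]
        best = self.clf.classes_[int(np.argmax(probs))]
        return bool(best == f"{claimed_user}:{enrolled_task}" and probs.max() >= threshold)
```

In the setting described above, an attacker lacking the custom-fit earpiece, the user's brainwaves, or knowledge of the chosen mental task should fail at least one of the three checks.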
"24980915",
"22506831",
"12899257",
"29993415",
"23722447",
"17299229",
"25486653",
"26635514",
"16538095",
"17254636",
"11933767"
] | [
{
"pmid": "24980915",
"title": "Usability of four commercially-oriented EEG systems.",
"abstract": "Electroencephalography (EEG) holds promise as a neuroimaging technology that can be used to understand how the human brain functions in real-world, operational settings while individuals move freely in perceptually-rich environments. In recent years, several EEG systems have been developed that aim to increase the usability of the neuroimaging technology in real-world settings. Here, the usability of three wireless EEG systems from different companies are compared to a conventional wired EEG system, BioSemi's ActiveTwo, which serves as an established laboratory-grade 'gold standard' baseline. The wireless systems compared include Advanced Brain Monitoring's B-Alert X10, Emotiv Systems' EPOC and the 2009 version of QUASAR's Dry Sensor Interface 10-20. The design of each wireless system is discussed in relation to its impact on the system's usability as a potential real-world neuroimaging system. Evaluations are based on having participants complete a series of cognitive tasks while wearing each of the EEG acquisition systems. This report focuses on the system design, usability factors and participant comfort issues that arise during the experimental sessions. In particular, the EEG systems are assessed on five design elements: adaptability of the system for differing head sizes, subject comfort and preference, variance in scalp locations for the recording electrodes, stability of the electrical connection between the scalp and electrode, and timing integration between the EEG system, the stimulus presentation computer and other external events."
},
{
"pmid": "22506831",
"title": "Evaluating the ergonomics of BCI devices for research and experimentation.",
"abstract": "The use of brain computer interface (BCI) devices in research and applications has exploded in recent years. Applications such as lie detectors that use functional magnetic resonance imaging (fMRI) to video games controlled using electroencephalography (EEG) are currently in use. These developments, coupled with the emergence of inexpensive commercial BCI headsets, such as the Emotiv EPOC ( http://emotiv.com/index.php ) and the Neurosky MindWave, have also highlighted the need of performing basic ergonomics research since such devices have usability issues, such as comfort during prolonged use, and reduced performance for individuals with common physical attributes, such as long or coarse hair. This paper examines the feasibility of using consumer BCIs in scientific research. In particular, we compare user comfort, experiment preparation time, signal reliability and ease of use in light of individual differences among subjects for two commercially available hardware devices, the Emotiv EPOC and the Neurosky MindWave. Based on these results, we suggest some basic considerations for selecting a commercial BCI for research and experimentation. STATEMENT OF RELEVANCE: Despite increased usage, few studies have examined the usability of commercial BCI hardware. This study assesses usability and experimentation factors of two commercial BCI models, for the purpose of creating basic guidelines for increased usability. Finding that more sensors can be less comfortable and accurate than devices with fewer sensors."
},
{
"pmid": "12899257",
"title": "Comparison of linear, nonlinear, and feature selection methods for EEG signal classification.",
"abstract": "The reliable operation of brain-computer interfaces (BCIs) based on spontaneous electroencephalogram (EEG) signals requires accurate classification of multichannel EEG. The design of EEG representations and classifiers for BCI are open research questions whose difficulty stems from the need to extract complex spatial and temporal patterns from noisy multidimensional time series obtained from EEG measurements. The high-dimensional and noisy nature of EEG may limit the advantage of nonlinear classification methods over linear ones. This paper reports the results of a linear (linear discriminant analysis) and two nonlinear classifiers (neural networks and support vector machines) applied to the classification of spontaneous EEG during five mental tasks, showing that nonlinear classifiers produce only slightly better classification results. An approach to feature selection based on genetic algorithms is also presented with preliminary results of application to EEG during finger movement."
},
{
"pmid": "29993415",
"title": "Dry-Contact Electrode Ear-EEG.",
"abstract": "OBJECTIVE\nEar-EEG is a recording method in which EEG signals are acquired from electrodes placed on an earpiece inserted into the ear. Thereby, ear-EEG provides a noninvasive and discreet way of recording EEG, and has the potential to be used for long-term brain monitoring in real-life environments. Whereas previously reported ear-EEG recordings have been performed with wet electrodes, the objective of this study was to develop and evaluate dry-contact electrode ear-EEG.\n\n\nMETHODS\nTo achieve a well-functioning dry-contact interface, a new ear-EEG platform was developed. The platform comprised actively shielded and nanostructured electrodes embedded in an individualized soft-earpiece. The platform was evaluated in a study of 12 subjects and four EEG paradigms: auditory steady-state response, steady-state visual evoked potential, mismatch negativity, and alpha-band modulation.\n\n\nRESULTS\nRecordings from the prototyped dry-contact ear-EEG platform were compared to conventional scalp EEG recordings. When all electrodes were referenced to a common scalp electrode (Cz), the performance was on par with scalp EEG measured close to the ear. With both the measuring electrode and the reference electrode located within the ear, statistically significant (p < 0.05) responses were measured for all paradigms, although for mismatch negativity, it was necessary to use a reference located in the opposite ear, to obtain a statistically significant response.\n\n\nCONCLUSION\nThe study demonstrated that dry-contact electrode ear-EEG is a feasible technology for EEG recording.\n\n\nSIGNIFICANCE\nThe prototyped dry-contact ear-EEG platform represents an important technological advancement of the method in terms of user-friendliness, because it eliminates the need for gel in the electrode-skin interface."
},
{
"pmid": "23722447",
"title": "A study of evoked potentials from ear-EEG.",
"abstract": "A method for brain monitoring based on measuring the electroencephalogram (EEG) from electrodes placed in-the-ear (ear-EEG) was recently proposed. The objective of this study is to further characterize the ear-EEG and perform a rigorous comparison against conventional on-scalp EEG. This is achieved for both auditory and visual evoked responses, over steady-state and transient paradigms, and across a population of subjects. The respective steady-state responses are evaluated in terms of signal-to-noise ratio and statistical significance, while the qualitative analysis of the transient responses is performed by considering grand averaged event-related potential (ERP) waveforms. The outcomes of this study demonstrate conclusively that the ear-EEG signals, in terms of the signal-to-noise ratio, are on par with conventional EEG recorded from electrodes placed over the temporal region."
},
{
"pmid": "17299229",
"title": "Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation.",
"abstract": "In this paper, we investigate the use of brain activity for person authentication. It has been shown in previous studies that the brain-wave pattern of every individual is unique and that the electroencephalogram (EEG) can be used for biometric identification. EEG-based biometry is an emerging research topic and we believe that it may open new research directions and applications in the future. However, very little work has been done in this area and was focusing mainly on person identification but not on person authentication. Person authentication aims to accept or to reject a person claiming an identity, i.e., comparing a biometric data to one template, while the goal of person identification is to match the biometric data against all the records in a database. We propose the use of a statistical framework based on Gaussian Mixture Models and Maximum A Posteriori model adaptation, successfully applied to speaker and face authentication, which can deal with only one training session. We perform intensive experimental simulations using several strict train/test protocols to show the potential of our method. We also show that there are some mental tasks that are more appropriate for person authentication than others."
},
{
"pmid": "25486653",
"title": "Wearable, wireless EEG solutions in daily life applications: what are we missing?",
"abstract": "Monitoring human brain activity has great potential in helping us understand the functioning of our brain, as well as in preventing mental disorders and cognitive decline and improve our quality of life. Noninvasive surface EEG is the dominant modality for studying brain dynamics and performance in real-life interaction of humans with their environment. To take full advantage of surface EEG recordings, EEG technology has to be advanced to a level that it can be used in daily life activities. Furthermore, users have to see it as an unobtrusive option to monitor and improve their health. To achieve this, EEG systems have to be transformed from stationary, wired, and cumbersome systems used mostly in clinical practice today, to intelligent wearable, wireless, convenient, and comfortable lifestyle solutions that provide high signal quality. Here, we discuss state-of-the-art in wireless and wearable EEG solutions and a number of aspects where such solutions require improvements when handling electrical activity of the brain. We address personal traits and sensory inputs, brain signal generation and acquisition, brain signal analysis, and feedback generation. We provide guidelines on how these aspects can be advanced further such that we can develop intelligent wearable, wireless, lifestyle EEG solutions. We recognized the following aspects as the ones that need rapid research progress: application driven design, end-user driven development, standardization and sharing of EEG data, and development of sophisticated approaches to handle EEG artifacts."
},
{
"pmid": "26635514",
"title": "EEG Recorded from the Ear: Characterizing the Ear-EEG Method.",
"abstract": "Highlights Auditory middle and late latency responses can be recorded reliably from ear-EEG.For sources close to the ear, ear-EEG has the same signal-to-noise-ratio as scalp.Ear-EEG is an excellent match for power spectrum-based analysis. A method for measuring electroencephalograms (EEG) from the outer ear, so-called ear-EEG, has recently been proposed. The method could potentially enable robust recording of EEG in natural environments. The objective of this study was to substantiate the ear-EEG method by using a larger population of subjects and several paradigms. For rigor, we considered simultaneous scalp and ear-EEG recordings with common reference. More precisely, 32 conventional scalp electrodes and 12 ear electrodes allowed a thorough comparison between conventional and ear electrodes, testing several different placements of references. The paradigms probed auditory onset response, mismatch negativity, auditory steady-state response and alpha power attenuation. By comparing event related potential (ERP) waveforms from the mismatch response paradigm, the signal measured from the ear electrodes was found to reflect the same cortical activity as that from nearby scalp electrodes. It was also found that referencing the ear-EEG electrodes to another within-ear electrode affects the time-domain recorded waveform (relative to scalp recordings), but not the timing of individual components. It was furthermore found that auditory steady-state responses and alpha-band modulation were measured reliably with the ear-EEG modality. Finally, our findings showed that the auditory mismatch response was difficult to monitor with the ear-EEG. We conclude that ear-EEG yields similar performance as conventional EEG for spectrogram-based analysis, similar timing of ERP components, and equal signal strength for sources close to the ear. Ear-EEG can reliably measure activity from regions of the cortex which are located close to the ears, especially in paradigms employing frequency-domain analyses."
},
{
"pmid": "16538095",
"title": "Seizure anticipation: from algorithms to clinical practice.",
"abstract": "PURPOSE OF REVIEW\nOur understanding of the mechanisms that lead to the occurrence of epileptic seizures is rather incomplete. If it were possible to identify preictal precursors from the EEG of epilepsy patients, therapeutic possibilities could improve dramatically. Studies on seizure prediction have advanced from preliminary descriptions of preictal phenomena via proof of principle studies and controlled studies to studies on continuous multi-day recordings.\n\n\nRECENT FINDINGS\nFollowing mostly promising early reports, recent years have witnessed a debate over the reproducibility of results and suitability of approaches. The current literature is inconclusive as to whether seizures are predictable by prospective algorithms. Prospective out-of-sample studies including a statistical validation are missing. Nevertheless, there are indications of a superior performance for approaches characterizing relations between different brain regions.\n\n\nSUMMARY\nPrediction algorithms must be proven to perform better than a random predictor before prospective clinical trials involving seizure intervention techniques in patients can be justified."
},
{
"pmid": "17254636",
"title": "PsychoPy--Psychophysics software in Python.",
"abstract": "The vast majority of studies into visual processing are conducted using computer display technology. The current paper describes a new free suite of software tools designed to make this task easier, using the latest advances in hardware and software. PsychoPy is a platform-independent experimental control system written in the Python interpreted language using entirely free libraries. PsychoPy scripts are designed to be extremely easy to read and write, while retaining complete power for the user to customize the stimuli and environment. Tools are provided within the package to allow everything from stimulus presentation and response collection (from a wide range of devices) to simple data analysis such as psychometric function fitting. Most importantly, PsychoPy is highly extensible and the whole system can evolve via user contributions. If a user wants to add support for a particular stimulus, analysis or hardware device they can look at the code for existing examples, modify them and submit the modifications back into the package so that the whole community benefits."
},
{
"pmid": "11933767",
"title": "Person identification from the EEG using nonlinear signal classification.",
"abstract": "OBJECTIVES\nThis paper focusses on the person identification problem based on features extracted from the ElectroEncephaloGram (EEG). A bilinear rather than a purely linear model is fitted on the EEG signal, prompted by the existence of non-linear components in the EEG signal--a conjecture already investigated in previous research works. The novelty of the present work lies in the comparison between the linear and the bilinear results, obtained from real field EEG data, aiming towards identification of healthy subjects rather than classification of pathological cases for diagnosis.\n\n\nMETHODS\nThe EEG signal of a, in principle, healthy individual is processed via (non)linear (AR, bilinear) methods and classified by an artificial neural network classifier.\n\n\nRESULTS\nExperiments performed on real field data show that utilization of the bilinear model parameters as features improves correct classification scores at the cost of increased complexity and computations. Results are seen to be statistically significant at the 99.5% level of significance, via the chi 2 test for contingency.\n\n\nCONCLUSIONS\nThe results obtained in the present study further corroborate existing research, which shows evidence that the EEG carries individual-specific information, and that it can be successfully exploited for purposes of person identification and authentication."
}
] |
Scientific Reports | 31110326 | PMC6527613 | 10.1038/s41598-019-43951-8 | Classification of Polar Maps from Cardiac Perfusion Imaging with Graph-Convolutional Neural Networks | Myocardial perfusion imaging is a non-invasive imaging technique commonly used for the diagnosis of Coronary Artery Disease and is based on the injection of radiopharmaceutical tracers into the blood stream. The patient’s heart is imaged while at rest and under stress in order to determine its capacity to react to the imposed challenge. Assessment of imaging data is commonly performed by visual inspection of polar maps showing the tracer uptake in a compact, two-dimensional representation of the left ventricle. This article presents a method for automatic classification of polar maps based on graph convolutional neural networks. Furthermore, it evaluates how well localization techniques developed for standard convolutional neural networks can be used for the localization of pathological segments with respect to clinically relevant areas. The method is evaluated using 946 labeled datasets and compared quantitatively to three other neural-network-based methods. The proposed model achieves an agreement with the human observer on 89.3% of rest test polar maps and on 91.1% of stress test polar maps. Localization performed on a fine 17-segment division of the polar maps achieves an agreement of 83.1% with the human observer, while localization on a coarse 3-segment division based on the vessel beds of the left ventricle has an agreement of 78.8% with the human observer. Our method could thus assist the decision-making process of physicians when analyzing polar map data obtained from myocardial perfusion images. | Related workFujita et al.5 used a multilayer perceptron with 16 × 16 input nodes, 100 hidden nodes and eight output nodes to classify digitized and down-sampled polar maps into normal or pathological with respect to one of the following areas: Left Circumflex Artery (LCX), Right Coronary Artery (RCA) and Left Anterior Descending Artery (LAD) as well as combinations thereof (LAD + LCX, LCX + RCA, LAD + RCA, LAD + LCX + RCA). The segmentation of the polar maps into these areas is depicted in Fig. 1c. This approach was trained and evaluated using 74 cases examined by coronary angiography and it has been reported that the achieved recognition performance was better than that of a radiology resident, but worse than the one of the participating experienced radiologist. Porenta et al.6 used a similar network architecture with 45 input nodes (corresponding to the relative segmental thallium uptake at stress), 15 hidden nodes and one output node indicating the presence of CAD. This study was conducted using 159 patients, with coronary angiography being available for 81 patients, and the network’s average sensitivity was reported to be 51% (at a specificity of 90%) in comparison to 72% achieved by an expert reader. Also similar to Fujita et al.5, Hamilton et al.7 used an artificial neural network with 15 × 40 input nodes, 5 hidden nodes and one output node. This study was conducted using both simulation data and real stress-rest data obtained from 410 male patients. The accuracy of the network for detecting CAD was reported to be 92%. Lindahl et al.8 conducted a conceptually similar study using 135 patients, where a contrast left ventriculogram was performed in 106 cases. 
This study particularly investigated the usage of quantization-based and Fourier-transform-based dimensionality reduction techniques for reducing the input data size. The authors also trained several networks for detecting CAD, CAD in the LAD territory, and CAD in the RCA/LCX territory. They reported a statistically significant improvement of over 10% in terms of sensitivity in comparison to two human experts for the detection of CAD. All these methods have in common that they directly use quantized and down-sampled versions of the original polar maps as an input to a 3-layer neural network (or 3-layer-perceptron) with one hidden layer. Hence, we term these approaches direct methods.In contrast to this, there also exist several indirect methods which aim for a computer-assisted diagnosis of CAD via image-derived quantitative measures that are used for classification. An early example is the work of Slomka et al.9, who proposed a method based on intensity-based image registration and normalization in order to derive a relative count change measure, i.e. the measure of ischemia (ISCH). ISCH was reported to significantly outperform existing quantitative approaches based on reference databases. Asanjani et al.10 extended this concept and proposed a method based on support vector machines taking ISCH and Stress Total Perfusion Deficit (TPD) as well as functional parameters, such as Poststress Ejection Fraction Changes or Motion and Thickening Changes, as a classifier input. In a related work, Asanjani et al.11 used even more inputs, i.e. supine/prone TPD, stress/rest perfusion changes, transient ischemic dilatation, age, sex, and post-electrocardiogram CAD probability, as an input to a boosted ensemble classifier. The diagnostic accuracy of the classifier in the latter study has been shown to be on par or slightly better than the one achieved by two participating experts, respectively11.Recently, Betancour et al.12 proposed a hybrid method based on deep convolutional neural networks which takes into account both raw and quantitative (based on TPD) polar maps for the prediction of obstructive stenosis. The approach has been developed using data from 1,638 patients. Using a threshold to match the specificity of TPD, per-patient sensitivity has been reported to improve from 79.8% (TPD) to 82.3% (p < 0.05), and per-vessel sensitivity to improve from 64.4% (TPD) to 69.8% (p < 0.01). In a subsequent multi-center study, Betancour
et al.13 applied a comparable method to predict obstructive CAD taking into account both upright and supine polar maps. The authors showed that when operating with the same specificity as clinical readers, the proposed method had the same sensitivity for disease prediction as on-site clinical readers, and significantly improved the sensitivity compared to combined total perfusion deficit (cTPD). | [
"23703378",
"25388380"
] | [
{
"pmid": "23703378",
"title": "Improved accuracy of myocardial perfusion SPECT for detection of coronary artery disease by machine learning in a large population.",
"abstract": "OBJECTIVE\nWe aimed to improve the diagnostic accuracy of myocardial perfusion SPECT (MPS) by integrating clinical data and quantitative image features with machine learning (ML) algorithms.\n\n\nMETHODS\n1,181 rest (201)Tl/stress (99m)Tc-sestamibi dual-isotope MPS studies [713 consecutive cases with correlating invasive coronary angiography (ICA) and suspected coronary artery disease (CAD) and 468 with low likelihood (LLk) of CAD <5%] were considered. Cases with stenosis <70% by ICA and LLk of CAD were considered normal. Total stress perfusion deficit (TPD) for supine/prone data, stress/rest perfusion change, and transient ischemic dilatation were derived by automated perfusion quantification software and were combined with age, sex, and post-electrocardiogram CAD probability by a boosted ensemble ML algorithm (LogitBoost). The diagnostic accuracy of the model for prediction of obstructive CAD ≥70% was compared to standard prone/supine quantification and to visual analysis by two experienced readers utilizing all imaging, quantitative, and clinical data. Tenfold stratified cross-validation was performed.\n\n\nRESULTS\nThe diagnostic accuracy of ML (87.3% ± 2.1%) was similar to Expert 1 (86.0% ± 2.1%), but superior to combined supine/prone TPD (82.8% ± 2.2%) and Expert 2 (82.1% ± 2.2%) (P < .01). The receiver operator characteristic areas under curve for ML algorithm (0.94 ± 0.01) were higher than those for TPD and both visual readers (P < .001). The sensitivity of ML algorithm (78.9% ± 4.2%) was similar to TPD (75.6% ± 4.4%) and Expert 1 (76.3% ± 4.3%), but higher than that of Expert 2 (71.1% ± 4.6%), (P < .01). The specificity of ML algorithm (92.1% ± 2.2%) was similar to Expert 1 (91.4% ± 2.2%) and Expert 2 (88.3% ± 2.5%), but higher than TPD (86.8% ± 2.6%), (P < .01).\n\n\nCONCLUSION\nML significantly improves diagnostic performance of MPS by computational integration of quantitative perfusion and clinical data to the level rivaling expert analysis."
},
{
"pmid": "25388380",
"title": "Quantitative high-efficiency cadmium-zinc-telluride SPECT with dedicated parallel-hole collimation system in obese patients: results of a multi-center study.",
"abstract": "BACKGROUND\nObesity is a common source of artifact on conventional SPECT myocardial perfusion imaging (MPI). We evaluated image quality and diagnostic performance of high-efficiency (HE) cadmium-zinc-telluride parallel-hole SPECT MPI for coronary artery disease (CAD) in obese patients.\n\n\nMETHODS AND RESULTS\n118 consecutive obese patients at three centers (BMI 43.6 ± 8.9 kg·m(-2), range 35-79.7 kg·m(-2)) had upright/supine HE-SPECT and invasive coronary angiography > 6 months (n = 67) or low likelihood of CAD (n = 51). Stress quantitative total perfusion deficit (TPD) for upright (U-TPD), supine (S-TPD), and combined acquisitions (C-TPD) was assessed. Image quality (IQ; 5 = excellent; < 3 nondiagnostic) was compared among BMI 35-39.9 (n = 58), 40-44.9 (n = 24) and ≥45 (n = 36) groups. ROC curve area for CAD detection (≥50% stenosis) for U-TPD, S-TPD, and C-TPD were 0.80, 0.80, and 0.87, respectively. Sensitivity/specificity was 82%/57% for U-TPD, 74%/71% for S-TPD, and 80%/82% for C-TPD. C-TPD had highest specificity (P = .02). C-TPD normalcy rate was higher than U-TPD (88% vs 75%, P = .02). Mean IQ was similar among BMI 35-39.9, 40-44.9 and ≥45 groups [4.6 vs 4.4 vs 4.5, respectively (P = .6)]. No patient had a nondiagnostic stress scan.\n\n\nCONCLUSIONS\nIn obese patients, HE-SPECT MPI with dedicated parallel-hole collimation demonstrated high image quality, normalcy rate, and diagnostic accuracy for CAD by quantitative analysis of combined upright/supine acquisitions."
}
] |
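The related-work passage in the record above summarizes several "direct methods" that feed a quantized, down-sampled polar map straight into a small multilayer perceptron (for example, 16 × 16 inputs, 100 hidden nodes and 8 outputs in the Fujita et al. study). The short PyTorch sketch below is only meant to illustrate that class of architecture; the framework, activation functions and dummy input are assumptions of this note and not the original authors' implementation.

    import torch
    from torch import nn

    # Illustrative sketch of a "direct method": a 3-layer perceptron mapping a
    # down-sampled 16x16 polar map to eight diagnostic classes (normal plus the
    # LAD/LCX/RCA territories and their combinations). Layer sizes follow the
    # description of Fujita et al. quoted above; activations and framework are
    # assumptions, not the original implementation.
    direct_model = nn.Sequential(
        nn.Flatten(),             # 16x16 polar map -> 256-dimensional vector
        nn.Linear(16 * 16, 100),  # 100 hidden nodes, as reported
        nn.Sigmoid(),
        nn.Linear(100, 8),        # 8 output nodes
    )

    polar_map = torch.rand(1, 16, 16)       # dummy stand-in for a quantized polar map
    class_scores = direct_model(polar_map)  # raw scores; a softmax/cross-entropy loss would be applied in training
    print(class_scores.shape)               # torch.Size([1, 8])

In practice such a network would be trained against angiography-derived labels, as described for the early studies in the passage above.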
Scientific Reports | 31110220 | PMC6527700 | 10.1038/s41598-019-44030-8 | Privacy-preserving Quantum Sealed-bid Auction Based on Grover’s Search Algorithm | Sealed-bid auction is an important tool in the modern economy, especially in networked settings. However, bidders still lack privacy protection in previously proposed sealed-bid auction schemes. In this paper, we focus on how to further protect the privacy of the bidders, especially the non-winning bidders. We first give a new privacy-preserving model of sealed-bid auction and then present a quantum sealed-bid auction scheme with stronger privacy protection. Our proposed scheme takes a general state in N-dimensional Hilbert space as the message carrier, in which each bidder privately marks his bid in an anonymous way, and further utilizes Grover’s search algorithm to find the current highest bid. After O(ln n) iterations, it finally obtains the highest bid. Compared with any classical scheme in theory, our proposed quantum scheme achieves lower communication complexity. | Related Works Electronic auction plays an important role in the modern economy, especially in networked settings. Generally, electronic auctions can be classified into three main categories: English auction, Dutch auction and sealed-bid auction. The traditional English auction is a public ascending-price auction. In this auction, the auctioneer first gives a base price, and then some bidder bids a higher price than the base price. The next bidder then outbids the last bidder, and the process continues until no one else bids a higher price. Finally, the item is sold to the highest bidder at the highest bid. On the contrary, the Dutch auction is a public descending-price auction. The auctioneer in a Dutch auction begins with a high asking price, which is lowered until some bidder is willing to accept the auctioneer’s price. Different from the former two auctions, the sealed-bid auction needs to protect the privacy of the bids and ensure fairness among the bidders. That is, no eavesdropper can get any private information about the bids, and the auctioneer cannot help any bidder to win the auction unfairly. In a traditional sealed-bid auction, a bidder does not know the bids of the others. After all bids are transmitted privately to the auctioneer, the auctioneer selects the highest bid and announces it together with the corresponding winner. The first quantum sealed-bid auction protocol was proposed by Naseri20 in 2009. That protocol introduced a multi-party quantum secure direct communication protocol to privately transmit the bids. However, Qin et al.22 and Yang et al.23 independently pointed out that there was a security flaw in Naseri’s protocol, i.e., a malicious bidder could obtain all private bids without being detected by performing a double controlled-NOT attack or by using fake entangled particles. They then improved Naseri’s original protocol by inserting decoy particles into the transmitted particles. In addition to the decoy-particle detection strategy, other defense strategies24,25 were also proposed to prevent these attacks. Furthermore, Zhao et al.26 found that these previously proposed protocols were unfair, i.e., a malicious bidder could collude with a dishonest auctioneer to perform a collusion attack and win the auction unfairly. Accordingly, they presented a security protocol for QSA with post-confirmation26.
Subsequently, in order to enhance the security of QSA or to ensure its feasibility, many quantum protocols with post-confirmation were proposed27–33. In 2017, we presented an economical and feasible quantum sealed-bid auction protocol based on single photons in both the polarization and the spatial-mode degrees of freedom34. In our protocol, the post-confirmation mechanism uses single photons instead of entangled EPR pairs, and it does not require quantum memory. Therefore, our protocol is a practical and feasible quantum sealed-bid auction. All previously proposed quantum sealed-bid auction (QSA) protocols require every bidder to send his or her real bid to the auctioneer. Even if a bidder cannot win the auction, the auctioneer still learns his or her real bid. However, in practical settings, the bidders who cannot win the auction do not want to reveal their real bids. That is, the non-winning bidders lack privacy protection in current QSA schemes. In this paper, we present a strongly privacy-preserving QSA model. In our model, no one can get the real bid of any other bidder, not even the auctioneer. Thus, the privacy of the bidders is better protected in our model. In addition, the bids are anonymous, i.e., no one can discern to whom these bids belong. Furthermore, we design a novel privacy-preserving QSA scheme based on Grover’s search algorithm. The proposed scheme not only guarantees the correctness and fairness of the auction, but also ensures the privacy and anonymity of the bidders, even with respect to the auctioneer. Compared with existing quantum sealed-bid auction schemes, our proposed scheme provides stronger privacy protection, which is urgently required in the modern networked society. | [
"10053414",
"25839250",
"28621985",
"25782417",
"26792197",
"23215060",
"29203858"
] | [
{
"pmid": "25839250",
"title": "Entanglement-based machine learning on a quantum computer.",
"abstract": "Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, which is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing \"big data\" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv.1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning."
},
{
"pmid": "28621985",
"title": "Quantum Secure Direct Communication with Quantum Memory.",
"abstract": "Quantum communication provides an absolute security advantage, and it has been widely developed over the past 30 years. As an important branch of quantum communication, quantum secure direct communication (QSDC) promotes high security and instantaneousness in communication through directly transmitting messages over a quantum channel. The full implementation of a quantum protocol always requires the ability to control the transfer of a message effectively in the time domain; thus, it is essential to combine QSDC with quantum memory to accomplish the communication task. In this Letter, we report the experimental demonstration of QSDC with state-of-the-art atomic quantum memory for the first time in principle. We use the polarization degrees of freedom of photons as the information carrier, and the fidelity of entanglement decoding is verified as approximately 90%. Our work completes a fundamental step toward practical QSDC and demonstrates a potential application for long-distance quantum communication in a quantum network."
},
{
"pmid": "25782417",
"title": "Security of quantum digital signatures for classical messages.",
"abstract": "Quantum digital signatures can be used to authenticate classical messages in an information-theoretically secure way. Previously, a novel quantum digital signature for classical messages has been proposed and gave an experimental demonstration of distributing quantum digital signatures from one sender to two receivers. Some improvement versions were subsequently presented, which made it more feasible with present technology. These proposals for quantum digital signatures are basic building blocks which only deal with the problem of sending single bit messages while no-forging and non-repudiation are guaranteed. For a multi-bit message, it is only mentioned that the basic building blocks must be iterated, but the iteration of the basic building block still does not suffice to define the entire protocol. In this paper, we show that it is necessary to define the entire protocol because some attacks will arise if these building blocks are used in a naive way of iteration. Therefore, we give a way of defining an entire protocol to deal with the problem of sending multi-bit messages based on the basic building blocks and analyse its security."
},
{
"pmid": "26792197",
"title": "Secure Multiparty Quantum Computation for Summation and Multiplication.",
"abstract": "As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, specially, numerical computations. However, there is still lack of systematical and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure the unconditional security and the perfect privacy protection based on the physical principle of quantum mechanics."
},
{
"pmid": "23215060",
"title": "Complete insecurity of quantum protocols for classical two-party computation.",
"abstract": "A fundamental task in modern cryptography is the joint computation of a function which has two inputs, one from Alice and one from Bob, such that neither of the two can learn more about the other's input than what is implied by the value of the function. In this Letter, we show that any quantum protocol for the computation of a classical deterministic function that outputs the result to both parties (two-sided computation) and that is secure against a cheating Bob can be completely broken by a cheating Alice. Whereas it is known that quantum protocols for this task cannot be completely secure, our result implies that security for one party implies complete insecurity for the other. Our findings stand in stark contrast to recent protocols for weak coin tossing and highlight the limits of cryptography within quantum mechanics. We remark that our conclusions remain valid, even if security is only required to be approximate and if the function that is computed for Bob is different from that of Alice."
},
{
"pmid": "29203858",
"title": "Complete 3-Qubit Grover search on a programmable quantum computer.",
"abstract": "The Grover quantum search algorithm is a hallmark application of a quantum computer with a well-known speedup over classical searches of an unsorted database. Here, we report results for a complete three-qubit Grover search algorithm using the scalable quantum computing technology of trapped atomic ions, with better-than-classical performance. Two methods of state marking are used for the oracles: a phase-flip method employed by other experimental demonstrations, and a Boolean method requiring an ancilla qubit that is directly equivalent to the state marking scheme required to perform a classical search. We also report the deterministic implementation of a Toffoli-4 gate, which is used along with Toffoli-3 gates to construct the algorithms; these gates have process fidelities of 70.5% and 89.6%, respectively."
}
] |
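The auction record above uses Grover's search to locate the current highest bid. As a rough illustration of the search primitive only, and not of the privacy-preserving protocol itself, the toy NumPy simulation below applies the textbook oracle phase flip and inversion-about-the-mean diffusion to a uniform superposition; the dimension N and the marked index are hypothetical choices made for this note.

    import numpy as np

    # Toy classical simulation of one-marked-item Grover amplitude amplification.
    # This is NOT the auction protocol; it only shows the search primitive the
    # record above refers to. N and `marked` are hypothetical.
    N = 16                           # size of the search space
    marked = 11                      # basis state the oracle flags (e.g., a bid above the threshold)

    state = np.ones(N) / np.sqrt(N)  # uniform superposition
    oracle = np.eye(N)
    oracle[marked, marked] = -1      # phase flip on the marked state
    diffusion = 2 * np.full((N, N), 1.0 / N) - np.eye(N)  # inversion about the mean

    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))    # ~ (pi/4) * sqrt(N) for one marked item
    for _ in range(iterations):
        state = diffusion @ (oracle @ state)

    print("success probability:", state[marked] ** 2)     # ~0.96 for N = 16 after 3 iterations

Note that this standard form needs about (π/4)√N iterations for a single marked item; the O(ln n) iteration count reported in the record above comes from the way bids are marked in the proposed scheme, which is not reproduced here.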
Frontiers in Bioengineering and Biotechnology | 31158269 | PMC6529804 | 10.3389/fbioe.2019.00102 | Unsupervised Domain Adaptation for Classification of Histopathology Whole-Slide Images | Computational image analysis is one means for evaluating digitized histopathology specimens that can increase the reproducibility and reliability with which cancer diagnoses are rendered while simultaneously providing insight as to the underlying mechanisms of disease onset and progression. A major challenge that is confronted when analyzing samples that have been prepared at disparate laboratories and institutions is that the digitized specimens assessed by the algorithms often exhibit heterogeneous staining characteristics because of slight differences in incubation times and the protocols used to prepare the samples. Unfortunately, such variations can render a prediction model learned from one batch of specimens ineffective for characterizing an ensemble originating from another site. In this work, we propose to adopt unsupervised domain adaptation to effectively transfer the discriminative knowledge obtained from any given source domain to the target domain without requiring any additional labeling or annotation of images at the target site. In this paper, our team investigates the use of two approaches for performing the adaptation: (1) color normalization and (2) adversarial training. The adversarial training strategy is implemented through the use of convolutional neural networks to find an invariant feature space and Siamese architecture within the target domain to add a regularization that is appropriate for the entire set of whole-slide images. The adversarial adaptation results in significant classification improvement compared with the baseline models under a wide range of experimental settings. | 2. Related Works. 2.1. Color Normalization. In an attempt to address the challenge of the previously described color batch effects, many investigators have applied color normalization methods to the imaged histopathology specimens prior to analysis (Ranefall et al., 1997; Meurie et al., 2003; Mao et al., 2006; Kong et al., 2007; Kothari et al., 2011; Khan et al., 2014; Tam et al., 2016; Vahadane et al., 2016; Alsubaie et al., 2017; del Toro et al., 2017; Janowczyk et al., 2017; Gadermayr et al., 2018; Sankaranarayanan et al., 2018; Zanjani et al., 2018a). One common approach for analyzing tissue samples is to treat stains as agents exhibiting selective affinities for specific biological substances. Under the implicit assumption that the proportion of pixels associated with each stain is the same in source and target images, histogram-based methods have been investigated (Jain, 1989; Kong et al., 2007; Tabesh et al., 2007; Hipp et al., 2011; Kothari et al., 2011; Papadakis et al., 2011; Krishnan et al., 2012; Basavanhally and Madabhushi, 2013; Bejnordi et al., 2016; Tam et al., 2016). The main drawback of histogram-based methods is that they often introduce visual artifacts into the resulting images. Color deconvolution strategies (Macenko et al., 2009; Niethammer et al., 2010; Gavrilovic et al., 2013) have been utilized extensively in the analysis of imaged histopathology specimens by separating RGB images into individual channels, such as by converting from RGB to Lab (Reinhard et al., 2001) or HSV space (Zarella et al., 2017). The limitation of this approach is that both the image-specific stain matrix and a control tissue stained with a single stain are required to perform the color deconvolution.
Another strategy that has been explored is to utilize blind color decomposition, which is achieved by applying expectation and maximization operations on color distributions within the Maxwell color triangle (Gavrilovic et al., 2013). This strategy requires a heuristic randomization function to select stable colors for performing the estimation, and is therefore prone to being affected by achromatic pixels in weakly stained regions. Tissue-inherent morphological and structural features may not be preserved after color deconvolution, since the statistical characteristics of the decomposition channels are modified during this process. Model-based color normalization has also been studied in such applications, including Gaussian mixture models (Reinhard et al., 2001; Magee et al., 2009; Basavanhally and Madabhushi, 2013; Khan et al., 2014; Li and Plataniotis, 2015), matrix factorization (Vahadane et al., 2016), sparse encoders (Janowczyk et al., 2017), and wavelet transformation with independent component analysis (Alsubaie et al., 2017). Other studies utilize generative models (Goodfellow et al., 2014) to achieve stain normalization (Cho et al., 2017; Bentaieb and Hamarneh, 2018; Shaban et al., 2018; Zanjani et al., 2018b). Typically, a reference image must be chosen from the image dataset, and different choices of the reference image yield different domain adaptation performance. Color normalization models can provide stain estimation, but they depend solely on image color information, while the morphology and spatial structural dependency among imaged tissues are not considered (Gavrilovic et al., 2013; Bejnordi et al., 2016; Tam et al., 2016; Zarella et al., 2017), which could lead to unpredictable results, especially when strong staining variations appear in the imaged specimens. 2.2. Adversarial Domain Adaptation. In recent years, there have been many studies on unsupervised domain adaptation for transferring learned representative features from the source to the target domain (Bousmalis et al., 2017; Herath et al., 2017; Wu et al., 2017; Yan et al., 2017). Works based on CNNs show significant advantages due to better generalization across different distributions (Krizhevsky et al., 2012; Luo et al., 2017). With the development of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), studies have shown that synthesized images can be used to perform unsupervised domain adaptation in a learned feature space, where a generator learns the image distribution and produces synthetic images while a discriminator is trained to differentiate the synthesized distribution from the real one (Bousmalis et al., 2016; Liu and Tuzel, 2016). For example, Generate-to-Adapt (Sankaranarayanan et al., 2018) proposes to learn a joint embedding space between the source and target domains, which can be used to synthesize both the source and target images. Inspired by previous studies, we utilize adversarial training to find a discriminative feature space that can be used to transfer knowledge from the source to the target domain. Furthermore, we introduce a Siamese architecture in the target domain which can be used to regularize the classification of WSIs in an unsupervised manner. | [
"28076381",
"19884074",
"26353368",
"29533895",
"26166626",
"23848987",
"25220842",
"23322760",
"23739794",
"20671804",
"9227344",
"21383936",
"12814236",
"27373749",
"24132290",
"24845283",
"20703647",
"25706507",
"27212078",
"16761842",
"21118775",
"9497852",
"30096632",
"25462637",
"17948727",
"26745946",
"27164577",
"28355298"
] | [
{
"pmid": "28076381",
"title": "Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation.",
"abstract": "Stain colour estimation is a prominent factor of the analysis pipeline in most of histology image processing algorithms. Providing a reliable and efficient stain colour deconvolution approach is fundamental for robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. This approach statistically analyses the multi-resolutional representation of the image to separate the independent observations out of the correlated ones. We then estimate the stain mixing matrix using filtered uncorrelated data. We conducted an extensive set of experiments to compare the proposed method to the recent state of the art methods and demonstrate the robustness of this approach using three different datasets of scanned slides, prepared in different labs using different scanners."
},
{
"pmid": "19884074",
"title": "Computerized image-based detection and grading of lymphocytic infiltration in HER2+ breast cancer histopathology.",
"abstract": "The identification of phenotypic changes in breast cancer (BC) histopathology on account of corresponding molecular changes is of significant clinical importance in predicting disease outcome. One such example is the presence of lymphocytic infiltration (LI) in histopathology, which has been correlated with nodal metastasis and distant recurrence in HER2+ BC patients. In this paper, we present a computer-aided diagnosis (CADx) scheme to automatically detect and grade the extent of LI in digitized HER2+ BC histopathology. Lymphocytes are first automatically detected by a combination of region growing and Markov random field algorithms. Using the centers of individual detected lymphocytes as vertices, three graphs (Voronoi diagram, Delaunay triangulation, and minimum spanning tree) are constructed and a total of 50 image-derived features describing the arrangement of the lymphocytes are extracted from each sample. A nonlinear dimensionality reduction scheme, graph embedding (GE), is then used to project the high-dimensional feature vector into a reduced 3-D embedding space. A support vector machine classifier is used to discriminate samples with high and low LI in the reduced dimensional embedding space. A total of 41 HER2+ hematoxylin-and-eosin-stained images obtained from 12 patients were considered in this study. For more than 100 three-fold cross-validation trials, the architectural feature set successfully distinguished samples of high and low LI levels with a classification accuracy greater than 90%. The popular unsupervised Varma-Zisserman texton-based classification scheme was used for comparison and yielded a classification accuracy of only 60%. Additionally, the projection of the 50 image-derived features for all 41 tissue samples into a reduced dimensional space via GE allowed for the visualization of a smooth manifold that revealed a continuum between low, intermediate, and high levels of LI. Since it is known that extent of LI in BC biopsy specimens is a prognostic indicator, our CADx scheme will potentially help clinicians determine disease outcome and allow them to make better therapy recommendations for patients with HER2+ BC."
},
{
"pmid": "26353368",
"title": "Stain Specific Standardization of Whole-Slide Histopathological Images.",
"abstract": "Variations in the color and intensity of hematoxylin and eosin (H&E) stained histological slides can potentially hamper the effectiveness of quantitative image analysis. This paper presents a fully automated algorithm for standardization of whole-slide histopathological images to reduce the effect of these variations. The proposed algorithm, called whole-slide image color standardizer (WSICS), utilizes color and spatial information to classify the image pixels into different stain components. The chromatic and density distributions for each of the stain components in the hue-saturation-density color model are aligned to match the corresponding distributions from a template whole-slide image (WSI). The performance of the WSICS algorithm was evaluated on two datasets. The first originated from 125 H&E stained WSIs of lymph nodes, sampled from 3 patients, and stained in 5 different laboratories on different days of the week. The second comprised 30 H&E stained WSIs of rat liver sections. The result of qualitative and quantitative evaluations using the first dataset demonstrate that the WSICS algorithm outperforms competing methods in terms of achieving color constancy. The WSICS algorithm consistently yields the smallest standard deviation and coefficient of variation of the normalized median intensity measure. Using the second dataset, we evaluated the impact of our algorithm on the performance of an already published necrosis quantification system. The performance of this system was significantly improved by utilizing the WSICS algorithm. The results of the empirical evaluations collectively demonstrate the potential contribution of the proposed standardization algorithm to improved diagnostic accuracy and consistency in computer-aided diagnosis for histopathology data."
},
{
"pmid": "29533895",
"title": "Adversarial Stain Transfer for Histopathology Image Analysis.",
"abstract": "It is generally recognized that color information is central to the automatic and visual analysis of histopathology tissue slides. In practice, pathologists rely on color, which reflects the presence of specific tissue components, to establish a diagnosis. Similarly, automatic histopathology image analysis algorithms rely on color or intensity measures to extract tissue features. With the increasing access to digitized histopathology images, color variation and its implications have become a critical issue. These variations are the result of not only a variety of factors involved in the preparation of tissue slides but also in the digitization process itself. Consequently, different strategies have been proposed to alleviate stain-related tissue inconsistencies in automatic image analysis systems. Such techniques generally rely on collecting color statistics to perform color matching across images. In this work, we propose a different approach for stain normalization that we refer to as stain transfer. We design a discriminative image analysis model equipped with a stain normalization component that transfers stains across datasets. Our model comprises a generative network that learns data set-specific staining properties and image-specific color transformations as well as a task-specific network (e.g., classifier or segmentation network). The model is trained end-to-end using a multi-objective cost function. We evaluate the proposed approach in the context of automatic histopathology image analysis on three data sets and two different analysis tasks: tissue segmentation and classification. The proposed method achieves superior results in terms of accuracy and quality of normalized images compared to various baselines."
},
{
"pmid": "26166626",
"title": "A Contemporary Prostate Cancer Grading System: A Validated Alternative to the Gleason Score.",
"abstract": "BACKGROUND\nDespite revisions in 2005 and 2014, the Gleason prostate cancer (PCa) grading system still has major deficiencies. Combining of Gleason scores into a three-tiered grouping (6, 7, 8-10) is used most frequently for prognostic and therapeutic purposes. The lowest score, assigned 6, may be misunderstood as a cancer in the middle of the grading scale, and 3+4=7 and 4+3=7 are often considered the same prognostic group.\n\n\nOBJECTIVE\nTo verify that a new grading system accurately produces a smaller number of grades with the most significant prognostic differences, using multi-institutional and multimodal therapy data.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBetween 2005 and 2014, 20,845 consecutive men were treated by radical prostatectomy at five academic institutions; 5501 men were treated with radiotherapy at two academic institutions.\n\n\nOUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS\nOutcome was based on biochemical recurrence (BCR). The log-rank test assessed univariable differences in BCR by Gleason score. Separate univariable and multivariable Cox proportional hazards used four possible categorizations of Gleason scores.\n\n\nRESULTS AND LIMITATIONS\nIn the surgery cohort, we found large differences in recurrence rates between both Gleason 3+4 versus 4+3 and Gleason 8 versus 9. The hazard ratios relative to Gleason score 6 were 1.9, 5.1, 8.0, and 11.7 for Gleason scores 3+4, 4+3, 8, and 9-10, respectively. These differences were attenuated in the radiotherapy cohort as a whole due to increased adjuvant or neoadjuvant hormones for patients with high-grade disease but were clearly seen in patients undergoing radiotherapy only. A five-grade group system had the highest prognostic discrimination for all cohorts on both univariable and multivariable analysis. The major limitation was the unavoidable use of prostate-specific antigen BCR as an end point as opposed to cancer-related death.\n\n\nCONCLUSIONS\nThe new PCa grading system has these benefits: more accurate grade stratification than current systems, simplified grading system of five grades, and lowest grade is 1, as opposed to 6, with the potential to reduce overtreatment of PCa.\n\n\nPATIENT SUMMARY\nWe looked at outcomes for prostate cancer (PCa) treated with radical prostatectomy or radiation therapy and validated a new grading system with more accurate grade stratification than current systems, including a simplified grading system of five grades and a lowest grade is 1, as opposed to 6, with the potential to reduce overtreatment of PCa."
},
{
"pmid": "23848987",
"title": "The McNemar test for binary matched-pairs data: mid-p and asymptotic are better than exact conditional.",
"abstract": "BACKGROUND\nStatistical methods that use the mid-p approach are useful tools to analyze categorical data, particularly for small and moderate sample sizes. Mid-p tests strike a balance between overly conservative exact methods and asymptotic methods that frequently violate the nominal level. Here, we examine a mid-p version of the McNemar exact conditional test for the analysis of paired binomial proportions.\n\n\nMETHODS\nWe compare the type I error rates and power of the mid-p test with those of the asymptotic McNemar test (with and without continuity correction), the McNemar exact conditional test, and an exact unconditional test using complete enumeration. We show how the mid-p test can be calculated using eight standard software packages, including Excel.\n\n\nRESULTS\nThe mid-p test performs well compared with the asymptotic, asymptotic with continuity correction, and exact conditional tests, and almost as good as the vastly more complex exact unconditional test. Even though the mid-p test does not guarantee preservation of the significance level, it did not violate the nominal level in any of the 9595 scenarios considered in this article. It was almost as powerful as the asymptotic test. The exact conditional test and the asymptotic test with continuity correction did not perform well for any of the considered scenarios.\n\n\nCONCLUSIONS\nThe easy-to-calculate mid-p test is an excellent alternative to the complex exact unconditional test. Both can be recommended for use in any situation. We also recommend the asymptotic test if small but frequent violations of the nominal level is acceptable."
},
{
"pmid": "25220842",
"title": "Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012.",
"abstract": "Estimates of the worldwide incidence and mortality from 27 major cancers and for all cancers combined for 2012 are now available in the GLOBOCAN series of the International Agency for Research on Cancer. We review the sources and methods used in compiling the national cancer incidence and mortality estimates, and briefly describe the key results by cancer site and in 20 large \"areas\" of the world. Overall, there were 14.1 million new cases and 8.2 million deaths in 2012. The most commonly diagnosed cancers were lung (1.82 million), breast (1.67 million), and colorectal (1.36 million); the most common causes of cancer death were lung cancer (1.6 million deaths), liver cancer (745,000 deaths), and stomach cancer (723,000 deaths)."
},
{
"pmid": "23322760",
"title": "Blind color decomposition of histological images.",
"abstract": "Cancer diagnosis is based on visual examination under a microscope of tissue sections from biopsies. But whereas pathologists rely on tissue stains to identify morphological features, automated tissue recognition using color is fraught with problems that stem from image intensity variations due to variations in tissue preparation, variations in spectral signatures of the stained tissue, spectral overlap and spatial aliasing in acquisition, and noise at image acquisition. We present a blind method for color decomposition of histological images. The method decouples intensity from color information and bases the decomposition only on the tissue absorption characteristics of each stain. By modeling the charge-coupled device sensor noise, we improve the method accuracy. We extend current linear decomposition methods to include stained tissues where one spectral signature cannot be separated from all combinations of the other tissues' spectral signatures. We demonstrate both qualitatively and quantitatively that our method results in more accurate decompositions than methods based on non-negative matrix factorization and independent component analysis. The result is one density map for each stained tissue type that classifies portions of pixels into the correct stained tissue allowing accurate identification of morphological features that may be linked to cancer."
},
{
"pmid": "23739794",
"title": "Prostate histopathology: learning tissue component histograms for cancer detection and classification.",
"abstract": "Radical prostatectomy is performed on approximately 40% of men with organ-confined prostate cancer. Pathologic information obtained from the prostatectomy specimen provides important prognostic information and guides recommendations for adjuvant treatment. The current pathology protocol in most centers involves primarily qualitative assessment. In this paper, we describe and evaluate our system for automatic prostate cancer detection and grading on hematoxylin & eosin-stained tissue images. Our approach is intended to address the dual challenges of large data size and the need for high-level tissue information about the locations and grades of tumors. Our system uses two stages of AdaBoost-based classification. The first provides high-level tissue component labeling of a superpixel image partitioning. The second uses the tissue component labeling to provide a classification of cancer versus noncancer, and low-grade versus high-grade cancer. We evaluated our system using 991 sub-images extracted from digital pathology images of 50 whole-mount tissue sections from 15 prostatectomy patients. We measured accuracies of 90% and 85% for the cancer versus noncancer and high-grade versus low-grade classification tasks, respectively. This system represents a first step toward automated cancer quantification on prostate digital histopathology imaging, which could pave the way for more accurately informed postprostatectomy patient care."
},
{
"pmid": "20671804",
"title": "Histopathological image analysis: a review.",
"abstract": "Over the past decade, dramatic increases in computational power and improvement in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state of the art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology related problems being pursued in the United States and Europe."
},
{
"pmid": "9227344",
"title": "Automated location of dysplastic fields in colorectal histology using image texture analysis.",
"abstract": "Automation in histopathology is an attractive concept and recent advances in the application of computerized expert systems and machine vision have made automated image analysis of histological images possible. Systems capable of complete automation not only require the ability to segment tissue features and grade histological abnormalities, but, must also be capable of locating diagnostically useful areas from within complex histological scenes. This is the first stage of the diagnostic process. The object of this study was to develop criteria for the automatic identification of focal areas of colorectal dysplasia from a background of histologically normal tissue. Fields of view representing normal colorectal mucosa (n = 120) and dysplastic mucosa (n = 120) were digitally captured and subjected to image texture analysis. Two features were selected as being the most important in the discrimination of normal and adenomatous colorectal mucosa. The first was a feature of the co-occurrence matrix and the second was the number of low optical density pixels in the image. A linear classification rule defined using these two features was capable of correctly classifying 86 per cent of a series of training images into their correct groups. In addition, large histological scenes were digitally captured, split into their component images, analysed according to texture, and classified as normal or abnormal using the previously defined classification rule. Maps of the histological scenes were constructed and in most cases, dysplastic colorectal mucosa was correctly identified on the basis of image texture: 83 per cent of test images were correctly classified. This study demonstrates that abnormalities in low-power tissue morphology can be identified using quantitative image analysis. The identification of diagnostically useful fields advances the potential of automated systems in histopathology: these regions could than be scrutinized at high power using knowledge-guided image segmentation for disease grading. Systems of this kind have the potential to provide objectivity, unbiased sampling, and valuable diagnostic decision support."
},
{
"pmid": "21383936",
"title": "Spatially Invariant Vector Quantization: A pattern matching algorithm for multiple classes of image subject matter including pathology.",
"abstract": "INTRODUCTION\nHISTORICALLY, EFFECTIVE CLINICAL UTILIZATION OF IMAGE ANALYSIS AND PATTERN RECOGNITION ALGORITHMS IN PATHOLOGY HAS BEEN HAMPERED BY TWO CRITICAL LIMITATIONS: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise.\n\n\nRESULTS\nIn this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings.\n\n\nCONCLUSION\nWith the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their use into clinical workflow, as a turnkey solution. We anticipate that SIVQ, and other related class-independent pattern recognition algorithms, will become part of the overall armamentarium of digital image analysis approaches that are immediately available to practicing pathologists, without the need for the immediate availability of an image analysis expert."
},
{
"pmid": "12814236",
"title": "Multiwavelet grading of pathological images of prostate.",
"abstract": "Histological grading of pathological images is used to determine level of malignancy of cancerous tissues. This is a very important task in prostate cancer prognosis, since it is used for treatment planning. If infection of cancer is not rejected by non-invasive diagnostic techniques like magnetic resonance imaging, computed tomography scan, and ultrasound, then biopsy specimens of tissue are tested. For prostate, biopsied tissue is stained by hematoxyline and eosine method and viewed by pathologists under a microscope to determine its histological grade. Human grading is very subjective due to interobserver and intraobserver variations and in some cases difficult and time-consuming. Thus, an automatic and repeatable technique is needed for grading. Gleason grading system is the most common method for histological grading of prostate tissue samples. According to this system, each cancerous specimen is assigned one of five grades. Although some automatic systems have been developed for analysis of pathological images, Gleason grading has not yet been automated; the goal of this research is to automate it. To this end, we calculate energy and entropy features of multiwavelet coefficients of the image. Then, we select most discriminative features by simulated annealing and use a k-nearest neighbor classifier to classify each image to appropriate grade (class). The leaving-one-out technique is used for error rate estimation. We also obtain the results using features extracted by wavelet packets and co-occurrence matrices and compare them with the multiwavelet method. Experimental results show the superiority of the multiwavelet transforms compared with other techniques. For multiwavelets, critically sampled preprocessing outperforms repeated-row preprocessing and has less sensitivity to noise for second level of decomposition. The first level of decomposition is very sensitive to noise and, thus, should not be used for feature extraction. The best multiwavelet method grades prostate pathological images correctly 97% of the time."
},
{
"pmid": "27373749",
"title": "Stain Normalization using Sparse AutoEncoders (StaNoSA): Application to digital pathology.",
"abstract": "Digital histopathology slides have many sources of variance, and while pathologists typically do not struggle with them, computer aided diagnostic algorithms can perform erratically. This manuscript presents Stain Normalization using Sparse AutoEncoders (StaNoSA) for use in standardizing the color distributions of a test image to that of a single template image. We show how sparse autoencoders can be leveraged to partition images into tissue sub-types, so that color standardization for each can be performed independently. StaNoSA was validated on three experiments and compared against five other color standardization approaches and shown to have either comparable or superior results."
},
{
"pmid": "24132290",
"title": "Mutational landscape and significance across 12 major cancer types.",
"abstract": "The Cancer Genome Atlas (TCGA) has used the latest sequencing and analysis methods to identify somatic variants across thousands of tumours. Here we present data and analytical results for point mutations and small insertions/deletions from 3,281 tumours across 12 tumour types as part of the TCGA Pan-Cancer effort. We illustrate the distributions of mutation frequencies, types and contexts across tumour types, and establish their links to tissues of origin, environmental/carcinogen influences, and DNA repair defects. Using the integrated data sets, we identified 127 significantly mutated genes from well-known (for example, mitogen-activated protein kinase, phosphatidylinositol-3-OH kinase, Wnt/β-catenin and receptor tyrosine kinase signalling pathways, and cell cycle control) and emerging (for example, histone, histone modification, splicing, metabolism and proteolysis) cellular processes in cancer. The average number of mutations in these significantly mutated genes varies across tumour types; most tumours have two to six, indicating that the number of driver mutations required during oncogenesis is relatively small. Mutations in transcriptional factors/regulators show tissue specificity, whereas histone modifiers are often mutated across several cancer types. Clinical association analysis identifies genes having a significant effect on survival, and investigations of mutations with respect to clonal/subclonal architecture delineate their temporal orders during tumorigenesis. Taken together, these results lay the groundwork for developing new diagnostics and individualizing cancer treatment."
},
{
"pmid": "24845283",
"title": "A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution.",
"abstract": "Histopathology diagnosis is based on visual examination of the morphology of histological sections under a microscope. With the increasing popularity of digital slide scanners, decision support systems based on the analysis of digital pathology images are in high demand. However, computerized decision support systems are fraught with problems that stem from color variations in tissue appearance due to variation in tissue preparation, variation in stain reactivity from different manufacturers/batches, user or protocol variation, and the use of scanners from different manufacturers. In this paper, we present a novel approach to stain normalization in histopathology images. The method is based on nonlinear mapping of a source image to a target image using a representation derived from color deconvolution. Color deconvolution is a method to obtain stain concentration values when the stain matrix, describing how the color is affected by the stain concentration, is given. Rather than relying on standard stain matrices, which may be inappropriate for a given image, we propose the use of a color-based classifier that incorporates a novel stain color descriptor to calculate image-specific stain matrix. In order to demonstrate the efficacy of the proposed stain matrix estimation and stain normalization methods, they are applied to the problem of tumor segmentation in breast histopathology images. The experimental results suggest that the paradigm of color normalization, as a preprocessing step, can significantly help histological image analysis algorithms to demonstrate stable performance which is insensitive to imaging conditions in general and scanner variations in particular."
},
{
"pmid": "20703647",
"title": "Statistical analysis of textural features for improved classification of oral histopathological images.",
"abstract": "The objective of this paper is to provide an improved technique, which can assist oncopathologists in correct screening of oral precancerous conditions specially oral submucous fibrosis (OSF) with significant accuracy on the basis of collagen fibres in the sub-epithelial connective tissue. The proposed scheme is composed of collagen fibres segmentation, its textural feature extraction and selection, screening perfomance enhancement under Gaussian transformation and finally classification. In this study, collagen fibres are segmented on R,G,B color channels using back-probagation neural network from 60 normal and 59 OSF histological images followed by histogram specification for reducing the stain intensity variation. Henceforth, textural features of collgen area are extracted using fractal approaches viz., differential box counting and brownian motion curve . Feature selection is done using Kullback-Leibler (KL) divergence criterion and the screening performance is evaluated based on various statistical tests to conform Gaussian nature. Here, the screening performance is enhanced under Gaussian transformation of the non-Gaussian features using hybrid distribution. Moreover, the routine screening is designed based on two statistical classifiers viz., Bayesian classification and support vector machines (SVM) to classify normal and OSF. It is observed that SVM with linear kernel function provides better classification accuracy (91.64%) as compared to Bayesian classifier. The addition of fractal features of collagen under Gaussian transformation improves Bayesian classifier's performance from 80.69% to 90.75%. Results are here studied and discussed."
},
{
"pmid": "25706507",
"title": "A Complete Color Normalization Approach to Histopathology Images Using Color Cues Computed From Saturation-Weighted Statistics.",
"abstract": "GOAL\nIn digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging condition. Method : Different from existing normalization methods that either address partial cause of color variation or lump them together, our method identifies causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation.\n\n\nRESULTS\nAs the saturation-weighted statistics proposed in this study generates stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors.\n\n\nCONCLUSION\nExtensive experimentation suggests that our approach outperforms state-of-the-art normalization methods as the proposed method is the only approach that succeeds to preserve histological information after normalization.\n\n\nSIGNIFICANCE\nThe proposed color normalization solution would be useful to mitigate effects of color variation in pathology images on subsequent quantitative analysis."
},
{
"pmid": "27212078",
"title": "Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis.",
"abstract": "Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce 'deep learning' as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that 'deep learning' holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging."
},
{
"pmid": "16761842",
"title": "Supervised learning-based cell image segmentation for p53 immunohistochemistry.",
"abstract": "In this paper, we present two new algorithms for cell image segmentation. First, we demonstrate that pixel classification-based color image segmentation in color space is equivalent to performing segmentation on grayscale image through thresholding. Based on this result, we develop a supervised learning-based two-step procedure for color cell image segmentation, where color image is first mapped to grayscale via a transform learned through supervised learning, thresholding is then performed on the grayscale image to segment objects out of background. Experimental results show that the supervised learning-based two-step procedure achieved a boundary disagreement (mean absolute distance) of 0.85 while the disagreement produced by the pixel classification-based color image segmentation method is 3.59. Second, we develop a new marker detection algorithm for watershed-based separation of overlapping or touching cells. The merit of the new algorithm is that it employs both photometric and shape information and combines the two naturally in the framework of pattern classification to provide more reliable markers. Extensive experiments show that the new marker detection algorithm achieved 0.4% and 0.2% over-segmentation and under-segmentation, respectively, while reconstruction-based method produced 4.4% and 1.1% over-segmentation and under-segmentation, respectively."
},
{
"pmid": "21118775",
"title": "A variational model for histogram transfer of color images.",
"abstract": "In this paper, we propose a variational formulation for histogram transfer of two or more color images. We study an energy functional composed by three terms: one tends to approach the cumulative histograms of the transformed images, the other two tend to maintain the colors and geometry of the original images. By minimizing this energy, we obtain an algorithm that balances equalization and the conservation of features of the original images. As a result, they evolve while approaching an intermediate histogram between them. This intermediate histogram does not need to be specified in advance, but it is a natural result of the model. Finally, we provide experiments showing that the proposed method compares well with the state of the art."
},
{
"pmid": "9497852",
"title": "A new method for segmentation of colour images applied to immunohistochemically stained cell nuclei.",
"abstract": "A new method for segmenting images of immunohistochemically stained cell nuclei is presented. The aim is to distinguish between cell nuclei with a positive staining reaction and other cell nuclei, and to make it possible to quantify the reaction. First, a new supervised algorithm for creating a pixel classifier is applied to an image that is typical for the sample. The training phase of the classifier is very user friendly since only a few typical pixels for each class need to be selected. The classifier is robust in that it is non-parametric and has a built-in metric that adapts to the colour space. After the training the classifier can be applied to all images from the same staining session. Then, all pixels classified as belonging to nuclei of cells are grouped into individual nuclei through a watershed segmentation and connected component labelling algorithm. This algorithm also separates touching nuclei. Finally, the nuclei are classified according to their fraction of positive pixels."
},
{
"pmid": "30096632",
"title": "A study about color normalization methods for histopathology images.",
"abstract": "Histopathology images are used for the diagnosis of the cancerous disease by the examination of tissue with the help of Whole Slide Imaging (WSI) scanner. A decision support system works well by the analysis of the histopathology images but a lot of problems arise in its decision. Color variation in the histopathology images is occurring due to use of the different scanner, use of various equipments, different stain coloring and reactivity from a different manufacturer. In this paper, detailed study and performance evaluation of color normalization methods on histopathology image datasets are presented. Color normalization of the source image by transferring the mean color of the target image in the source image and also to separate stain present in the source image. Stain separation and color normalization of the histopathology images can be helped for both pathology and computerized decision support system. Quality performances of different color normalization methods are evaluated and compared in terms of quaternion structure similarity index matrix (QSSIM), structure similarity index matrix (SSIM) and Pearson correlation coefficient (PCC) on various histopathology image datasets. Our experimental analysis suggests that structure-preserving color normalization (SPCN) provides better qualitatively and qualitatively results in comparison to the all the presented methods for breast and colorectal cancer histopathology image datasets."
},
{
"pmid": "25462637",
"title": "Deep learning in neural networks: an overview.",
"abstract": "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks."
},
{
"pmid": "17948727",
"title": "Multifeature prostate cancer diagnosis and Gleason grading of histological images.",
"abstract": "We present a study of image features for cancer diagnosis and Gleason grading of the histological images of prostate. In diagnosis, the tissue image is classified into the tumor and nontumor classes. In Gleason grading, which characterizes tumor aggressiveness, the image is classified as containing a low- or high-grade tumor. The image sets used in this paper consisted of 367 and 268 color images for the diagnosis and Gleason grading problems, respectively, and were captured from representative areas of hematoxylin and eosin-stained tissue retrieved from tissue microarray cores or whole sections. The primary contribution of this paper is to aggregate color, texture, and morphometric cues at the global and histological object levels for classification. Features representing different visual cues were combined in a supervised learning framework. We compared the performance of Gaussian, k-nearest neighbor, and support vector machine classifiers together with the sequential forward feature selection algorithm. On diagnosis, using a five-fold cross-validation estimate, an accuracy of 96.7% was obtained. On Gleason grading, the achieved accuracy of classification into low- and high-grade classes was 81.0%."
},
{
"pmid": "26745946",
"title": "A method for normalizing pathology images to improve feature extraction for quantitative pathology.",
"abstract": "PURPOSE\nWith the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides.\n\n\nMETHODS\nTo overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets.\n\n\nRESULTS\nThe ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature.\n\n\nCONCLUSIONS\nICHE may be a useful preprocessing step a digital pathology image processing pipeline."
},
{
"pmid": "27164577",
"title": "Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images.",
"abstract": "Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques that are used for natural images fail to utilize structural properties of stained tissue samples and produce undesirable color distortions. The stain concentration cannot be negative. Tissue samples are stained with only a few stains and most tissue regions are characterized by at most one effective stain. We model these physical phenomena that define the tissue structure by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method when compared to other alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis."
},
{
"pmid": "28355298",
"title": "An alternative reference space for H&E color normalization.",
"abstract": "Digital imaging of H&E stained slides has enabled the application of image processing to support pathology workflows. Potential applications include computer-aided diagnostics, advanced quantification tools, and innovative visualization platforms. However, the intrinsic variability of biological tissue and the vast differences in tissue preparation protocols often lead to significant image variability that can hamper the effectiveness of these computational tools. We developed an alternative representation for H&E images that operates within a space that is more amenable to many of these image processing tools. The algorithm to derive this representation operates by exploiting the correlation between color and the spatial properties of the biological structures present in most H&E images. In this way, images are transformed into a structure-centric space in which images are segregated into tissue structure channels. We demonstrate that this framework can be extended to achieve color normalization, effectively reducing inter-slide variability."
}
] |
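Several of the reference abstracts above describe stain separation and color normalization for H&E histopathology (for example, the color deconvolution, SPCN, and ICHE entries). The sketch below illustrates the generic idea those methods share: map RGB to optical density, estimate per-stain concentration maps against a stain color matrix, and re-color the concentrations with a target stain basis. It is only a minimal illustration, not the implementation of any cited method; the fixed Ruifrok-Johnston stain vectors and the plain least-squares fit are assumptions made for demonstration, whereas the cited papers estimate image-specific stain matrices (e.g., via classifiers or sparse non-negative factorization).

```python
import numpy as np

# Commonly quoted H&E stain vectors in optical-density (OD) space (Ruifrok & Johnston).
# These fixed values are an illustrative assumption; the methods cited above
# estimate an image-specific stain matrix instead.
RUIFROK_HE = np.array([
    [0.65, 0.70, 0.29],  # hematoxylin
    [0.07, 0.99, 0.11],  # eosin
])

def rgb_to_od(rgb, background=255.0):
    """Convert 8-bit RGB to optical density via the Beer-Lambert relation."""
    rgb = np.clip(rgb.astype(np.float64), 1.0, background)
    return -np.log10(rgb / background)

def od_to_rgb(od, background=255.0):
    """Map optical density back to 8-bit RGB."""
    return np.clip(background * 10.0 ** (-od), 0, 255).astype(np.uint8)

def stain_concentrations(rgb, stain_matrix=RUIFROK_HE):
    """Least-squares estimate of per-pixel stain concentrations in OD space."""
    h, w, _ = rgb.shape
    od = rgb_to_od(rgb).reshape(-1, 3)                      # (N, 3)
    conc, *_ = np.linalg.lstsq(stain_matrix.T, od.T, rcond=None)
    return np.maximum(conc.T, 0.0).reshape(h, w, -1)        # non-negative stain maps

def normalize_to_target(rgb, source_stains=RUIFROK_HE, target_stains=RUIFROK_HE):
    """Keep the image's stain concentrations but recombine them with target stain colors."""
    conc = stain_concentrations(rgb, source_stains)
    od = conc.reshape(-1, conc.shape[-1]) @ target_stains   # (N, 3) optical densities
    return od_to_rgb(od.reshape(rgb.shape))
```

In practice, the source and target stain matrices would be estimated from the image being normalized and from a pathologist-preferred reference image, respectively, rather than taken from fixed defaults as in this sketch.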
PLoS Computational Biology | 31083649 | PMC6533009 | 10.1371/journal.pcbi.1007012 | DoGNet: A deep architecture for synapse detection in multiplexed fluorescence images | Neuronal synapses transmit electrochemical signals between cells through the coordinated action of presynaptic vesicles, ion channels, scaffolding and adapter proteins, and membrane receptors. In situ structural characterization of numerous synaptic proteins simultaneously through multiplexed imaging facilitates a bottom-up approach to synapse classification and phenotypic description. Objective automation of efficient and reliable synapse detection within these datasets is essential for the high-throughput investigation of synaptic features. Convolutional neural networks can solve this generalized problem of synapse detection, however, these architectures require large numbers of training examples to optimize their thousands of parameters. We propose DoGNet, a neural network architecture that closes the gap between classical computer vision blob detectors, such as Difference of Gaussians (DoG) filters, and modern convolutional networks. DoGNet is optimized to analyze highly multiplexed microscopy data. Its small number of training parameters allows DoGNet to be trained with few examples, which facilitates its application to new datasets without overfitting. We evaluate the method on multiplexed fluorescence imaging data from both primary mouse neuronal cultures and mouse cortex tissue slices. We show that DoGNet outperforms convolutional networks with a low-to-moderate number of training examples, and DoGNet is efficiently transferred between datasets collected from separate research groups. DoGNet synapse localizations can then be used to guide the segmentation of individual synaptic protein locations and spatial extents, revealing their spatial organization and relative abundances within individual synapses. The source code is publicly available: https://github.com/kulikovv/dognet. | Related workAutomation of synapse detection and large-scale investigation of neuronal organization has seen considerable progress in recent years. Most work has been dedicated to the segmentation of electron microscopy datasets, with modern high-throughput pipelines for automated segmentation and morphological reconstruction of synapses [8–10, 22, 23]. Much of this progress may be credited to deep convolutional networks. Segmentation accuracy of these approaches can be increased by making deeper networks [24], adding dilated/a-trous convolution [25] or using hourglass architectures [8, 26] that include downscaling/upscaling parts with so-called skip connections. ConvNets typically outperform random forest and other classical machine learning approaches that are dependent on hand-crafted features such as those proposed in [27, 28]. At the same time, while it is possible to reduce the number of training examples needed by splitting the segmentation pipeline into several smaller pipelines [10], the challenge of reducing the number of training parameters without sacrificing segmentation accuracy remains. Within the context of neuronal immunofluorescence images, synapses are typically defined by the colocalization of pre- and postsynaptic proteins within puncta that have sizes on the order of the diffraction limit of 250 nm.
One fully automated method using priors, which quantifies synaptic elements and complete synapses based on pre- and postsynaptic labeling plus a dendritic or cell surface marker, was previously proposed and applied successfully [29]. Alternatively, a machine learning approach to synapse detection was proposed in [30, 31], where a support vector machine (SVM) was used to estimate the confidence of a pixel being a synapse, depending on a small number of neighboring pixels. Synapse positions were then computed from these confidence values by evaluating local confidence profiles and comparing them with a minimum confidence value. Finally, in [32], a probabilistic approach to synapse detection on AT volumes was proposed. The principal idea of this approach was to estimate the probability of a pixel being a punctum within each tissue slice, and then to calculate the joint distribution of presynaptic and postsynaptic proteins between neighbouring slices. Our work was mainly inspired by works [32] and [11], which produced state-of-the-art results in synapse detection on fluorescence images. More conventional machine vision techniques have also been applied for synapse detection [6, 11, 12]. These methods aim at detecting regions that differ in brightness compared with neighboring regions. The most common approach for this task is convolution with a Laplacian filter [12]. The Laplacian filter can be computed as the limiting case of the difference between two Gaussian smoothed images. Since convolution with a Gaussian kernel is a linear operation, convolution with the difference of two Gaussian kernels can be used instead of taking the difference between the smoothed images. The usage of Difference of Gaussians for synapse detection was proposed in [11] with manually defined filter parameters. Here, we introduce a new DoGNet architecture that integrates the use of simple DoG filters for blob detection with deep learning, thereby combining the strengths of the preceding published approaches [8, 11, 32]. Our approach offers the ability to capture complex dependencies between synaptic signals in distinct imaging planes, acting as a trainable frequency filter (a minimal sketch of the classical fixed-parameter DoG detector is included after this record's reference list). | [
"10817752",
"14625534",
"21048117",
"21423165",
"17610815",
"24333471",
"17356626",
"25855189",
"25977797",
"26424801",
"22869597",
"28344747",
"22031814",
"23771317",
"22108140",
"20230863",
"28414801",
"24633176",
"26052271",
"24577276",
"23126323"
] | [
{
"pmid": "21048117",
"title": "Changes in prefrontal axons may disrupt the network in autism.",
"abstract": "Neural communication is disrupted in autism by unknown mechanisms. Here, we examined whether in autism there are changes in axons, which are the conduit for neural communication. We investigated single axons and their ultrastructure in the white matter of postmortem human brain tissue below the anterior cingulate cortex (ACC), orbitofrontal cortex (OFC), and lateral prefrontal cortex (LPFC), which are associated with attention, social interactions, and emotions, and have been consistently implicated in the pathology of autism. Area-specific changes below ACC (area 32) included a decrease in the largest axons that communicate over long distances. In addition, below ACC there was overexpression of the growth-associated protein 43 kDa accompanied by excessive number of thin axons that link neighboring areas. In OFC (area 11), axons had decreased myelin thickness. Axon features below LPFC (area 46) appeared to be unaffected, but the altered white matter composition below ACC and OFC changed the relationships among all prefrontal areas examined, and could indirectly affect LPFC function. These findings provide a mechanism for disconnection of long-distance pathways, excessive connections between neighboring areas, and inefficiency in pathways for emotions, and may help explain why individuals with autism do not adequately shift attention, engage in repetitive behavior, and avoid social interactions. These changes below specific prefrontal areas appear to be linked through a cascade of developmental events affecting axon growth and guidance, and suggest targeting the associated signaling pathways for therapeutic interventions in autism."
},
{
"pmid": "21423165",
"title": "Shank3 mutant mice display autistic-like behaviours and striatal dysfunction.",
"abstract": "Autism spectrum disorders (ASDs) comprise a range of disorders that share a core of neurobehavioural deficits characterized by widespread abnormalities in social interactions, deficits in communication as well as restricted interests and repetitive behaviours. The neurological basis and circuitry mechanisms underlying these abnormal behaviours are poorly understood. SHANK3 is a postsynaptic protein, whose disruption at the genetic level is thought to be responsible for the development of 22q13 deletion syndrome (Phelan-McDermid syndrome) and other non-syndromic ASDs. Here we show that mice with Shank3 gene deletions exhibit self-injurious repetitive grooming and deficits in social interaction. Cellular, electrophysiological and biochemical analyses uncovered defects at striatal synapses and cortico-striatal circuits in Shank3 mutant mice. Our findings demonstrate a critical role for SHANK3 in the normal development of neuronal connectivity and establish causality between a disruption in the Shank3 gene and the genesis of autistic-like behaviours in mice."
},
{
"pmid": "17610815",
"title": "Array tomography: a new tool for imaging the molecular architecture and ultrastructure of neural circuits.",
"abstract": "Many biological functions depend critically upon fine details of tissue molecular architecture that have resisted exploration by existing imaging techniques. This is particularly true for nervous system tissues, where information processing function depends on intricate circuit and synaptic architectures. Here, we describe a new imaging method, called array tomography, which combines and extends superlative features of modern optical fluorescence and electron microscopy methods. Based on methods for constructing and repeatedly staining and imaging ordered arrays of ultrathin (50-200 nm), resin-embedded serial sections on glass microscope slides, array tomography allows for quantitative, high-resolution, large-field volumetric imaging of large numbers of antigens, fluorescent proteins, and ultrastructure in individual tissue specimens. Compared to confocal microscopy, array tomography offers the advantage of better spatial resolution, in particular along the z axis, as well as depth-independent immunofluorescent staining. The application of array tomography can reveal important but previously unseen features of brain molecular architecture."
},
{
"pmid": "24333471",
"title": "Evaluation of the effectiveness of Gaussian filtering in distinguishing punctate synaptic signals from background noise during image analysis.",
"abstract": "BACKGROUND\nImages in biomedical imaging research are often affected by non-specific background noise. This poses a serious problem when the noise overlaps with specific signals to be quantified, e.g. for their number and intensity. A simple and effective means of removing background noise is to prepare a filtered image that closely reflects background noise and to subtract it from the original unfiltered image. This approach is in common use, but its effectiveness in identifying and quantifying synaptic puncta has not been characterized in detail.\n\n\nNEW ANALYSIS\nWe report on our assessment of the effectiveness of isolating punctate signals from diffusely distributed background noise using one variant of this approach, \"Difference of Gaussian(s) (DoG)\" which is based on a Gaussian filter.\n\n\nRESULTS\nWe evaluated immunocytochemically stained, cultured mouse hippocampal neurons as an example, and provided the rationale for choosing specific parameter values for individual steps in detecting glutamatergic nerve terminals. The intensity and width of the detected puncta were proportional to those obtained by manual fitting of two-dimensional Gaussian functions to the local information in the original image.\n\n\nCOMPARISON WITH EXISTING METHODS\nDoG was compared with the rolling-ball method, using biological data and numerical simulations. Both methods removed background noise, but differed slightly with respect to their efficiency in discriminating neighboring peaks, as well as their susceptibility to high-frequency noise and variability in object size.\n\n\nCONCLUSIONS\nDoG will be useful in detecting punctate signals, once its characteristics are examined quantitatively by experimenters."
},
{
"pmid": "17356626",
"title": "Gaussian approximations of fluorescence microscope point-spread function models.",
"abstract": "We comprehensively study the least-squares Gaussian approximations of the diffraction-limited 2D-3D paraxial-nonparaxial point-spread functions (PSFs) of the wide field fluorescence microscope (WFFM), the laser scanning confocal microscope (LSCM), and the disk scanning confocal microscope (DSCM). The PSFs are expressed using the Debye integral. Under an L(infinity) constraint imposing peak matching, optimal and near-optimal Gaussian parameters are derived for the PSFs. With an L1 constraint imposing energy conservation, an optimal Gaussian parameter is derived for the 2D paraxial WFFM PSF. We found that (1) the 2D approximations are all very accurate; (2) no accurate Gaussian approximation exists for 3D WFFM PSFs; and (3) with typical pinhole sizes, the 3D approximations are accurate for the DSCM and nearly perfect for the LSCM. All the Gaussian parameters derived in this study are in explicit analytical form, allowing their direct use in practical applications."
},
{
"pmid": "25855189",
"title": "Mapping synapses by conjugate light-electron array tomography.",
"abstract": "Synapses of the mammalian CNS are diverse in size, structure, molecular composition, and function. Synapses in their myriad variations are fundamental to neural circuit development, homeostasis, plasticity, and memory storage. Unfortunately, quantitative analysis and mapping of the brain's heterogeneous synapse populations has been limited by the lack of adequate single-synapse measurement methods. Electron microscopy (EM) is the definitive means to recognize and measure individual synaptic contacts, but EM has only limited abilities to measure the molecular composition of synapses. This report describes conjugate array tomography (AT), a volumetric imaging method that integrates immunofluorescence and EM imaging modalities in voxel-conjugate fashion. We illustrate the use of conjugate AT to advance the proteometric measurement of EM-validated single-synapse analysis in a study of mouse cortex."
},
{
"pmid": "25977797",
"title": "Synaptic molecular imaging in spared and deprived columns of mouse barrel cortex with array tomography.",
"abstract": "A major question in neuroscience is how diverse subsets of synaptic connections in neural circuits are affected by experience dependent plasticity to form the basis for behavioral learning and memory. Differences in protein expression patterns at individual synapses could constitute a key to understanding both synaptic diversity and the effects of plasticity at different synapse populations. Our approach to this question leverages the immunohistochemical multiplexing capability of array tomography (ATomo) and the columnar organization of mouse barrel cortex to create a dataset comprising high resolution volumetric images of spared and deprived cortical whisker barrels stained for over a dozen synaptic molecules each. These dataset has been made available through the Open Connectome Project for interactive online viewing, and may also be downloaded for offline analysis using web, Matlab, and other interfaces."
},
{
"pmid": "26424801",
"title": "Probability-based particle detection that enables threshold-free and robust in vivo single-molecule tracking.",
"abstract": "Single-molecule detection in fluorescence nanoscopy has become a powerful tool in cell biology but can present vexing issues in image analysis, such as limited signal, unspecific background, empirically set thresholds, image filtering, and false-positive detection limiting overall detection efficiency. Here we present a framework in which expert knowledge and parameter tweaking are replaced with a probability-based hypothesis test. Our method delivers robust and threshold-free signal detection with a defined error estimate and improved detection of weaker signals. The probability value has consequences for downstream data analysis, such as weighing a series of detections and corresponding probabilities, Bayesian propagation of probability, or defining metrics in tracking applications. We show that the method outperforms all current approaches, yielding a detection efficiency of >70% and a false-positive detection rate of <5% under conditions down to 17 photons/pixel background and 180 photons/molecule signal, which is beneficial for any kind of photon-limited application. Examples include limited brightness and photostability, phototoxicity in live-cell single-molecule imaging, and use of new labels for nanoscopy. We present simulations, experimental data, and tracking of low-signal mRNAs in yeast cells."
},
{
"pmid": "22869597",
"title": "All three components of the neuronal SNARE complex contribute to secretory vesicle docking.",
"abstract": "Before exocytosis, vesicles must first become docked to the plasma membrane. The SNARE complex was originally hypothesized to mediate both the docking and fusion steps in the secretory pathway, but previous electron microscopy (EM) studies indicated that the vesicular SNARE protein synaptobrevin (syb) was dispensable for docking. In this paper, we studied the function of syb in the docking of large dense-core vesicles (LDCVs) in live PC12 cells using total internal reflection fluorescence microscopy. Cleavage of syb by a clostridial neurotoxin resulted in significant defects in vesicle docking in unfixed cells; these results were confirmed via EM using cells that were prepared using high-pressure freezing. The membrane-distal portion of its SNARE motif was critical for docking, whereas deletion of a membrane-proximal segment had little effect on docking but diminished fusion. Because docking was also inhibited by toxin-mediated cleavage of the target membrane SNAREs syntaxin and SNAP-25, syb might attach LDCVs to the plasma membrane through N-terminal assembly of trans-SNARE pairs."
},
{
"pmid": "28344747",
"title": "Gastric peritoneal carcinomatosis - a retrospective review.",
"abstract": "AIM\nTo characterize patients with gastric peritoneal carcinomatosis (PC) and their typical clinical and treatment course with palliative systemic chemotherapy as the current standard of care.\n\n\nMETHODS\nWe performed a retrospective electronic chart review of all patients with gastric adenocarcinoma with PC diagnosed at initial metastatic presentation between January 2010 and December 2014 in a single tertiary referral centre.\n\n\nRESULTS\nWe studied a total of 271 patients with a median age of 63.8 years and median follow-up duration of 5.1 mo. The majority (n = 217, 80.1%) had the peritoneum as the only site of metastasis at initial presentation. Palliative systemic chemotherapy was eventually planned for 175 (64.6%) of our patients at initial presentation, of which 171 were initiated on it. Choice of first-line regime was in accordance with the National Comprehensive Cancer Network Guidelines for Gastric Cancer Treatment. These patients underwent a median of one line of chemotherapy, completing a median of six cycles in total. Chemotherapy disruption due to unplanned hospitalizations occurred in 114 (66.7%), while cessation of chemotherapy occurred in 157 (91.8%), with 42 cessations primarily attributable to PC-related complications. Patients who had initiation of systemic chemotherapy had a significantly better median overall survival than those who did not (10.9 mo vs 1.6 mo, P < 0.001). Of patients who had initiation of systemic chemotherapy, those who experienced any disruptions to chemotherapy due to unplanned hospitalizations had a significantly worse median overall survival compared to those who did not (8.7 mo vs 14.6 mo, P < 0.001).\n\n\nCONCLUSION\nGastric PC carries a grim prognosis with a clinical course fraught with disease-related complications which may attenuate any survival benefit which palliative systemic chemotherapy may have to offer. As such, investigational use of regional therapies is warranted and required validation in patients with isolated PC to maximize their survival outcomes in the long run."
},
{
"pmid": "22031814",
"title": "Automated detection and segmentation of synaptic contacts in nearly isotropic serial electron microscopy images.",
"abstract": "We describe a protocol for fully automated detection and segmentation of asymmetric, presumed excitatory, synapses in serial electron microscopy images of the adult mammalian cerebral cortex, taken with the focused ion beam, scanning electron microscope (FIB/SEM). The procedure is based on interactive machine learning and only requires a few labeled synapses for training. The statistical learning is performed on geometrical features of 3D neighborhoods of each voxel and can fully exploit the high z-resolution of the data. On a quantitative validation dataset of 111 synapses in 409 images of 1948×1342 pixels with manual annotations by three independent experts the error rate of the algorithm was found to be comparable to that of the experts (0.92 recall at 0.89 precision). Our software offers a convenient interface for labeling the training data and the possibility to visualize and proofread the results in 3D. The source code, the test dataset and the ground truth annotation are freely available on the website http://www.ilastik.org/synapse-detection."
},
{
"pmid": "23771317",
"title": "Learning context cues for synapse segmentation.",
"abstract": "We present a new approach for the automated segmentation of synapses in image stacks acquired by electron microscopy (EM) that relies on image features specifically designed to take spatial context into account. These features are used to train a classifier that can effectively learn cues such as the presence of a nearby post-synaptic region. As a result, our algorithm successfully distinguishes synapses from the numerous other organelles that appear within an EM volume, including those whose local textural properties are relatively similar. Furthermore, as a by-product of the segmentation, our method flawlessly determines synaptic orientation, a crucial element in the interpretation of brain circuits. We evaluate our approach on three different datasets, compare it against the state-of-the-art in synapse segmentation and demonstrate our ability to reliably collect shape, density, and orientation statistics over hundreds of synapses."
},
{
"pmid": "22108140",
"title": "Automated quantification of synapses by fluorescence microscopy.",
"abstract": "The quantification of synapses in neuronal cultures is essential in studies of the molecular mechanisms underlying synaptogenesis and synaptic plasticity. Conventional counting of synapses based on morphological or immunocytochemical criteria is extremely work-intensive. We developed a fully automated method which quantifies synaptic elements and complete synapses based on immunocytochemistry. Pre- and postsynaptic elements are detected by their corresponding fluorescence signals and their proximity to dendrites. Synapses are defined as the combination of a pre- and postsynaptic element within a given distance. The analysis is performed in three dimensions and all parameters required for quantification can be easily adjusted by a graphical user interface. The integrated batch processing enables the analysis of large datasets without any further user interaction and is therefore efficient and timesaving. The potential of this method was demonstrated by an extensive quantification of synapses in neuronal cultures from DIV 7 to DIV 21. The method can be applied to all datasets containing a pre- and postsynaptic labeling plus a dendritic or cell surface marker."
},
{
"pmid": "20230863",
"title": "Automated detection and quantification of fluorescently labeled synapses in murine brain tissue sections for high throughput applications.",
"abstract": "The automated detection and quantification of fluorescently labeled synapses in the brain is a fundamental challenge in neurobiology. Here we have applied a framework, based on machine learning, to detect and quantify synapses in murine hippocampus tissue sections, fluorescently labeled for synaptophysin using a direct and indirect labeling method with FITC as fluorescent dye. In a pixel-wise application of the classifier, small neighborhoods around the image pixels are mapped to confidence values. Synapse positions are computed from these confidence values by evaluating the local confidence profiles and comparing the values with a chosen minimum confidence value, the so called confidence threshold. To avoid time-consuming hand-tuning of the confidence threshold we describe a protocol for deriving the threshold from a small set of images, in which an expert has marked punctuate synaptic fluorescence signals. We can show that it works with high accuracy for fully automated synapse detection in new sample images. The resulting patch-by-patch synapse screening system, referred to as i3S (intelligent synapse screening system), is able to detect several thousand synapses in an area of 768×512 pixels in approx. 20s. The software approach presented in this study provides a reliable basis for high throughput quantification of synapses in neural tissue."
},
{
"pmid": "28414801",
"title": "Probabilistic fluorescence-based synapse detection.",
"abstract": "Deeper exploration of the brain's vast synaptic networks will require new tools for high-throughput structural and molecular profiling of the diverse populations of synapses that compose those networks. Fluorescence microscopy (FM) and electron microscopy (EM) offer complementary advantages and disadvantages for single-synapse analysis. FM combines exquisite molecular discrimination capacities with high speed and low cost, but rigorous discrimination between synaptic and non-synaptic fluorescence signals is challenging. In contrast, EM remains the gold standard for reliable identification of a synapse, but offers only limited molecular discrimination and is slow and costly. To develop and test single-synapse image analysis methods, we have used datasets from conjugate array tomography (cAT), which provides voxel-conjugate FM and EM (annotated) images of the same individual synapses. We report a novel unsupervised probabilistic method for detection of synapses from multiplex FM (muxFM) image data, and evaluate this method both by comparison to EM gold standard annotated data and by examining its capacity to reproduce known important features of cortical synapse distributions. The proposed probabilistic model-based synapse detector accepts molecular-morphological synapse models as user queries, and delivers a volumetric map of the probability that each voxel represents part of a synapse. Taking human annotation of cAT EM data as ground truth, we show that our algorithm detects synapses from muxFM data alone as successfully as human annotators seeing only the muxFM data, and accurately reproduces known architectural features of cortical synapse distributions. This approach opens the door to data-driven discovery of new synapse types and their density. We suggest that our probabilistic synapse detector will also be useful for analysis of standard confocal and super-resolution FM images, where EM cross-validation is not practical."
},
{
"pmid": "24633176",
"title": "High content image analysis identifies novel regulators of synaptogenesis in a high-throughput RNAi screen of primary neurons.",
"abstract": "The formation of synapses, the specialized points of chemical communication between neurons, is a highly regulated developmental process fundamental to establishing normal brain circuitry. Perturbations of synapse formation and function causally contribute to human developmental and degenerative neuropsychiatric disorders, such as Alzheimer's disease, intellectual disability, and autism spectrum disorders. Many genes controlling synaptogenesis have been identified, but lack of facile experimental systems has made systematic discovery of regulators of synaptogenesis challenging. Thus, we created a high-throughput platform to study excitatory and inhibitory synapse development in primary neuronal cultures and used a lentiviral RNA interference library to identify novel regulators of synapse formation. This methodology is broadly applicable for high-throughput screening of genes and drugs that may rescue or improve synaptic dysfunction associated with cognitive function and neurological disorders."
},
{
"pmid": "26052271",
"title": "FIB/SEM technology and high-throughput 3D reconstruction of dendritic spines and synapses in GFP-labeled adult-generated neurons.",
"abstract": "The fine analysis of synaptic contacts is usually performed using transmission electron microscopy (TEM) and its combination with neuronal labeling techniques. However, the complex 3D architecture of neuronal samples calls for their reconstruction from serial sections. Here we show that focused ion beam/scanning electron microscopy (FIB/SEM) allows efficient, complete, and automatic 3D reconstruction of identified dendrites, including their spines and synapses, from GFP/DAB-labeled neurons, with a resolution comparable to that of TEM. We applied this technology to analyze the synaptogenesis of labeled adult-generated granule cells (GCs) in mice. 3D reconstruction of dendritic spines in GCs aged 3-4 and 8-9 weeks revealed two different stages of dendritic spine development and unexpected features of synapse formation, including vacant and branched dendritic spines and presynaptic terminals establishing synapses with up to 10 dendritic spines. Given the reliability, efficiency, and high resolution of FIB/SEM technology and the wide use of DAB in conventional EM, we consider FIB/SEM fundamental for the detailed characterization of identified synaptic contacts in neurons in a high-throughput manner."
},
{
"pmid": "24577276",
"title": "Precisely and accurately localizing single emitters in fluorescence microscopy.",
"abstract": "Methods based on single-molecule localization and photophysics have brought nanoscale imaging with visible light into reach. This has enabled single-particle tracking applications for studying the dynamics of molecules and nanoparticles and contributed to the recent revolution in super-resolution localization microscopy techniques. Crucial to the optimization of such methods are the precision and accuracy with which single fluorophores and nanoparticles can be localized. We present a lucid synthesis of the developments on this localization precision and accuracy and their practical implications in order to guide the increasing number of researchers using single-particle tracking and super-resolution localization microscopy."
},
{
"pmid": "23126323",
"title": "3-D PSF fitting for fluorescence microscopy: implementation and localization application.",
"abstract": "Localization microscopy relies on computationally efficient Gaussian approximations of the point spread function for the calculation of fluorophore positions. Theoretical predictions show that under specific experimental conditions, localization accuracy is significantly improved when the localization is performed using a more realistic model. Here, we show how this can be achieved by considering three-dimensional (3-D) point spread function models for the wide field microscope. We introduce a least-squares point spread function fitting framework that utilizes the Gibson and Lanni model and propose a computationally efficient way for evaluating its derivative functions. We demonstrate the usefulness of the proposed approach with algorithms for particle localization and defocus estimation, both implemented as plugins for ImageJ."
}
] |
JMIR Medical Informatics | 31094361 | PMC6533869 | 10.2196/12596 | Identifying Clinical Terms in Medical Text Using Ontology-Guided Machine Learning | BackgroundAutomatic recognition of medical concepts in unstructured text is an important component of many clinical and research applications, and its accuracy has a large impact on electronic health record analysis. The mining of medical concepts is complicated by the broad use of synonyms and nonstandard terms in medical documents.ObjectiveWe present a machine learning model for concept recognition in large unstructured text, which optimizes the use of ontological structures and can identify previously unobserved synonyms for concepts in the ontology.MethodsWe present a neural dictionary model that can be used to predict if a phrase is synonymous to a concept in a reference ontology. Our model, called the Neural Concept Recognizer (NCR), uses a convolutional neural network to encode input phrases and then rank medical concepts based on the similarity in that space. It uses the hierarchical structure provided by the biomedical ontology as an implicit prior embedding to better learn embedding of various terms. We trained our model on two biomedical ontologies—the Human Phenotype Ontology (HPO) and Systematized Nomenclature of Medicine - Clinical Terms (SNOMED-CT).ResultsWe tested our model trained on HPO by using two different data sets: 288 annotated PubMed abstracts and 39 clinical reports. We achieved 1.7%-3% higher F1-scores than those for our strongest manually engineered rule-based baselines (P=.003). We also tested our model trained on the SNOMED-CT by using 2000 Intensive Care Unit discharge summaries from MIMIC (Multiparameter Intelligent Monitoring in Intensive Care) and achieved 0.9%-1.3% higher F1-scores than those of our baseline. The results of our experiments show high accuracy of our model as well as the value of using the taxonomy structure of the ontology in concept recognition.ConclusionMost popular medical concept recognizers rely on rule-based models, which cannot generalize well to unseen synonyms. In addition, most machine learning methods typically require large corpora of annotated text that cover all classes of concepts, which can be extremely difficult to obtain for biomedical ontologies. Without relying on large-scale labeled training data or requiring any custom training, our model can be efficiently generalized to new synonyms and performs as well or better than state-of-the-art methods custom built for specific ontologies. | Related WorksRecently, several machine learning methods have been used in biomedical NER or concept recognition. Habibi et al [25] trained the LSTM-CRF NER model, introduced by Lample et al [17], to recognize five entity classes of genes/proteins, chemicals, species, cell lines and diseases. They tested their model on several biomedical corpora and achieved better results than previous rule-based methods. In another work, Vani et al [26] introduced a novel RNN–based model and showed its efficiency on predicting ICD-9 codes in clinical notes. Both of these methods require a training corpus annotated with the concepts (loosely annotated in the case of Vani et al [26]).Curating such an annotated corpus is more difficult for typical biomedical ontologies, as the corpus has to cover thousands of classes. 
For example, the HPO contains 11,442 concepts (classes), while, to the best of our knowledge, the only publicly available corpus hand annotated with HPO concepts [14] contains 228 PubMed abstracts with only 607 unique annotations that are not an exact match of a concept name or a synonym. Thus, training a method to recognize the presence of concepts in biomedical text requires a different approach when there is a large number of concepts. The concepts in an ontology often have a hierarchical structure (ie, a taxonomy), which can be utilized in representation learning. Hierarchies have been utilized in several recent machine learning approaches. Deng et al [27] proposed a CRF-based method for image classification that takes into account inheritance and exclusion relations between the labels. Their CRF model transfers knowledge between classes by summing the weights along the hierarchy, leading to improved performance. Vendrov et al [28] introduced the order-embedding penalty to learn representations of hierarchical entities and used it for image caption retrieval tasks. Gaussian embeddings were introduced by Neelakantan et al [29] and learn a high-dimensional Gaussian distribution that can model entailment instead of single point vectors. Most recently, Nickel et al [30] showed that learning representations in a hyperbolic space can improve performance for hierarchical representations.
"27807747",
"28643174",
"26420781",
"25877637",
"27899602",
"27782107",
"26014595",
"21347171",
"11825149",
"20819853",
"29250549",
"23636887",
"9377276",
"27219127",
"28866570",
"28881963",
"25313974",
"14681409"
] | [
{
"pmid": "27807747",
"title": "Text Mining for Precision Medicine: Bringing Structure to EHRs and Biomedical Literature to Understand Genes and Health.",
"abstract": "The key question of precision medicine is whether it is possible to find clinically actionable granularity in diagnosing disease and classifying patient risk. The advent of next-generation sequencing and the widespread adoption of electronic health records (EHRs) have provided clinicians and researchers a wealth of data and made possible the precise characterization of individual patient genotypes and phenotypes. Unstructured text-found in biomedical publications and clinical notes-is an important component of genotype and phenotype knowledge. Publications in the biomedical literature provide essential information for interpreting genetic data. Likewise, clinical notes contain the richest source of phenotype information in EHRs. Text mining can render these texts computationally accessible and support information extraction and hypothesis generation. This chapter reviews the mechanics of text mining in precision medicine and discusses several specific use cases, including database curation for personalized cancer medicine, patient outcome prediction from EHR-derived cohorts, and pharmacogenomic research. Taken as a whole, these use cases demonstrate how text mining enables effective utilization of existing knowledge sources and thus promotes increased value for patients and healthcare systems. Text mining is an indispensable tool for translating genotype-phenotype data into effective clinical care that will undoubtedly play an important role in the eventual realization of precision medicine."
},
{
"pmid": "28643174",
"title": "Natural Language Processing for EHR-Based Pharmacovigilance: A Structured Review.",
"abstract": "The goal of pharmacovigilance is to detect, monitor, characterize and prevent adverse drug events (ADEs) with pharmaceutical products. This article is a comprehensive structured review of recent advances in applying natural language processing (NLP) to electronic health record (EHR) narratives for pharmacovigilance. We review methods of varying complexity and problem focus, summarize the current state-of-the-art in methodology advancement, discuss limitations and point out several promising future directions. The ability to accurately capture both semantic and syntactic structures in clinical narratives becomes increasingly critical to enable efficient and accurate ADE detection. Significant progress has been made in algorithm development and resource construction since 2000. Since 2012, statistical analysis and machine learning methods have gained traction in automation of ADE mining from EHR narratives. Current state-of-the-art methods for NLP-based ADE detection from EHRs show promise regarding their integration into production pharmacovigilance systems. In addition, integrating multifaceted, heterogeneous data sources has shown promise in improving ADE detection and has become increasingly adopted. On the other hand, challenges and opportunities remain across the frontier of NLP application to EHR-based pharmacovigilance, including proper characterization of ADE context, differentiation between off- and on-label drug-use ADEs, recognition of the importance of polypharmacy-induced ADEs, better integration of heterogeneous data sources, creation of shared corpora, and organization of shared-task challenges to advance the state-of-the-art."
},
{
"pmid": "26420781",
"title": "Recent Advances and Emerging Applications in Text and Data Mining for Biomedical Discovery.",
"abstract": "Precision medicine will revolutionize the way we treat and prevent disease. A major barrier to the implementation of precision medicine that clinicians and translational scientists face is understanding the underlying mechanisms of disease. We are starting to address this challenge through automatic approaches for information extraction, representation and analysis. Recent advances in text and data mining have been applied to a broad spectrum of key biomedical questions in genomics, pharmacogenomics and other fields. We present an overview of the fundamental methods for text and data mining, as well as recent advances and emerging applications toward precision medicine."
},
{
"pmid": "25877637",
"title": "DisGeNET: a discovery platform for the dynamical exploration of human diseases and their genes.",
"abstract": "DisGeNET is a comprehensive discovery platform designed to address a variety of questions concerning the genetic underpinning of human diseases. DisGeNET contains over 380,000 associations between >16,000 genes and 13,000 diseases, which makes it one of the largest repositories currently available of its kind. DisGeNET integrates expert-curated databases with text-mined data, covers information on Mendelian and complex diseases, and includes data from animal disease models. It features a score based on the supporting evidence to prioritize gene-disease associations. It is an open access resource available through a web interface, a Cytoscape plugin and as a Semantic Web resource. The web interface supports user-friendly data exploration and navigation. DisGeNET data can also be analysed via the DisGeNET Cytoscape plugin, and enriched with the annotations of other plugins of this popular network analysis software suite. Finally, the information contained in DisGeNET can be expanded and complemented using Semantic Web technologies and linked to a variety of resources already present in the Linked Data cloud. Hence, DisGeNET offers one of the most comprehensive collections of human gene-disease associations and a valuable set of tools for investigating the molecular mechanisms underlying diseases of genetic origin, designed to fulfill the needs of different user profiles, including bioinformaticians, biologists and health-care practitioners. Database URL: http://www.disgenet.org/"
},
{
"pmid": "27899602",
"title": "The Human Phenotype Ontology in 2017.",
"abstract": "Deep phenotyping has been defined as the precise and comprehensive analysis of phenotypic abnormalities in which the individual components of the phenotype are observed and described. The three components of the Human Phenotype Ontology (HPO; www.human-phenotype-ontology.org) project are the phenotype vocabulary, disease-phenotype annotations and the algorithms that operate on these. These components are being used for computational deep phenotyping and precision medicine as well as integration of clinical data into translational research. The HPO is being increasingly adopted as a standard for phenotypic abnormalities by diverse groups such as international rare disease organizations, registries, clinical labs, biomedical resources, and clinical software tools and will thereby contribute toward nascent efforts at global data exchange for identifying disease etiologies. This update article reviews the progress of the HPO project since the debut Nucleic Acids Research database article in 2014, including specific areas of expansion such as common (complex) disease, new algorithms for phenotype driven genomic discovery and diagnostics, integration of cross-species mapping efforts with the Mammalian Phenotype Ontology, an improved quality control pipeline, and the addition of patient-friendly terminology."
},
{
"pmid": "27782107",
"title": "'IRDiRC Recognized Resources': a new mechanism to support scientists to conduct efficient, high-quality research for rare diseases.",
"abstract": "The International Rare Diseases Research Consortium (IRDiRC) has created a quality label, 'IRDiRC Recognized Resources', formerly known as 'IRDiRC Recommended'. It is a peer-reviewed quality indicator process established based on the IRDiRC Policies and Guidelines to designate resources (ie, standards, guidelines, tools, and platforms) designed to accelerate the pace of discoveries and translation into clinical applications for the rare disease (RD) research community. In its first year of implementation, 13 resources successfully applied for this designation, each focused on key areas essential to IRDiRC objectives and to the field of RD research more broadly. These included data sharing for discovery, knowledge organisation and ontologies, networking patient registries, and therapeutic development. 'IRDiRC Recognized Resources' is a mechanism aimed to provide community-approved contributions to RD research higher visibility, and encourage researchers to adopt recognised standards, guidelines, tools, and platforms that facilitate research advances guided by the principles of interoperability and sharing."
},
{
"pmid": "26014595",
"title": "ClinGen--the Clinical Genome Resource.",
"abstract": "On autopsy, a patient is found to have hypertrophic cardiomyopathy. The patient’s family pursues genetic testing that shows a “likely pathogenic” variant for the condition on the basis of a study in an original research publication. Given the dominant inheritance of the condition and the risk of sudden cardiac death, other family members are tested for the genetic variant to determine their risk. Several family members test negative and are told that they are not at risk for hypertrophic cardiomyopathy and sudden cardiac death, and those who test positive are told that they need to be regularly monitored for cardiomyopathy on echocardiography. Five years later, during a routine clinic visit of one of the genotype-positive family members, the cardiologist queries a database for current knowledge on the genetic variant and discovers that the variant is now interpreted as “likely benign” by another laboratory that uses more recently derived population-frequency data. A newly available testing panel for additional genes that are implicated in hypertrophic cardiomyopathy is initiated on an affected family member, and a different variant is found that is determined to be pathogenic. Family members are retested, and one member who previously tested negative is now found to be positive for this new variant. An immediate clinical workup detects evidence of cardiomyopathy, and an intracardiac defibrillator is implanted to reduce the risk of sudden cardiac death."
},
{
"pmid": "21347171",
"title": "The open biomedical annotator.",
"abstract": "The range of publicly available biomedical data is enormous and is expanding fast. This expansion means that researchers now face a hurdle to extracting the data they need from the large numbers of data that are available. Biomedical researchers have turned to ontologies and terminologies to structure and annotate their data with ontology concepts for better search and retrieval. However, this annotation process cannot be easily automated and often requires expert curators. Plus, there is a lack of easy-to-use systems that facilitate the use of ontologies for annotation. This paper presents the Open Biomedical Annotator (OBA), an ontology-based Web service that annotates public datasets with biomedical ontology concepts based on their textual metadata (www.bioontology.org). The biomedical community can use the annotator service to tag datasets automatically with ontology terms (from UMLS and NCBO BioPortal ontologies). Such annotations facilitate translational discoveries by integrating annotated data.[1]."
},
{
"pmid": "11825149",
"title": "Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program.",
"abstract": "The UMLS Metathesaurus, the largest thesaurus in the biomedical domain, provides a representation of biomedical knowledge consisting of concepts classified by semantic type and both hierarchical and non-hierarchical relationships among the concepts. This knowledge has proved useful for many applications including decision support systems, management of patient records, information retrieval (IR) and data mining. Gaining effective access to the knowledge is critical to the success of these applications. This paper describes MetaMap, a program developed at the National Library of Medicine (NLM) to map biomedical text to the Metathesaurus or, equivalently, to discover Metathesaurus concepts referred to in text. MetaMap uses a knowledge intensive approach based on symbolic, natural language processing (NLP) and computational linguistic techniques. Besides being applied for both IR and data mining applications, MetaMap is one of the foundations of NLM's Indexing Initiative System which is being applied to both semi-automatic and fully automatic indexing of the biomedical literature at the library."
},
{
"pmid": "20819853",
"title": "Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications.",
"abstract": "We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at http://www.ohnlp.org. The cTAKES builds on existing open-source technologies-the Unstructured Information Management Architecture framework and OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text."
},
{
"pmid": "29250549",
"title": "Identifying Human Phenotype Terms by Combining Machine Learning and Validation Rules.",
"abstract": "Named-Entity Recognition is commonly used to identify biological entities such as proteins, genes, and chemical compounds found in scientific articles. The Human Phenotype Ontology (HPO) is an ontology that provides a standardized vocabulary for phenotypic abnormalities found in human diseases. This article presents the Identifying Human Phenotypes (IHP) system, tuned to recognize HPO entities in unstructured text. IHP uses Stanford CoreNLP for text processing and applies Conditional Random Fields trained with a rich feature set, which includes linguistic, orthographic, morphologic, lexical, and context features created for the machine learning-based classifier. However, the main novelty of IHP is its validation step based on a set of carefully crafted manual rules, such as the negative connotation analysis, that combined with a dictionary can filter incorrectly identified entities, find missed entities, and combine adjacent entities. The performance of IHP was evaluated using the recently published HPO Gold Standardized Corpora (GSC), where the system Bio-LarK CR obtained the best F-measure of 0.56. IHP achieved an F-measure of 0.65 on the GSC. Due to inconsistencies found in the GSC, an extended version of the GSC was created, adding 881 entities and modifying 4 entities. IHP achieved an F-measure of 0.863 on the new GSC."
},
{
"pmid": "23636887",
"title": "PhenoTips: patient phenotyping software for clinical and research use.",
"abstract": "We have developed PhenoTips: open source software for collecting and analyzing phenotypic information for patients with genetic disorders. Our software combines an easy-to-use interface, compatible with any device that runs a Web browser, with a standardized database back end. The PhenoTips' user interface closely mirrors clinician workflows so as to facilitate the recording of observations made during the patient encounter. Collected data include demographics, medical history, family history, physical and laboratory measurements, physical findings, and additional notes. Phenotypic information is represented using the Human Phenotype Ontology; however, the complexity of the ontology is hidden behind a user interface, which combines simple selection of common phenotypes with error-tolerant, predictive search of the entire ontology. PhenoTips supports accurate diagnosis by analyzing the entered data, then suggesting additional clinical investigations and providing Online Mendelian Inheritance in Man (OMIM) links to likely disorders. By collecting, classifying, and analyzing phenotypic information during the patient encounter, PhenoTips allows for streamlining of clinic workflow, efficient data entry, improved diagnosis, standardization of collected patient phenotypes, and sharing of anonymized patient phenotype data for the study of rare disorders. Our source code and a demo version of PhenoTips are available at http://phenotips.org."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "27219127",
"title": "MIMIC-III, a freely accessible critical care database.",
"abstract": "MIMIC-III ('Medical Information Mart for Intensive Care') is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework."
},
{
"pmid": "28866570",
"title": "PhenoLines: Phenotype Comparison Visualizations for Disease Subtyping via Topic Models.",
"abstract": "PhenoLines is a visual analysis tool for the interpretation of disease subtypes, derived from the application of topic models to clinical data. Topic models enable one to mine cross-sectional patient comorbidity data (e.g., electronic health records) and construct disease subtypes-each with its own temporally evolving prevalence and co-occurrence of phenotypes-without requiring aligned longitudinal phenotype data for all patients. However, the dimensionality of topic models makes interpretation challenging, and de facto analyses provide little intuition regarding phenotype relevance or phenotype interrelationships. PhenoLines enables one to compare phenotype prevalence within and across disease subtype topics, thus supporting subtype characterization, a task that involves identifying a proposed subtype's dominant phenotypes, ages of effect, and clinical validity. We contribute a data transformation workflow that employs the Human Phenotype Ontology to hierarchically organize phenotypes and aggregate the evolving probabilities produced by topic models. We introduce a novel measure of phenotype relevance that can be used to simplify the resulting topology. The design of PhenoLines was motivated by formative interviews with machine learning and clinical experts. We describe the collaborative design process, distill high-level tasks, and report on initial evaluations with machine learning experts and a medical domain expert. These results suggest that PhenoLines demonstrates promising approaches to support the characterization and optimization of topic models."
},
{
"pmid": "28881963",
"title": "Deep learning with word embeddings improves biomedical named entity recognition.",
"abstract": "MOTIVATION\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nRESULTS\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nCONTACT\[email protected]."
},
{
"pmid": "25313974",
"title": "The National Institutes of Health undiagnosed diseases program.",
"abstract": "PURPOSE OF REVIEW\nTo review the approach to undiagnosed patients and results of the National Institutes of Health (NIH) undiagnosed diseases program (UDP), and discuss its benefits to patients, academic medical centers, and the greater scientific community.\n\n\nRECENT FINDINGS\nThe NIH UDP provides comprehensive and collaborative evaluations for patients with objective findings of disease whose diagnoses have long eluded the medical community. Intensive review of patient records, careful phenotyping, and new genomic technologies have resulted in the diagnosis of new and extremely rare conditions, expanded the phenotypes of rare disorders, and determined that symptoms are caused by more than one disorder in a family.\n\n\nSUMMARY\nMany children and adults with complex phenotypes remain undiagnosed despite years of searching. The most common undiagnosed disorders involve a neurologic phenotype. Comprehensive phenotyping and genomic analysis utilizing nuclear families can provide a diagnosis in some cases and provide good 'lead' candidate genes for others. A UDP can be important for patients, academic medical centers, the scientific community, and society."
},
{
"pmid": "14681409",
"title": "The Unified Medical Language System (UMLS): integrating biomedical terminology.",
"abstract": "The Unified Medical Language System (http://umlsks.nlm.nih.gov) is a repository of biomedical vocabularies developed by the US National Library of Medicine. The UMLS integrates over 2 million names for some 900,000 concepts from more than 60 families of biomedical vocabularies, as well as 12 million relations among these concepts. Vocabularies integrated in the UMLS Metathesaurus include the NCBI taxonomy, Gene Ontology, the Medical Subject Headings (MeSH), OMIM and the Digital Anatomist Symbolic Knowledge Base. UMLS concepts are not only inter-related, but may also be linked to external resources such as GenBank. In addition to data, the UMLS includes tools for customizing the Metathesaurus (MetamorphoSys), for generating lexical variants of concept names (lvg) and for extracting UMLS concepts from text (MetaMap). The UMLS knowledge sources are updated quarterly. All vocabularies are available at no fee for research purposes within an institution, but UMLS users are required to sign a license agreement. The UMLS knowledge sources are distributed on CD-ROM and by FTP."
}
] |
Frontiers in Plant Science | 31178875 | PMC6537632 | 10.3389/fpls.2019.00611 | Single-Shot Convolution Neural Networks for Real-Time Fruit Detection Within the Tree | Image/video processing for fruit detection in the tree using hard-coded feature extraction algorithms has shown high accuracy in fruit detection in recent years. While accurate, these approaches even with high-end hardware are still computationally intensive and too slow for real-time systems. This paper details the use of a deep convolutional neural network architecture based on single-stage detectors. Using deep-learning techniques eliminates the need to hard-code specific features for specific fruit shapes, colors, and/or other attributes. This architecture takes the input image and divides it into an AxA grid, where A is a configurable hyper-parameter that defines the fineness of the grid. To each grid cell an image detection and localization algorithm is applied. Each of those cells is responsible for predicting bounding boxes and a confidence score for fruit (apple and pear in the case of this study) detected in that cell. We want this confidence score to be high if a fruit exists in a cell, and to be zero if no fruit is in the cell. More than 100 images of apple and pear trees were taken. Each tree image contained approximately 50 fruits, which in the end resulted in more than 5000 images of apple and pear fruits each. Labeling images for training consisted of manually specifying the bounding boxes for fruits, where (x, y) are the center coordinates of the box and (w, h) are width and height. This architecture showed a fruit detection accuracy of more than 90%. Based on the correlation between the number of visible fruits, the number of fruits detected in one frame, and the real number of fruits on one tree, a model was created to accommodate this error rate. Processing speed is higher than 20 FPS, which is fast enough for any grasping/harvesting robotic arm or other real-time applications.HIGHLIGHTSUsing new convolutional deep learning techniques based on single-shot detectors to detect and count fruits (apple and pear) within the tree canopy. | Background and Related WorkThe positions of the fruits in the tree are widely distributed, highly depending on the tree size, form, and growth. Furthermore, in addition to their position, fruits vary in size, shape, and reflectance due to the natural variation that exists in nature. Currently, no growth models can predict where fruit will occur. The shape of the fruit, one of the most distinctive features, varies between species and even cultivars (e.g., apples, oranges, etc., are cylindrical, but the width/height ratio is not constant with other fruits like pears) (Bac et al., 2014). Reflectance (mostly color and near-infrared) of fruit is a visual cue often used to distinguish fruit from other plant parts, and it still varies strongly (Tao and Zhou, 2017). Color and texture are fundamental characteristics of natural images and play an important role in visual perception. Color is often a distinctive and indicative cue for the presence of fruit. Most fruits when ripe have a distinctive color: red (apples, strawberries, and peaches, etc...), orange (oranges, etc...), or yellow (pears, lemons, peaches, and bananas). This makes them stand out from the green foliage when they are ready to pick (Edan et al., 2009; Barnea et al., 2016).
However, some fruits even after ripening are still green (apple cv Granny Smith even after ripening does not change color), making them indistinguishable from the foliage on the basis of color alone (Edan et al., 2009). The earliest fruit detection systems date back to 1968 (Jiménez et al., 1999). Using different methods and approaches based on photometric information (light reflectance differences between fruits and leaves in the visible or infrared spectrum), these detectors were able to differentiate fruits from other parts of the tree. According to the reviews devoted to fruit detection by Jiménez et al. (1999) and Kapach et al. (2012), there were many problems related to growth habit that had to be considered. The unstructured and uncontrolled outdoor environment also presents many challenges for computer vision systems in agriculture. Light conditions have a major influence on fruit detection: direct sunlight results in saturated spots without color information and in shadows that cause standard segmentation procedures to split the apple surfaces into several fragments. In order to decrease the non-uniform illumination (daytime lighting can be bright, strong, directional, and variable), Payne et al. (2014) described a machine vision technique to detect fruit based on images acquired during night time using artificial light sources. They reported 78% fruit detection and 10% errors, suggesting that artificial lighting at night can provide consistent illumination without strong directional shadows. In a different approach, Kelman and Linker (2014) and Linker and Kelman (2015) presented an algorithm for localizing spherical fruits that have a smooth surface, such as apples, using only shape analysis and in particular convexity. It is shown that in the images used for the study, more than 40% of the apple profiles were non-convex, more than 85% of apple edges had 15% or more non-convex profiles, and more than 45% of apple edges had 50% or more non-convex profiles. Overall, 94% of the apples were correctly detected and 14% of the detections corresponded to false positives. Despite the high accuracy, the model is very specific to apples and would not be extensible to other fruit crops with less spherical shapes. Kapach et al. (2012) explain color highlights and spherical attributes, which tend to appear more often on the smoother, more specular, and typically elliptical regions like fruits, where the surface normal bisects the angle between illumination and viewing directions. A method for estimating the number of apple fruits in the orchard using a thermal camera was developed by Stajnko et al. (2004), while Si et al. (2015) describe the location of apples in trees using stereoscopic vision. The advantage of the active triangulation method is that the range data may be obtained without much computation and the speed is very high for any robotic harvesting application. Jiang et al. (2008) developed a binocular stereo vision system for tomato harvesting in a greenhouse. In this method, a pair of stereo images was obtained by stereo cameras and transformed to gray-scale images. According to the gray correlation, corresponding points of the stereo images were searched, and a depth image was obtained by calculating distances between tomatoes and stereo cameras based on the triangulation principle. A similar method was described by Barnea et al. (2016) using RGB and range data to analyse shape-related features of objects both in the image plane and 3D space. In another work, Nguyen et al.
(2014) developed a multi-phase algorithm to detect and localize apple fruits by combining an RGB-D camera and point cloud processing techniques. Tao and Zhou (2017) developed an automatic apple recognition system based on the fusion of color and 3D features. Until recent years, traditional computer vision approaches have been extensively adopted in the agricultural field. In recent years, with the significant increase in computational power, in particular with special-purpose processors optimized for matrix-like data processing and large amounts of data calculations (e.g., the Graphics Processing Unit – GPU), many DL models and methodologies, CNNs in particular, have achieved breakthroughs never achieved before (LeCun et al., 2015). Sa et al. (2016) developed a model called DeepFruits for fruit detection. Adopting a Faster R-CNN model, the goal was to build an accurate, fast and reliable fruit detection system. The model after training was able to achieve 0.838 precision and recall in the detection of sweet pepper. In addition, they used a multi-modal fusion approach that combines the information from RGB and NIR images. The bottleneck of the model is that, in order to deploy it on a real robot system, the processing performance required is a GPU of 8 GB or more. It is well known that all DL models need a large amount of data to achieve high accuracy (Krizhevsky et al., 2012). In the case of CNNs, the more images of the object of interest, the better the classification/detection performance is. In a model called DeepCount, Rahnemoonfar and Sheppard (2017) developed a CNN architecture based on Inception-ResNet for counting fruits. In order to use less training data, they used a different approach: another model was used to generate synthetic images/data to feed the main model to train on. Those generated images were simply a brownish and greenish color background with red circles drawn on it to simulate the background and tomato plant with fruit on it. They used twenty-four thousand generated images to feed into the model. The model was then tested on real-world images and showed an accuracy of 80%-85%. To better understand the amount of data needed for fruit detection, Bargoti and Underwood (2017) used different data augmentation techniques and transfer learning from other fruits. It was shown that transferring weights between different fruits did not yield significant performance gains, while data augmentation techniques such as flipping and scaling were found to improve performance, resulting in equivalent performance with less than half the number of training images.
"26017442",
"30345940",
"28425947",
"27527168"
] | [
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "30345940",
"title": "Review: Grass-based dairy systems, data and precision technologies.",
"abstract": "Precision technologies and data have had relatively modest impacts in grass-based livestock ruminant production systems compared with other agricultural sectors such as arable. Precision technologies promise increased efficiency, reduced environmental impact, improved animal health, welfare and product quality. The benefits of precision technologies have, however, been relatively slow to be realised on pasture based farms. Though there is significant overlap with indoor systems, implementing technology in grass-based dairying brings unique opportunities and challenges. The large areas animals roam and graze in pasture based systems and the associated connectivity challenges may, in part at least, explain the comparatively lower adoption of such technologies in pasture based systems. With the exception of sensor and Bluetooth-enabled plate metres, there are thus few technologies designed specifically to increase pasture utilisation. Terrestrial and satellite-based spectral analysis of pasture biomass and quality is still in the development phase. One of the key drivers of efficiency in pasture based systems has thus only been marginally impacted by precision technologies. In contrast, technological development in the area of fertility and heat detection has been significant and offers significant potential value to dairy farmers, including those in pasture based systems. A past review of sensors in health management for dairy farms concluded that although the collection of accurate data was generally achieved, the processing, integration and presentation of the resulting information and decision-support applications were inadequate. These technologies' value to farming systems is thus unclear. As a result, it is not certain that farm management is being sufficiently improved to justify widespread adoption of precision technologies currently. We argue for a user need-driven development of technologies and for a focus on how outputs arising from precision technologies and associated decision support applications are delivered to users to maximise their value. Further cost/benefit analysis is required to determine the efficacy of investing in specific precision technologies, potentially taking account of several yet to ascertained farm specific variables."
},
{
"pmid": "28425947",
"title": "Deep Count: Fruit Counting Based on Deep Simulated Learning.",
"abstract": "Recent years have witnessed significant advancement in computer vision research based on deep learning. Success of these tasks largely depends on the availability of a large amount of training samples. Labeling the training samples is an expensive process. In this paper, we present a simulated deep convolutional neural network for yield estimation. Knowing the exact number of fruits, flowers, and trees helps farmers to make better decisions on cultivation practices, plant disease prevention, and the size of harvest labor force. The current practice of yield estimation based on the manual counting of fruits or flowers by workers is a very time consuming and expensive process and it is not practical for big fields. Automatic yield estimation based on robotic agriculture provides a viable solution in this regard. Our network is trained entirely on synthetic data and tested on real data. To capture features on multiple scales, we used a modified version of the Inception-ResNet architecture. Our algorithm counts efficiently even if fruits are under shadow, occluded by foliage, branches, or if there is some degree of overlap amongst fruits. Experimental results show a 91% average test accuracy on real images and 93% on synthetic images."
},
{
"pmid": "27527168",
"title": "DeepFruits: A Fruit Detection System Using Deep Neural Networks.",
"abstract": "This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast and reliable fruit detection system, which is a vital element of an autonomous agricultural robotic platform; it is a key element for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, for the task of fruit detection using imagery obtained from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information. This leads to a novel multi-modal Faster R-CNN model, which achieves state-of-the-art results compared to prior work with the F1 score, which takes into account both precision and recall performances improving from 0 . 807 to 0 . 838 for the detection of sweet pepper. In addition to improved accuracy, this approach is also much quicker to deploy for new fruits, as it requires bounding box annotation rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude quicker to perform). The model is retrained to perform the detection of seven fruits, with the entire process taking four hours to annotate and train the new model per fruit."
}
] |
JMIR mHealth and uHealth | 31099340 | PMC6542252 | 10.2196/13421 | Validation of the Mobile App–Recorded Circadian Rhythm by a Digital Footprint | BackgroundModern smartphone use is pervasive and could be an accessible method of evaluating the circadian rhythm and social jet lag via a mobile app.ObjectiveThis study aimed to validate the app-recorded sleep time with daily self-reports by examining the consistency of total sleep time (TST), as well as the timing of sleep onset and wake time, and to validate the app-recorded circadian rhythm with the corresponding 30-day self-reported midpoint of sleep and the consistency of social jetlag.MethodsThe mobile app, Rhythm, recorded parameters and these parameters were hypothesized to be used to infer a relative long-term pattern of the circadian rhythm. In total, 28 volunteers downloaded the app, and 30 days of automatically recorded data along with self-reported sleep measures were collected.ResultsNo significant difference was noted between app-recorded and self-reported midpoint of sleep time and between app-recorded and self-reported social jetlag. The overall correlation coefficient of app-recorded and self-reported midpoint of sleep time was .87.ConclusionsThe circadian rhythm for 1 month, daily TST, and timing of sleep onset could be automatically calculated by the app and algorithm. | Literature Review/Related WorkThere have been several mobile apps on the market to measure sleep automatically via smartphone sensors. Best Effort Sleep [23] uses a sensor-based inference algorithm that combines smartphone usage patterns along with environmental cues such as light and ambient sound to infer a user’s sleep duration. Similarly, Toss ‘N’ Turn [24] also collects sound, light, movement, screen state, app usage, and battery status to classify sleep state and quality. The systems, iSleep [25] and wakeNsmile [26], use a built-in phone microphone to detect body movement and sounds such as cough and snoring to predict sleep phases. However, such apps typically assess sleep time or sleep phases and do not take the circadian rhythm into consideration. These mobile sensing-based algorithms with less power consumption would benefit from delineating the circadian rhythm from a long consecutive sleep recording. Only a couple of mobile apps compute the sleep time and circadian rhythm solely based on smartphone usage patterns. The pilot study we performed identified proactive smartphone screen-on and screen-off patterns to estimate sleep time and achieved 83% accuracy [19]. UbiComp [27] similarly showed that smartphone usage patterns were able to detect sleep duration as well as symptoms of sleep deprivation. Although these mobile sensing-based apps were validated to assess sleep time, this is the first study to validate both sleep time and circadian rhythm for 30 days with corresponding day-by-day self-reports.
"6440029",
"2105471",
"1613555",
"9674430",
"3152288",
"8128247",
"6094014",
"25662461",
"25535358",
"24157101",
"12568247",
"17936039",
"27922603",
"26612950",
"27078548",
"28195570",
"28146615",
"25935253",
"30611008",
"22578422",
"16687322",
"25193149",
"22163941"
] | [
{
"pmid": "6440029",
"title": "Restoration of circadian behavioural rhythms by gene transfer in Drosophila.",
"abstract": "The per locus of Drosophila melanogaster has a fundamental role in the construction or maintenance of a biological clock. Three classes of per mutations have been identified: per mutants have circadian behavioural rhythms with a 29-h rather than a 24-h period, pers mutants have short-period rhythms of 19 h, and per mutants have no detectable circadian rhythms. Each of these mutations has a corresponding influence on the 55-s periodicity of male courtship song. Long- and short-period circadian rhythm phenotypes can also be obtained by altering the dosage of the wild-type gene: for example, females carrying only one dose of this X-linked gene have circadian rhythms with periodicities about 1 h longer than those carrying two doses. In a previous report, cloned DNA was used to localize several chromosomal rearrangement breakpoints that alter per locus function. The rearrangements all affected a 7-kilobase (kb) interval that encodes a 4.5-kb poly(A)+ RNA. We report here that when a 7.1-kb fragment from a per+ fly, including the sequences encoding the 4.5-kb transcript, is introduced into the genome of a per (arrhythmic) fly by P element-mediated transformation, circadian rhythmicity of behaviour such as eclosion and locomotor activity is restored. The transforming DNA complements per locus deletions and is transcribed, forming a single 4.5-kb poly(A)+ RNA comparable to that produced by wild-type flies."
},
{
"pmid": "2105471",
"title": "Feedback of the Drosophila period gene product on circadian cycling of its messenger RNA levels.",
"abstract": "Mutations in the period (per) gene of Drosophila melanogaster affect both circadian and ultradian rhythms. Levels of per gene product undergo circadian oscillation, and it is now shown that there is an underlying oscillation in the level of per RNA. The observations indicate that the cycling of per-encoded protein could result from per RNA cycling, and that there is a feedback loop through which the activity of per-encoded protein causes cycling of its own RNA."
},
{
"pmid": "1613555",
"title": "The period gene encodes a predominantly nuclear protein in adult Drosophila.",
"abstract": "The period gene of Drosophila melanogaster (per) is important for the generation and maintenance of biological rhythms. Previous light microscopic observations indicated that per is expressed in a variety of tissues and cell types and suggested that the per protein (PER) may be present in different subcellular compartments. To understand how PER influences circadian rhythms, it is important to define its subcellular location, especially in adult flies where inducible promoter experiments suggested that it is most relevant to circadian locomotor activity rhythms. To this end, we report the results of an immunoelectron microscopic analysis of wild-type flies and per-beta-galactosidase (beta-gal) fusion gene transgenics using a polyclonal anti-PER antibody or an anti-beta-gal antibody, respectively. Most of the PER antigen and the fusion gene product were located within nuclei, suggesting that PER acts in that subcellular compartment to affect circadian rhythms. The results are discussed in terms of per's possible biochemical functions."
},
{
"pmid": "9674430",
"title": "double-time is a novel Drosophila clock gene that regulates PERIOD protein accumulation.",
"abstract": "We have isolated three alleles of a novel Drosophila clock gene, double-time (dbt). Short- (dbtS) and long-period (dbtL) mutants alter both behavioral rhythmicity and molecular oscillations from previously identified clock genes, period and timeless. A third allele, dbtP, causes pupal lethality and eliminates circadian cycling of per and tim gene products in larvae. In dbtP mutants, PER proteins constitutively accumulate, remain hypophosphorylated, and no longer depend on TIM proteins for their accumulation. We propose that the normal function of DOUBLETIME protein is to reduce the stability and thus the level of accumulation of monomeric PER proteins. This would promote a delay between per/tim transcription and PER/TIM complex function, which is essential for molecular rhythmicity."
},
{
"pmid": "3152288",
"title": "Antibodies to the period gene product of Drosophila reveal diverse tissue distribution and rhythmic changes in the visual system.",
"abstract": "Polyclonal antibodies were prepared against the period gene product, which influences biological rhythms in D. melanogaster, by using small synthetic peptides from the per sequence as immunogens. The peptide that elicited the best antibody reagent was a small domain near the site of the pers (short period) mutation. Specific immunohistochemical staining was detected in a variety of tissue types: the embryonic CNS; a few cell bodies in the central brain of pupae; these and other cells in the central brain of adults, as well as imaginal cells in the eyes, optic lobes, and the gut. The intensity of per-specific staining in the visual system was found to oscillate, defining a free-running circadian rhythm with a peak in the middle of the night."
},
{
"pmid": "8128247",
"title": "Block in nuclear localization of period protein by a second clock mutation, timeless.",
"abstract": "In wild-type Drosophila, the period protein (PER) is found in nuclei of the eyes and brain, and PER immunoreactivity oscillates with a circadian rhythm. The studies described here indicate that the nuclear localization of PER is blocked by timeless (tim), a second chromosome mutation that, like per null mutations, abolishes circadian rhythms. PER fusion proteins without a conserved domain (PAS) and some flanking sequences are nuclear in tim mutants. This suggests that a segment of PER inhibits nuclear localization in tim mutants. The tim gene may have a role in establishing rhythms of PER abundance and nuclear localization in wild-type flies."
},
{
"pmid": "6094014",
"title": "P-element transformation with period locus DNA restores rhythmicity to mutant, arrhythmic Drosophila melanogaster.",
"abstract": "Mutations at the period (per) locus of Drosophila melanogaster disrupt several biological rhythms. Molecular cloning of DNA sequences encompassing the per+ locus has allowed germ-line transformation experiments to be carried out. Certain subsegments of the per region, transduced into the genome of arrhythmic pero flies, restore rhythmicity in circadian locomotor behavior and the male's courtship song."
},
{
"pmid": "25662461",
"title": "Identification of small-molecule modulators of the circadian clock.",
"abstract": "Chemical biology or chemical genetics has emerged as an interdisciplinary research area applying chemistry to understand biological systems. The development of combinatorial chemistry and high-throughput screening technologies has enabled large-scale investigation of the biological activities of diverse small molecules to discover useful chemical probes. This approach is applicable to the analysis of the circadian clock mechanisms through cell-based assays to monitor circadian rhythms using luciferase reporter genes. We and others have established cell-based high-throughput circadian assays and have identified a variety of novel small-molecule modulators of the circadian clock by phenotype-based screening of hundreds of thousands of compounds. The results demonstrated the effectiveness of chemical biology approaches in clock research field. This technique will become more and more common with propagation of high-throughput screening facilities. This chapter describes assay development, screening setups, and their optimization for successful screening campaigns."
},
{
"pmid": "25535358",
"title": "Evening use of light-emitting eReaders negatively affects sleep, circadian timing, and next-morning alertness.",
"abstract": "In the past 50 y, there has been a decline in average sleep duration and quality, with adverse consequences on general health. A representative survey of 1,508 American adults recently revealed that 90% of Americans used some type of electronics at least a few nights per week within 1 h before bedtime. Mounting evidence from countries around the world shows the negative impact of such technology use on sleep. This negative impact on sleep may be due to the short-wavelength-enriched light emitted by these electronic devices, given that artificial-light exposure has been shown experimentally to produce alerting effects, suppress melatonin, and phase-shift the biological clock. A few reports have shown that these devices suppress melatonin levels, but little is known about the effects on circadian phase or the following sleep episode, exposing a substantial gap in our knowledge of how this increasingly popular technology affects sleep. Here we compare the biological effects of reading an electronic book on a light-emitting device (LE-eBook) with reading a printed book in the hours before bedtime. Participants reading an LE-eBook took longer to fall asleep and had reduced evening sleepiness, reduced melatonin secretion, later timing of their circadian clock, and reduced next-morning alertness than when reading a printed book. These results demonstrate that evening exposure to an LE-eBook phase-delays the circadian clock, acutely suppresses melatonin, and has important implications for understanding the impact of such technologies on sleep, performance, health, and safety."
},
{
"pmid": "24157101",
"title": "Association between morningness-eveningness and the severity of compulsive Internet use: the moderating role of gender and parenting style.",
"abstract": "BACKGROUND\nEveningness and Internet addiction are major concerns in adolescence and young adulthood. We investigated the relationship between morningness-eveningness and compulsive Internet use in young adults and explored the moderating effects of perceived parenting styles and family support on such relationships.\n\n\nMETHODS\nThe participants consisted of 2731 incoming college students (men, 52.4%; mean age, 19.4±3.6years) from a National University in Taiwan. Each participant completed the questionnaires, which included the Morningness-Eveningness Scale (MES), the Yale-Brown Obsessive Compulsive Scale modified for Internet use (YBOCS-IU), the Parental Bonding Instrument for parenting style, the Family Adaptation, Partnership, Growth, Affection, and Resolve questionnaire (APGAR) for perceived family support, and the Adult Self-Report Inventory-4 (ASRI-4) for psychopathology. The morning (n=459), intermediate (n=1878), and evening (n=394) groups were operationally defined by the MES t scores.\n\n\nRESULTS\nThe results showed that eveningness was associated with greater weekend sleep compensation, increased compulsive Internet use, more anxiety, poorer parenting styles, and less family support; additionally, the most associated variables for increased compulsive Internet use were the tendency of eveningness, male gender, more anxiety symptoms, less maternal affection/care, and a lower level of perceived family support. The negative association between the morning type and compulsive Internet use severity escalated with increased maternal affection/care and decreased with increased perceived family support. The positive association between the evening type and compulsive Internet use severity declined with increased maternal protection. However, the father's parenting style did not influence the relationship between morningness-eveningness and compulsive Internet use severity.\n\n\nCONCLUSIONS\nOur findings imply that sleep schedule and the parental and family process should be part of specific measures for prevention and intervention of compulsive Internet use."
},
{
"pmid": "12568247",
"title": "Life between clocks: daily temporal patterns of human chronotypes.",
"abstract": "Human behavior shows large interindividual variation in temporal organization. Extreme \"larks\" wake up when extreme \"owls\" fall asleep. These chronotypes are attributed to differences in the circadian clock, and in animals, the genetic basis of similar phenotypic differences is well established. To better understand the genetic basis of temporal organization in humans, the authors developed a questionnaire to document individual sleep times, self-reported light exposure, and self-assessed chronotype, considering work and free days separately. This report summarizes the results of 500 questionnaires completed in a pilot study individual sleep times show large differences between work and free days, except for extreme early types. During the workweek, late chronotypes accumulate considerable sleep debt, for which they compensate on free days by lengthening their sleep by several hours. For all chronotypes, the amount of time spent outdoors in broad daylight significantly affects the timing of sleep: Increased self-reported light exposure advances sleep. The timing of self-selected sleep is multifactorial, including genetic disposition, sleep debt accumulated on workdays, and light exposure. Thus, accurate assessment of genetic chronotypes has to incorporate all of these parameters. The dependence of human chronotype on light, that is, on the amplitude of the light:dark signal, follows the known characteristics of circadian systems in all other experimental organisms. Our results predict that the timing of sleep has changed during industrialization and that a majority of humans are sleep deprived during the workweek. The implications are far ranging concerning learning, memory, vigilance, performance, and quality of life."
},
{
"pmid": "17936039",
"title": "Epidemiology of the human circadian clock.",
"abstract": "Humans show large inter-individual differences in organising their behaviour within the 24-h day-this is most obvious in their preferred timing of sleep and wakefulness. Sleep and wake times show a near-Gaussian distribution in a given population, with extreme early types waking up when extreme late types fall asleep. This distribution is predominantly based on differences in an individuals' circadian clock. The relationship between the circadian system and different \"chronotypes\" is formally and genetically well established in experimental studies in organisms ranging from unicells to mammals. To investigate the epidemiology of the human circadian clock, we developed a simple questionnaire (Munich ChronoType Questionnaire, MCTQ) to assess chronotype. So far, more than 55,000 people have completed the MCTQ, which has been validated with respect to the Horne-Østberg morningness-eveningness questionnaire (MEQ), objective measures of activity and rest (sleep-logs and actimetry), and physiological parameters. As a result of this large survey, we established an algorithm which optimises chronotype assessment by incorporating the information on timing of sleep and wakefulness for both work and free days. The timing and duration of sleep are generally independent. However, when the two are analysed separately for work and free days, sleep duration strongly depends on chronotype. In addition, chronotype is both age- and sex-dependent."
},
{
"pmid": "27922603",
"title": "Digital footprints: facilitating large-scale environmental psychiatric research in naturalistic settings through data from everyday technologies.",
"abstract": "Digital footprints, the automatically accumulated by-products of our technology-saturated lives, offer an exciting opportunity for psychiatric research. The commercial sector has already embraced the electronic trails of customers as an enabling tool for guiding consumer behaviour, and analogous efforts are ongoing to monitor and improve the mental health of psychiatric patients. The untargeted collection of digital footprints that may or may not be health orientated comprises a large untapped information resource for epidemiological scale research into psychiatric disorders. Real-time monitoring of mood, sleep and physical and social activity in a substantial portion of the affected population in a naturalistic setting is unprecedented in psychiatry. We propose that digital footprints can provide these measurements from real world setting unobtrusively and in a longitudinal fashion. In this perspective article, we outline the concept of digital footprints and the services and devices that create them, and present examples where digital footprints have been successfully used in research. We then critically discuss the opportunities and fundamental challenges associated digital footprints in psychiatric research, such as collecting data from different sources, analysis, ethical and research design challenges."
},
{
"pmid": "26612950",
"title": "Predicting poverty and wealth from mobile phone metadata.",
"abstract": "Accurate and timely estimates of population characteristics are a critical input to social and economic research and policy. In industrialized economies, novel sources of data are enabling new approaches to demographic profiling, but in developing countries, fewer sources of big data exist. We show that an individual's past history of mobile phone use can be used to infer his or her socioeconomic status. Furthermore, we demonstrate that the predicted attributes of millions of individuals can, in turn, accurately reconstruct the distribution of wealth of an entire nation or to infer the asset distribution of microregions composed of just a few households. In resource-constrained environments where censuses and household surveys are rare, this approach creates an option for gathering localized and timely information at a fraction of the cost of traditional methods."
},
{
"pmid": "28195570",
"title": "To use or not to use? Compulsive behavior and its role in smartphone addiction.",
"abstract": "Global smartphone penetration has led to unprecedented addictive behaviors. To develop a smartphone use/non-use pattern by mobile application (App) in order to identify problematic smartphone use, a total of 79 college students were monitored by the App for 1 month. The App-generated parameters included the daily use/non-use frequency, the total duration and the daily median of the duration per epoch. We introduced two other parameters, the root mean square of the successive differences (RMSSD) and the Similarity Index, in order to explore the similarity in use and non-use between participants. The non-use frequency, non-use duration and non-use-median parameters were able to significantly predict problematic smartphone use. A lower value for the RMSSD and Similarity Index, which represent a higher use/non-use similarity, were also associated with the problematic smartphone use. The use/non-use similarity is able to predict problematic smartphone use and reach beyond just determining whether a person shows excessive use."
},
{
"pmid": "28146615",
"title": "Incorporation of Mobile Application (App) Measures Into the Diagnosis of Smartphone Addiction.",
"abstract": "OBJECTIVE\nGlobal smartphone expansion has brought about unprecedented addictive behaviors. The current diagnosis of smartphone addiction is based solely on information from clinical interview. This study aimed to incorporate application (app)-recorded data into psychiatric criteria for the diagnosis of smartphone addiction and to examine the predictive ability of the app-recorded data for the diagnosis of smartphone addiction.\n\n\nMETHODS\nSmartphone use data of 79 college students were recorded by a newly developed app for 1 month between December 1, 2013, and May 31, 2014. For each participant, psychiatrists made a diagnosis for smartphone addiction based on 2 approaches: (1) only diagnostic interview (standard diagnosis) and (2) both diagnostic interview and app-recorded data (app-incorporated diagnosis). The app-incorporated diagnosis was further used to build app-incorporated diagnostic criteria. In addition, the app-recorded data were pooled as a score to predict smartphone addiction diagnosis.\n\n\nRESULTS\nWhen app-incorporated diagnosis was used as a gold standard for 12 candidate criteria, 7 criteria showed significant accuracy (area under receiver operating characteristic curve [AUC] > 0.7) and were constructed as app-incorporated diagnostic criteria, which demonstrated remarkable accuracy (92.4%) for app-incorporated diagnosis. In addition, both frequency and duration of daily smartphone use significantly predicted app-incorporated diagnosis (AUC = 0.70 for frequency; AUC = 0.72 for duration). The combination of duration, frequency, and frequency trend for 1 month can accurately predict smartphone addiction diagnosis (AUC = 0.79 for app-incorporated diagnosis; AUC = 0.71 for standard diagnosis).\n\n\nCONCLUSIONS\nThe app-incorporated diagnosis, combining both psychiatric interview and app-recorded data, demonstrated substantial accuracy for smartphone addiction diagnosis. In addition, the app-recorded data performed as an accurate screening tool for app-incorporated diagnosis."
},
{
"pmid": "25935253",
"title": "Time distortion associated with smartphone addiction: Identifying smartphone addiction via a mobile application (App).",
"abstract": "BACKGROUND\nGlobal smartphone penetration has brought about unprecedented addictive behaviors.\n\n\nAIMS\nWe report a proposed diagnostic criteria and the designing of a mobile application (App) to identify smartphone addiction.\n\n\nMETHOD\nWe used a novel empirical mode decomposition (EMD) to delineate the trend in smartphone use over one month.\n\n\nRESULTS\nThe daily use count and the trend of this frequency are associated with smartphone addiction. We quantify excessive use by daily use duration and frequency, as well as the relationship between the tolerance symptoms and the trend for the median duration of a use epoch. The psychiatrists' assisted self-reporting use time is significant lower than and the recorded total smartphone use time via the App and the degree of underestimation was positively correlated with actual smartphone use.\n\n\nCONCLUSIONS\nOur study suggests the identification of smartphone addiction by diagnostic interview and via the App-generated parameters with EMD analysis."
},
{
"pmid": "30611008",
"title": "Development of a mobile application (App) to delineate \"digital chronotype\" and the effects of delayed chronotype by bedtime smartphone use.",
"abstract": "The widespread use and deep reach of smartphones motivate the use of mobile applications to continuously monitor the relationship between circadian system, individual sleep patterns, and environmental effects. We selected 61 adults with 14-day data from the \"Know Addiction\" database. We developed an algorithm to identify the \"sleep time\" based on the smartphone behaviors. The total daily smartphone use duration and smartphone use duration prior to sleep onset were identified respectively. We applied mediation analysis to investigate the effects of total daily smartphone use on sleep through pre-sleep use (PS). The results showed participants' averaged pre-sleep episodes within 1 h prior to sleep are 2.58. The duration of three pre-sleep uses (PS1∼3) maybe a more representative index for smartphone use before sleep. Both total daily duration and the duration of the last three uses prior to sleep of smartphone use significantly delayed sleep onset, midpoint of sleep and reduced total sleep time. One hour of increased smartphone use daily, delays the circadian rhythm by 3.5 min, and reduced 5.5 min of total sleep time (TST). One hour of increased pre-sleep smartphone use delayed circadian rhythm by 1.7 min, and reduced 39 s of TST. The mediation effects of PS1∼3 significantly impacted on these three sleep indicators. PS1∼3 accounted for 14.3% of total daily duration, but the proportion mediated of delayed circadian rhythm was 44.0%. We presented \"digital chronotype\" with an automatic system that can collect high temporal resolution data from naturalistic settings with high ecological validity. Smartphone screen time, mainly mediated by pre-sleep use, delayed the circadian rhythm and reduced the total sleep time."
},
{
"pmid": "22578422",
"title": "Social jetlag and obesity.",
"abstract": "Obesity has reached crisis proportions in industrialized societies. Many factors converge to yield increased body mass index (BMI). Among these is sleep duration. The circadian clock controls sleep timing through the process of entrainment. Chronotype describes individual differences in sleep timing, and it is determined by genetic background, age, sex, and environment (e.g., light exposure). Social jetlag quantifies the discrepancy that often arises between circadian and social clocks, which results in chronic sleep loss. The circadian clock also regulates energy homeostasis, and its disruption-as with social jetlag-may contribute to weight-related pathologies. Here, we report the results from a large-scale epidemiological study, showing that, beyond sleep duration, social jetlag is associated with increased BMI. Our results demonstrate that living \"against the clock\" may be a factor contributing to the epidemic of obesity. This is of key importance in pending discussions on the implementation of Daylight Saving Time and on work or school times, which all contribute to the amount of social jetlag accrued by an individual. Our data suggest that improving the correspondence between biological and social clocks will contribute to the management of obesity."
},
{
"pmid": "16687322",
"title": "Social jetlag: misalignment of biological and social time.",
"abstract": "Humans show large differences in the preferred timing of their sleep and activity. This so-called \"chronotype\" is largely regulated by the circadian clock. Both genetic variations in clock genes and environmental influences contribute to the distribution of chronotypes in a given population, ranging from extreme early types to extreme late types with the majority falling between these extremes. Social (e.g., school and work) schedules interfere considerably with individual sleep preferences in the majority of the population. Late chronotypes show the largest differences in sleep timing between work and free days leading to a considerable sleep debt on work days, for which they compensate on free days. The discrepancy between work and free days, between social and biological time, can be described as 'social jetlag.' Here, we explore how sleep quality and psychological wellbeing are associated with individual chronotype and/or social jetlag. A total of 501 volunteers filled out the Munich ChronoType Questionnaire (MCTQ) as well as additional questionnaires on: (i) sleep quality (SF-A), (ii) current psychological wellbeing (Basler Befindlichkeitsbogen), (iii) retrospective psychological wellbeing over the past week (POMS), and (iv) consumption of stimulants (e.g., caffeine, nicotine, and alcohol). Associations of chronotype, wellbeing, and stimulant consumption are strongest in teenagers and young adults up to age 25 yrs. The most striking correlation exists between chronotype and smoking, which is significantly higher in late chronotypes of all ages (except for those in retirement). We show these correlations are most probably a consequence of social jetlag, i.e., the discrepancies between social and biological timing rather than a simple association to different chronotypes. Our results strongly suggest that work (and school) schedules should be adapted to chronotype whenever possible."
},
{
"pmid": "25193149",
"title": "Screen time and sleep among school-aged children and adolescents: a systematic literature review.",
"abstract": "We systematically examined and updated the scientific literature on the association between screen time (e.g., television, computers, video games, and mobile devices) and sleep outcomes among school-aged children and adolescents. We reviewed 67 studies published from 1999 to early 2014. We found that screen time is adversely associated with sleep outcomes (primarily shortened duration and delayed timing) in 90% of studies. Some of the results varied by type of screen exposure, age of participant, gender, and day of the week. While the evidence regarding the association between screen time and sleep is consistent, we discuss limitations of the current studies: 1) causal association not confirmed; 2) measurement error (of both screen time exposure and sleep measures); 3) limited data on simultaneous use of multiple screens, characteristics and content of screens used. Youth should be advised to limit or reduce screen time exposure, especially before or during bedtime hours to minimize any harmful effects of screen time on sleep and well-being. Future research should better account for the methodological limitations of the extant studies, and seek to better understand the magnitude and mechanisms of the association. These steps will help the development and implementation of policies or interventions related to screen time among youth."
},
{
"pmid": "22163941",
"title": "Use of mobile phones as intelligent sensors for sound input analysis and sleep state detection.",
"abstract": "Sleep is not just a passive process, but rather a highly dynamic process that is terminated by waking up. Throughout the night a specific number of sleep stages that are repeatedly changing in various periods of time take place. These specific time intervals and specific sleep stages are very important for the wake up event. It is far more difficult to wake up during the deep NREM (2-4) stage of sleep because the rest of the body is still sleeping. On the other hand if we wake up during the mild (REM, NREM1) sleep stage it is a much more pleasant experience for us and for our bodies. This problem led the authors to undertake this study and develop a Windows Mobile-based device application called wakeNsmile. The wakeNsmile application records and monitors the sleep stages for specific amounts of time before a desired alarm time set by users. It uses a built-in microphone and determines the optimal time to wake the user up. Hence, if the user sets an alarm in wakeNsmile to 7:00 and wakeNsmile detects that a more appropriate time to wake up (REM stage) is at 6:50, the alarm will start at 6:50. The current availability and low price of mobile devices is yet another reason to use and develop such an application that will hopefully help someone to wakeNsmile in the morning. So far, the wakeNsmile application has been tested on four individuals introduced in the final section."
}
] |
Frontiers in Neuroscience | 31191237 | PMC6549580 | 10.3389/fnins.2019.00550 | Different Dopaminergic Dysfunctions Underlying Parkinsonian Akinesia and Tremor | Although the occurrence of Parkinsonian akinesia and tremor is traditionally associated with dopaminergic degeneration, the multifaceted neural processes that cause these impairments are not fully understood. As a consequence, current dopamine medications cannot be tailored to the specific dysfunctions of patients, with the result that generic drug therapies produce different effects on akinesia and tremor. This article proposes a computational model focusing on the role of dopamine impairments in the occurrence of akinesia and resting tremor. The model has three key features that, to date, have never been integrated in a single computational system: (a) an architecture constrained on the basis of the relevant known system-level anatomy of the basal ganglia-thalamo-cortical loops; (b) spiking neurons with physiologically constrained parameters; (c) a detailed simulation of the effects of both phasic and tonic dopamine release. The model exhibits neural dynamics compatible with those recorded in the brains of primates and humans. Moreover, it suggests that akinesia might involve both tonic and phasic dopamine dysregulations, whereas resting tremor might be primarily caused by impairments involving tonic dopamine release and the responsiveness of dopamine receptors. These results could lead to the development of new therapies based on a system-level view of Parkinson's disease, targeting phasic and tonic dopamine in differential ways. | 4.1. Related Works In the last decade, several computational models have been proposed to study PD (see Humphries et al., 2018 for a recent review). Most of these models reproduce critical anatomical and physiological features (Terman et al., 2002; Leblois, 2006; Kumar et al., 2011; Pavlides et al., 2012, 2015). Some works use more abstract mathematical models to study functional aspects of the basal ganglia-cortical loops (e.g., Holt and Netoff, 2014). These models typically focus on the functioning of the pallidal-subthalamic system, exploring the pathological mechanisms leading to abnormal oscillatory activity in a frequency range that is usually higher than the one characterizing parkinsonian tremor. In addition, although these models are capable of producing abnormal oscillations, their conclusions are limited by their partial reproduction of the basal ganglia-thalamo-cortical loop architecture. In this respect, the model proposed here demonstrates that PD features related to akinesia and tremor are the result of abnormal interactions between different brain areas, including the basal ganglia nuclei, cortex, and thalamus. This system-level approach agrees with evidence showing that therapies based on brain stimulation can be effective even if applied to different sites of the basal ganglia-thalamo-cortical circuit (Johnson et al., 2008; Montgomery and Gale, 2008; Caligiore et al., 2016). Moreover, the system-level nature of the model made it possible to obtain results that could not be achieved by reproducing only the functioning of the pallidal-subthalamic circuit. In particular, the model suggests that, alongside this circuit, the inputs from the cortex to the striatum and to the subthalamic nucleus are also critical for producing tremor oscillations. Among the models proposed in the literature, two are particularly relevant for the work presented here.
The first one is the physiologically plausible model proposed by Humphries et al. to study the oscillatory properties of the basal ganglia circuitry under dopamine-depleted and dopamine-excessive conditions (Humphries et al., 2006). The model supports the critical role of the basal ganglia action-selection mechanism in PD dysfunctions and also underlines the importance of system-level approaches to the study of PD. Moreover, it furnishes interesting predictions on the role of dopamine in the pallidal-subthalamic loop, showing that this loop is functionally decoupled by tonic dopamine under normal conditions and re-coupled by dopamine depletion. These elements have been an important starting point for the design of the model presented here. However, there are some critical differences between the two models. First, the model of Humphries et al. does not reproduce the complete cortico-striatal-thalamo-cortical loops. This element, present in our model, is important for reproducing the system-level dynamics of the action-selection mechanisms. As a consequence, in our model the abnormal oscillatory behavior characterizing tremor emerges as an effect of dopamine dysregulation in the cortico-striatal-thalamo-cortical circuit. By contrast, the model of Humphries et al. is fed with an external oscillatory input injected into the cortex, rather than generating the oscillations intrinsically on the basis of its internal circuitry and mechanisms, as happens in the brain. In this respect, that model is used to study how its circuits amplify or attenuate oscillatory perturbations at different dopamine levels; it is hence not used to show the genesis of tremor following dopamine dysregulation. A second critical difference is that the model of Humphries et al. is primarily used to study the effects of tonic dopamine dysregulation but not those of phasic dopamine damage. Moreover, the model was used to show how alterations of tonic dopamine levels reproduce data on slow (1 Hz) and γ-band (30–80 Hz) oscillatory phenomena reported in empirical works (MacKay, 1997; Brown et al., 2002). Instead, we implemented and manipulated both phasic and tonic dopamine, alongside the responsiveness of D2 receptors, to study how they might differently affect various features of akinesia and tremor. The model proposed by Dovzhenok and Rubchinsky also represents an important precedent for the model presented here (Dovzhenok and Rubchinsky, 2012). This model, in agreement with converging empirical evidence, proposes a system-level mechanism supporting the idea that the basal ganglia-thalamo-cortical loop is the core oscillator at the origin of tremor. The authors show how varying the strength of dopamine-modulated connections in the basal ganglia-thalamo-cortical loop, equated to the decreased baseline dopamine levels in PD, leads to the occurrence of tremor-like burst oscillations. These oscillations are suppressed when the connections are modulated back to represent a higher level of dopamine, as could happen following dopamine medication. The oscillations also cease when the basal ganglia-thalamo-cortical loop is broken, as could happen in the case of ablative anti-parkinsonian surgery. Despite these relevant results, the authors implemented a very simplified model of the subthalamo-pallidal loop embedded in an abstract implementation of the basal ganglia-thalamo-cortical system. Moreover, the dopamine dysfunctions were reproduced in a rather indirect way, by strengthening the subthalamo-pallidal loop.
These features could limit the plausibility of the mechanisms proposed to explain the target phenomena. | [
"1674304",
"3085570",
"16148235",
"16260646",
"6842199",
"7983515",
"10758106",
"9880580",
"10923985",
"17267664",
"12429204",
"11157088",
"10809012",
"28725705",
"28358814",
"26873754",
"23911926",
"1695403",
"30323275",
"24756517",
"10970430",
"29420469",
"11081802",
"27001837",
"15746431",
"22848541",
"7501148",
"19198667",
"24578177",
"24600422",
"17973325",
"9021899",
"15701239",
"12397440",
"9221793",
"11417052",
"15271492",
"19162084",
"23834737",
"29119634",
"22382359",
"10893428",
"25954517",
"25099916",
"27398617",
"29666208",
"17167083",
"18244602",
"1822537",
"20651684",
"18394571",
"25904081",
"27266635",
"6303502",
"1810628",
"22028684",
"16571765",
"21223899",
"11566503",
"27366343",
"26537483",
"17611263",
"10719151",
"8124079",
"17706780",
"27422450",
"28979203",
"20851193",
"3427482",
"9130783",
"12067746",
"8093577",
"17031711",
"9863560",
"15824341",
"26683341",
"22805067",
"15728849",
"25086269",
"10362291",
"23745108",
"11522580",
"15708631",
"14534241",
"9658025",
"9881853",
"15331233",
"19555824",
"8815934",
"16249050",
"11923461",
"24514863",
"14598096",
"10627627",
"11756513",
"23404337",
"19494773",
"25465747"
] | [
{
"pmid": "16148235",
"title": "Ionic mechanisms underlying autonomous action potential generation in the somata and dendrites of GABAergic substantia nigra pars reticulata neurons in vitro.",
"abstract": "Through their repetitive discharge, GABAergic neurons of the substantia nigra pars reticulata (SNr) tonically inhibit the target nuclei of the basal ganglia and the dopamine neurons of the midbrain. As the repetitive firing of SNr neurons persists in vitro, perforated, whole-cell and cell-attached patch-clamp recordings were made from rat brain slices to determine the mechanisms underlying this activity. The spontaneous activity of SNr neurons was not perturbed by the blockade of fast synaptic transmission, demonstrating that it was autonomous in nature. A subthreshold, slowly inactivating, voltage-dependent, tetrodotoxin (TTX)-sensitive Na+ current and a TTX-insensitive inward current that was mediated in part by Na+ were responsible for depolarization to action potential (AP) threshold. An apamin-sensitive spike afterhyperpolarization mediated by small-conductance Ca2+-dependent K+ (SK) channels was critical for the precision of autonomous activity. SK channels were activated, in part, by Ca(2+) flowing throughomega-conotoxin GVIA-sensitive, class 2.2 voltage-dependent Ca2+ channels. Although Cs+/ZD7288 (4-ethylphenylamino-1,2-dimethyl-6-methylaminopyrimidinium chloride)-sensitive hyperpolarization-activated currents were also observed in SNr neurons, they were activated at voltages that were in general more hyperpolarized than those associated with autonomous activity. Simultaneous somatic and dendritic recordings revealed that autonomously generated APs were observed first at the soma before propagating into dendrites up to 120 microm from the somatic recording site. Backpropagation of autonomously generated APs was reliable with no observable incidence of failure. Together, these data suggest that the resting inhibitory output of the basal ganglia relies, in large part, on the intrinsic firing properties of the neurons that convey this signal."
},
{
"pmid": "16260646",
"title": "Dopamine receptors set the pattern of activity generated in subthalamic neurons.",
"abstract": "Information processing in the brain requires adequate background neuronal activity. As Parkinson's disease progresses, patients typically become akinetic; the death of dopaminergic neurons leads to a dopamine-depleted state, which disrupts information processing related to movement in a brain area called the basal ganglia. Using agonists of dopamine receptors in the D1 and D2 families on rat brain slices, we show that dopamine receptors in these two families govern the firing pattern of neurons in the subthalamic nucleus, a crucial part of the basal ganglia. We propose a conceptual frame, based on specific properties of dopamine receptors, to account for the dominance of different background firing patterns in normal and dopamine-depleted states."
},
{
"pmid": "6842199",
"title": "Physiological mechanisms of rigidity in Parkinson's disease.",
"abstract": "Electromyographic responses of triceps surae and tibialis anterior produced by dorsiflexion stretch were studied in 17 patients with Parkinson's disease. Most patients showed increased muscular activity when attempting to relax. A few patients showed an increase of short-latency reflexes when relaxed and when exerting a voluntary plantarflexion prior to the stretch. Many patients showed long-latency reflexes when relaxed and all but one showed long-latency reflexes with voluntary contraction; and these reflexes were often larger in magnitude and longer in duration than those seen in normal subjects. Unlike the short-latency reflex, the long-latency reflex did not disappear with vibration applied to the Achilles tendon. The long-latency reflexes and continuous responses to slow ramp stretches were diminished at a latency similar to the beginning of long-latency reflexes when the stretching was quickly reversed. Dorsiflexion stretch also frequently produced a shortening reaction in tibialis anterior. Of all the abnormal behavior exhibited by the Parkinsonian patients only the long-latency reflex magnitude and duration correlated with the clinical impression of increased tone. The mechanism of the long-latency reflex to stretch which is responsible for rigidity is not certain, but the present results are consistent with a group II mediated tonic response."
},
{
"pmid": "7983515",
"title": "The primate subthalamic nucleus. II. Neuronal activity in the MPTP model of parkinsonism.",
"abstract": "1. The neuronal mechanisms underlying the major motor signs of Parkinson's disease were studied in the basal ganglia of parkinsonian monkeys. Three African green monkeys were systemically treated with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) until parkinsonian signs, including akinesia, rigidity, and a prominent 4- to 8-Hz tremor, appeared. The activity of neurons in the subthalamic nucleus (STN) and in the internal segment of the globus pallidus (GPi) was recorded before (STN, n = 220 cells; GPi, n = 175 cells) and after MPTP treatment (STN, n = 326 cells; GPi, n = 154 cells). 2. In STN the spontaneous firing rate was significantly increased from 19 +/- 10 (SD) spikes/s before to 26 +/- 15 spikes/s after MPTP treatment. Division of STN neurons recorded after MPTP treatment into cells with rhythmic bursts of discharge occurring at 4-8 Hz (as defined by autocorrelation analysis) and neurons without 4- to 8-Hz periodic activity revealed an even more prominent increase in the firing rate of the 4- to 8-Hz oscillatory neurons. 3. In GPi overall changes in the average firing rate of cells were inconsistent between different animals and behavioral states. However, the average firing rate of the subpopulation of neurons with 4- to 8-Hz periodic oscillatory activity after treatment with MPTP was significantly increased over that of all neurons before MPTP treatment (from 53 to 76 spikes/s, averaged across monkeys). 4. In the normal state the percentage of neurons with burst discharges (as defined by autocorrelation analysis) was 69% and 78% in STN and GPi, respectively. After MPTP treatment the percentage of cells that discharged in bursts was increased to 79% and 89%, respectively. At the same time the average burst duration decreased (from 121 +/- 98 to 81 +/- 99 ms in STN and from 213 +/- 120 to 146 +/- 134 ms in GPi) with no significant change in the average number of spikes per burst. 5. Periodic oscillatory neuronal activity at low frequency, highly correlated with tremor, was detected in a large number of cells in STN and GPi after MPTP treatment (average oscillation frequency 6.0 and 5.1 Hz, respectively). The autocorrelograms of spike trains of these neurons confirm that the periodic oscillatory activity was very stable. The percentage of cells with 4- to 8-Hz periodic activity significantly increased from 2% to 16% in STN and from 0.6% to 25% in GPi with the MPTP treatment.(ABSTRACT TRUNCATED AT 400 WORDS)"
},
{
"pmid": "10758106",
"title": "Slowly inactivating sodium current (I(NaP)) underlies single-spike activity in rat subthalamic neurons.",
"abstract": "One-half of the subthalamic nucleus (STN) neurons switch from single-spike activity to burst-firing mode according to membrane potential. In an earlier study, the ionic mechanisms of the bursting mode were studied but the ionic currents underlying single-spike activity were not determined. The single-spike mode of activity of STN neurons recorded from acute slices in the current clamp mode is TTX-sensitive but is not abolished by antagonists of ionotropic glutamatergic and GABAergic receptors, blockers of calcium currents (2 mM cobalt or 40 microM nickel), or intracellular Ca(2+) ions chelators. Tonic activity is characterized by a pacemaker depolarization that spontaneously brings the membrane from the peak of the afterspike hyperpolarization (AHP) to firing threshold (from -57.1 +/- 0.5 mV to -42.2 +/- 0.3 mV). Voltage-clamp recordings suggest that the Ni(2+)-sensitive, T-type Ca(2+) current does not play a significant role in single-spike activity because it is totally inactivated at potentials more depolarized than -60 mV. In contrast, the TTX-sensitive, I(NaP) that activated at -54.4 +/- 0.6 mV fulfills the conditions for underlying pacemaker depolarization because it is activated below spike threshold and is not fully inactivated in the pacemaker range. In some cases, the depolarization required to reach the threshold for I(NaP) activation is mediated by hyperpolarization-activated cation current (I(h)). This was directly confirmed by the cesium-induced shift from single-spike to burst-firing mode which was observed in some STN neurons. Therefore, a fraction of I(h) which is tonically activated at rest, exerts a depolarizing influence and enables membrane potential to reach the threshold for I(NaP) activation, thus favoring the single-spike mode. The combined action of I(NaP) and I(h) is responsible for the dual mode of discharge of STN neurons."
},
{
"pmid": "9880580",
"title": "Subthalamic nucleus neurons switch from single-spike activity to burst-firing mode.",
"abstract": "The modification of the discharge pattern of subthalamic nucleus (STN) neurons from single-spike activity to mixed burst-firing mode is one of the characteristics of parkinsonism in rat and primates. However, the mechanism of this process is not yet understood. Intrinsic firing patterns of STN neurons were examined in rat brain slices with intracellular and patch-clamp techniques. Almost half of the STN neurons that spontaneously discharged in the single-spike mode had the intrinsic property of switching to pure or mixed burst-firing mode when the membrane was hyperpolarized from -41.3 +/- 1.0 mV (range, -35 to -50 mV; n = 15) to -51.0 +/- 1.0 mV (range, -42 to -60 mV; n = 20). This switch was greatly facilitated by activation of metabotropic glutamate receptors with 1S,3R-ACPD. Recurrent membrane oscillations underlying burst-firing mode were endogenous and Ca2+-dependent because they were largely reduced by nifedipine (3 microM), Ni2+ (40 microM), and BAPTA-AM (10-50 microM) at any potential tested, whereas TTX (1 microM) had no effect. In contrast, simultaneous application of TEA (1 mM) and apamin (0.2 microM) prolonged burst duration. Moreover, in response to intracellular stimulation at hyperpolarized potentials, a plateau potential with a voltage and ionic basis similar to those of spontaneous bursts was recorded in 82% of the tested STN neurons, all of which displayed a low-threshold Ni2+-sensitive spike. We propose that recurrent membrane oscillations during bursts result from the sequential activation of T/R- and L-type Ca2+ currents, a Ca2+-activated inward current, and Ca2+-activated K+ currents."
},
{
"pmid": "10923985",
"title": "Synaptic organisation of the basal ganglia.",
"abstract": "The basal ganglia are a group of subcortical nuclei involved in a variety of processes including motor, cognitive and mnemonic functions. One of their major roles is to integrate sensorimotor, associative and limbic information in the production of context-dependent behaviours. These roles are exemplified by the clinical manifestations of neurological disorders of the basal ganglia. Recent advances in many fields, including pharmacology, anatomy, physiology and pathophysiology have provided converging data that have led to unifying hypotheses concerning the functional organisation of the basal ganglia in health and disease. The major input to the basal ganglia is derived from the cerebral cortex. Virtually the whole of the cortical mantle projects in a topographic manner onto the striatum, this cortical information is 'processed' within the striatum and passed via the so-called direct and indirect pathways to the output nuclei of the basal ganglia, the internal segment of the globus pallidus and the substantia nigra pars reticulata. The basal ganglia influence behaviour by the projections of these output nuclei to the thalamus and thence back to the cortex, or to subcortical 'premotor' regions. Recent studies have demonstrated that the organisation of these pathways is more complex than previously suggested. Thus the cortical input to the basal ganglia, in addition to innervating the spiny projection neurons, also innervates GABA interneurons, which in turn provide a feed-forward inhibition of the spiny output neurons. Individual neurons of the globus pallidus innervate basal ganglia output nuclei as well as the subthalamic nucleus and substantia nigra pars compacta. About one quarter of them also innervate the striatum and are in a position to control the output of the striatum powerfully as they preferentially contact GABA interneurons. Neurons of the pallidal complex also provide an anatomical substrate, within the basal ganglia, for the synaptic integration of functionally diverse information derived from the cortex. It is concluded that the essential concept of the direct and indirect pathways of information flow through the basal ganglia remains intact but that the role of the indirect pathway is more complex than previously suggested and that neurons of the globus pallidus are in a position to control the activity of virtually the whole of the basal ganglia."
},
{
"pmid": "17267664",
"title": "D2 receptors regulate dopamine transporter function via an extracellular signal-regulated kinases 1 and 2-dependent and phosphoinositide 3 kinase-independent mechanism.",
"abstract": "The dopamine transporter (DAT) terminates dopamine (DA) neurotransmission by reuptake of DA into presynaptic neurons. Regulation of DA uptake by D(2) dopamine receptors (D(2)R) has been reported. The high affinity of DA and other DAT substrates for the D(2)R, however, has complicated investigation of the intracellular mechanisms mediating this effect. The present studies used the fluorescent DAT substrate, 4-[4-(diethylamino)-styryl]-N-methylpyridinium iodide (ASP(+)) with live cell imaging techniques to identify the role of two D(2)R-linked signaling pathways, extracellular signal-regulated kinases 1 and 2 (ERK1/2), and phosphoinositide 3 kinase (PI3K) in mediating D(2)R regulation of DAT. Addition of the D(2)/D(3) receptor agonist quinpirole (0.1-10 muM) to human embryonic kidney cells coexpressing human DAT and D(2) receptor (short splice variant, D(2S)R) induced a rapid, concentration-dependent and pertussis toxin-sensitive increase in ASP(+) accumulation. The D(2)/D(3) agonist (S)-(+)-(4aR, 10bR)-3,4,4a, 10b-tetrahydro-4-propyl-2H,5H-[1]benzopyrano-[4,3-b]-1,4-oxazin-9-ol hydrochloride (PD128907) also increased ASP(+) accumulation. D(2S)R activation increased phosphorylation of ERK1/2 and Akt, a major target of PI3K. The mitogen-activated protein kinase kinase inhibitor 2-(2-amino-3-methoxyphenyl)-4H-1-benzopyran-4-one (PD98059) prevented the quinpirole-evoked increase in ASP(+) accumulation, whereas inhibition of PI3K was without effect. Fluorescence flow cytometry and biotinylation studies revealed a rapid increase in DAT cell-surface expression in response to D(2)R stimulation. These experiments demonstrate that D(2S)R stimulation increases DAT cell surface expression and therefore enhances substrate clearance. Furthermore, they show that the increase in DAT function is ERK1/2-dependent but PI3K-independent. Our data also suggest the possibility of a direct physical interaction between DAT and D(2)R. Together, these results suggest a novel mechanism by which D(2S)R autoreceptors may regulate DAT in the central nervous system."
},
{
"pmid": "12429204",
"title": "Oscillatory local field potentials recorded from the subthalamic nucleus of the alert rat.",
"abstract": "Hitherto, high-frequency local field potential oscillations in the upper gamma frequency band (40-80 Hz) have been recorded only from the region of subthalamic nucleus (STN) in parkinsonian patients treated with levodopa. Here we show that local field potentials recorded from the STN in the healthy alert rat also have a spectral peak in the upper gamma band (mean 53 Hz, range 46-70 Hz). The power of this high-frequency oscillatory activity was increased by 30 +/- 4% (+/-SEM) during motor activity compared to periods of alert immobility. It was also increased by 86 +/- 36% by systemic injection of the D2 dopamine receptor agonist quinpirole. The similarities between the high-frequency activities in the STN of the healthy rat and in the levodopa-treated parkinsonian human argue that this oscillatory activity may be physiological in nature and not a consequence of the parkinsonian state."
},
{
"pmid": "11157088",
"title": "Dopamine dependency of oscillations between subthalamic nucleus and pallidum in Parkinson's disease.",
"abstract": "The extent of synchronization within and between the nuclei of the basal ganglia is unknown in Parkinson's disease. The question is an important one because synchronization will increase postsynaptic efficacy at subsequent projection targets. We simultaneously recorded local potentials (LPs) from the globus pallidus interna (GPi) and subthalamic nucleus (STN) in four awake patients after neurosurgery for Parkinson's disease. Nuclei from both sides were recorded in two patients so that a total of six ipsilateral GPi-STN LP recordings were made. Without medication, the power within and the coherence between the GPi and STN was dominated by activity with a frequency <30 Hz. Treatment with the dopamine precursor levodopa reduced the low-frequency activity and resulted in a new peak at approximately 70 Hz. This was evident in the power spectrum from STN and GPi and in the coherence between these nuclei. The phase relationship between the nuclei varied in a complex manner according to frequency band and the presence of exogenous dopaminergic stimulation. Synchronization of activity does occur between pallidum and STN, and its pattern is critically dependent on the level of dopaminergic activity."
},
{
"pmid": "10809012",
"title": "Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons.",
"abstract": "The dynamics of networks of sparsely connected excitatory and inhibitory integrate-and-fire neurons are studied analytically. The analysis reveals a rich repertoire of states, including synchronous states in which neurons fire regularly; asynchronous states with stationary global activity and very irregular individual cell activity; and states in which the global activity oscillates but individual cells fire irregularly, typically at rates lower than the global oscillation frequency. The network can switch between these states, provided the external frequency, or the balance between excitation and inhibition, is varied. Two types of network oscillations are observed. In the fast oscillatory state, the network frequency is almost fully controlled by the synaptic time scale. In the slow oscillatory state, the network frequency depends mostly on the membrane time constant. Finite size effects in the asynchronous state are also discussed."
},
{
"pmid": "28725705",
"title": "Parkinson's disease as a system-level disorder.",
"abstract": "Traditionally, the basal ganglia have been considered the main brain region implicated in Parkinson's disease. This single area perspective gives a restricted clinical picture and limits therapeutic approaches because it ignores the influence of altered interactions between the basal ganglia and other cerebral components on Parkinsonian symptoms. In particular, the basal ganglia work closely in concert with cortex and cerebellum to support motor and cognitive functions. This article proposes a theoretical framework for understanding Parkinson's disease as caused by the dysfunction of the entire basal ganglia-cortex-cerebellum system rather than by the basal ganglia in isolation. In particular, building on recent evidence, we propose that the three key symptoms of tremor, freezing, and impairments in action sequencing may be explained by considering partially overlapping neural circuits including basal ganglia, cortical and cerebellar areas. Studying the involvement of this system in Parkinson's disease is a crucial step for devising innovative therapeutic approaches targeting it rather than only the basal ganglia. Possible future therapies based on this different view of the disease are discussed."
},
{
"pmid": "28358814",
"title": "Dysfunctions of the basal ganglia-cerebellar-thalamo-cortical system produce motor tics in Tourette syndrome.",
"abstract": "Motor tics are a cardinal feature of Tourette syndrome and are traditionally associated with an excess of striatal dopamine in the basal ganglia. Recent evidence increasingly supports a more articulated view where cerebellum and cortex, working closely in concert with basal ganglia, are also involved in tic production. Building on such evidence, this article proposes a computational model of the basal ganglia-cerebellar-thalamo-cortical system to study how motor tics are generated in Tourette syndrome. In particular, the model: (i) reproduces the main results of recent experiments about the involvement of the basal ganglia-cerebellar-thalamo-cortical system in tic generation; (ii) suggests an explanation of the system-level mechanisms underlying motor tic production: in this respect, the model predicts that the interplay between dopaminergic signal and cortical activity contributes to triggering the tic event and that the recently discovered basal ganglia-cerebellar anatomical pathway may support the involvement of the cerebellum in tic production; (iii) furnishes predictions on the amount of tics generated when striatal dopamine increases and when the cortex is externally stimulated. These predictions could be important in identifying new brain target areas for future therapies. Finally, the model represents the first computational attempt to study the role of the recently discovered basal ganglia-cerebellar anatomical links. Studying this non-cortex-mediated basal ganglia-cerebellar interaction could radically change our perspective about how these areas interact with each other and with the cortex. Overall, the model also shows the utility of casting Tourette syndrome within a system-level perspective rather than viewing it as related to the dysfunction of a single brain area."
},
{
"pmid": "26873754",
"title": "Consensus Paper: Towards a Systems-Level View of Cerebellar Function: the Interplay Between Cerebellum, Basal Ganglia, and Cortex.",
"abstract": "Despite increasing evidence suggesting the cerebellum works in concert with the cortex and basal ganglia, the nature of the reciprocal interactions between these three brain regions remains unclear. This consensus paper gathers diverse recent views on a variety of important roles played by the cerebellum within the cerebello-basal ganglia-thalamo-cortical system across a range of motor and cognitive functions. The paper includes theoretical and empirical contributions, which cover the following topics: recent evidence supporting the dynamical interplay between cerebellum, basal ganglia, and cortical areas in humans and other animals; theoretical neuroscience perspectives and empirical evidence on the reciprocal influences between cerebellum, basal ganglia, and cortex in learning and control processes; and data suggesting possible roles of the cerebellum in basal ganglia movement disorders. Although starting from different backgrounds and dealing with different topics, all the contributors agree that viewing the cerebellum, basal ganglia, and cortex as an integrated system enables us to understand the function of these areas in radically different ways. In addition, there is unanimous consensus between the authors that future experimental and computational work is needed to understand the function of cerebellar-basal ganglia circuitry in both motor and non-motor functions. The paper reports the most advanced perspectives on the role of the cerebellum within the cerebello-basal ganglia-thalamo-cortical system and illustrates other elements of consensus as well as disagreements and open questions in the field."
},
{
"pmid": "23911926",
"title": "The contribution of brain sub-cortical loops in the expression and acquisition of action understanding abilities.",
"abstract": "Research on action understanding in cognitive neuroscience has led to the identification of a wide \"action understanding network\" mainly encompassing parietal and premotor cortical areas. Within this cortical network mirror neurons are critically involved implementing a neural mechanism according to which, during action understanding, observed actions are reflected in the motor patterns for the same actions of the observer. We suggest that focusing only on cortical areas and processes could be too restrictive to explain important facets of action understanding regarding, for example, the influence of the observer's motor experience, the multiple levels at which an observed action can be understood, and the acquisition of action understanding ability. In this respect, we propose that aside from the cortical action understanding network, sub-cortical processes pivoting on cerebellar and basal ganglia cortical loops could crucially support both the expression and the acquisition of action understanding abilities. Within the paper we will discuss how this extended view can overcome some limitations of the \"pure\" cortical perspective, supporting new theoretical predictions on the brain mechanisms underlying action understanding that could be tested by future empirical investigations."
},
{
"pmid": "1695403",
"title": "Disinhibition as a basic process in the expression of striatal functions.",
"abstract": "During the past decade, electrophysiological approaches have greatly improved understanding of the involvement of the basal ganglia in motor behaviour. This review reports that the basal ganglia contribute to the initiation of movement by arousing executive motor centres via a disinhibitory mechanism. We propose that the basal ganglia output is used as a movement template specifying the motor elements to be engaged in directing movement in space."
},
{
"pmid": "30323275",
"title": "The timing of action determines reward prediction signals in identified midbrain dopamine neurons.",
"abstract": "Animals adapt their behavior in response to informative sensory cues using multiple brain circuits. The activity of midbrain dopaminergic neurons is thought to convey a critical teaching signal: reward-prediction error. Although reward-prediction error signals are thought to be essential to learning, little is known about the dynamic changes in the activity of midbrain dopaminergic neurons as animals learn about novel sensory cues and appetitive rewards. Here we describe a large dataset of cell-attached recordings of identified dopaminergic neurons as naive mice learned a novel cue-reward association. During learning midbrain dopaminergic neuron activity results from the summation of sensory cue-related and movement initiation-related response components. These components are both a function of reward expectation yet they are dissociable. Learning produces an increasingly precise coordination of action initiation following sensory cues that results in apparent reward-prediction error correlates. Our data thus provide new insights into the circuit mechanisms that underlie a critical computation in a highly conserved learning circuit."
},
{
"pmid": "24756517",
"title": "Pharmacological treatment of Parkinson disease: a review.",
"abstract": "IMPORTANCE\nParkinson disease is the second most common neurodegenerative disease worldwide. Although no available therapies alter the underlying neurodegenerative process, symptomatic therapies can improve patient quality of life.\n\n\nOBJECTIVE\nTo provide an evidence-based review of the initial pharmacological management of the classic motor symptoms of Parkinson disease; describe management of medication-related motor complications (such as motor fluctuations and dyskinesia), and other medication adverse effects (nausea, psychosis, and impulse control disorders and related behaviors); and discuss the management of selected nonmotor symptoms of Parkinson disease, including rapid eye movement sleep behavior disorder, cognitive impairment, depression, orthostatic hypotension, and sialorrhea.\n\n\nEVIDENCE REVIEW\nReferences were identified using searches of PubMed between January 1985 and February 2014 for English-language human studies and the full database of the Cochrane Library. The classification of studies by quality (classes I-IV) was assessed using the levels of evidence guidelines from the American Academy of Neurology and the highest-quality data for each topic.\n\n\nRESULTS\nAlthough levodopa is the most effective medication available for treating the motor symptoms of Parkinson disease, in certain instances (eg, mild symptoms, tremor as the only or most prominent symptom, aged <60 years) other medications (eg, monoamine oxidase type B inhibitors [MAOBIs], amantadine, anticholinergics, β-blockers, or dopamine agonists) may be initiated first to avoid levodopa-related motor complications. Motor fluctuations may be managed by modifying the levodopa dosing regimen or by adding several other medications, such as MAOBIs, catechol-O-methyltransferase inhibitors, or dopamine agonists. Impulse control disorders are typically managed by reducing or withdrawing dopaminergic medication, particularly dopamine agonists. Evidence-based management of some nonmotor symptoms is limited by a paucity of high-quality positive studies.\n\n\nCONCLUSIONS AND RELEVANCE\nStrong evidence supports using levodopa and dopamine agonists for motor symptoms at all stages of Parkinson disease. Dopamine agonists and drugs that block dopamine metabolism are effective for motor fluctuations and clozapine is effective for hallucinations. Cholinesterase inhibitors may improve symptoms of dementia and antidepressants and pramipexole may improve depression. Evidence supporting other therapies for motor and nonmotor features is less well established."
},
{
"pmid": "10970430",
"title": "Electrophysiological and morphological characteristics of three subtypes of rat globus pallidus neurone in vitro.",
"abstract": "Neurones of the globus pallidus (GP) have been classified into three subgroups based on the visual inspection of current clamp electrophysiological properties and morphology of biocytin-filled neurones. Type A neurones (132/208; 63 %) were identified by the presence of the time- and voltage-dependent inward rectifier (Ih) and the low-threshold calcium current (It) giving rise to anodal break depolarisations. These cells were quiescent or fired regular spontaneous action potentials followed by biphasic AHPs. Current injection evoked regular activity up to maximum firing frequency of 350 Hz followed by moderate spike frequency adaptation. The somata of type A cells were variable in shape (20 x 12 micrometer) while their dendrites were highly varicose. Type B neurones (66/208; 32 %) exhibited neither Ih nor rebound depolarisations and only a fast monophasic AHP. These cells were spontaneously active while current injection induced irregular patterns of action potential firing up to a frequency of 440 Hz with weak spike frequency adaptation. Morphologically, these cells were the smallest encountered (15 x 10 micrometer), oval in shape with restricted varicose dendritic arborisations. Type C neurones were much rarer (10/208; 5 %). They were identified by the absence of Ih and rebound depolarisations, but did possess a prolonged biphasic AHP. They displayed large A-like potassium currents and ramp-like depolarisations in response to step current injections, which induced firing up to a maximum firing frequency of 310 Hz. These cells were the largest observed (27 x 15 micrometer) with extensive dendritic branching. These results confirm neuronal heterogeneity in the adult rodent GP. The driven activity and population percentage of the three subtypes correlates well with the in vivo studies (Kita & Kitai, 1991). Type A cells appear to correspond to type II neurones of Nambu & Llinas (1994, 1997) while the small diameter type B cells display morphological similarities with those described by Millhouse (1986). The rarely encountered type C cells may well be large cholinergic neurones. These findings provide a cellular basis for the study of intercellular communication and network interactions in the adult rat in vitro."
},
{
"pmid": "29420469",
"title": "Dopamine neuron activity before action initiation gates and invigorates future movements.",
"abstract": "Deciding when and whether to move is critical for survival. Loss of dopamine neurons (DANs) of the substantia nigra pars compacta (SNc) in patients with Parkinson's disease causes deficits in movement initiation and slowness of movement. The role of DANs in self-paced movement has mostly been attributed to their tonic activity, whereas phasic changes in DAN activity have been linked to reward prediction. This model has recently been challenged by studies showing transient changes in DAN activity before or during self-paced movement initiation. Nevertheless, the necessity of this activity for spontaneous movement initiation has not been demonstrated, nor has its relation to initiation versus ongoing movement been described. Here we show that a large proportion of SNc DANs, which did not overlap with reward-responsive DANs, transiently increased their activity before self-paced movement initiation in mice. This activity was not action-specific, and was related to the vigour of future movements. Inhibition of DANs when mice were immobile reduced the probability and vigour of future movements. Conversely, brief activation of DANs when mice were immobile increased the probability and vigour of future movements. Manipulations of dopamine activity after movement initiation did not affect ongoing movements. Similar findings were observed for the initiation and execution of learned action sequences. These findings causally implicate DAN activity before movement initiation in the probability and vigour of future movements."
},
{
"pmid": "11081802",
"title": "The pathophysiology of parkinsonian tremor: a review.",
"abstract": "Parkinsonian tremor is most likely due to oscillating neuronal activity within the CNS. Summarizing all the available evidence, peripheral factors only play a minor role in the generation, maintenance and modulation of PD tremor. Recent studies have shown that not a single but multiple oscillators are responsible. The most likely candidate producing these oscillations is the basal ganglia loop and its topographic organization might be responsible for the separation into different oscillators which, nevertheless, usually produce the same frequency. The neuronal mechanisms underlying these oscillations are not yet clear, but three hypotheses would be compatible with the presently available data from animal models and data recorded in patients. The first is a cortico-subthalamo-pallido-thalamic loop, the second is a pacemaker consisting of the external pallidum and the subthalamic nucleus, and the third is abnormal synchronization due to unknown mechanisms within the whole striato-pallido-thalamic pathway leading to a loss of segregation. Assuming the oscillator within the basal ganglia pathway, the mechanism of stereotactic surgery might be a desynchronization of the activity of the basal ganglia-thalamo-cortical or the cerebello-thalamo-cortical pathway."
},
{
"pmid": "27001837",
"title": "Representation of spontaneous movement by dopaminergic neurons is cell-type selective and disrupted in parkinsonism.",
"abstract": "Midbrain dopaminergic neurons are essential for appropriate voluntary movement, as epitomized by the cardinal motor impairments arising in Parkinson's disease. Understanding the basis of such motor control requires understanding how the firing of different types of dopaminergic neuron relates to movement and how this activity is deciphered in target structures such as the striatum. By recording and labeling individual neurons in behaving mice, we show that the representation of brief spontaneous movements in the firing of identified midbrain dopaminergic neurons is cell-type selective. Most dopaminergic neurons in the substantia nigra pars compacta (SNc), but not in ventral tegmental area or substantia nigra pars lateralis, consistently represented the onset of spontaneous movements with a pause in their firing. Computational modeling revealed that the movement-related firing of these dopaminergic neurons can manifest as rapid and robust fluctuations in striatal dopamine concentration and receptor activity. The exact nature of the movement-related signaling in the striatum depended on the type of dopaminergic neuron providing inputs, the striatal region innervated, and the type of dopamine receptor expressed by striatal neurons. Importantly, in aged mice harboring a genetic burden relevant for human Parkinson's disease, the precise movement-related firing of SNc dopaminergic neurons and the resultant striatal dopamine signaling were lost. These data show that distinct dopaminergic cell types differentially encode spontaneous movement and elucidate how dysregulation of their firing in early Parkinsonism can impair their effector circuits."
},
{
"pmid": "15746431",
"title": "How visual stimuli activate dopaminergic neurons at short latency.",
"abstract": "Unexpected, biologically salient stimuli elicit a short-latency, phasic response in midbrain dopaminergic (DA) neurons. Although this signal is important for reinforcement learning, the information it conveys to forebrain target structures remains uncertain. One way to decode the phasic DA signal would be to determine the perceptual properties of sensory inputs to DA neurons. After local disinhibition of the superior colliculus in anesthetized rats, DA neurons became visually responsive, whereas disinhibition of the visual cortex was ineffective. As the primary source of visual afferents, the limited processing capacities of the colliculus may constrain the visual information content of phasic DA responses."
},
{
"pmid": "22848541",
"title": "On the origin of tremor in Parkinson's disease.",
"abstract": "The exact origin of tremor in Parkinson's disease remains unknown. We explain why the existing data converge on the basal ganglia-thalamo-cortical loop as a tremor generator and consider a conductance-based model of subthalamo-pallidal circuits embedded into a simplified representation of the basal ganglia-thalamo-cortical circuit to investigate the dynamics of this loop. We show how variation of the strength of dopamine-modulated connections in the basal ganglia-thalamo-cortical loop (representing the decreasing dopamine level in Parkinson's disease) leads to the occurrence of tremor-like burst firing. These tremor-like oscillations are suppressed when the connections are modulated back to represent a higher dopamine level (as it would be the case in dopaminergic therapy), as well as when the basal ganglia-thalamo-cortical loop is broken (as would be the case for ablative anti-parkinsonian surgeries). Thus, the proposed model provides an explanation for the basal ganglia-thalamo-cortical loop mechanism of tremor generation. The strengthening of the loop leads to tremor oscillations, while the weakening or disconnection of the loop suppresses them. The loop origin of parkinsonian tremor also suggests that new tremor-suppression therapies may have anatomical targets in different cortical and subcortical areas as long as they are within the basal ganglia-thalamo-cortical loop."
},
{
"pmid": "7501148",
"title": "Early differential diagnosis of Parkinson's disease with 18F-fluorodeoxyglucose and positron emission tomography.",
"abstract": "Early-stage Parkinson's disease (EPD) is often clinically asymmetric. We used 18F-fluorodeoxyglucose (FDG) and PET to assess whether EPD can be detected by a characteristic pattern of regional metabolic asymmetry. To identify this pattern, we studied 10 EPD (Hoehn and Yahr stage I) patients (mean age 61.1 +/- 11.1 years) using 18F-FDG and PET to calculate regional metabolic rates for glucose. The scaled subprofile model (SSM) was applied to metabolic asymmetry measurements for the combined group of EPD patients and normal subjects to identify a specific covariation pattern that discriminated EPD patients from normal subjects. To determine whether this pattern could be used diagnostically, we studied a subsequent group of five presumptive EPD patients (mean age 50.9 +/- 18.3), five normal subjects (mean age 44.6 +/- 15.3), and nine patients with atypical drug-resistant early-stage parkinsonism (APD) (mean age 44.6 +/- 14.0). In each member of this prospective cohort, we calculated the expression of the EPD-related covariation pattern (subject scores) on a case-by-case basis. We also studied 11 of the EPD patients, five patients with APD, and 10 normal subjects with 18F-fluorodopa (FDOPA) and PET to measure presynaptic nigrostriatal dopaminergic function, and we assessed the accuracy of differential diagnosis with both PET methods using discrimination analysis. SSM analysis disclosed a significant topographic contrast profile characterized by covariate basal ganglia and thalamic asymmetries. Subject scores for this profile accurately discriminated EPD patients from normal subjects and APD patients (p < 0.0001). Group assignments into the normal or parkinsonian categories with FDG/PET were comparable to those achieved with FDOPA/PET, although APD and EPD patients were not differentiable by the latter method. Metabolic brain imaging with FDG/PET may be useful in the differential diagnosis of EPD."
},
{
"pmid": "19198667",
"title": "PyNEST: A Convenient Interface to the NEST Simulator.",
"abstract": "The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10(4) neurons and 10(7) to 10(9) synapses. NEST is implemented in C++ and can be used on a large range of architectures from single-core laptops over multi-core desktop computers to super-computers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used."
},
{
"pmid": "24578177",
"title": "Corticolimbic catecholamines in stress: a computational model of the appraisal of controllability.",
"abstract": "Appraisal of a stressful situation and the possibility to control or avoid it is thought to involve frontal-cortical mechanisms. The precise mechanism underlying this appraisal and its translation into effective stress coping (the regulation of physiological and behavioural responses) are poorly understood. Here, we propose a computational model which involves tuning motivational arousal to the appraised stressing condition. The model provides a causal explanation of the shift from active to passive coping strategies, i.e. from a condition characterised by high motivational arousal, required to deal with a situation appraised as stressful, to a condition characterised by emotional and motivational withdrawal, required when the stressful situation is appraised as uncontrollable/unavoidable. The model is motivated by results acquired via microdialysis recordings in rats and highlights the presence of two competing circuits dominated by different areas of the ventromedial prefrontal cortex: these are shown having opposite effects on several subcortical areas, affecting dopamine outflow in the striatum, and therefore controlling motivation. We start by reviewing published data supporting structure and functioning of the neural model and present the computational model itself with its essential neural mechanisms. Finally, we show the results of a new experiment, involving the condition of repeated inescapable stress, which validate most of the model's predictions."
},
{
"pmid": "24600422",
"title": "Keep focussing: striatal dopamine multiple functions resolved in a single mechanism tested in a simulated humanoid robot.",
"abstract": "The effects of striatal dopamine (DA) on behavior have been widely investigated over the past decades, with \"phasic\" burst firings considered as the key expression of a reward prediction error responsible for reinforcement learning. Less well studied is \"tonic\" DA, where putative functions include the idea that it is a regulator of vigor, incentive salience, disposition to exert an effort and a modulator of approach strategies. We present a model combining tonic and phasic DA to show how different outflows triggered by either intrinsically or extrinsically motivating stimuli dynamically affect the basal ganglia by impacting on a selection process this system performs on its cortical input. The model, which has been tested on the simulated humanoid robot iCub interacting with a mechatronic board, shows the putative functions ascribed to DA emerging from the combination of a standard computational mechanism coupled to a differential sensitivity to the presence of DA across the striatum."
},
{
"pmid": "17973325",
"title": "Paradoxical aspects of parkinsonian tremor.",
"abstract": "Although resting tremor is the most identifiable sign of Parkinson's disease, its underlying basis appears to be the most complex of the cardinal signs. The variable relationship of resting tremor to other symptoms of PD has implications for diagnosis, prognosis, medical and surgical treatment. Structural lesions very rarely cause classic resting tremor, with likely contributions to tremor by a network of neurons both within and outside the basal ganglia. Patients with only resting tremor show dopaminergic deficits with radioligand imaging, but severity of tremor correlates poorly in such dopamine imaging studies. Correlation of tremor severity to changes in radioligand studies is also limited by the use of mostly qualitative measures of tremor severity. A complex pharmacologic basis of parkinsonian resting tremor is supported by treatment studies. Although levodopa is clearly effective for resting tremor, several agents have shown efficacy that appears to be superior or additive to that of levodopa including anticholinergics, clozapine, pramipexole, and budipine. Although the thalamus has the greatest body of evidence supporting its role as an effective target for surgical treatment of tremor, recent studies suggest that the subthalamic nucleus may be a reasonable alternative target for patients with Parkinson's disease and severe tremor as the predominant symptom."
},
{
"pmid": "9021899",
"title": "Dopamine selects glutamatergic inputs to neostriatal neurons.",
"abstract": "Glutamatergic synaptic potentials induced by micromolar concentrations of the potassium conductance blocker 4-aminopyridine (4-AP) were recorded intracellularly from rat neostriatal neurons in the presence of 10 microM bicuculline (BIC). These synaptic potentials originate from neostriatal cortical and thalamic afferents and were completely blocked by 10 microM 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX) plus 100 microM D-2-amino-5-phosphonovaleric acid (2-APV). Their inter-event time intervals could be fitted to exponential distributions, suggesting that they are induced randomly. Their amplitude distributions had most counts around 1 mV and fewer counts with values up to 5 mV. Since input resistance of the recorded neurons is about 40 M omega, the amplitudes agree to quantal size measurements in mammalian central neurons. The action of a D2 agonist, quinpirole, was studied on the frequency of these events. Mean amplitude of synaptic potentials was preserved in the presence of 2-10 microM quinpirole, but the frequency of 4-AP-induced glutamatergic synaptic potentials was reduced in 35% of cases. The effect was blocked by the D2 antagonist sulpiride (10 microM). Input resistance, membrane potential, or firing threshold did not change during quinpirole effect, suggesting a presynaptic site of action for quinpirole in some but not all glutamatergic afferents that make contact on a single cell. The present experiments show that dopaminergic presynaptic modulation of glutamatergic transmission in the neostriatum does not affect all stimulated afferents, suggesting that it is selective towards some of them. This may control the quality and quantity of afferent flow upon neostriatal neurons."
},
{
"pmid": "15701239",
"title": "Dynamic dopamine modulation in the basal ganglia: a neurocomputational account of cognitive deficits in medicated and nonmedicated Parkinsonism.",
"abstract": "Dopamine (DA) depletion in the basal ganglia (BG) of Parkinson's patients gives rise to both frontal-like and implicit learning impairments. Dopaminergic medication alleviates some cognitive deficits but impairs those that depend on intact areas of the BG, apparently due to DA ''overdose.'' These findings are difficult to accommodate with verbal theories of BG/DA function, owing to complexity of system dynamics: DA dynamically modulates function in the BG, which is itself a modulatory system. This article presents a neural network model that instantiates key biological properties and provides insight into the underlying role of DA in the BG during learning and execution of cognitive tasks. Specifically, the BG modulates the execution of ''actions'' (e.g., motor different parts of the frontal cortex. Phasic changes in DA, which occur during error feedback, dynamically modulate the BG threshold for facilitating/suppressing a cortical command in response to particular stimuli. Reduced dynamic range of DA explains Parkinson and DA overdose deficits with a single underlying dysfunction, despite overall differences in raw DA levels. Simulated Parkinsonism and medication effects provide a theoretical basis for behavioral data in probabilistic classification and reversal tasks. The model also provides novel testable predictions for neuropsychological and pharmacological studies, and motivates further investigation of BG/DA interactions with the prefrontal cortex in working memory."
},
{
"pmid": "12397440",
"title": "Three-dimensional electrophysiological topography of the rat corticostriatal system.",
"abstract": "Projections from the cerebral cortex are the major afferents of the caudoputamen and probably determine the functions subserved by each region of the nucleus. The corticostriatal system has been mapped using cytological techniques which give little information on the physiological importance of projections from individual cortical areas. The objective of this study was to characterize the three-dimensional topography of the corticostriatal system in the rat and to determine the physiological significance of these projections using electrophysiological techniques. Eight functionally distinct areas of the cerebral cortex (prefrontal, primary motor, rostral and caudal primary somatosensory, hindlimb, auditory, occipital and primary visual) were stimulated while recording the multiple unit activity in seven dorsal and seven ventral areas of the caudoputamen. Each stimulation site produced a distinctive pattern of activation within the caudoputamen. There was also a large site-dependent variation in electrophysiological activation produced by each stimulation. The motor and somatosensory areas produced the most powerful overall activation. In addition, a number of trends were obvious. There was a rostrocaudal topographical relationship between the site of stimulation and the area of the caudoputamen activated. Furthermore, more caudally and medially placed stimulation sites produced greater dorsal activation of the caudoputamen relative to ventral."
},
{
"pmid": "9221793",
"title": "Prolonged and extrasynaptic excitatory action of dopamine mediated by D1 receptors in the rat striatum in vivo.",
"abstract": "The spatiotemporal characteristics of the dopaminergic transmission mediated by D1 receptors were investigated in vivo. For this purpose dopamine (DA) release was evoked in the striatum of anesthetized rats by train electrical stimulations of the medial forebrain bundle (one to four pulses at 15 Hz), which mimicked the spontaneous activity of dopaminergic neurons. The resulting dopamine overflow was electrochemically monitored in real time in the extracellular space. This evoked DA release induced a delayed increase in discharge activity in a subpopulation of single striatal neurons. This excitation was attributable to stimulation of D1 receptors by released DA because it was abolished by acute 6-hydroxydopamine lesion and strongly reduced by the D1 antagonist SCH 23390. Striatal neurons exhibiting this delayed response were also strongly excited by intravenous administration of the D1 agonist SKF 82958. Whereas the DA overflow was closely time-correlated with stimulation, the excitatory response mediated by DA started 200 msec after release and lasted for up to 1 sec. Moreover, functional evidence presented here combined with previous morphological data show that D1 receptors are stimulated by DA diffusing up to 12 micron away from release sites in the extrasynaptic extracellular space. In conclusion, DA released by bursts of action potentials exerts, via D1 receptors, a delayed and prolonged excitatory influence on target neurons. This phasic transmission occurs outside synaptic clefts but still exhibits a high degree of spatial specificity."
},
{
"pmid": "11417052",
"title": "A computational model of action selection in the basal ganglia. I. A new functional anatomy.",
"abstract": "We present a biologically plausible model of processing intrinsic to the basal ganglia based on the computational premise that action selection is a primary role of these central brain structures. By encoding the propensity for selecting a given action in a scalar value (the salience), it is shown that action selection may be recast in terms of signal selection. The generic properties of signal selection are defined and neural networks for this type of computation examined. A comparison between these networks and basal ganglia anatomy leads to a novel functional decomposition of the basal ganglia architecture into 'selection' and 'control' pathways. The former pathway performs the selection per se via a feedforward off-centre on-surround network. The control pathway regulates the action of the selection pathway to ensure its effective operation, and synergistically complements its dopaminergic modulation. The model contrasts with the prevailing functional segregation of basal ganglia into 'direct' and 'indirect' pathways."
},
{
"pmid": "15271492",
"title": "Computational models of the basal ganglia: from robots to membranes.",
"abstract": "With the rapid accumulation of neuroscientific data comes a pressing need to develop models that can explain the computational processes performed by the basal ganglia. Relevant biological information spans a range of structural levels, from the activity of neuronal membranes to the role of the basal ganglia in overt behavioural control. This viewpoint presents a framework for understanding the aims, limitations and methods for testing of computational models across all structural levels. We identify distinct modelling strategies that can deliver important and complementary insights into the nature of problems the basal ganglia have evolved to solve, and describe methods that are used to solve them."
},
{
"pmid": "19162084",
"title": "A neurocomputational model of tonic and phasic dopamine in action selection: a comparison with cognitive deficits in Parkinson's disease.",
"abstract": "The striatal dopamine signal has multiple facets; tonic level, phasic rise and fall, and variation of the phasic rise/fall depending on the expectation of reward/punishment. We have developed a network model of the striatal direct pathway using an ionic current level model of the medium spiny neuron that incorporates currents sensitive to changes in the tonic level of dopamine. The model neurons in the network learn action selection based on a novel set of mathematical rules that incorporate the phasic change in the dopamine signal. This network model is capable of learning to perform a sequence learning task that in humans is thought to be dependent on the basal ganglia. When both tonic and phasic levels of dopamine are decreased, as would be expected in unmedicated Parkinson's disease (PD), the model reproduces the deficits seen in a human PD group off medication. When the tonic level is increased to normal, but with reduced phasic increases and decreases in response to reward and punishment, respectively, as would be expected in PD medicated with L-Dopa, the model again reproduces the human data. These findings support the view that the cognitive dysfunctions seen in Parkinson's disease are not solely either due to the decreased tonic level of dopamine or to the decreased responsiveness of the phasic dopamine signal to reward and punishment, but to a combination of the two factors that varies dependent on disease stage and medication status."
},
{
"pmid": "23834737",
"title": "Power spectral density analysis of physiological, rest and action tremor in Parkinson's disease patients treated with deep brain stimulation.",
"abstract": "BACKGROUND\nObservation of the signals recorded from the extremities of Parkinson's disease patients showing rest and/or action tremor reveal a distinct high power resonance peak in the frequency band corresponding to tremor. The aim of the study was to investigate, using quantitative measures, how clinically effective and less effective deep brain stimulation protocols redistribute movement power over the frequency bands associated with movement, pathological and physiological tremor, and whether normal physiological tremor may reappear during those periods that tremor is absent.\n\n\nMETHODS\nThe power spectral density patterns of rest and action tremor were studied in 7 Parkinson's disease patients treated with (bilateral) deep brain stimulation of the subthalamic nucleus. Two tests were carried out: 1) the patient was sitting at rest; 2) the patient performed a hand or foot tapping movement. Each test was repeated four times for each extremity with different stimulation settings applied during each repetition. Tremor intermittency was taken into account by classifying each 3-second window of the recorded angular velocity signals as a tremor or non-tremor window.\n\n\nRESULTS\nThe distribution of power over the low frequency band (<3.5 Hz - voluntary movement), tremor band (3.5-7.5 Hz) and high frequency band (>7.5 Hz - normal physiological tremor) revealed that rest and action tremor show a similar power-frequency shift related to tremor absence and presence: when tremor is present most power is contained in the tremor frequency band; when tremor is absent lower frequencies dominate. Even under resting conditions a relatively large low frequency component became prominent, which seemed to compensate for tremor. Tremor absence did not result in the reappearance of normal physiological tremor.\n\n\nCONCLUSION\nParkinson's disease patients continuously balance between tremor and tremor suppression or compensation expressed by power shifts between the low frequency band and the tremor frequency band during rest and voluntary motor actions. This balance shows that the pathological tremor is either on or off, with the latter state not resembling that of a healthy subject. Deep brain stimulation can reverse the balance thereby either switching tremor on or off."
},
{
"pmid": "29119634",
"title": "The cerebral basis of Parkinsonian tremor: A network perspective.",
"abstract": "Tremor in Parkinson's disease is a poorly understood sign. Although it is one of the clinical hallmarks of the disease, its pathophysiology remains unclear. It is clear that tremor involves different neural mechanisms than bradykinesia and rigidity, the other core motor signs of Parkinson's disease. In particular, the role of dopamine in tremor has been heavily debated given clinical observations that tremor has a variable response to dopaminergic medication. From a neuroscience perspective, tremor is also a special sign; unlike other motor signs, it has a clear electrophysiological signature (frequency, phase, and power). These unique features of tremor, and newly available neuroimaging methods, have sparked investigations into the pathophysiology of tremor. In this review, evidence will be discussed for the idea that parkinsonian tremor results from increased interactions between the basal ganglia and the cerebello-thalamo-cortical circuit, driven by altered dopaminergic projections to nodes within both circuits, and modulated by context-dependent factors, such as psychological stress. Models that incorporate all of these features may help our understanding of the pathophysiology of tremor and interindividual differences between patients. One example that will be discussed in this article is the \"dimmer-switch\" model. According to this model, cerebral activity related to parkinsonian tremor first arises in the basal ganglia and is then propagated to the cerebello-thalamo-cortical circuit, where the tremor rhythm is maintained and amplified. In the future, detailed knowledge about the architecture of the tremor circuitry in individual patients (\"tremor fingerprints\") may provide new, mechanism-based treatments for this debilitating motor sign. © 2017 International Parkinson and Movement Disorder Society."
},
{
"pmid": "22382359",
"title": "Cerebral causes and consequences of parkinsonian resting tremor: a tale of two circuits?",
"abstract": "Tremor in Parkinson's disease has several mysterious features. Clinically, tremor is seen in only three out of four patients with Parkinson's disease, and tremor-dominant patients generally follow a more benign disease course than non-tremor patients. Pathophysiologically, tremor is linked to altered activity in not one, but two distinct circuits: the basal ganglia, which are primarily affected by dopamine depletion in Parkinson's disease, and the cerebello-thalamo-cortical circuit, which is also involved in many other tremors. The purpose of this review is to integrate these clinical and pathophysiological features of tremor in Parkinson's disease. We first describe clinical and pathological differences between tremor-dominant and non-tremor Parkinson's disease subtypes, and then summarize recent studies on the pathophysiology of tremor. We also discuss a newly proposed 'dimmer-switch model' that explains tremor as resulting from the combined actions of two circuits: the basal ganglia that trigger tremor episodes and the cerebello-thalamo-cortical circuit that produces the tremor. Finally, we address several important open questions: why resting tremor stops during voluntary movements, why it has a variable response to dopaminergic treatment, why it indicates a benign Parkinson's disease subtype and why its expression decreases with disease progression."
},
{
"pmid": "10893428",
"title": "Role of the basal ganglia in the control of purposive saccadic eye movements.",
"abstract": "In addition to their well-known role in skeletal movements, the basal ganglia control saccadic eye movements (saccades) by means of their connection to the superior colliculus (SC). The SC receives convergent inputs from cerebral cortical areas and the basal ganglia. To make a saccade to an object purposefully, appropriate signals must be selected out of the cortical inputs, in which the basal ganglia play a crucial role. This is done by the sustained inhibitory input from the substantia nigra pars reticulata (SNr) to the SC. This inhibition can be removed by another inhibition from the caudate nucleus (CD) to the SNr, which results in a disinhibition of the SC. The basal ganglia have another mechanism, involving the external segment of the globus pallidus and the subthalamic nucleus, with which the SNr-SC inhibition can further be enhanced. The sensorimotor signals carried by the basal ganglia neurons are strongly modulated depending on the behavioral context, which reflects working memory, expectation, and attention. Expectation of reward is a critical determinant in that the saccade that has been rewarded is facilitated subsequently. The interaction between cortical and dopaminergic inputs to CD neurons may underlie the behavioral adaptation toward purposeful saccades."
},
{
"pmid": "25954517",
"title": "Dopamine receptors and Parkinson's disease.",
"abstract": "Parkinson's disease (PD) is a progressive extrapyramidal motor disorder. Pathologically, this disease is characterized by the selective dopaminergic (DAergic) neuronal degeneration in the substantia nigra. Correcting the DA deficiency in PD with levodopa (L-dopa) significantly attenuates the motor symptoms; however, its effectiveness often declines, and L-dopa-related adverse effects emerge after long-term treatment. Nowadays, DA receptor agonists are useful medication even regarded as first choice to delay the starting of L-dopa therapy. In advanced stage of PD, they are also used as adjunct therapy together with L-dopa. DA receptor agonists act by stimulation of presynaptic and postsynaptic DA receptors. Despite the usefulness, they could be causative drugs for valvulopathy and nonmotor complication such as DA dysregulation syndrome (DDS). In this paper, physiological characteristics of DA receptor familyare discussed. We also discuss the validity, benefits, and specific adverse effects of pharmaceutical DA receptor agonist."
},
{
"pmid": "25099916",
"title": "Origins and suppression of oscillations in a computational model of Parkinson's disease.",
"abstract": "Efficacy of deep brain stimulation (DBS) for motor signs of Parkinson's disease (PD) depends in part on post-operative programming of stimulus parameters. There is a need for a systematic approach to tuning parameters based on patient physiology. We used a physiologically realistic computational model of the basal ganglia network to investigate the emergence of a 34 Hz oscillation in the PD state and its optimal suppression with DBS. Discrete time transfer functions were fit to post-stimulus time histograms (PSTHs) collected in open-loop, by simulating the pharmacological block of synaptic connections, to describe the behavior of the basal ganglia nuclei. These functions were then connected to create a mean-field model of the closed-loop system, which was analyzed to determine the origin of the emergent 34 Hz pathological oscillation. This analysis determined that the oscillation could emerge from the coupling between the globus pallidus external (GPe) and subthalamic nucleus (STN). When coupled, the two resonate with each other in the PD state but not in the healthy state. By characterizing how this oscillation is affected by subthreshold DBS pulses, we hypothesize that it is possible to predict stimulus frequencies capable of suppressing this oscillation. To characterize the response to the stimulus, we developed a new method for estimating phase response curves (PRCs) from population data. Using the population PRC we were able to predict frequencies that enhance and suppress the 34 Hz pathological oscillation. This provides a systematic approach to tuning DBS frequencies and could enable closed-loop tuning of stimulation parameters."
},
{
"pmid": "27398617",
"title": "Rapid signalling in distinct dopaminergic axons during locomotion and reward.",
"abstract": "Dopaminergic projection axons from the midbrain to the striatum are crucial for motor control, as their degeneration in Parkinson disease results in profound movement deficits. Paradoxically, most recording methods report rapid phasic dopamine signalling (~100-ms bursts) in response to unpredicted rewards, with little evidence for movement-related signalling. The leading model posits that phasic signalling in striatum-targeting dopamine neurons drives reward-based learning, whereas slow variations in firing (tens of seconds to minutes) in these same neurons bias animals towards or away from movement. However, current methods have provided little evidence to support or refute this model. Here, using new optical recording methods, we report the discovery of rapid phasic signalling in striatum-targeting dopaminergic axons that is associated with, and capable of triggering, locomotion in mice. Axons expressing these signals were largely distinct from those that responded to unexpected rewards. These results suggest that dopaminergic neuromodulation can differentially impact motor control and reward learning with sub-second precision, and indicate that both precise signal timing and neuronal subtype are important parameters to consider in the treatment of dopamine-related disorders."
},
{
"pmid": "29666208",
"title": "Insights into Parkinson's disease from computational models of the basal ganglia.",
"abstract": "Movement disorders arise from the complex interplay of multiple changes to neural circuits. Successful treatments for these disorders could interact with these complex changes in myriad ways, and as a consequence their mechanisms of action and their amelioration of symptoms are incompletely understood. Using Parkinson's disease as a case study, we review here how computational models are a crucial tool for taming this complexity, across causative mechanisms, consequent neural dynamics and treatments. For mechanisms, we review models that capture the effects of losing dopamine on basal ganglia function; for dynamics, we discuss models that have transformed our understanding of how beta-band (15-30 Hz) oscillations arise in the parkinsonian basal ganglia. For treatments, we touch on the breadth of computational modelling work trying to understand the therapeutic actions of deep brain stimulation. Collectively, models from across all levels of description are providing a compelling account of the causes, symptoms and treatments for Parkinson's disease."
},
{
"pmid": "17167083",
"title": "A physiologically plausible model of action selection and oscillatory activity in the basal ganglia.",
"abstract": "The basal ganglia (BG) have long been implicated in both motor function and dysfunction. It has been proposed that the BG form a centralized action selection circuit, resolving conflict between multiple neural systems competing for access to the final common motor pathway. We present a new spiking neuron model of the BG circuitry to test this proposal, incorporating all major features and many physiologically plausible details. We include the following: effects of dopamine in the subthalamic nucleus (STN) and globus pallidus (GP), transmission delays between neurons, and specific distributions of synaptic inputs over dendrites. All main parameters were derived from experimental studies. We find that the BG circuitry supports motor program selection and switching, which deteriorates under dopamine-depleted and dopamine-excessive conditions in a manner consistent with some pathologies associated with those dopamine states. We also validated the model against data describing oscillatory properties of BG. We find that the same model displayed detailed features of both gamma-band (30-80 Hz) and slow (approximately 1 Hz) oscillatory phenomena reported by Brown et al. (2002) and Magill et al. (2001), respectively. Only the parameters required to mimic experimental conditions (e.g., anesthetic) or manipulations (e.g., lesions) were changed. From the results, we derive the following novel predictions about the STN-GP feedback loop: (1) the loop is functionally decoupled by tonic dopamine under normal conditions and recoupled by dopamine depletion; (2) the loop does not show pacemaking activity under normal conditions in vivo (but does after combined dopamine depletion and cortical lesion); (3) the loop has a resonant frequency in the gamma-band."
},
{
"pmid": "18244602",
"title": "Simple model of spiking neurons.",
"abstract": "A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biologically plausibility of Hodgkin-Huxley-type dynamics and the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC."
},
{
"pmid": "1822537",
"title": "Membrane properties and synaptic responses of rat striatal neurones in vitro.",
"abstract": "1. A tissue slice containing a section of striatum was cut obliquely from rat brain so as to preserve adjacent cortex and pallidum. Intracellular recordings were made from 368 neurones, using either conventional or tight-seal configurations. 2. Two types of neurone were distinguished electrophysiologically. Principal cells (96%) had very negative resting potentials (-89 mV) and a low input resistance at the resting membrane potential (39 M omega): membrane conductance (10 nS at -65 mV) increased within tens of milliseconds after the onset of hyperpolarization (99 nS at -120 mV). Secondary cells (4%) had less negative resting potentials (-60 mV) and a higher input resistance (117 m omega at the resting potential): hyperpolarization caused an inward current to develop over hundreds of milliseconds that had the properties of H-current. 3. Most principal cells were activated antidromically by electrical stimulation of the globus pallidus or internal capsule. Intracellular labelling with biocytin showed that principal cells had a medium sized soma (10-18 microns), extensive dendritic trees densely studded with spines and, in some cases, a main axon which extended towards the globus pallidus. 4. Electrical stimulation of the corpus callosum or external capsule evoked a depolarizing postsynaptic potential. This synaptic potential was reversibly blocked by a combination of 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX, 10 microM) and DL-2-amino-5-phosphonovaleric acid (APV, 30 microM), but was unaffected by bicuculline (30 microM) and picrotoxin (100 microM). The underlying synaptic current had a fast component (time to peak about 4 ms), the amplitude of which was linearly related to membrane potential and which was blocked by CNQX; in CNQX the synaptic current had a slower component (time to peak about 10 ms) which showed voltage dependence typical of N-methyl-D-aspartate (NMDA) receptors. Both currents reversed at -5 mV. 5. Focal electrical stimulation within the striatum (100-300 microns from the site of intracellular recording) evoked a synaptic potential that was partially blocked (45-95%) by CNQX and APV: the remaining synaptic potential was blocked by bicuculline (30 microM). The bicuculline-sensitive synaptic current reversed at the chloride equilibrium potential. 6. The findings confirm that the majority of neostriatal neurones (principal cells, medium spiny neurones) project to the pallidum and receive synaptic inputs from cerebral cortex mediated by an excitatory amino acid acting through NMDA and non-NMDA receptors. These cells also receive synaptic inputs from intrinsic striatal neurones mediated by GABA.(ABSTRACT TRUNCATED AT 400 WORDS)"
},
{
"pmid": "20651684",
"title": "Start/stop signals emerge in nigrostriatal circuits during sequence learning.",
"abstract": "Learning new action sequences subserves a plethora of different abilities such as escaping a predator, playing the piano, or producing fluent speech. Proper initiation and termination of each action sequence is critical for the organization of behaviour, and is compromised in nigrostriatal disorders like Parkinson's and Huntington's diseases. Using a self-paced operant task in which mice learn to perform a particular sequence of actions to obtain an outcome, we found neural activity in nigrostriatal circuits specifically signalling the initiation or the termination of each action sequence. This start/stop activity emerged during sequence learning, was specific for particular actions, and did not reflect interval timing, movement speed or action value. Furthermore, genetically altering the function of striatal circuits disrupted the development of start/stop activity and selectively impaired sequence learning. These results have important implications for understanding the functional organization of actions and the sequence initiation and termination impairments observed in basal ganglia disorders."
},
{
"pmid": "18394571",
"title": "Mechanisms and targets of deep brain stimulation in movement disorders.",
"abstract": "Chronic electrical stimulation of the brain, known as deep brain stimulation (DBS), has become a preferred surgical treatment for medication-refractory movement disorders. Despite its remarkable clinical success, the therapeutic mechanisms of DBS are still not completely understood, limiting opportunities to improve treatment efficacy and simplify selection of stimulation parameters. This review addresses three questions essential to understanding the mechanisms of DBS. 1) How does DBS affect neuronal tissue in the vicinity of the active electrode or electrodes? 2) How do these changes translate into therapeutic benefit on motor symptoms? 3) How do these effects depend on the particular site of stimulation? Early hypotheses proposed that stimulation inhibited neuronal activity at the site of stimulation, mimicking the outcome of ablative surgeries. Recent studies have challenged that view, suggesting that although somatic activity near the DBS electrode may exhibit substantial inhibition or complex modulation patterns, the output from the stimulated nucleus follows the DBS pulse train by direct axonal excitation. The intrinsic activity is thus replaced by high-frequency activity that is time-locked to the stimulus and more regular in pattern. These changes in firing pattern are thought to prevent transmission of pathologic bursting and oscillatory activity, resulting in the reduction of disease symptoms through compensatory processing of sensorimotor information. Although promising, this theory does not entirely explain why DBS improves motor symptoms at different latencies. Understanding these processes on a physiological level will be critically important if we are to reach the full potential of this powerful tool."
},
{
"pmid": "25904081",
"title": "Parkinson's disease.",
"abstract": "Parkinson's disease is a neurological disorder with evolving layers of complexity. It has long been characterised by the classical motor features of parkinsonism associated with Lewy bodies and loss of dopaminergic neurons in the substantia nigra. However, the symptomatology of Parkinson's disease is now recognised as heterogeneous, with clinically significant non-motor features. Similarly, its pathology involves extensive regions of the nervous system, various neurotransmitters, and protein aggregates other than just Lewy bodies. The cause of Parkinson's disease remains unknown, but risk of developing Parkinson's disease is no longer viewed as primarily due to environmental factors. Instead, Parkinson's disease seems to result from a complicated interplay of genetic and environmental factors affecting numerous fundamental cellular processes. The complexity of Parkinson's disease is accompanied by clinical challenges, including an inability to make a definitive diagnosis at the earliest stages of the disease and difficulties in the management of symptoms at later stages. Furthermore, there are no treatments that slow the neurodegenerative process. In this Seminar, we review these complexities and challenges of Parkinson's disease."
},
{
"pmid": "27266635",
"title": "Default mode network differences between rigidity- and tremor-predominant Parkinson's disease.",
"abstract": "BACKGROUND\nParkinson's disease (PD) traditionally is characterized by tremor, rigidity, and bradykinesia, although cognitive impairment also is a common symptom. The clinical presentation of PD is heterogeneous and associated with different risk factors for developing cognitive impairment. PD patients with primary akinetic/rigidity (PDAR) are more likely to develop cognitive deficits compared to those with tremor-predominant symptoms (PDT). Because cognitive impairment in PD appears to be related to changes in the default mode network (DMN), this study tested the hypothesis that DMN integrity is different between PDAR and PDT subtypes.\n\n\nMETHOD\nResting state fMRI (rs-fMRI) and whole brain volumetric data were obtained from 17 PDAR, 15 PDT and 24 healthy controls (HCs) using a 3T scanner. PD patients were matched closely to HCs for demographic and cognitive variables, and showed no symptoms of dementia. Voxel-based morphometry (VBM) was used to examine brain gray matter (GM) volume changes between groups. Independent component analysis (ICA) interrogated differences in the DMN among PDAR, PDT, and HC.\n\n\nRESULTS\nThere was decreased activity in the left inferior parietal cortex (IPC) and the left posterior cingulate cortex (PCC) within the DMN between PDAR and both HC and PDT subjects, even after controlling for multiple comparisons, but not between PDT and HC. GM volume differences between groups were detected at a lower threshold (p < 0.001, uncorrected). Resting state activity in IPC and PCC were correlated with some measures of cognitive performance in PD but not in HC.\n\n\nCONCLUSION\nThis is the first study to demonstrate DMN differences between cognitively comparable PDAR and PDT subtypes. The DMN differences between PD and HC appear to be driven by the PDAR subtype. Further studies are warranted to understand the underlying neural mechanisms and their relevance to clinical and cognitive outcomes in PDAR and PDT subtypes."
},
{
"pmid": "6303502",
"title": "Pallidal inputs to subthalamus: intracellular analysis.",
"abstract": "Neuronal responses of the subthalamic nucleus (STH) to stimulation of the globus pallidus (GP) and the substantia nigra (SN) were studied by intracellular recording in the decorticated rat. (1) GP and SN stimulation evoked antidromic spikes in STH neurons with a mean latency of 1.2 ms and 1.1 ms, respectively. Based on the above latencies, the mean conduction velocity of the STH neurons projecting toward GP was estimated to be 2.5 m/s, and that toward SN was 1.4 m/s. Many STH neurons could be activated following stimulation of both GP and SN, indicating that single STH neurons project to two diversely distant areas. In spite of differences in conduction distance of GP and SN from STH, differences in the conduction velocities of bifurcating axons make it possible for a simultaneous arrival of impulses in the target areas to which these STH neurons project. (2) GP stimulation evoked short duration (5-24 ms) hyperpolarizing potentials which were usually followed by depolarizing potentials with durations of 10-20 ms. These potentials were tested by intracellular current applications and intracellular injections of chloride ions. The results indicated that the hyper- and depolarizing potentials were IPSPs and EPSPs respectively. These IPSPs were considered to be monosynaptic in nature since changes in the stimulus intensities of GP did not alter the latency of IPSPs. The mean latency of the IPSPs was 1.3 ms. Based on the above mean latency the mean conduction velocity of GP axons projecting to STH was estimated to be 3.8 m/s. (3) Analysis of electrical properties of STH neurons indicated that: (i) input resistance estimated by a current-voltage relationship ranged from 9 to 28 M omega; (ii) the membrane showed rectification in the hyperpolarizing direction; (iii) direct stimulation of neurons by depolarizing current pulses produced repetitive firings with frequencies up to 500 Hz. (4) Morphology of the recorded STH neurons was identified by intracellular labeling of neurons with horseradish peroxidase. Light microscopic analysis indicated that the recorded neurons were Golgi type I neurons with bifurcating axons projecting toward GP and SN."
},
{
"pmid": "1810628",
"title": "Intracellular study of rat globus pallidus neurons: membrane properties and responses to neostriatal, subthalamic and nigral stimulation.",
"abstract": "Physiological properties of globus pallidus (GP) neurons were studied intracellularly in anesthetized rats. More than 70% of the neurons exhibited continuous repetitive firing of 2-40 Hz, while others exhibited periodic burst firing or no firing. The repetitively firing neurons exhibited the following properties: spike accommodation; spike frequency adaptation; continuous firing with a frequency of about 100 Hz generated by intracellular current injections; fast anomalous rectification; ramp-shaped depolarization upon injection of depolarizing current; and post-active hyperpolarization. The burst firing neurons evoked a large depolarization with multiple spikes in response to depolarizing current, and a similar response was observed after the termination of hyperpolarizing current. The few neurons which did not fire spontaneous spikes exhibited strong spike accommodation when they were stimulated by current injections. The continuously firing neurons were antidromically activated by stimulation of the neostriatum (Str) (23 of 68), the subthalamic nucleus (STh) (55 of 75), and the substantia nigra (SN) (25 of 46). The antidromic latencies of the 3 stimulus sites were very similar (about 1 ms). None of the burst firing neurons were antidromically activated. Three non-firing neurons evoked antidromic responses only after Str stimulation. Only repetitively firing neurons evoked postsynaptic responses following stimulation of the Str and the STh. Stimulation of the Str evoked initial small EPSPs with latencies of 2-4 ms and strong, short duration IPSPs with latencies of 2-12 ms. Stimulation of the STh evoked short latency EPSPs overlapped with IPSPs. Frequently, these responses induced by Str and STh stimulation were followed by other EPSPs lasting 50-100 ms. These results indicated: (1) that the GP contains at least 3 electrophysiologically different types of neurons; (2) that GP projections to the Str, the STh, and the SN are of short latency pathways; (3) that Str stimulation evokes short latency EPSPs followed by IPSPs and late EPSPs in GP neurons; and (4) that STh stimulation evokes short latency EPSPs overlapped with short latency IPSPs and late EPSPs in GP neurons."
},
{
"pmid": "22028684",
"title": "The role of inhibition in generating and controlling Parkinson's disease oscillations in the Basal Ganglia.",
"abstract": "Movement disorders in Parkinson's disease (PD) are commonly associated with slow oscillations and increased synchrony of neuronal activity in the basal ganglia. The neural mechanisms underlying this dynamic network dysfunction, however, are only poorly understood. Here, we show that the strength of inhibitory inputs from striatum to globus pallidus external (GPe) is a key parameter controlling oscillations in the basal ganglia. Specifically, the increase in striatal activity observed in PD is sufficient to unleash the oscillations in the basal ganglia. This finding allows us to propose a unified explanation for different phenomena: absence of oscillation in the healthy state of the basal ganglia, oscillations in dopamine-depleted state and quenching of oscillations under deep-brain-stimulation (DBS). These novel insights help us to better understand and optimize the function of DBS protocols. Furthermore, studying the model behavior under transient increase of activity of the striatal neurons projecting to the indirect pathway, we are able to account for both motor impairment in PD patients and for reduced response inhibition in DBS implanted patients."
},
{
"pmid": "16571765",
"title": "Competition between feedback loops underlies normal and pathological dynamics in the basal ganglia.",
"abstract": "Experiments performed in normal animals suggest that the basal ganglia (BG) are crucial in motor program selection. BG are also involved in movement disorders. In particular, BG neuronal activity in parkinsonian animals and patients is more oscillatory and more synchronous than in normal individuals. We propose a new model for the function and dysfunction of the motor part of BG. We hypothesize that the striatum, the subthalamic nucleus, the internal pallidum (GPi), the thalamus, and the cortex are involved in closed feedback loops. The direct (cortex-striatum-GPi-thalamus-cortex) and the hyperdirect loops (cortex-subthalamic nucleus-GPi-thalamus-cortex), which have different polarities, play a key role in the model. We show that the competition between these two loops provides the BG-cortex system with the ability to perform motor program selection. Under the assumption that dopamine potentiates corticostriatal synaptic transmission, we demonstrate that, in our model, moderate dopamine depletion leads to a complete loss of action selection ability. High depletion can lead to synchronous oscillations. These modifications of the network dynamical state stem from an imbalance between the feedback in the direct and hyperdirect loops when dopamine is depleted. Our model predicts that the loss of selection ability occurs before the appearance of oscillations, suggesting that Parkinson's disease motor impairments are not directly related to abnormal oscillatory activity. Another major prediction of our model is that synchronous oscillations driven by the hyperdirect loop appear in BG after inactivation of the striatum."
},
{
"pmid": "21223899",
"title": "Synchronized neuronal oscillations and their role in motor processes.",
"abstract": "Recant data on the relationship of brain rhythms and the simultaneous oscillatory discharge of single units to motor preparation and performance have largely come from monkey and human studies and have failed to converge on a function. However, when these data are viewed in the context of older data from cats and rodents, some consistent patterns begin to emerge. Synchronous oscillatory activity, at any frequency, may be an integrative sensorimotor mechanism for gathering information that can be used to guide subsequent motor actions. There is also considerable evidence that brain rhythms can entrain motor unit activity. It is not clear yet whether the latter influence is a means of organizing muscle phase relationships within motor acts, or is simply a 'test pulse' strategy for checking current muscle conditions. Moreover, although the traditional association of faster brain rhythms with higher levels of arousal remains valid, arousal levels are correlated so tightly with the dynamics of sensorimotor control that it may not be possible to dissociate the two."
},
{
"pmid": "11566503",
"title": "Dopamine regulates the impact of the cerebral cortex on the subthalamic nucleus-globus pallidus network.",
"abstract": "The subthalamic nucleus-globus pallidus network plays a central role in basal ganglia function and dysfunction. To determine whether the relationship between activity in this network and the principal afferent of the basal ganglia, the cortex, is altered in a model of Parkinson's disease, we recorded unit activity in the subthalamic nucleus-globus pallidus network together with cortical electroencephalogram in control and 6-hydroxydopamine-lesioned rats under urethane anaesthesia. Subthalamic nucleus neurones in control and 6-hydroxydopamine-lesioned animals exhibited low-frequency oscillatory activity, which was tightly correlated with cortical slow-wave activity (approximately 1 Hz). The principal effect of dopamine depletion was that subthalamic nucleus neurones discharged more intensely (233% of control) and globus pallidus neurones developed low-frequency oscillatory firing patterns, without changes in mean firing rate. Ipsilateral cortical ablation largely abolished low-frequency oscillatory activity in the subthalamic nucleus and globus pallidus. These data suggest that abnormal low-frequency oscillatory activity in the subthalamic nucleus-globus pallidus network in the dopamine-depleted state is generated by the inappropriate processing of rhythmic cortical input. A component (15-20%) of the network still oscillated following cortical ablation in 6-hydroxydopamine-lesioned animals, implying that intrinsic properties may also pattern activity when dopamine levels are reduced. The response of the network to global activation was altered by 6-hydroxydopamine lesions. Subthalamic nucleus neurones were excited to a greater extent than in control animals and the majority of globus pallidus neurones were inhibited, in contrast to the excitation elicited in control animals. Inhibitory responses of globus pallidus neurones were abolished by cortical ablation, suggesting that the indirect pathway is augmented abnormally during activation of the dopamine-depleted brain. Taken together, these results demonstrate that both the rate and pattern of activity of subthalamic nucleus and globus pallidus neurones are altered profoundly by chronic dopamine depletion. Furthermore, the relative contribution of rate and pattern to aberrant information coding is intimately related to the state of activation of the cerebral cortex."
},
{
"pmid": "27366343",
"title": "Pathophysiology of Motor Dysfunction in Parkinson's Disease as the Rationale for Drug Treatment and Rehabilitation.",
"abstract": "Cardinal motor features of Parkinson's disease (PD) include bradykinesia, rest tremor, and rigidity, which appear in the early stages of the disease and largely depend on dopaminergic nigrostriatal denervation. Intermediate and advanced PD stages are characterized by motor fluctuations and dyskinesia, which depend on complex mechanisms secondary to severe nigrostriatal loss and to the problems related to oral levodopa absorption, and motor and nonmotor symptoms and signs that are secondary to marked dopaminergic loss and multisystem neurodegeneration with damage to nondopaminergic pathways. Nondopaminergic dysfunction results in motor problems, including posture, balance and gait disturbances, and fatigue, and nonmotor problems, encompassing depression, apathy, cognitive impairment, sleep disturbances, pain, and autonomic dysfunction. There are a number of symptomatic drugs for PD motor signs, but the pharmacological resources for nonmotor signs and symptoms are limited, and rehabilitation may contribute to their treatment. The present review will focus on classical notions and recent insights into the neuropathology, neuropharmacology, and neurophysiology of motor dysfunction of PD. These pieces of information represent the basis for the pharmacological, neurosurgical, and rehabilitative approaches to PD."
},
{
"pmid": "26537483",
"title": "Selection of cortical dynamics for motor behaviour by the basal ganglia.",
"abstract": "The basal ganglia and cortex are strongly implicated in the control of motor preparation and execution. Re-entrant loops between these two brain areas are thought to determine the selection of motor repertoires for instrumental action. The nature of neural encoding and processing in the motor cortex as well as the way in which selection by the basal ganglia acts on them is currently debated. The classic view of the motor cortex implementing a direct mapping of information from perception to muscular responses is challenged by proposals viewing it as a set of dynamical systems controlling muscles. Consequently, the common idea that a competition between relatively segregated cortico-striato-nigro-thalamo-cortical channels selects patterns of activity in the motor cortex is no more sufficient to explain how action selection works. Here, we contribute to develop the dynamical view of the basal ganglia-cortical system by proposing a computational model in which a thalamo-cortical dynamical neural reservoir is modulated by disinhibitory selection of the basal ganglia guided by top-down information, so that it responds with different dynamics to the same bottom-up input. The model shows how different motor trajectories can so be produced by controlling the same set of joint actuators. Furthermore, the model shows how the basal ganglia might modulate cortical dynamics by preserving coarse-grained spatiotemporal information throughout cortico-cortical pathways."
},
{
"pmid": "17611263",
"title": "Why don't we move faster? Parkinson's disease, movement vigor, and implicit motivation.",
"abstract": "People generally select a similar speed for a given motor task, such as reaching for a cup. One well established determinant of movement time is the speed-accuracy trade-off: movement time increases with the accuracy requirement. A second possible determinant is the energetic cost of making a movement. Parkinson's disease (PD), a condition characterized by generalized movement slowing (bradykinesia), provides the opportunity to directly explore this second possibility. We compared reaching movements of patients with PD with those of control subjects in a speed-accuracy trade-off task comprising conditions of increasing difficulty. Subjects completed as many trials as necessary to make 20 movements within a required speed range (trials to criterion, N(c)). Difficulty was reflected in endpoint accuracy and N(c). Patients were as accurate as control subjects in all conditions (i.e., PD did not affect the speed-accuracy trade-off). However, N(c) was consistently higher in patients, indicating reluctance to move fast although accuracy was not compromised. Specifically, the dependence of N(c) on movement energy cost (slope S(N)) was steeper in patients than in control subjects. This difference in S(N) suggests that bradykinesia represents an implicit decision not to move fast because of a shift in the cost/benefit ratio of the energy expenditure needed to move at normal speed. S(N) was less steep, but statistically significant, in control subjects, which demonstrates a role for energetic cost in the normal control of movement speed. We propose that, analogous to the established role of dopamine in explicit reward-seeking behavior, the dopaminergic projection to the striatum provides a signal for implicit \"motor motivation.\""
},
{
"pmid": "10719151",
"title": "Basal ganglia and cerebellar loops: motor and cognitive circuits.",
"abstract": "The traditional view that the basal ganglia and cerebellum are simply involved in the control of movement has been challenged in recent years. One of the pivotal reasons for this reappraisal has been new information about basal ganglia and cerebellar connections with the cerebral cortex. In essence, recent anatomical studies have revealed that these connections are organized into discrete circuits or 'loops'. Rather than serving as a means for widespread cortical areas to gain access to the motor system, these loops reciprocally interconnect a large and diverse set of cerebral cortical areas with the basal ganglia and cerebellum. The properties of neurons within the basal ganglia or cerebellar components of these circuits resembles the properties of neurons within the cortical areas subserved by these loops. For example, neuronal activity within basal ganglia and cerebellar loops with motor areas of the cerebral cortex is highly correlated with parameters of movement, while neuronal activity within basal ganglia and cerebellar loops with areas of the prefrontal cortex is more related to aspects of cognitive function. Thus, individual loops appear to be involved in distinct behavioral functions. Studies of basal ganglia and cerebellar pathology support this conclusion. Damage to the basal ganglia or cerebellar components of circuits with motor areas of cortex leads to motor symptoms, whereas damage of the subcortical components of circuits with non-motor areas of cortex causes higher-order deficits. In this report, we review some of the new anatomical, physiological and behavioral findings that have contributed to a reappraisal of function concerning the basal ganglia and cerebellar loops with the cerebral cortex."
},
{
"pmid": "8124079",
"title": "Basal ganglia intrinsic circuits and their role in behavior.",
"abstract": "There have been significant recent advances in the understanding of basal ganglia circuitry and its role in behavior. Important areas of work in the past year include, firstly, the role of striatal neurons in early phases of movement and, secondly, further characterization of the intrinsic circuitry with emphasis on the importance of the subthalamic nucleus and its connections. A conceptual model of basal ganglia inhibition of competing motor programs is discussed."
},
{
"pmid": "17706780",
"title": "Mechanisms of action of deep brain stimulation(DBS) .",
"abstract": "Deep brain stimulation (DBS) is remarkably effective for a range of neurological and psychiatric disorders that have failed pharmacological and cell transplant therapies. Clinical investigations are underway for a variety of other conditions. Yet, the therapeutic mechanisms of action are unknown. In addition, DBS research demonstrates the need to re-consider many hypotheses regarding basal ganglia physiology and pathophysiology such as the notion that increased activity in the globus pallidus internal segment is causal to Parkinson's disease symptoms. Studies reveal a variety of apparently discrepant results. At the least, it is unclear which DBS effects are therapeutically effective. This systematic review attempts to organize current DBS research into a series of unifying themes or issues such as whether the therapeutic effects are local or systems-wide or whether the effects are related to inhibition or excitation. A number of alternative hypotheses are offered for consideration including suppression of abnormal activity, striping basal ganglia output of misinformation, reduction of abnormal stochastic resonance effects due to increased noise in the disease state, and reinforcement of dynamic modulation of neuronal activity by resonance effects."
},
{
"pmid": "27422450",
"title": "Motor symptoms in Parkinson's disease: A unified framework.",
"abstract": "Parkinson's disease (PD) is characterized by a range of motor symptoms. Besides the cardinal symptoms (akinesia and bradykinesia, tremor and rigidity), PD patients show additional motor deficits, including: gait disturbance, impaired handwriting, grip force and speech deficits, among others. Some of these motor symptoms (e.g., deficits of gait, speech, and handwriting) have similar clinical profiles, neural substrates, and respond similarly to dopaminergic medication and deep brain stimulation (DBS). Here, we provide an extensive review of the clinical characteristics and neural substrates of each of these motor symptoms, to highlight precisely how PD and its medical and surgical treatments impact motor symptoms. In conclusion, we offer a unified framework for understanding the range of motor symptoms in PD. We argue that various motor symptoms in PD reflect dysfunction of neural structures responsible for action selection, motor sequencing, and coordination and execution of movement."
},
{
"pmid": "28979203",
"title": "Parkinson's Disease Subtypes Identified from Cluster Analysis of Motor and Non-motor Symptoms.",
"abstract": "Parkinson's disease is now considered a complex, multi-peptide, central, and peripheral nervous system disorder with considerable clinical heterogeneity. Non-motor symptoms play a key role in the trajectory of Parkinson's disease, from prodromal premotor to end stages. To understand the clinical heterogeneity of Parkinson's disease, this study used cluster analysis to search for subtypes from a large, multi-center, international, and well-characterized cohort of Parkinson's disease patients across all motor stages, using a combination of cardinal motor features (bradykinesia, rigidity, tremor, axial signs) and, for the first time, specific validated rater-based non-motor symptom scales. Two independent international cohort studies were used: (a) the validation study of the Non-Motor Symptoms Scale (n = 411) and (b) baseline data from the global Non-Motor International Longitudinal Study (n = 540). k-means cluster analyses were performed on the non-motor and motor domains (domains clustering) and the 30 individual non-motor symptoms alone (symptoms clustering), and hierarchical agglomerative clustering was performed to group symptoms together. Four clusters are identified from the domains clustering supporting previous studies: mild, non-motor dominant, motor-dominant, and severe. In addition, six new smaller clusters are identified from the symptoms clustering, each characterized by clinically-relevant non-motor symptoms. The clusters identified in this study present statistical confirmation of the increasingly important role of non-motor symptoms (NMS) in Parkinson's disease heterogeneity and take steps toward subtype-specific treatment packages."
},
{
"pmid": "20851193",
"title": "Parkinson's disease tremor-related metabolic network: characterization, progression, and treatment effects.",
"abstract": "The circuit changes that mediate parkinsonian tremor, while likely differing from those underlying akinesia and rigidity, are not precisely known. In this study, to identify a specific metabolic brain network associated with this disease manifestation, we used FDG PET to scan nine tremor dominant Parkinson's disease (PD) patients at baseline and during ventral intermediate (Vim) thalamic nucleus deep brain stimulation (DBS). Ordinal trends canonical variates analysis (OrT/CVA) was performed on the within-subject scan data to detect a significant spatial covariance pattern with consistent changes in subject expression during stimulation-mediated tremor suppression. The metabolic pattern was characterized by covarying increases in the activity of the cerebellum/dentate nucleus and primary motor cortex, and, to a less degree, the caudate/putamen. Vim stimulation resulted in consistent reductions in pattern expression (p<0.005, permutation test). In the absence of stimulation, pattern expression values (subject scores) correlated significantly (r=0.85, p<0.02) with concurrent accelerometric measurements of tremor amplitude. To validate this spatial covariance pattern as an objective network biomarker of PD tremor, we prospectively quantified its expression on an individual subject basis in independent PD populations. The resulting subject scores for this PD tremor-related pattern (PDTP) were found to exhibit: (1) excellent test-retest reproducibility (p<0.0001); (2) significant correlation with independent clinical ratings of tremor (r=0.54, p<0.001) but not akinesia-rigidity; and (3) significant elevations (p<0.02) in tremor dominant relative to atremulous PD patients. Following validation, we assessed the natural history of PDTP expression in early stage patients scanned longitudinally with FDG PET over a 4-year interval. Significant increases in PDTP expression (p<0.01) were evident in this cohort over time; rate of progression, however, was slower than for the PD-related akinesia/rigidity pattern (PDRP). We also determined whether PDTP expression is modulated by interventions specifically directed at parkinsonian tremor. While Vim DBS was associated with changes in PDTP (p<0.001) but not PDRP expression, subthalamic nucleus (STN) DBS reduced the activity of both networks (p<0.05). PDTP expression was suppressed more by Vim than by STN stimulation (p<0.05). These findings suggest that parkinsonian tremor is mediated by a distinct metabolic network involving primarily cerebello-thalamo-cortical pathways. Indeed, effective treatment of this symptom is associated with significant reduction in PDTP expression. Quantification of treatment-mediated changes in both PDTP and PDRP scores can provide an objective means of evaluating the differential effects of novel antiparkinsonian interventions on the different motor features of the disorder."
},
{
"pmid": "3427482",
"title": "Intracellular study of rat substantia nigra pars reticulata neurons in an in vitro slice preparation: electrical membrane properties and response characteristics to subthalamic stimulation.",
"abstract": "The electrical membrane properties of substantia nigra pars reticulata (SNR) neurons and their postsynaptic responses to stimulation of the subthalamic nucleus (STH) were studied in an in vitro slice preparation. SNR neurons were divided into two types based on their electrical membrane properties. Type-I neurons possessed (1) spontaneous repetitive firings, (2) short-duration action potentials, (3) less prominent spike accommodations, and (4) a strong delayed rectification during membrane depolarization. Type-II neurons had (1) no spontaneous firings, (2) long-duration action potentials, (3) a prominent spike accommodation, (4) a relatively large post-active hyperpolarization, and (5) a less prominent delayed rectification. These membrane properties were very similar to those observed in substantia nigra pars compacta (SNC) neurons in slice preparations. Features common to both types of neurons include that (1) the input resistance was similar, (2) they showed an anomalous rectification during strong hyperpolarizations, and (3) they were capable of generating Ca potentials. Intracellular responses of both types of SNR neurons to STH stimulation consisted of initial short-duration monosynaptic excitatory postsynaptic potentials (EPSPs) and a short-duration inhibitory postsynaptic potential (IPSP) followed by a long-duration depolarization. The IPSP was markedly suppressed by application of bicuculline methiodide and the polarity was reversed by intracellular injection of Cl-. In the preparations obtained from internal capsule-transected rats, STH-induced EPSPs had much longer durations than those observed in the normal preparations, while the amplitude of IPSPs and succeeding small-amplitude long-duration depolarizations was small. The results indicated that SNR contains two electrophysiologically different types of neurons, and that both types of neurons receive monosynaptic EPSPs from STH and IPSPs from areas rostral to STH."
},
{
"pmid": "9130783",
"title": "Electrophysiological studies of rat substantia nigra neurons in an in vitro slice preparation after middle cerebral artery occlusion.",
"abstract": "We studied sequential changes in electrophysiological profiles of the ipsilateral substantia nigra neurons in an in vitro slice preparation obtained from the middle cerebral artery-occluded rats. Histological examination revealed marked atrophy and neurodegeneration in the ipsilateral substantia nigra pars reticulata at 14 days after middle cerebral artery occlusion. Compared with the control group, there was no significant change in electrical membrane properties and synaptic responses of substantia nigra pars reticulata neurons examined at one to two weeks after middle cerebral artery occlusion. On the other hand, there was a significant increase in the input resistance and spontaneous firing rate of substantia nigra pars compacta neurons at 13-16 days after middle cerebral artery occlusion. Furthermore, inhibitory postsynaptic potentials evoked by stimulation of the subthalamus in substantia nigra pars compacta neurons was suppressed at five to eight days after middle cerebral artery occlusion. At the same time excitatory postsynaptic potentials evoked by the subthalamic stimulation was increased. Bath application of bicuculline methiodide (50 microM), a GABA(A) receptor antagonist, significantly increased the firing rate of substantia nigra pars compacta neurons from intact rats. These results strongly suggest that changes in electrophysiological responses observed in substantia nigra pars compacta neurons is caused by degeneration of GABAergic afferents from the substantia nigra pars reticulata following middle cerebral artery occlusion. While previous studies indirectly suggested that hyperexcitation due to deafferentation from the neostriatum may be a major underlying mechanism in delayed degeneration of substantia nigra pars reticulata neurons after middle cerebral artery occlusion, the present electrophysiological experiments provide evidence of hyperexcitation in substantia nigra pars compacta neurons but not in pars reticulata neurons at the chronic phase of striatal infarction."
},
{
"pmid": "12067746",
"title": "Functional significance of the cortico-subthalamo-pallidal 'hyperdirect' pathway.",
"abstract": "How the motor-related cortical areas modulate the activity of the output nuclei of the basal ganglia is an important issue for understanding the mechanisms of motor control by the basal ganglia. The cortico-subthalamo-pallidal 'hyperdirect' pathway conveys powerful excitatory effects from the motor-related cortical areas to the globus pallidus, bypassing the striatum, with shorter conduction time than effects conveyed through the striatum. We emphasize the functional significance of the 'hyperdirect' pathway and propose a dynamic 'center-surround model' of basal ganglia function in the control of voluntary limb movements. When a voluntary movement is about to be initiated by cortical mechanisms, a corollary signal conveyed through the cortico-subthalamo-pallidal 'hyperdirect' pathway first inhibits large areas of the thalamus and cerebral cortex that are related to both the selected motor program and other competing programs. Then, another corollary signal through the cortico-striato-pallidal 'direct' pathway disinhibits their targets and releases only the selected motor program. Finally, the third corollary signal possibly through the cortico-striato-external pallido-subthalamo-internal pallidal 'indirect' pathway inhibits their targets extensively. Through this sequential information processing, only the selected motor program is initiated, executed and terminated at the selected timing, whereas other competing programs are canceled."
},
{
"pmid": "17031711",
"title": "Tonic dopamine: opportunity costs and the control of response vigor.",
"abstract": "RATIONALE\nDopamine neurotransmission has long been known to exert a powerful influence over the vigor, strength, or rate of responding. However, there exists no clear understanding of the computational foundation for this effect; predominant accounts of dopamine's computational function focus on a role for phasic dopamine in controlling the discrete selection between different actions and have nothing to say about response vigor or indeed the free-operant tasks in which it is typically measured.\n\n\nOBJECTIVES\nWe seek to accommodate free-operant behavioral tasks within the realm of models of optimal control and thereby capture how dopaminergic and motivational manipulations affect response vigor.\n\n\nMETHODS\nWe construct an average reward reinforcement learning model in which subjects choose both which action to perform and also the latency with which to perform it. Optimal control balances the costs of acting quickly against the benefits of getting reward earlier and thereby chooses a best response latency.\n\n\nRESULTS\nIn this framework, the long-run average rate of reward plays a key role as an opportunity cost and mediates motivational influences on rates and vigor of responding. We review evidence suggesting that the average reward rate is reported by tonic levels of dopamine putatively in the nucleus accumbens.\n\n\nCONCLUSIONS\nOur extension of reinforcement learning models to free-operant tasks unites psychologically and computationally inspired ideas about the role of tonic dopamine in striatum, explaining from a normative point of view why higher levels of dopamine might be associated with more vigorous responding."
},
{
"pmid": "9863560",
"title": "Functional neuroanatomy of the basal ganglia as studied by dual-probe microdialysis.",
"abstract": "Dual probe microdialysis was employed in intact rat brain to investigate the effect of intrastriatal perfusion with selective dopamine D1 and D2 receptor agonists and with c-fos antisense oligonucleotide on (a) local GABA release in the striatum; (b) the internal segment of the globus pallidus and the substantia nigra pars reticulata, which is the output site of the strionigral GABA pathway; and (c) the external segment of the globus pallidus, which is the output site of the striopallidal GABA pathway. The data provide functional in vivo evidence for a selective dopamine D1 receptor-mediated activation of the direct strionigral GABA pathway and a selective dopamine D2 receptor inhibition of the indirect striopallidal GABA pathway and provides a neuronal substrate for parallel processing in the basal ganglia regulation of motor function. Taken together, these findings offer new therapeutic strategies for the treatment of dopamine-linked disorders such as Parkinson's disease, Huntington's disease, and schizophrenia."
},
{
"pmid": "15824341",
"title": "Acute akinesia in Parkinson disease.",
"abstract": "OBJECTIVE\nTo assess acute akinesia in patients with Parkinson disease (PD) (\"acute akinesia\" defined as a sudden deterioration in motor performance that persists for > or =48 hours despite treatment).\n\n\nMETHODS\nThe study population was a cohort of 675 patients followed regularly for 12 years in the authors' outpatient clinic. All patients were studied when acute akinesia led to hospitalization. Unified Parkinson's Disease Rating Scale (UPDRS) scores were rated during the akinetic state and compared with ratings obtained 1.6 +/- 0.9 months before the onset or after recovery.\n\n\nRESULTS\nTwenty-six patients developed acute akinesia; in 17 of the 26 patients, new akinetic symptoms first manifested at the onset of an infectious disease or after surgery and appeared unrelated to changes in treatment or altered levodopa kinetics. In nine patients, acute akinesia developed concurrently with gastrointestinal diseases or drug manipulations showed features of neuroleptic malignant syndrome. Acute akinesia severe enough to increase the UPDRS Motor Subscale score by 31.4 +/- 12.8 appeared within 2 to 3 days and persisted for 11.2 +/- 6.2 days despite attempts to increase the dopaminergic drug dose or administer continuous subcutaneous apomorphine infusion. Symptomatic recovery began 4 to 26 days after the onset of acute akinesia and appeared incomplete in 10 patients. Four patients of 26 died despite treatment. Levodopa kinetics were normal in all patients without gastrointestinal disease and in one patient with gastric stasis.\n\n\nCONCLUSIONS\nAcute akinesia is a life-threatening complication of Parkinson disease (PD). It is unlike the \"wearing-off\" phenomenon that occurs when dopaminergic drug levels decline and responds to dopaminergic rescue drugs. Acute akinesia may be a clinical entity distinct from the previously described PD motor fluctuations."
},
{
"pmid": "26683341",
"title": "Computational Models Describing Possible Mechanisms for Generation of Excessive Beta Oscillations in Parkinson's Disease.",
"abstract": "In Parkinson's disease, an increase in beta oscillations within the basal ganglia nuclei has been shown to be associated with difficulty in movement initiation. An important role in the generation of these oscillations is thought to be played by the motor cortex and by a network composed of the subthalamic nucleus (STN) and the external segment of globus pallidus (GPe). Several alternative models have been proposed to describe the mechanisms for generation of the Parkinsonian beta oscillations. However, a recent experimental study of Tachibana and colleagues yielded results which are challenging for all published computational models of beta generation. That study investigated how the presence of beta oscillations in a primate model of Parkinson's disease is affected by blocking different connections of the STN-GPe circuit. Due to a large number of experimental conditions, the study provides strong constraints that any mechanistic model of beta generation should satisfy. In this paper we present two models consistent with the data of Tachibana et al. The first model assumes that Parkinsonian beta oscillation are generated in the cortex and the STN-GPe circuits resonates at this frequency. The second model additionally assumes that the feedback from STN-GPe circuit to cortex is important for maintaining the oscillations in the network. Predictions are made about experimental evidence that is required to differentiate between the two models, both of which are able to reproduce firing rates, oscillation frequency and effects of lesions carried out by Tachibana and colleagues. Furthermore, an analysis of the models reveals how the amplitude and frequency of the generated oscillations depend on parameters."
},
{
"pmid": "22805067",
"title": "Improved conditions for the generation of beta oscillations in the subthalamic nucleus--globus pallidus network.",
"abstract": "A key pathology in the development of Parkinson's disease is the occurrence of persistent beta oscillations, which are correlated with difficulty in movement initiation. We investigated the network model composed of the subthalamic nucleus (STN) and globus pallidus (GP) developed by A. Nevado Holgado et al. [(2010) Journal of Neuroscience, 30, 12340-12352], who identified the conditions under which this circuit could generate beta oscillations. Our work extended their analysis by deriving improved analytic stability conditions for realistic values of the synaptic transmission delay between STN and GP neurons. The improved conditions were significantly closer to the results of simulations for the range of synaptic transmission delays measured experimentally. Furthermore, our analysis explained how changes in cortical and striatal input to the STN-GP network influenced oscillations generated by the circuit. As we have identified when a system of mutually connected populations of excitatory and inhibitory neurons can generate oscillations, our results may also find applications in the study of neural oscillations produced by assemblies of excitatory and inhibitory neurons in other brain regions."
},
{
"pmid": "15728849",
"title": "Rhythmic bursting in the cortico-subthalamo-pallidal network during spontaneous genetically determined spike and wave discharges.",
"abstract": "Absence seizures are characterized by impairment of consciousness associated with bilaterally synchronous spike-and-wave discharges (SWDs) in the electroencephalogram (EEG), which reflect paroxysmal oscillations in thalamocortical networks. Although recent studies suggest that the subthalamic nucleus (STN) provides an endogenous control system that influences the occurrence of absence seizures, the mechanisms of propagation of cortical epileptic discharges in the STN have never been explored. The present study provides the first description of the electrophysiological activity in the cortico-subthalamo-pallidal network during absence seizures in the genetic absence epilepsy rats from Strasbourg, a well established model of absence epilepsy. In corticosubthalamic neurons, the SWDs were associated with repetitive suprathreshold depolarizations correlated with EEG spikes. These cortical paroxysms were reflected in the STN by synchronized, rhythmic, high-frequency bursts of action potentials. Intracellular recordings revealed that the intraburst pattern in STN neurons was sculpted by an early depolarizing synaptic potential, followed by a short hyperpolarization and a rebound of excitation. The rhythmic hyperpolarizations in STN neurons during SWDs likely originate from a subpopulation of pallidal neurons exhibiting rhythmic bursting temporally correlated with the EEG spikes. The repetitive discharges in STN neurons accompanying absence seizures might convey powerful excitation to basal ganglia output nuclei and, consequently, may participate in the control of thalamocortical SWDs."
},
{
"pmid": "25086269",
"title": "Serotonin in Parkinson's disease.",
"abstract": "Parkinson's disease is a chronic neurodegenerative disorder characterized by the motor symptoms of bradykinesia, tremor, rigidity and postural instability. However, non-motor symptoms such as chronic fatigue, depression, dementia and sleep disturbances are also frequent and play a significant role with negative consequences in the quality of life of patients with Parkinson's disease. Although the progressive dopaminergic denervation is the cardinal pathology in the brains of patients with Parkinson's disease, others systems such as the serotonergic are affected as well. Over the last decade, several lines of evidence suggest that a progressive and non-linear loss of serotonergic terminals takes place in Parkinson's disease, though this is at a slower pace compared to the dopaminergic loss. Several studies have indicated that serotonergic dysfunction in Parkinson's disease is associated with the development of motor and non-motor symptoms and complications. Here, we aim to review the current evidence with regards to the serotonergic pathology in Parkinson's disease and its relevance to the development of clinical symptoms. We are primarily revising in vivo human studies from research with positron emission tomography molecular imaging."
},
{
"pmid": "10362291",
"title": "The basal ganglia: a vertebrate solution to the selection problem?",
"abstract": "A selection problem arises whenever two or more competing systems seek simultaneous access to a restricted resource. Consideration of several selection architectures suggests there are significant advantages for systems which incorporate a central switching mechanism. We propose that the vertebrate basal ganglia have evolved as a centralized selection device, specialized to resolve conflicts over access to limited motor and cognitive resources. Analysis of basal ganglia functional architecture and its position within a wider anatomical framework suggests it can satisfy many of the requirements expected of an efficient selection mechanism."
},
{
"pmid": "23745108",
"title": "Computational studies of the role of serotonin in the basal ganglia.",
"abstract": "It has been well established that serotonin (5-HT) plays an important role in the striatum. For example, during levodopa therapy for Parkinson's disease (PD), the serotonergic projections from the dorsal raphe nucleus (DRN) release dopamine as a false transmitter, and there are strong indications that this pulsatile release is connected to dyskinesias that reduce the effectiveness of the therapy. Here we present hypotheses about the functional role of 5-HT in the normal striatum and present computational studies showing the feasibility of these hypotheses. Dopaminergic projections to the striatum inhibit the medium spiny neurons (MSN) in the striatopalladal (indirect) pathway and excite MSNs in the striatonigral (direct) pathway. It has long been hypothesized that the effect of dopamine (DA) depletion caused by the loss of SNc cells in PD is to change the \"balance\" between the pathways to favor the indirect pathway. Originally, \"balance\" was understood to mean equal firing rates, but now it is understood that the level of DA affects the patterns of firing in the two pathways too. There are dense 5-HT projections to the striatum from the dorsal raphe nucleus and it is known that increased 5-HT in the striatum facilitates DA release from DA terminals. The direct pathway excites various cortical nuclei and some of these nuclei send inhibitory projections to the DRN. Our hypothesis is that this feedback circuit from the striatum to the cortex to the DRN to the striatum serves to stabilize the balance between the direct and indirect pathways, and this is confirmed by our model calculations. Our calculations also show that this circuit contributes to the stability of the dopamine concentration in the striatum as SNc cells die during Parkinson's disease progression (until late phase). There may be situations in which there are physiological reasons to \"unbalance\" the direct and indirect pathways, and we show that projections to the DRN from the cortex or other brain regions could accomplish this task."
},
{
"pmid": "11522580",
"title": "The subthalamic nucleus in Parkinson's disease: somatotopic organization and physiological characteristics.",
"abstract": "Single-cell recording of the subthalamic nucleus (STN) was undertaken in 14 patients with Parkinson's disease submitted to surgery. Three hundred and fifty neurones were recorded and assessed for their response to passive and active movements. Thirty-two per cent were activated by passive and active movement of the limbs, oromandibular region and abdominal wall. All neurones with sensorimotor responses were in the dorsolateral region of the STN. Arm-related neurones were lateral (> or =14 mm plane) to leg-related neurones, which were found more medially (< or =12 mm). Representation of the oromandibular musculature was in the middle of the sensorimotor region (approximately 13 mm plane) and ventral to the arm and leg. Two hundred neurones were adequately isolated for 'off-line' analysis. The mean frequency of discharge was 33 +/- 17 Hz (13-117 Hz). Three types of neuronal discharges were distinguished: irregular (60.5%), tonic (24%) and oscillatory (15.5 %). They were statistically differentiated on the basis of their mean firing frequency and the coefficient of variation of the interspike interval. Neurones responding to movement were of the irregular or tonic type, and were found in the dorsolateral region of the STN. Neurones with oscillatory and low frequency activity did not respond to movement and were in the ventral one-third of the nucleus. Thirty-eight tremor-related neurones were recorded. The majority (84%) of these were sensitive to movement and were located in the dorsolateral region of the STN. Cross power analysis (n = 16) between the rhythmic neuronal activity and tremor in the limbs showed a peak frequency of 5 Hz (4-8 Hz). Neuronal activity of the substantia nigra pars reticulata was recorded 0.5-3 mm below the STN. Eighty neurones were recorded 'on-line' and 27 were isolated for 'off-line' analysis. A tonic pattern of discharge characterized by a mean firing rate of 71 +/- 28 Hz (35-122 Hz) with a mean coefficient of variation of the interspike interval of 0.85 +/- 0.29 ms was found. In only three neurones (11%) was there a response to sensorimotor stimulation. The findings of this study indicate that the somatotopic arrangement and electrophysiological features of the STN in Parkinson's disease patients are similar to those found in monkeys."
},
{
"pmid": "15708631",
"title": "Somatotopy in the basal ganglia: experimental and clinical evidence for segregated sensorimotor channels.",
"abstract": "Growing experimental and clinical evidence supports the notion that the cortico-basal ganglia-thalamo-cortical loops proceed along parallel circuits linking cortical and subcortical regions subserving the processing of sensorimotor, associative and affective tasks. In particular, there is evidence that a strict topographic segregation is maintained during the processing of sensorimotor information flowing from cortical motor areas to the sensorimotor areas of the basal ganglia. The output from the basal ganglia to the motor thalamus, which projects back to neocortical motor areas, is also organized into topographically segregated channels. This high degree of topographic segregation is demonstrated by the presence of a well-defined somatotopic organization in the sensorimotor areas of the basal ganglia. The presence of body maps in the basal ganglia has become clinically relevant with the increasing use of surgical procedures, such as lesioning or deep brain stimulation, which are selectively aimed at restricted subcortical targets in the sensorimotor loop such as the subthalamic nucleus (STN) or the globus pallidus pars interna (GPi). The ability to ameliorate the motor control dysfunction without producing side effects related to interference with non-motor circuits subserving associative or affective processing requires the ability to target subcortical areas particularly involved in sensorimotor processing (currently achieved only by careful intraoperative microelectrode mapping). The goal of this article is to review current knowledge about the somatotopic segregation of basal ganglia sensorimotor areas and outline in detail what is known about their body maps."
},
{
"pmid": "14534241",
"title": "Conditional ablation of striatal neuronal types containing dopamine D2 receptor disturbs coordination of basal ganglia function.",
"abstract": "Dopamine (DA) exerts synaptic organization of basal ganglia circuitry through a variety of neuronal populations in the striatum. We performed conditional ablation of striatal neuronal types containing DA D2 receptor (D2R) by using immunotoxin-mediated cell targeting. Mutant mice were generated that express the human interleukin-2 receptor alpha-subunit under the control of the D2R gene. Intrastriatal immunotoxin treatment of the mutants eliminated the majority of the striatopallidal medium spiny neurons and cholinergic interneurons. The elimination of these neurons caused hyperactivity of spontaneous movement and reduced motor activation in response to DA stimulation. The elimination also induced upregulation of GAD gene expression in the globus pallidus (GP) and downregulation of cytochrome oxidase activity in the subthalamic nucleus (STN), whereas it attenuated DA-induced expression of the immediate-early genes (IEGs) in the striatonigral neurons. In addition, chemical lesion of cholinergic interneurons did not alter spontaneous movement but caused a moderate enhancement in DA-induced motor activation. This enhancement of the behavior was accompanied by an increase in the IEG expression in the striatonigral neurons. These data suggest that ablation of the striatopallidal neurons causes spontaneous hyperactivity through modulation of the GP and STN activity and that the ablation leads to the reduction in DA-induced behavior at least partly through attenuation of the striatonigral activity as opposed to the influence of cholinergic cell lesion. We propose a possible model in which the striatopallidal neurons dually regulate motor behavior dependent on the state of DA transmission through coordination of the basal ganglia circuitry."
},
{
"pmid": "9658025",
"title": "Predictive reward signal of dopamine neurons.",
"abstract": "The effects of lesions, receptor blocking, electrical self-stimulation, and drugs of abuse suggest that midbrain dopamine systems are involved in processing reward information and learning approach behavior. Most dopamine neurons show phasic activations after primary liquid and food rewards and conditioned, reward-predicting visual and auditory stimuli. They show biphasic, activation-depression responses after stimuli that resemble reward-predicting stimuli or are novel or particularly salient. However, only few phasic activations follow aversive stimuli. Thus dopamine neurons label environmental stimuli with appetitive value, predict and detect rewards and signal alerting and motivating events. By failing to discriminate between different rewards, dopamine neurons appear to emit an alerting message about the surprising presence or absence of rewards. All responses to rewards and reward-predicting stimuli depend on event predictability. Dopamine neurons are activated by rewarding events that are better than predicted, remain uninfluenced by events that are as good as predicted, and are depressed by events that are worse than predicted. By signaling rewards according to a prediction error, dopamine responses have the formal characteristics of a teaching signal postulated by reinforcement learning theories. Dopamine responses transfer during learning from primary rewards to reward-predicting stimuli. This may contribute to neuronal mechanisms underlying the retrograde action of rewards, one of the main puzzles in reinforcement learning. The impulse response releases a short pulse of dopamine onto many dendrites, thus broadcasting a rather global reinforcement signal to postsynaptic neurons. This signal may improve approach behavior by providing advance reward information before the behavior occurs, and may contribute to learning by modifying synaptic transmission. The dopamine reward signal is supplemented by activity in neurons in striatum, frontal cortex, and amygdala, which process specific reward information but do not emit a global reward prediction error signal. A cooperation between the different reward signals may assure the use of specific rewards for selectively reinforcing behaviors. Among the other projection systems, noradrenaline neurons predominantly serve attentional mechanisms and nucleus basalis neurons code rewards heterogeneously. Cerebellar climbing fibers signal errors in motor performance or errors in the prediction of aversive events to cerebellar Purkinje cells. Most deficits following dopamine-depleting lesions are not easily explained by a defective reward signal but may reflect the absence of a general enabling function of tonic levels of extracellular dopamine. Thus dopamine systems may have two functions, the phasic transmission of reward information and the tonic enabling of postsynaptic neurons."
},
{
"pmid": "9881853",
"title": "Microcircuitry of the direct and indirect pathways of the basal ganglia.",
"abstract": "Our understanding of the organization of the basal ganglia has advanced markedly over the last 10 years, mainly due to increased knowledge of their anatomical, neurochemical and physiological organization. These developments have led to a unifying model of the functional organization of the basal ganglia in both health and disease. The hypothesis is based on the so-called \"direct\" and \"indirect\" pathways of the flow of cortical information through the basal ganglia and has profoundly influenced the field of basal ganglia research, providing a framework for anatomical, physiological and clinical studies. The recent introduction of powerful techniques for the analysis of neuronal networks has led to further developments in our understanding of the basal ganglia. The objective of this commentary is to build upon the established model of the basal ganglia connectivity and review new anatomical findings that lead to the refinement of some aspects of the model. Four issues will be discussed. (1) The existence of several routes for the flow of cortical information along \"indirect\" pathways. (2) The synaptic convergence of information flowing through the \"direct\" and \"indirect\" pathways at the single-cell level in the basal ganglia output structures. (3) The convergence of functionally diverse information from the globus pallidus and the ventral pallidum at different levels of the basal ganglia. (4) The interconnections between the two divisions of the pallidal complex and the subthalamic nucleus and the characterization of the neuronal network underlying the indirect pathways. The findings summarized in this commentary confirm and elaborate the models of the direct and indirect pathways of information flow through the basal ganglia and provide a morphological framework for future studies."
},
{
"pmid": "15331233",
"title": "The thalamostriatal system: a highly specific network of the basal ganglia circuitry.",
"abstract": "Although the existence of thalamostriatal projections has long been known, the role(s) of this system in the basal ganglia circuitry remains poorly characterized. The intralaminar and ventral motor nuclei are the main sources of thalamic inputs to the striatum. This review emphasizes the high degree of anatomical and functional specificity of basal ganglia-thalamostriatal projections and discusses various aspects of the synaptic connectivity and neurochemical features that differentiate this glutamate system from the corticostriatal network. It also discusses the importance of thalamostriatal projections from the caudal intralaminar nuclei in the process of attentional orientation. A major task of future studies is to characterize the role(s) of corticostriatal and thalamostriatal pathways in regulating basal ganglia activity in normal and pathological conditions."
},
{
"pmid": "19555824",
"title": "Medical treatment of Parkinson disease.",
"abstract": "The cardinal characteristics of Parkinson disease (PD) include resting tremor, rigidity, and bradykinesia. Patients may also develop autonomic dysfunction, cognitive changes, psychiatric symptoms, sensory complaints, and sleep disturbances. The treatment of motor and non-motor symptoms of Parkinson disease is addressed in this article."
},
{
"pmid": "8815934",
"title": "Coordinated expression of dopamine receptors in neostriatal medium spiny neurons.",
"abstract": "In recent years, the distribution of dopamine receptor subtypes among the principal neurons of the neostriatum has been the subject of debate. Conventional anatomical and physiological approaches have yielded starkly different estimates of the extent to which D1 and D2 class dopamine receptors are colocalized. One plausible explanation for the discrepancy is that some dopamine receptors are present in physiologically significant numbers, but the mRNA for these receptors is not detectable with conventional techniques. To test this hypothesis, we examined the expression of DA receptors in individual neostriatal neurons by patch-clamp and RT-PCR techniques. Because of the strong correlation between peptide expression and projection site, medium spiny neurons were divided into three groups on the basis of expression of mRNA for enkephalin (ENK) and substance P (SP). Neurons expressing detectable levels of SP but not ENK had abundant mRNA for the D1a receptor. A subset of these cells (approximately 50%) coexpressed D3 or D4 receptor mRNA. Neurons expressing detectable levels of ENK but not SP had abundant mRNA for D2 receptor isoforms (short and long). A subset (10-25%) of these neurons coexpressed D1a or D1b mRNAs. Neurons coexpressing ENK and SP mRNAs consistently coexpressed D1a and D2 mRNAs in relatively high abundance. Functional analysis of neurons expressing lower abundance mRNAs revealed clear physiological consequences that could be attributed to these receptors. These results suggest that, although colocalization of D1a and D2 receptors is limited, functional D1 and D2 class receptors are colocalized in nearly one-half of all medium spiny projection neurons."
},
{
"pmid": "16249050",
"title": "The functional role of the subthalamic nucleus in cognitive and limbic circuits.",
"abstract": "Once it was believed that the subthalamic nucleus (STN) was no more than a relay station serving as a \"gate\" for ascending basal ganglia-thalamocortical circuits. Nowadays, the STN is considered to be one of the main regulators of motor function related to the basal ganglia. The role of the STN in the regulation of associative and limbic functions related to the basal ganglia has generally received little attention. In the present review, the functional role of the STN in the control of cortico-basal ganglia-thalamocortical associative and limbic circuits is discussed. In the past 20 years the concepts about the functional role of the STN have changed dramatically: from being an inhibitory nucleus to a potent excitatory nucleus, and from being involved in hyperkinesias to hypokinesias. However, it has been demonstrated only recently, mainly by reports on the behavioral (side-) effects of STN deep brain stimulation (DBS), which is a popular surgical technique in the treatment of patients suffering from advanced Parkinson Disease (PD), that the STN is clinically involved in associative and limbic functions. These findings were confirmed by results from animal studies. Experimental studies applying STN DBS or STN lesions to investigate the neuronal mechanisms involved in these procedures found profound effects on cognitive and motivational parameters. The anatomical, electrophysiological and behavioral data presented in this review point towards a potent regulatory function of the STN in the processing of associative and limbic information towards cortical and subcortical regions. In conclusion, it can be stated that the STN has anatomically a central position within the basal ganglia thalamocortical associative and limbic circuits and is functionally a potent regulator of these pathways."
},
{
"pmid": "11923461",
"title": "Activity patterns in a model for the subthalamopallidal network of the basal ganglia.",
"abstract": "Based on recent experimental data, we have developed a conductance-based computational network model of the subthalamic nucleus and the external segment of the globus pallidus in the indirect pathway of the basal ganglia. Computer simulations and analysis of this model illuminate the roles of the coupling architecture of the network, and associated synaptic conductances, in modulating the activity patterns displayed by this network. Depending on the relationships of these coupling parameters, the network can support three general classes of sustained firing patterns: clustering, propagating waves, and repetitive spiking that may show little regularity or correlation. Each activity pattern can occur continuously or in discrete episodes. We characterize the mechanisms underlying these rhythms, as well as the influence of parameters on details such as spiking frequency and wave speed. These results suggest that the subthalamopallidal circuit is capable both of correlated rhythmic activity and of irregular autonomous patterns of activity that block rhythmicity. Increased striatal input to, and weakened intrapallidal inhibition within, the indirect pathway can switch the behavior of the circuit from irregular to rhythmic. This may be sufficient to explain the emergence of correlated oscillatory activity in the subthalamopallidal circuit after destruction of dopaminergic neurons in Parkinson's disease and in animal models of parkinsonism."
},
{
"pmid": "24514863",
"title": "Parkinson disease subtypes.",
"abstract": "IMPORTANCE\nIt is increasingly evident that Parkinson disease (PD) is not a single entity but rather a heterogeneous neurodegenerative disorder.\n\n\nOBJECTIVE\nTo evaluate available evidence, based on findings from clinical, imaging, genetic and pathologic studies, supporting the differentiation of PD into subtypes.\n\n\nEVIDENCE REVIEW\nWe performed a systematic review of articles cited in PubMed between 1980 and 2013 using the following search terms: Parkinson disease, parkinsonism, tremor, postural instability and gait difficulty, and Parkinson disease subtypes. The final reference list was generated on the basis of originality and relevance to the broad scope of this review.\n\n\nFINDINGS\nSeveral subtypes, such as tremor-dominant PD and postural instability gait difficulty form of PD, have been found to cluster together. Other subtypes also have been identified, but validation by subtype-specific biomarkers is still lacking.\n\n\nCONCLUSIONS AND RELEVANCE\nSeveral PD subtypes have been identified, but the pathogenic mechanisms underlying the observed clinicopathologic heterogeneity in PD are still not well understood. Further research into subtype-specific diagnostic and prognostic biomarkers may provide insights into mechanisms of neurodegeneration and improve epidemiologic and therapeutic clinical trial designs."
},
{
"pmid": "14598096",
"title": "Acute akinesia or akinetic crisis in Parkinson's disease.",
"abstract": "In 22 patients with idiopathic Parkinson's disease we observed a sudden worsening of motor symptoms and severe akinesia during hospitalization because of infectious diseases, bone fractures, surgery for gastrointestinal tract diseases, and iatrogenic causes. Of these patients, 12 recovered completely, 6 had a partial recovery, and 4 died. Treatments included subcutaneous apomorphine/lisuride infusion and dantreolene (with a creatine phosphokinase level higher than 200 IU). In all patients a definite refractoriness to therapy was shown with a transient lack of response to apomorphine."
},
{
"pmid": "10627627",
"title": "Synchrony generation in recurrent networks with frequency-dependent synapses.",
"abstract": "Throughout the neocortex, groups of neurons have been found to fire synchronously on the time scale of several milliseconds. This near coincident firing of neurons could coordinate the multifaceted information of different features of a stimulus. The mechanisms of generating such synchrony are not clear. We simulated the activity of a population of excitatory and inhibitory neurons randomly interconnected into a recurrent network via synapses that display temporal dynamics in their transmission; surprisingly, we found a behavior of the network where action potential activity spontaneously self-organized to produce highly synchronous bursts involving virtually the entire network. These population bursts were also triggered by stimuli to the network in an all-or-none manner. We found that the particular intensities of the external stimulus to specific neurons were crucial to evoke population bursts. This topographic sensitivity therefore depends on the spectrum of basal discharge rates across the population and not on the anatomical individuality of the neurons, because this was random. These results suggest that networks in which neurons are even randomly interconnected via frequency-dependent synapses could exhibit a novel form of reflex response that is sensitive to the nature of the stimulus as well as the background spontaneous activity."
},
{
"pmid": "11756513",
"title": "Opposite influences of endogenous dopamine D1 and D2 receptor activation on activity states and electrophysiological properties of striatal neurons: studies combining in vivo intracellular recordings and reverse microdialysis.",
"abstract": "The tonic influence of dopamine D1 and D2 receptors on the activity of striatal neurons in vivo was investigated by performing intracellular recordings concurrently with reverse microdialysis in chloral hydrate-anesthetized rats. Striatal neurons were recorded in the vicinity of the microdialysis probe to assess their activity during infusions of artificial CSF (aCSF), the D1 receptor antagonist SCH 23390 (10 microm), or the D2 receptor antagonist eticlopride (20 microm). SCH 23390 perfusion decreased the excitability of striatal neurons exhibiting electrophysiological characteristics of spiny projection cells as evidenced by a decrease in the maximal depolarized membrane potential, a decrease in the amplitude of up-state events, and an increase in the intracellular current injection amplitude required to elicit an action potential. Conversely, a marked depolarization of up- and down-state membrane potential modes, a decrease in the amplitude of intracellular current injection required to elicit an action potential, and an increase in the number of spikes evoked by depolarizing current steps were observed in striatal neurons after local eticlopride infusion. A significant increase in maximal EPSP amplitude evoked by electrical stimulation of the prefrontal cortex was also observed during local eticlopride but not SCH 23390 infusion. These results indicate that in intact systems, ongoing dopaminergic neurotransmission exerts a powerful tonic modulatory influence on the up- and down-state membrane properties of striatal neurons and controls their excitability differentially via both D1- and D2-like receptors. Moreover, a significant component of D2 receptor-mediated inhibition of striatal neuron activity in vivo occurs via suppression of excitatory synaptic transmission."
},
{
"pmid": "23404337",
"title": "The cerebellum in Parkinson's disease.",
"abstract": "Parkinson's disease is a chronic progressive neurodegenerative disorder characterized by resting tremor, slowness of movements, rigidity, gait disturbance and postural instability. Most investigations on Parkinson's disease focused on the basal ganglia, whereas the cerebellum has often been overlooked. However, increasing evidence suggests that the cerebellum may have certain roles in the pathophysiology of Parkinson's disease. Anatomical studies identified reciprocal connections between the basal ganglia and cerebellum. There are Parkinson's disease-related pathological changes in the cerebellum. Functional or morphological modulations in the cerebellum were detected related to akinesia/rigidity, tremor, gait disturbance, dyskinesia and some non-motor symptoms. It is likely that the major roles of the cerebellum in Parkinson's disease include pathological and compensatory effects. Pathological changes in the cerebellum might be induced by dopaminergic degeneration, abnormal drives from the basal ganglia and dopaminergic treatment, and may account for some clinical symptoms in Parkinson's disease. The compensatory effect may help maintain better motor and non-motor functions. The cerebellum is also a potential target for some parkinsonian symptoms. Our knowledge about the roles of the cerebellum in Parkinson's disease remains limited, and further attention to the cerebellum is warranted."
},
{
"pmid": "19494773",
"title": "Akineto-rigid vs. tremor syndromes in Parkinsonism.",
"abstract": "PURPOSE OF REVIEW\nAkinesia, rigidity and low-frequency rest tremor are the three cardinal motor signs of Parkinson's disease and some Parkinson's disease animal models. However, cumulative evidence supports the view that akinesia/rigidity vs. tremor reflect different pathophysiological phenomena in the basal ganglia. Here, we review the recent physiological literature correlating abnormal neural activity in the basal ganglia with Parkinson's disease clinical symptoms.\n\n\nRECENT FINDINGS\nThe subthalamic nucleus of Parkinson's disease patients is characterized by oscillatory activity in the beta-frequency (approximately 15 Hz) range. However, Parkinson's disease tremor is not strictly correlated with the abnormal synchronous oscillations of the basal ganglia. On the other hand, akinesia and rigidity are better correlated with the basal ganglia beta oscillations.\n\n\nSUMMARY\nThe abnormal basal ganglia output leads to akinesia and rigidity. Parkinson's disease tremor most likely evolves as a downstream compensatory mechanism."
},
{
"pmid": "25465747",
"title": "Akinetic-rigid and tremor-dominant Parkinson's disease patients show different patterns of intrinsic brain activity.",
"abstract": "BACKGROUND\nParkinson's disease (PD) is a surprisingly heterogeneous neurodegenerative disorder. It is well established that different subtypes of PD present with different clinical courses and prognoses. However, the neural mechanism underlying these disparate presentations is uncertain.\n\n\nMETHODS\nHere we used resting-state fMRI (rs-fMRI) and the regional homogeneity (ReHo) method to determine neural activity patterns in the two main clinical subgroups of PD (akinetic-rigid and tremor-dominant).\n\n\nRESULTS\nCompared with healthy controls, akinetic-rigid (AR) subjects had increased ReHo mainly in right amygdala, left putamen, bilateral angular gyrus, bilateral medial prefrontal cortex (MPFC), and decreased ReHo in left post cingulate gyrus/precuneus (PCC/PCu) and bilateral thalamus. In contrast, tremor-dominant (TD) patients showed higher ReHo mostly in bilateral angular gyrus, left PCC, cerebellum_crus1, and cerebellum_6, while ReHo was decreased in right putamen, primary sensory cortex (S1), vermis_3, and cerebellum_4_5. These results indicate that AR and TD subgroups both represent altered spontaneous neural activity in default-mode regions and striatum, and AR subjects exhibit more changed neural activity in the mesolimbic cortex (amygdala) but TD in the cerebellar regions. Of note, direct comparison of the two subgroups revealed a distinct ReHo pattern primarily located in the striatal-thalamo-cortical (STC) and cerebello-thalamo-cortical (CTC) loops.\n\n\nCONCLUSION\nOverall, our findings highlight the involvement of default mode network (DMN) and STC circuit both in AR and TD subtypes, but also underscore the importance of integrating mesolimbic-striatal and CTC loops in understanding neural systems of akinesia and rigidity, as well as resting tremor in PD. This study provides improved understanding of the pathophysiological models of different subtypes of PD."
}
] |
PLoS Computational Biology | 31120875 | PMC6550413 | 10.1371/journal.pcbi.1006802 | Efficient algorithms to discover alterations with complementary functional association in cancer | Recent large cancer studies have measured somatic alterations in an unprecedented number of tumours. These large datasets allow the identification of cancer-related sets of genetic alterations by identifying relevant combinatorial patterns. Among such patterns, mutual exclusivity has been employed by several recent methods that have shown its effectiveness in characterizing gene sets associated with cancer. Mutual exclusivity arises because of the complementarity, at the functional level, of alterations in genes which are part of a group (e.g., a pathway) performing a given function. The availability of quantitative target profiles, from genetic perturbations or from clinical phenotypes, provides additional information that can be leveraged to improve the identification of cancer-related gene sets by discovering groups with complementary functional associations with such targets. In this work we study the problem of finding groups of mutually exclusive alterations associated with a quantitative (functional) target. We propose a combinatorial formulation for the problem, and prove that the associated computational problem is computationally hard. We design two algorithms to solve the problem and implement them in our tool UNCOVER. We provide analytic evidence of the effectiveness of UNCOVER in finding high-quality solutions and show experimentally that UNCOVER finds sets of alterations significantly associated with functional targets in a variety of scenarios. In particular, we show that our algorithms find sets which are better than the ones obtained by the state-of-the-art method, even when sets are evaluated using the statistical score employed by the latter. In addition, our algorithms are much faster than the state-of-the-art, allowing the analysis of large datasets of thousands of target profiles from cancer cell lines. We show that on two such datasets, one from project Achilles and one from the Genomics of Drug Sensitivity in Cancer project, UNCOVER identifies several significant gene sets with complementary functional associations with targets. Software available at: https://github.com/VandinLab/UNCOVER. | Related work
Several recent methods have used mutual exclusivity signals to identify sets of genes important for cancer [24]. RME [25] identifies mutually exclusive sets using a score derived from information theory. Dendrix [26] defines a combinatorial gene set score and uses a Markov Chain Monte Carlo (MCMC) approach for identifying mutually exclusive gene sets altered in a large fraction of the patients. Multi-Dendrix [27] extends the score of Dendrix to multiple sets and uses an integer linear program (ILP)-based algorithm to simultaneously find multiple sets with mutually exclusive alterations. CoMET [18] uses a generalization of the Fisher exact test to higher-dimensional contingency tables to define a score that characterizes mutually exclusive gene sets altered in relatively low fractions of the samples. WExT [18] generalizes the test from CoMET to incorporate individual gene weights (probabilities) for each alteration in each sample. WeSME [28] introduces a test that incorporates the alteration rates of patients and genes and uses a fast permutation approach to assess the statistical significance of the sets.
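To make the combinatorial scores surveyed above concrete, the following is a minimal, illustrative Python sketch of a Dendrix-style mutual-exclusivity weight (coverage minus coverage overlap) computed on a binary patient-by-gene alteration matrix. The function name, the toy matrix, and the simplified scoring are assumptions made for illustration only; they are not taken from UNCOVER or from the implementations of the tools cited here.

```python
# Illustrative sketch (not code from the cited tools): a Dendrix-style
# mutual-exclusivity weight for a candidate gene set over a 0/1
# patient-by-gene alteration matrix. Weight = 2*|covered patients| -
# total alterations in the set, i.e., coverage minus coverage overlap.
import numpy as np

def dendrix_style_weight(alterations: np.ndarray, gene_idx: list) -> int:
    """alterations: (n_patients, n_genes) 0/1 matrix; gene_idx: columns forming the candidate set."""
    sub = alterations[:, gene_idx]              # restrict to the candidate gene set
    covered = int((sub.sum(axis=1) > 0).sum())  # patients altered in at least one gene of the set
    total = int(sub.sum())                      # total (patient, gene) alterations within the set
    return 2 * covered - total                  # exclusivity raises the weight, overlap lowers it

# Tiny usage example with hypothetical data: 4 patients x 3 genes.
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 0]])
print(dendrix_style_weight(A, [0, 1, 2]))       # 2*4 - 5 = 3
```

Sets that cover many patients with little overlap receive high weights, which is the intuition that the statistical tests and models discussed next build on in different ways.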
TiMEx [29] assumes a generative model for alterations and defines a test to assess the null hypothesis that mutual exclusivity of a gene set is due to the interplay between waiting times to alterations and the time at which the tumor is sequenced. MEMo [17] and the method from [30] employ mutual exclusivity to find gene sets, but use an interaction network to limit the candidate gene sets. The method by [31] and PathTIMEx [32] introduce an additional dimension to the characterization of inter-tumor heterogeneity, by reconstructing the order in which mutually exclusive gene sets are mutated. None of these methods take quantitative targets into account in the discovery of significant gene sets, and sets showing high mutual exclusivity may not be associated with target profiles (Fig 1).
[33] recently developed the repeated evaluation of variables conditional entropy and redundancy (REVEALER) method to identify mutually exclusive sets of alterations associated with functional phenotypes. REVEALER uses as its objective function (to score a set of alterations) a re-scaled mutual information metric called the information coefficient (IC). REVEALER employs a greedy strategy, computing at each iteration the conditional mutual information (CIC) of the target profile and each feature, conditioned on the current solution. REVEALER can be used to find sets of mutually exclusive alterations starting either from a user-defined seed for the solution or from scratch, and [33] shows that REVEALER finds sets of meaningful cancer-related alterations. | [
"24120142",
"25631445",
"23792563",
"25417114",
"28052061",
"28810144",
"23540688",
"28187284",
"24479672",
"28659971",
"23539594",
"20529912",
"25501392",
"26125594",
"24132290",
"21376230",
"21908773",
"26253137",
"25984343",
"27260156",
"28753430",
"12808457",
"16199517",
"18434431",
"21489305",
"21653252",
"23717195",
"25887147",
"25785493",
"27088724",
"21653252",
"26570998",
"22460905",
"23269662",
"20534738",
"21859464",
"18823568",
"27899662",
"19306108"
] | [
{
"pmid": "24120142",
"title": "The somatic genomic landscape of glioblastoma.",
"abstract": "We describe the landscape of somatic genomic alterations based on multidimensional and comprehensive characterization of more than 500 glioblastoma tumors (GBMs). We identify several novel mutated genes as well as complex rearrangements of signature receptors, including EGFR and PDGFRA. TERT promoter mutations are shown to correlate with elevated mRNA expression, supporting a role in telomerase reactivation. Correlative analyses confirm that the survival advantage of the proneural subtype is conferred by the G-CIMP phenotype, and MGMT DNA methylation may be a predictive biomarker for treatment response only in classical subtype GBM. Integrative analysis of genomic and proteomic profiles challenges the notion of therapeutic inhibition of a pathway as an alternative to inhibition of the target itself. These data will facilitate the discovery of therapeutic and diagnostic target candidates, the validation of research and clinical observations and the generation of unanticipated hypotheses that can advance our molecular understanding of this lethal cancer."
},
{
"pmid": "25631445",
"title": "Comprehensive genomic characterization of head and neck squamous cell carcinomas.",
"abstract": "The Cancer Genome Atlas profiled 279 head and neck squamous cell carcinomas (HNSCCs) to provide a comprehensive landscape of somatic genomic alterations. Here we show that human-papillomavirus-associated tumours are dominated by helical domain mutations of the oncogene PIK3CA, novel alterations involving loss of TRAF3, and amplification of the cell cycle gene E2F1. Smoking-related HNSCCs demonstrate near universal loss-of-function TP53 mutations and CDKN2A inactivation with frequent copy number alterations including amplification of 3q26/28 and 11q13/22. A subgroup of oral cavity tumours with favourable clinical outcomes displayed infrequent copy number alterations in conjunction with activating mutations of HRAS or PIK3CA, coupled with inactivating mutations of CASP8, NOTCH1 and TP53. Other distinct subgroups contained loss-of-function alterations of the chromatin modifier NSD1, WNT pathway genes AJUBA and FAT1, and activation of oxidative stress factor NFE2L2, mainly in laryngeal tumours. Therapeutic candidate alterations were identified in most HNSCCs."
},
{
"pmid": "23792563",
"title": "Comprehensive molecular characterization of clear cell renal cell carcinoma.",
"abstract": "Genetic changes underlying clear cell renal cell carcinoma (ccRCC) include alterations in genes controlling cellular oxygen sensing (for example, VHL) and the maintenance of chromatin states (for example, PBRM1). We surveyed more than 400 tumours using different genomic platforms and identified 19 significantly mutated genes. The PI(3)K/AKT pathway was recurrently mutated, suggesting this pathway as a potential therapeutic target. Widespread DNA hypomethylation was associated with mutation of the H3K36 methyltransferase SETD2, and integrative analysis suggested that mutations involving the SWI/SNF chromatin remodelling complex (PBRM1, ARID1A, SMARCA4) could have far-reaching effects on other pathways. Aggressive cancers demonstrated evidence of a metabolic shift, involving downregulation of genes involved in the TCA cycle, decreased AMPK and PTEN protein levels, upregulation of the pentose phosphate pathway and the glutamine transporter genes, increased acetyl-CoA carboxylase protein, and altered promoter methylation of miR-21 (also known as MIR21) and GRB10. Remodelling cellular metabolism thus constitutes a recurrent pattern in ccRCC that correlates with tumour stage and severity and offers new views on the opportunities for disease treatment."
},
{
"pmid": "25417114",
"title": "Integrated genomic characterization of papillary thyroid carcinoma.",
"abstract": "Papillary thyroid carcinoma (PTC) is the most common type of thyroid cancer. Here, we describe the genomic landscape of 496 PTCs. We observed a low frequency of somatic alterations (relative to other carcinomas) and extended the set of known PTC driver alterations to include EIF1AX, PPM1D, and CHEK2 and diverse gene fusions. These discoveries reduced the fraction of PTC cases with unknown oncogenic driver from 25% to 3.5%. Combined analyses of genomic variants, gene expression, and methylation demonstrated that different driver groups lead to different pathologies with distinct signaling and differentiation characteristics. Similarly, we identified distinct molecular subgroups of BRAF-mutant tumors, and multidimensional analyses highlighted a potential involvement of oncomiRs in less-differentiated subgroups. Our results propose a reclassification of thyroid cancers into molecular subtypes that better reflect their underlying signaling and differentiation properties, which has the potential to improve their pathological classification and better inform the management of the disease."
},
{
"pmid": "28052061",
"title": "Integrated genomic characterization of oesophageal carcinoma.",
"abstract": "Oesophageal cancers are prominent worldwide; however, there are few targeted therapies and survival rates for these cancers remain dismal. Here we performed a comprehensive molecular analysis of 164 carcinomas of the oesophagus derived from Western and Eastern populations. Beyond known histopathological and epidemiologic distinctions, molecular features differentiated oesophageal squamous cell carcinomas from oesophageal adenocarcinomas. Oesophageal squamous cell carcinomas resembled squamous carcinomas of other organs more than they did oesophageal adenocarcinomas. Our analyses identified three molecular subclasses of oesophageal squamous cell carcinomas, but none showed evidence for an aetiological role of human papillomavirus. Squamous cell carcinomas showed frequent genomic amplifications of CCND1 and SOX2 and/or TP63, whereas ERBB2, VEGFA and GATA4 and GATA6 were more commonly amplified in adenocarcinomas. Oesophageal adenocarcinomas strongly resembled the chromosomally unstable variant of gastric adenocarcinoma, suggesting that these cancers could be considered a single disease entity. However, some molecular features, including DNA hypermethylation, occurred disproportionally in oesophageal adenocarcinomas. These data provide a framework to facilitate more rational categorization of these tumours and a foundation for new therapies."
},
{
"pmid": "28810144",
"title": "Integrated Genomic Characterization of Pancreatic Ductal Adenocarcinoma.",
"abstract": "We performed integrated genomic, transcriptomic, and proteomic profiling of 150 pancreatic ductal adenocarcinoma (PDAC) specimens, including samples with characteristic low neoplastic cellularity. Deep whole-exome sequencing revealed recurrent somatic mutations in KRAS, TP53, CDKN2A, SMAD4, RNF43, ARID1A, TGFβR2, GNAS, RREB1, and PBRM1. KRAS wild-type tumors harbored alterations in other oncogenic drivers, including GNAS, BRAF, CTNNB1, and additional RAS pathway genes. A subset of tumors harbored multiple KRAS mutations, with some showing evidence of biallelic mutations. Protein profiling identified a favorable prognosis subset with low epithelial-mesenchymal transition and high MTOR pathway scores. Associations of non-coding RNAs with tumor-specific mRNA subtypes were also identified. Our integrated multi-platform analysis reveals a complex molecular landscape of PDAC and provides a roadmap for precision medicine."
},
{
"pmid": "23540688",
"title": "Lessons from the cancer genome.",
"abstract": "Systematic studies of the cancer genome have exploded in recent years. These studies have revealed scores of new cancer genes, including many in processes not previously known to be causal targets in cancer. The genes affect cell signaling, chromatin, and epigenomic regulation; RNA splicing; protein homeostasis; metabolism; and lineage maturation. Still, cancer genomics is in its infancy. Much work remains to complete the mutational catalog in primary tumors and across the natural history of cancer, to connect recurrent genomic alterations to altered pathways and acquired cellular vulnerabilities, and to use this information to guide the development and application of therapies."
},
{
"pmid": "28187284",
"title": "Clonal Heterogeneity and Tumor Evolution: Past, Present, and the Future.",
"abstract": "Intratumor heterogeneity, which fosters tumor evolution, is a key challenge in cancer medicine. Here, we review data and technologies that have revealed intra-tumor heterogeneity across cancer types and the dynamics, constraints, and contingencies inherent to tumor evolution. We emphasize the importance of macro-evolutionary leaps, often involving large-scale chromosomal alterations, in driving tumor evolution and metastasis and consider the role of the tumor microenvironment in engendering heterogeneity and drug resistance. We suggest that bold approaches to drug development, harnessing the adaptive properties of the immune-microenvironment while limiting those of the tumor, combined with advances in clinical trial-design, will improve patient outcome."
},
{
"pmid": "24479672",
"title": "Identifying driver mutations in sequenced cancer genomes: computational approaches to enable precision medicine.",
"abstract": "High-throughput DNA sequencing is revolutionizing the study of cancer and enabling the measurement of the somatic mutations that drive cancer development. However, the resulting sequencing datasets are large and complex, obscuring the clinically important mutations in a background of errors, noise, and random mutations. Here, we review computational approaches to identify somatic mutations in cancer genome sequences and to distinguish the driver mutations that are responsible for cancer from random, passenger mutations. First, we describe approaches to detect somatic mutations from high-throughput DNA sequencing data, particularly for tumor samples that comprise heterogeneous populations of cells. Next, we review computational approaches that aim to predict driver mutations according to their frequency of occurrence in a cohort of samples, or according to their predicted functional impact on protein sequence or structure. Finally, we review techniques to identify recurrent combinations of somatic mutations, including approaches that examine mutations in known pathways or protein-interaction networks, as well as de novo approaches that identify combinations of mutations according to statistical patterns of mutual exclusivity. These techniques, coupled with advances in high-throughput DNA sequencing, are enabling precision medicine approaches to the diagnosis and treatment of cancer."
},
{
"pmid": "28659971",
"title": "Computational Methods for Characterizing Cancer Mutational Heterogeneity.",
"abstract": "Advances in DNA sequencing technologies have allowed the characterization of somatic mutations in a large number of cancer genomes at an unprecedented level of detail, revealing the extreme genetic heterogeneity of cancer at two different levels: inter-tumor, with different patients of the same cancer type presenting different collections of somatic mutations, and intra-tumor, with different clones coexisting within the same tumor. Both inter-tumor and intra-tumor heterogeneity have crucial implications for clinical practices. Here, we review computational methods that use somatic alterations measured through next-generation DNA sequencing technologies for characterizing tumor heterogeneity and its association with clinical variables. We first review computational methods for studying inter-tumor heterogeneity, focusing on methods that attempt to summarize cancer heterogeneity by discovering pathways that are commonly mutated across different patients of the same cancer type. We then review computational methods for characterizing intra-tumor heterogeneity using information from bulk sequencing data or from single cell sequencing data. Finally, we present some of the recent computational methodologies that have been proposed to identify and assess the association between inter- or intra-tumor heterogeneity with clinical variables."
},
{
"pmid": "23539594",
"title": "Cancer genome landscapes.",
"abstract": "Over the past decade, comprehensive sequencing efforts have revealed the genomic landscapes of common forms of human cancer. For most cancer types, this landscape consists of a small number of \"mountains\" (genes altered in a high percentage of tumors) and a much larger number of \"hills\" (genes altered infrequently). To date, these studies have revealed ~140 genes that, when altered by intragenic mutations, can promote or \"drive\" tumorigenesis. A typical tumor contains two to eight of these \"driver gene\" mutations; the remaining mutations are passengers that confer no selective growth advantage. Driver genes can be classified into 12 signaling pathways that regulate three core cellular processes: cell fate, cell survival, and genome maintenance. A better understanding of these pathways is one of the most pressing needs in basic cancer research. Even now, however, our knowledge of cancer genomes is sufficient to guide the development of more effective approaches for reducing cancer morbidity and mortality."
},
{
"pmid": "20529912",
"title": "Inference of patient-specific pathway activities from multi-dimensional cancer genomics data using PARADIGM.",
"abstract": "MOTIVATION\nHigh-throughput data is providing a comprehensive view of the molecular changes in cancer tissues. New technologies allow for the simultaneous genome-wide assay of the state of genome copy number variation, gene expression, DNA methylation and epigenetics of tumor samples and cancer cell lines. Analyses of current data sets find that genetic alterations between patients can differ but often involve common pathways. It is therefore critical to identify relevant pathways involved in cancer progression and detect how they are altered in different patients.\n\n\nRESULTS\nWe present a novel method for inferring patient-specific genetic activities incorporating curated pathway interactions among genes. A gene is modeled by a factor graph as a set of interconnected variables encoding the expression and known activity of a gene and its products, allowing the incorporation of many types of omic data as evidence. The method predicts the degree to which a pathway's activities (e.g. internal gene states, interactions or high-level 'outputs') are altered in the patient using probabilistic inference. Compared with a competing pathway activity inference approach called SPIA, our method identifies altered activities in cancer-related pathways with fewer false-positives in both a glioblastoma multiform (GBM) and a breast cancer dataset. PARADIGM identified consistent pathway-level activities for subsets of the GBM patients that are overlooked when genes are considered in isolation. Further, grouping GBM patients based on their significant pathway perturbations divides them into clinically-relevant subgroups having significantly different survival outcomes. These findings suggest that therapeutics might be chosen that target genes at critical points in the commonly perturbed pathway(s) of a group of patients.\n\n\nAVAILABILITY\nSource code available at http://sbenz.github.com/Paradigm,.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "25501392",
"title": "Pan-cancer network analysis identifies combinations of rare somatic mutations across pathways and protein complexes.",
"abstract": "Cancers exhibit extensive mutational heterogeneity, and the resulting long-tail phenomenon complicates the discovery of genes and pathways that are significantly mutated in cancer. We perform a pan-cancer analysis of mutated networks in 3,281 samples from 12 cancer types from The Cancer Genome Atlas (TCGA) using HotNet2, a new algorithm to find mutated subnetworks that overcomes the limitations of existing single-gene, pathway and network approaches. We identify 16 significantly mutated subnetworks that comprise well-known cancer signaling pathways as well as subnetworks with less characterized roles in cancer, including cohesin, condensin and others. Many of these subnetworks exhibit co-occurring mutations across samples. These subnetworks contain dozens of genes with rare somatic mutations across multiple cancers; many of these genes have additional evidence supporting a role in cancer. By illuminating these rare combinations of mutations, pan-cancer network analyses provide a roadmap to investigate new diagnostic and therapeutic opportunities across cancer types."
},
{
"pmid": "26125594",
"title": "Pathway and network analysis of cancer genomes.",
"abstract": "Genomic information on tumors from 50 cancer types cataloged by the International Cancer Genome Consortium (ICGC) shows that only a few well-studied driver genes are frequently mutated, in contrast to many infrequently mutated genes that may also contribute to tumor biology. Hence there has been large interest in developing pathway and network analysis methods that group genes and illuminate the processes involved. We provide an overview of these analysis techniques and show where they guide mechanistic and translational investigations."
},
{
"pmid": "24132290",
"title": "Mutational landscape and significance across 12 major cancer types.",
"abstract": "The Cancer Genome Atlas (TCGA) has used the latest sequencing and analysis methods to identify somatic variants across thousands of tumours. Here we present data and analytical results for point mutations and small insertions/deletions from 3,281 tumours across 12 tumour types as part of the TCGA Pan-Cancer effort. We illustrate the distributions of mutation frequencies, types and contexts across tumour types, and establish their links to tissues of origin, environmental/carcinogen influences, and DNA repair defects. Using the integrated data sets, we identified 127 significantly mutated genes from well-known (for example, mitogen-activated protein kinase, phosphatidylinositol-3-OH kinase, Wnt/β-catenin and receptor tyrosine kinase signalling pathways, and cell cycle control) and emerging (for example, histone, histone modification, splicing, metabolism and proteolysis) cellular processes in cancer. The average number of mutations in these significantly mutated genes varies across tumour types; most tumours have two to six, indicating that the number of driver mutations required during oncogenesis is relatively small. Mutations in transcriptional factors/regulators show tissue specificity, whereas histone modifiers are often mutated across several cancer types. Clinical association analysis identifies genes having a significant effect on survival, and investigations of mutations with respect to clonal/subclonal architecture delineate their temporal orders during tumorigenesis. Taken together, these results lay the groundwork for developing new diagnostics and individualizing cancer treatment."
},
{
"pmid": "21376230",
"title": "Hallmarks of cancer: the next generation.",
"abstract": "The hallmarks of cancer comprise six biological capabilities acquired during the multistep development of human tumors. The hallmarks constitute an organizing principle for rationalizing the complexities of neoplastic disease. They include sustaining proliferative signaling, evading growth suppressors, resisting cell death, enabling replicative immortality, inducing angiogenesis, and activating invasion and metastasis. Underlying these hallmarks are genome instability, which generates the genetic diversity that expedites their acquisition, and inflammation, which fosters multiple hallmark functions. Conceptual progress in the last decade has added two emerging hallmarks of potential generality to this list-reprogramming of energy metabolism and evading immune destruction. In addition to cancer cells, tumors exhibit another dimension of complexity: they contain a repertoire of recruited, ostensibly normal cells that contribute to the acquisition of hallmark traits by creating the \"tumor microenvironment.\" Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer."
},
{
"pmid": "21908773",
"title": "Mutual exclusivity analysis identifies oncogenic network modules.",
"abstract": "Although individual tumors of the same clinical type have surprisingly diverse genomic alterations, these events tend to occur in a limited number of pathways, and alterations that affect the same pathway tend to not co-occur in the same patient. While pathway analysis has been a powerful tool in cancer genomics, our knowledge of oncogenic pathway modules is incomplete. To systematically identify such modules, we have developed a novel method, Mutual Exclusivity Modules in cancer (MEMo). The method uses correlation analysis and statistical tests to identify network modules by three criteria: (1) Member genes are recurrently altered across a set of tumor samples; (2) member genes are known to or are likely to participate in the same biological process; and (3) alteration events within the modules are mutually exclusive. Applied to data from the Cancer Genome Atlas (TCGA), the method identifies the principal known altered modules in glioblastoma (GBM) and highlights the striking mutual exclusivity of genomic alterations in the PI(3)K, p53, and Rb pathways. In serous ovarian cancer, we make the novel observation that inactivation of BRCA1 and BRCA2 is mutually exclusive of amplification of CCNE1 and inactivation of RB1, suggesting distinct alternative causes of genomic instability in this cancer type; and, we identify RBBP8 as a candidate oncogene involved in Rb-mediated cell cycle control. When applied to any cancer genomics data set, the algorithm can nominate oncogenic alterations that have a particularly strong selective effect and may also be useful in the design of therapeutic combinations in cases where mutual exclusivity reflects synthetic lethality."
},
{
"pmid": "26253137",
"title": "CoMEt: a statistical approach to identify combinations of mutually exclusive alterations in cancer.",
"abstract": "Cancer is a heterogeneous disease with different combinations of genetic alterations driving its development in different individuals. We introduce CoMEt, an algorithm to identify combinations of alterations that exhibit a pattern of mutual exclusivity across individuals, often observed for alterations in the same pathway. CoMEt includes an exact statistical test for mutual exclusivity and techniques to perform simultaneous analysis of multiple sets of mutually exclusive and subtype-specific alterations. We demonstrate that CoMEt outperforms existing approaches on simulated and real data. We apply CoMEt to five different cancer types, identifying both known cancer genes and pathways, and novel putative cancer genes."
},
{
"pmid": "25984343",
"title": "Parallel genome-scale loss of function screens in 216 cancer cell lines for the identification of context-specific genetic dependencies.",
"abstract": "Using a genome-scale, lentivirally delivered shRNA library, we performed massively parallel pooled shRNA screens in 216 cancer cell lines to identify genes that are required for cell proliferation and/or viability. Cell line dependencies on 11,000 genes were interrogated by 5 shRNAs per gene. The proliferation effect of each shRNA in each cell line was assessed by transducing a population of 11M cells with one shRNA-virus per cell and determining the relative enrichment or depletion of each of the 54,000 shRNAs after 16 population doublings using Next Generation Sequencing. All the cell lines were screened using standardized conditions to best assess differential genetic dependencies across cell lines. When combined with genomic characterization of these cell lines, this dataset facilitates the linkage of genetic dependencies with specific cellular contexts (e.g., gene mutations or cell lineage). To enable such comparisons, we developed and provided a bioinformatics tool to identify linear and nonlinear correlations between these features."
},
{
"pmid": "27260156",
"title": "Genomic Copy Number Dictates a Gene-Independent Cell Response to CRISPR/Cas9 Targeting.",
"abstract": "UNLABELLED\nThe CRISPR/Cas9 system enables genome editing and somatic cell genetic screens in mammalian cells. We performed genome-scale loss-of-function screens in 33 cancer cell lines to identify genes essential for proliferation/survival and found a strong correlation between increased gene copy number and decreased cell viability after genome editing. Within regions of copy-number gain, CRISPR/Cas9 targeting of both expressed and unexpressed genes, as well as intergenic loci, led to significantly decreased cell proliferation through induction of a G2 cell-cycle arrest. By examining single-guide RNAs that map to multiple genomic sites, we found that this cell response to CRISPR/Cas9 editing correlated strongly with the number of target loci. These observations indicate that genome targeting by CRISPR/Cas9 elicits a gene-independent antiproliferative cell response. This effect has important practical implications for the interpretation of CRISPR/Cas9 screening data and confounds the use of this technology for the identification of essential genes in amplified regions.\n\n\nSIGNIFICANCE\nWe found that the number of CRISPR/Cas9-induced DNA breaks dictates a gene-independent antiproliferative response in cells. These observations have practical implications for using CRISPR/Cas9 to interrogate cancer gene function and illustrate that cancer cells are highly sensitive to site-specific DNA damage, which may provide a path to novel therapeutic strategies. Cancer Discov; 6(8); 914-29. ©2016 AACR.See related commentary by Sheel and Xue, p. 824See related article by Munoz et al., p. 900This article is highlighted in the In This Issue feature, p. 803."
},
{
"pmid": "28753430",
"title": "Defining a Cancer Dependency Map.",
"abstract": "Most human epithelial tumors harbor numerous alterations, making it difficult to predict which genes are required for tumor survival. To systematically identify cancer dependencies, we analyzed 501 genome-scale loss-of-function screens performed in diverse human cancer cell lines. We developed DEMETER, an analytical framework that segregates on- from off-target effects of RNAi. 769 genes were differentially required in subsets of these cell lines at a threshold of six SDs from the mean. We found predictive models for 426 dependencies (55%) by nonlinear regression modeling considering 66,646 molecular features. Many dependencies fall into a limited number of classes, and unexpectedly, in 82% of models, the top biomarkers were expression based. We demonstrated the basis behind one such predictive model linking hypermethylation of the UBB ubiquitin gene to a dependency on UBC. Together, these observations provide a foundation for a cancer dependency map that facilitates the prioritization of therapeutic targets."
},
{
"pmid": "12808457",
"title": "PGC-1alpha-responsive genes involved in oxidative phosphorylation are coordinately downregulated in human diabetes.",
"abstract": "DNA microarrays can be used to identify gene expression changes characteristic of human disease. This is challenging, however, when relevant differences are subtle at the level of individual genes. We introduce an analytical strategy, Gene Set Enrichment Analysis, designed to detect modest but coordinate changes in the expression of groups of functionally related genes. Using this approach, we identify a set of genes involved in oxidative phosphorylation whose expression is coordinately decreased in human diabetic muscle. Expression of these genes is high at sites of insulin-mediated glucose disposal, activated by PGC-1alpha and correlated with total-body aerobic capacity. Our results associate this gene set with clinically important variation in human metabolism and illustrate the value of pathway relationships in the analysis of genomic profiling experiments."
},
{
"pmid": "16199517",
"title": "Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles.",
"abstract": "Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets."
},
{
"pmid": "18434431",
"title": "Combinatorial patterns of somatic gene mutations in cancer.",
"abstract": "Cancer is a complex process in which the abnormalities of many genes appear to be involved. The combinatorial patterns of gene mutations may reveal the functional relations between genes and pathways in tumorigenesis as well as identify targets for treatment. We examined the patterns of somatic mutations of cancers from Catalog of Somatic Mutations in Cancer (COSMIC), a large-scale database curated by the Wellcome Trust Sanger Institute. The frequently mutated genes are well-known oncogenes and tumor suppressors that are involved in generic processes of cell-cycle control, signal transduction, and stress responses. These \"signatures\" of gene mutations are heterogeneous when the cancers from different tissues are compared. Mutations in genes functioning in different pathways can occur in the same cancer (i.e., co-occur), whereas those in genes functioning in the same pathway are rarely mutated in the same sample. This observation supports the view of tumorigenesis as derived from a process like Darwinian evolution. However, certain combinatorial mutational patterns violate these simple rules and demonstrate tissue-specific variations. For instance, mutations of genes in the Ras and Wnt pathways tend to co-occur in the large intestine but are mutually exclusive in cancers of the pancreas. The relationships between mutations in different samples of a cancer can also reveal the temporal orders of mutational events. In addition, the observed mutational patterns suggest candidates of new cosequencing targets that can either reveal novel patterns or validate the predictions deduced from existing patterns. These combinatorial mutational patterns provide guiding information for the ongoing cancer genome projects."
},
{
"pmid": "21489305",
"title": "Discovering functional modules by identifying recurrent and mutually exclusive mutational patterns in tumors.",
"abstract": "BACKGROUND\nAssays of multiple tumor samples frequently reveal recurrent genomic aberrations, including point mutations and copy-number alterations, that affect individual genes. Analyses that extend beyond single genes are often restricted to examining pathways, interactions and functional modules that are already known.\n\n\nMETHODS\nWe present a method that identifies functional modules without any information other than patterns of recurrent and mutually exclusive aberrations (RME patterns) that arise due to positive selection for key cancer phenotypes. Our algorithm efficiently constructs and searches networks of potential interactions and identifies significant modules (RME modules) by using the algorithmic significance test.\n\n\nRESULTS\nWe apply the method to the TCGA collection of 145 glioblastoma samples, resulting in extension of known pathways and discovery of new functional modules. The method predicts a role for EP300 that was previously unknown in glioblastoma. We demonstrate the clinical relevance of these results by validating that expression of EP300 is prognostic, predicting survival independent of age at diagnosis and tumor grade.\n\n\nCONCLUSIONS\nWe have developed a sensitive, simple, and fast method for automatically detecting functional modules in tumors based solely on patterns of recurrent genomic aberration. Due to its ability to analyze very large amounts of diverse data, we expect it to be increasingly useful when applied to the many tumor panels scheduled to be assayed in the near future."
},
{
"pmid": "21653252",
"title": "De novo discovery of mutated driver pathways in cancer.",
"abstract": "Next-generation DNA sequencing technologies are enabling genome-wide measurements of somatic mutations in large numbers of cancer patients. A major challenge in the interpretation of these data is to distinguish functional \"driver mutations\" important for cancer development from random \"passenger mutations.\" A common approach for identifying driver mutations is to find genes that are mutated at significant frequency in a large cohort of cancer genomes. This approach is confounded by the observation that driver mutations target multiple cellular signaling and regulatory pathways. Thus, each cancer patient may exhibit a different combination of mutations that are sufficient to perturb these pathways. This mutational heterogeneity presents a problem for predicting driver mutations solely from their frequency of occurrence. We introduce two combinatorial properties, coverage and exclusivity, that distinguish driver pathways, or groups of genes containing driver mutations, from groups of genes with passenger mutations. We derive two algorithms, called Dendrix, to find driver pathways de novo from somatic mutation data. We apply Dendrix to analyze somatic mutation data from 623 genes in 188 lung adenocarcinoma patients, 601 genes in 84 glioblastoma patients, and 238 known mutations in 1000 patients with various cancers. In all data sets, we find groups of genes that are mutated in large subsets of patients and whose mutations are approximately exclusive. Our Dendrix algorithms scale to whole-genome analysis of thousands of patients and thus will prove useful for larger data sets to come from The Cancer Genome Atlas (TCGA) and other large-scale cancer genome sequencing projects."
},
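The Dendrix entry above turns on two combinatorial properties, coverage and exclusivity. A minimal sketch of how a candidate gene set can be scored for this trade-off is given below; it assumes mutation data stored as a simple patient-to-gene-set mapping and uses a coverage-minus-overlap style weight in the spirit of the abstract, not the authors' actual implementation or data structures.

```python
# Illustrative only: score a candidate gene set for coverage (many patients
# carry at least one mutation in the set) versus exclusivity (few patients
# carry more than one). Data layout and function names are assumptions.

def coverage_exclusivity_weight(gene_set, mutations):
    """mutations: dict mapping patient id -> set of mutated genes."""
    gene_set = set(gene_set)
    hits_per_patient = [len(genes & gene_set) for genes in mutations.values()]
    coverage = sum(1 for h in hits_per_patient if h > 0)  # patients covered at least once
    total_hits = sum(hits_per_patient)                    # all mutation events in the set
    overlap = total_hits - coverage                       # penalty for co-occurring hits
    return coverage - overlap

mutations = {"p1": {"TP53"}, "p2": {"EGFR"}, "p3": {"TP53", "EGFR"}, "p4": set()}
print(coverage_exclusivity_weight({"TP53", "EGFR"}, mutations))  # 3 covered - 1 overlap = 2
```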
{
"pmid": "23717195",
"title": "Simultaneous identification of multiple driver pathways in cancer.",
"abstract": "Distinguishing the somatic mutations responsible for cancer (driver mutations) from random, passenger mutations is a key challenge in cancer genomics. Driver mutations generally target cellular signaling and regulatory pathways consisting of multiple genes. This heterogeneity complicates the identification of driver mutations by their recurrence across samples, as different combinations of mutations in driver pathways are observed in different samples. We introduce the Multi-Dendrix algorithm for the simultaneous identification of multiple driver pathways de novo in somatic mutation data from a cohort of cancer samples. The algorithm relies on two combinatorial properties of mutations in a driver pathway: high coverage and mutual exclusivity. We derive an integer linear program that finds set of mutations exhibiting these properties. We apply Multi-Dendrix to somatic mutations from glioblastoma, breast cancer, and lung cancer samples. Multi-Dendrix identifies sets of mutations in genes that overlap with known pathways - including Rb, p53, PI(3)K, and cell cycle pathways - and also novel sets of mutually exclusive mutations, including mutations in several transcription factors or other genes involved in transcriptional regulation. These sets are discovered directly from mutation data with no prior knowledge of pathways or gene interactions. We show that Multi-Dendrix outperforms other algorithms for identifying combinations of mutations and is also orders of magnitude faster on genome-scale data. Software available at: http://compbio.cs.brown.edu/software."
},
{
"pmid": "25887147",
"title": "Systematic identification of cancer driving signaling pathways based on mutual exclusivity of genomic alterations.",
"abstract": "We present a novel method for the identification of sets of mutually exclusive gene alterations in a given set of genomic profiles. We scan the groups of genes with a common downstream effect on the signaling network, using a mutual exclusivity criterion that ensures that each gene in the group significantly contributes to the mutual exclusivity pattern. We test the method on all available TCGA cancer genomics datasets, and detect multiple previously unreported alterations that show significant mutual exclusivity and are likely to be driver events."
},
{
"pmid": "25785493",
"title": "Simultaneous inference of cancer pathways and tumor progression from cross-sectional mutation data.",
"abstract": "Recent cancer sequencing studies provide a wealth of somatic mutation data from a large number of patients. One of the most intriguing and challenging questions arising from this data is to determine whether the temporal order of somatic mutations in a cancer follows any common progression. Since we usually obtain only one sample from a patient, such inferences are commonly made from cross-sectional data from different patients. This analysis is complicated by the extensive variation in the somatic mutations across different patients, variation that is reduced by examining combinations of mutations in various pathways. Thus far, methods to reconstruct tumor progression at the pathway level have restricted attention to known, a priori defined pathways. In this work we show how to simultaneously infer pathways and the temporal order of their mutations from cross-sectional data, leveraging on the exclusivity property of driver mutations within a pathway. We define the pathway linear progression model, and derive a combinatorial formulation for the problem of finding the optimal model from mutation data. We show that with enough samples the optimal solution to this problem uniquely identifies the correct model with high probability even when errors are present in the mutation data. We then formulate the problem as an integer linear program (ILP), which allows the analysis of datasets from recent studies with large numbers of samples. We use our algorithm to analyze somatic mutation data from three cancer studies, including two studies from The Cancer Genome Atlas (TCGA) on large number of samples on colorectal cancer and glioblastoma. The models reconstructed with our method capture most of the current knowledge of the progression of somatic mutations in these cancer types, while also providing new insights on the tumor progression at the pathway level."
},
{
"pmid": "27088724",
"title": "Characterizing genomic alterations in cancer by complementary functional associations.",
"abstract": "Systematic efforts to sequence the cancer genome have identified large numbers of mutations and copy number alterations in human cancers. However, elucidating the functional consequences of these variants, and their interactions to drive or maintain oncogenic states, remains a challenge in cancer research. We developed REVEALER, a computational method that identifies combinations of mutually exclusive genomic alterations correlated with functional phenotypes, such as the activation or gene dependency of oncogenic pathways or sensitivity to a drug treatment. We used REVEALER to uncover complementary genomic alterations associated with the transcriptional activation of β-catenin and NRF2, MEK-inhibitor sensitivity, and KRAS dependency. REVEALER successfully identified both known and new associations, demonstrating the power of combining functional profiles with extensive characterization of genomic alterations in cancer genomes."
},
{
"pmid": "26570998",
"title": "Pharmacogenomic agreement between two cancer cell line data sets.",
"abstract": "Large cancer cell line collections broadly capture the genomic diversity of human cancers and provide valuable insight into anti-cancer drug response. Here we show substantial agreement and biological consilience between drug sensitivity measurements and their associated genomic predictors from two publicly available large-scale pharmacogenomics resources: The Cancer Cell Line Encyclopedia and the Genomics of Drug Sensitivity in Cancer databases."
},
{
"pmid": "22460905",
"title": "The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity.",
"abstract": "The systematic translation of cancer genomic data into knowledge of tumour biology and therapeutic possibilities remains challenging. Such efforts should be greatly aided by robust preclinical model systems that reflect the genomic diversity of human cancers and for which detailed genetic and pharmacological annotation is available. Here we describe the Cancer Cell Line Encyclopedia (CCLE): a compilation of gene expression, chromosomal copy number and massively parallel sequencing data from 947 human cancer cell lines. When coupled with pharmacological profiles for 24 anticancer drugs across 479 of the cell lines, this collection allowed identification of genetic, lineage, and gene-expression-based predictors of drug sensitivity. In addition to known predictors, we found that plasma cell lineage correlated with sensitivity to IGF1 receptor inhibitors; AHR expression was associated with MEK inhibitor efficacy in NRAS-mutant lines; and SLFN11 expression predicted sensitivity to topoisomerase inhibitors. Together, our results indicate that large, annotated cell-line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of 'personalized' therapeutic regimens."
},
{
"pmid": "23269662",
"title": "ATARiS: computational quantification of gene suppression phenotypes from multisample RNAi screens.",
"abstract": "Genome-scale RNAi libraries enable the systematic interrogation of gene function. However, the interpretation of RNAi screens is complicated by the observation that RNAi reagents designed to suppress the mRNA transcripts of the same gene often produce a spectrum of phenotypic outcomes due to differential on-target gene suppression or perturbation of off-target transcripts. Here we present a computational method, Analytic Technique for Assessment of RNAi by Similarity (ATARiS), that takes advantage of patterns in RNAi data across multiple samples in order to enrich for RNAi reagents whose phenotypic effects relate to suppression of their intended targets. By summarizing only such reagent effects for each gene, ATARiS produces quantitative, gene-level phenotype values, which provide an intuitive measure of the effect of gene suppression in each sample. This method is robust for data sets that contain as few as 10 samples and can be used to analyze screens of any number of targeted genes. We used this analytic approach to interrogate RNAi data derived from screening more than 100 human cancer cell lines and identified HNF1B as a transforming oncogene required for the survival of cancer cells that harbor HNF1B amplifications. ATARiS is publicly available at http://broadinstitute.org/ataris."
},
{
"pmid": "20534738",
"title": "Nrf2 and Keap1 abnormalities in non-small cell lung carcinoma and association with clinicopathologic features.",
"abstract": "PURPOSE\nTo understand the role of nuclear factor erythroid-2-related factor 2 (Nrf2) and Kelch-like ECH-associated protein 1 (Keap1) in non-small cell lung cancer (NSCLC), we studied their expression in a large series of tumors with annotated clinicopathologic data, including response to platinum-based adjuvant chemotherapy.\n\n\nEXPERIMENTAL DESIGN\nWe determined the immunohistochemical expression of nuclear Nrf2 and cytoplasmic Keap1 in 304 NSCLCs and its association with patients' clinicopathologic characteristics, and in 89 tumors from patients who received neoadjuvant (n = 26) or adjuvant platinum-based chemotherapy (n = 63). We evaluated NFE2L2 and KEAP1 mutations in 31 tumor specimens.\n\n\nRESULTS\nWe detected nuclear Nrf2 expression in 26% of NSCLCs; it was significantly more common in squamous cell carcinomas (38%) than in adenocarcinomas (18%; P < 0.0001). Low or absent Keap1 expression was detected in 56% of NSCLCs; it was significantly more common in adenocarcinomas (62%) than in squamous cell carcinomas (46%; P = 0.0057). In NSCLC, mutations of NFE2L2 and KEAP1 were very uncommon (2 of 29 and 1 of 31 cases, respectively). In multivariate analysis, Nrf2 expression was associated with worse overall survival [P = 0.0139; hazard ratio (HR), 1.75] in NSCLC patients, and low or absent Keap1 expression was associated with worse overall survival (P = 0.0181; HR, 2.09) in squamous cell carcinoma. In univariate analysis, nuclear Nrf2 expression was associated with worse recurrence-free survival in squamous cell carcinoma patients who received adjuvant treatment (P = 0.0410; HR, 3.37).\n\n\nCONCLUSIONS\nIncreased expression of Nrf2 and decreased expression of Keap1 are common abnormalities in NSCLC and are associated with a poor outcome. Nuclear expression of Nrf2 in malignant lung cancer cells may play a role in resistance to platinum-based treatment in squamous cell carcinoma."
},
{
"pmid": "21859464",
"title": "Messing up disorder: how do missense mutations in the tumor suppressor protein APC lead to cancer?",
"abstract": "Mutations in the adenomatous polyposis coli (APC) tumor suppressor gene strongly predispose to development of gastro-intestinal tumors. Central to the tumorigenic events in APC mutant cells is the uncontrolled stabilization and transcriptional activation of the protein β-catenin. Many questions remain as to how APC controls β-catenin degradation. Remarkably, the large C-terminal region of APC, which spans over 2000 amino acids and includes critical regions in downregulating β-catenin, is predicted to be natively unfolded. Here we discuss how this uncommonly large disordered region may help to coordinate the multiple cellular functions of APC. Recently, a significant number of germline and somatic missense mutations in the central region of APC were linked to tumorigenesis in the colon as well as extra-intestinal tissues. We classify and localize all currently known missense mutations in the APC structure. The molecular basis by which these mutations interfere with the function of APC remains unresolved. We propose several mechanisms by which cancer-related missense mutations in the large disordered domain of APC may interfere with tumor suppressor activity. Insight in the underlying molecular events will be invaluable in the development of novel strategies to counter dysregulated Wnt signaling by APC mutations in cancer."
},
{
"pmid": "18823568",
"title": "iRefIndex: a consolidated protein interaction database with provenance.",
"abstract": "BACKGROUND\nInteraction data for a given protein may be spread across multiple databases. We set out to create a unifying index that would facilitate searching for these data and that would group together redundant interaction data while recording the methods used to perform this grouping.\n\n\nRESULTS\nWe present a method to generate a key for a protein interaction record and a key for each participant protein. These keys may be generated by anyone using only the primary sequence of the proteins, their taxonomy identifiers and the Secure Hash Algorithm. Two interaction records will have identical keys if they refer to the same set of identical protein sequences and taxonomy identifiers. We define records with identical keys as a redundant group. Our method required that we map protein database references found in interaction records to current protein sequence records. Operations performed during this mapping are described by a mapping score that may provide valuable feedback to source interaction databases on problematic references that are malformed, deprecated, ambiguous or unfound. Keys for protein participants allow for retrieval of interaction information independent of the protein references used in the original records.\n\n\nCONCLUSION\nWe have applied our method to protein interaction records from BIND, BioGrid, DIP, HPRD, IntAct, MINT, MPact, MPPI and OPHID. The resulting interaction reference index is provided in PSI-MITAB 2.5 format at http://irefindex.uio.no. This index may form the basis of alternative redundant groupings based on gene identifiers or near sequence identity groupings."
},
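The iRefIndex entry above describes deriving record keys from nothing more than each protein's primary sequence, its taxonomy identifier, and the Secure Hash Algorithm. A minimal sketch of that idea follows; the normalisation steps, concatenation order, choice of SHA-1, and hex encoding are all assumptions for illustration and are not claimed to reproduce iRefIndex's actual key format.

```python
import hashlib

def protein_key(sequence: str, taxid: str) -> str:
    # Hypothetical normalisation: strip whitespace, uppercase, append the taxon id.
    material = sequence.strip().upper() + str(taxid)
    return hashlib.sha1(material.encode("ascii")).hexdigest()

def interaction_key(participant_keys) -> str:
    # Order-independent key for a record: hash the sorted participant keys, so two
    # records over the same set of sequences/taxa collapse to one redundant group.
    return hashlib.sha1("".join(sorted(participant_keys)).encode("ascii")).hexdigest()

k1 = protein_key("MKTAYIAKQR", "9606")
k2 = protein_key("MEEPQSDPSV", "9606")
print(interaction_key([k1, k2]) == interaction_key([k2, k1]))  # True
```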
{
"pmid": "27899662",
"title": "KEGG: new perspectives on genomes, pathways, diseases and drugs.",
"abstract": "KEGG (http://www.kegg.jp/ or http://www.genome.jp/kegg/) is an encyclopedia of genes and genomes. Assigning functional meanings to genes and genomes both at the molecular and higher levels is the primary objective of the KEGG database project. Molecular-level functions are stored in the KO (KEGG Orthology) database, where each KO is defined as a functional ortholog of genes and proteins. Higher-level functions are represented by networks of molecular interactions, reactions and relations in the forms of KEGG pathway maps, BRITE hierarchies and KEGG modules. In the past the KO database was developed for the purpose of defining nodes of molecular networks, but now the content has been expanded and the quality improved irrespective of whether or not the KOs appear in the three molecular network databases. The newly introduced addendum category of the GENES database is a collection of individual proteins whose functions are experimentally characterized and from which an increasing number of KOs are defined. Furthermore, the DISEASE and DRUG databases have been improved by systematic analysis of drug labels for better integration of diseases and drugs with the KEGG molecular networks. KEGG is moving towards becoming a comprehensive knowledge base for both functional interpretation and practical application of genomic information."
},
{
"pmid": "19306108",
"title": "Arylsulfatase B regulates colonic epithelial cell migration by effects on MMP9 expression and RhoA activation.",
"abstract": "Arylsulfatase B (ASB; N-acetylgalactosamine-4-sulfatase; 4-sulfatase; ARSB) is the enzyme that removes 4-sulfate groups from N-acetylgalactosamine 4-sulfate, which combines with glucuronate to form the disaccharide unit of chondroitin-4-sulfate (C4S). In this study, we report how variation in expression of ASB affected the migration of human colonic epithelial cells. In the T84 cell line, derived from lung metastasis of malignant colonic epithelial cells, the activity of ASB, as well as steroid sulfatase, arylsulfatase A, and galactose-6-sulfatase, were significantly less than in normal, primary colonic epithelial cells and in the NCM460 cell line which was derived from normal colonocytes. In the T84 cells, matrix metalloproteinase 9 (MMP9), activated RhoA, and cell migration, as well as C4S content, were significantly more than in the NCM460 cells. Silencing and overexpression of ASB had inverse effects on MMP9, activated RhoA, and cell migration, as well as the C4S content, in the NCM460 and T84 cells. When ASB expression was silenced by siRNA in the NCM460 cells, MMP9 secretion increased to over 3 times the basal level, activated RhoA increased * 85%, and cell migration increased * 52%. Following overexpression of ASB, MMP9 declined 51%, activated RhoA declined * 51%, and cell migration decreased * 37%. These findings demonstrate marked effects of ASB expression on the migratory activity of colonic epithelial cells, activated RhoA, and MMP9, and suggest a potential vital role of ASB, due to its impact on chondroitin sulfation, on determination of the invasive phenotype of colonic epithelial cells."
}
] |
PLoS Computational Biology | 31170150 | PMC6553697 | 10.1371/journal.pcbi.1007071 | Dynamic properties of internal noise probed by modulating binocular rivalry | Neural systems are inherently noisy, and this noise can affect our perception from moment to moment. This is particularly apparent in binocular rivalry, where perception of competing stimuli shown to the left and right eyes alternates over time. We modulated rivalling stimuli using dynamic sequences of external noise of various rates and amplitudes. We repeated each external noise sequence twice, and assessed the consistency of percepts across repetitions. External noise modulations of sufficiently high contrast increased consistency scores above baseline, and were most effective at 1/8Hz. A computational model of rivalry in which internal noise has a 1/f (pink) temporal amplitude spectrum, and a standard deviation of 16% contrast, provided the best account of our data. Our novel technique provides detailed estimates of the dynamic properties of internal noise during binocular rivalry, and by extension the stochastic processes that drive our perception and other types of spontaneous brain activity. | Related work on rivalryAs mentioned above, Kim et al. [1] modulated the contrast of rivalling stimuli periodically in antiphase at a range of temporal frequencies (building on earlier work by O’Shea and Crassini [18] in which rivalling stimuli were entirely removed at different frequencies and phases). They implement three computational models to account for their results, each of which has random walk (i.e. brown) noise with a spectral slope of 1/f2, but report obtaining similar results with white noise for their experimental conditions. Furthermore, one of the models they implement is a version of the Wilson [3] model considered here, but they report the best performance when the internal noise is added to the adaptation differential equation (see Methods), rather than the rivalling units (see also [23]). In additional simulations, we found similar effects on the dominance duration distributions for internal noise placed either in the main equation or adaptation equation. However, placing internal noise in the adaptation differential equation resulted in response consistency that was not tuned to modulation frequency (i.e., flat). We suspect that Kim et al.’s paradigm did not afford sufficient constraints to distinguish between the two very different internal noise types or the locus of internal noise.Other models that have incorporated a stochastic component include the model of Lehky [2] which also used random walk (brown) noise, Kalarickal and Marshall [29] who used additive uniformly distributed (effectively white) noise, and Stollenwerk and Bode [30] who used temporally white noise that was correlated across space. A further model developed by Rubin and colleagues [15,16] uses exponentially filtered white noise which progressively attenuates higher frequencies. However none of these studies report testing other types of internal noise, nor were their experimental conditions sufficient to offer meaningful constraints on the internal noise properties. As far as we are aware, this is the first study that has modelled internal noise of different amplitudes and spectral properties and compared the predictions to empirical results.Baker & Graf [8] explored binocular rivalry using broadband pink noise stimuli that also varied dynamically in time. 
By testing factorial combinations of temporal amplitude spectra across the two eyes, they showed that stimuli with 1/f temporal amplitude spectra tended to dominate over stimuli with different spectral slopes (the same was also true of static stimuli with a 1/f spatial amplitude spectrum). Whilst these results do not directly imply anything about the properties of internal noise, they are consistent with the idea that the visual system is optimised for stimuli encountered in the natural world, which are typically 1/f in both space and time (e.g. [31–35]). Our findings here imply that as well as having a preference for external stimuli with naturalistic properties, the internal structure of the visual system might itself have evolved to emulate these temporal constraints [32,36–38]. | [
"16183099",
"3067209",
"14612564",
"17764714",
"17904610",
"490227",
"19757880",
"19289828",
"19124036",
"20053080",
"21920853",
"20598538",
"17209732",
"7700878",
"17615138",
"19125318",
"16489854",
"6522219",
"18234273",
"11992115",
"3404312",
"14208857",
"19956332",
"6523751",
"29225766",
"23024357",
"6740959",
"17209731",
"14629871",
"3430225",
"17705683",
"11520932",
"24190908",
"28278313",
"11477428",
"5884255",
"10615461",
"2617860",
"26982370",
"18547600",
"30822470",
"23337440",
"26024455",
"24777419",
"20471349",
"11932559"
] | [
{
"pmid": "16183099",
"title": "Stochastic resonance in binocular rivalry.",
"abstract": "When a different image is presented to each eye, visual awareness spontaneously alternates between the two images--a phenomenon called binocular rivalry. Because binocular rivalry is characterized by two marginally stable perceptual states and spontaneous, apparently stochastic, switching between them, it has been speculated that switches in perceptual awareness reflect a double-well-potential type computational architecture coupled with noise. To characterize this noise-mediated mechanism, we investigated whether stimulus input, neural adaptation, and inhibitory modulations (thought to underlie perceptual switches) interacted with noise in such a way that the system produced stochastic resonance. By subjecting binocular rivalry to weak periodic contrast modulations spanning a range of frequencies, we demonstrated quantitative evidence of stochastic resonance in binocular rivalry. Our behavioral results combined with computational simulations provided insights into the nature of the internal noise (its magnitude, locus, and calibration) that is relevant to perceptual switching, as well as provided novel dynamic constraints on computational models designed to capture the neural mechanisms underlying perceptual switching."
},
{
"pmid": "3067209",
"title": "An astable multivibrator model of binocular rivalry.",
"abstract": "The behavior of a neural network model for binocular rivalry is explored through the development of an analogy between it and an electronic astable multivibrator circuit. The model incorporates reciprocal feedback inhibition between signals from the left and the right eyes prior to binocular convergence. The strength of inhibitory coupling determines whether the system undergoes rivalrous oscillations or remains in stable fusion: strong coupling leads to oscillations, weak coupling to fusion. This implies that correlation between spatial patterns presented to the two eyes can affect the strength of binocular inhibition. Finally, computer simulations are presented which show that a reciprocal inhibition model can reproduce the stochastic behavior of rivalry. The model described is a counterexample to claims that reciprocal inhibition models as a class cannot exhibit many of the experimentally observed properties of rivalry."
},
{
"pmid": "14612564",
"title": "Computational evidence for a rivalry hierarchy in vision.",
"abstract": "Cortical-form vision comprises multiple, hierarchically arranged areas with feedforward and feedback interconnections. This complex architecture poses difficulties for attempts to link perceptual phenomena to activity at a particular level of the system. This difficulty has been especially salient in studies of binocular rivalry alternations, where there is seemingly conflicting evidence for a locus in primary visual cortex or alternatively in higher cortical areas devoted to object perception. Here, I use a competitive neural model to demonstrate that the data require at least two hierarchic rivalry stages for their explanation. This model demonstrates that competitive inhibition in the first rivalry stage can be eliminated by using suitable stimulus dynamics, thereby revealing properties of a later stage, a result obtained with both spike-rate and conductance-based model neurons. This result provides a synthesis of competing rivalry theories and suggests that neural competition may be a general characteristic throughout the form-vision hierarchy."
},
{
"pmid": "17764714",
"title": "Minimal physiological conditions for binocular rivalry and rivalry memory.",
"abstract": "Binocular rivalry entails a perceptual alternation between incompatible stimuli presented to the two eyes. A minimal explanation for binocular rivalry involves strong competitive inhibition between neurons responding to different monocular stimuli to preclude simultaneous activity in the two groups. In addition, strong self-adaptation of dominant neurons is necessary to enable suppressed neurons to become dominant in turn. Here a minimal nonlinear neural model is developed incorporating inhibition, self-adaptation, and recurrent excitation. The model permits derivation of an equation for mean dominance duration as a function of the underlying physiological variables. The dominance duration equation incorporates an explicit representation of Levelt's second law. The same equation also shows that introduction of a simple compressive response nonlinearity can explain Levelt's fourth law. Finally, addition of brief, recurrent synaptic facilitation to the model generates properties of rivalry memory."
},
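The minimal-model entry above (reciprocal inhibition, self-adaptation, recurrent excitation plus noise) is the kind of system the host article's related-work section also discusses. The sketch below is a generic two-population rate model with those ingredients, written only to make the structure concrete; the transfer function, parameter values, and the choice to add noise to the rate equations (rather than the adaptation equations) are illustrative assumptions, not the published model.

```python
import numpy as np

def simulate_rivalry(T=60.0, dt=0.002, beta=3.0, g=2.0, tau=0.02,
                     tau_a=1.0, noise_sd=0.1, seed=0):
    """Schematic rivalry dynamics: two populations with cross-inhibition (beta),
    self-adaptation (g, tau_a) and additive noise on the rate equations."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    E = np.zeros((n, 2))                      # firing rates of the two populations
    A = np.zeros(2)                           # adaptation variables
    relu = lambda x: np.maximum(x, 0.0)       # threshold-linear transfer function
    for t in range(1, n):
        drive = 1.0 - beta * E[t - 1, ::-1] - g * A   # input minus inhibition and adaptation
        noise = noise_sd * np.sqrt(dt) * rng.standard_normal(2)
        E[t] = E[t - 1] + dt * (relu(drive) - E[t - 1]) / tau + noise
        A += dt * (E[t] - A) / tau_a
    return E

rates = simulate_rivalry()
dominant = rates.argmax(axis=1)               # which population wins at each time step
```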
{
"pmid": "17904610",
"title": "Binocular contrast interactions: dichoptic masking is not a single process.",
"abstract": "To decouple interocular suppression and binocular summation we varied the relative phase of mask and target in a 2IFC contrast-masking paradigm. In Experiment I, dichoptic mask gratings had the same orientation and spatial frequency as the target. For in-phase masking, suppression was strong (a log-log slope of approximately 1) and there was weak facilitation at low mask contrasts. Anti-phase masking was weaker (a log-log slope of approximately 0.7) and there was no facilitation. A two-stage model of contrast gain control [Meese, T.S., Georgeson, M.A. and Baker, D.H. (2006). Binocular contrast vision at and above threshold. Journal of Vision, 6: 1224-1243] provided a good fit to the in-phase results and fixed its free parameters. It made successful predictions (with no free parameters) for the anti-phase results when (A) interocular suppression was phase-indifferent but (B) binocular summation was phase sensitive. Experiments II and III showed that interocular suppression comprised two components: (i) a tuned effect with an orientation bandwidth of approximately +/-33 degrees and a spatial frequency bandwidth of >3 octaves, and (ii) an untuned effect that elevated threshold by a factor of between 2 and 4. Operationally, binocular summation was more tightly tuned, having an orientation bandwidth of approximately +/-8 degrees , and a spatial frequency bandwidth of approximately 0.5 octaves. Our results replicate the unusual shapes of the in-phase dichoptic tuning functions reported by Legge [Legge, G.E. (1979). Spatial frequency masking in human vision: Binocular interactions. Journal of the Optical Society of America, 69: 838-847]. These can now be seen as the envelope of the direct effects from interocular suppression and the indirect effect from binocular summation, which contaminates the signal channel with a mask that has been suppressed by the target."
},
{
"pmid": "490227",
"title": "Spatial frequency masking in human vision: binocular interactions.",
"abstract": "Binocular contrast interactions in human vision were studied psychophysically. Thresholds were obtained for sinewave grating stimulation of the right eye in the presence of simultaneous masking gratings presented to the right eye (monocular masking) or left eye (dichoptic masking). In the first experiment, thresholds were measured at 0.25, 1.0, 4.0, and 16.0 cycle per degree (cpd) as a function of the contrast of masking gratings of identical frequency and phase. Thresholds rose nonmonotonically with masking contrast. At medium and high contrast levels, dichoptic masking was more effective in elevating contrast thresholds than monocular masking, and approached Weber's Law behavior. In the second experiment, spatial frequency tuning functions were obtained for test gratings at five spatial frequencies, by measuring threshold elevation as a function of the spatial frequency of constant-contrast masking gratings. At 1.0, 4.0, and 16.0 cpd, the tuning functions peaked at the test frequencies. The dichoptic tuning functions had a bandwidth of about 1 octave between half-maximum points, narrower than +/- 1 octave bandwidths of the monocular tuning functions. At 0.125 and 0.25 cpd, the tuning functions were broader and exhibited a shift in peak masking to frequencies above the test frequencies."
},
{
"pmid": "19757880",
"title": "Cross-orientation masking is speed invariant between ocular pathways but speed dependent within them.",
"abstract": "In human (D. H. Baker, T. S. Meese, & R. J. Summers, 2007b) and in cat (B. Li, M. R. Peterson, J. K. Thompson, T. Duong, & R. D. Freeman, 2005; F. Sengpiel & V. Vorobyov, 2005) there are at least two routes to cross-orientation suppression (XOS): a broadband, non-adaptable, monocular (within-eye) pathway and a more narrowband, adaptable interocular (between the eyes) pathway. We further characterized these two routes psychophysically by measuring the weight of suppression across spatio-temporal frequency for cross-oriented pairs of superimposed flickering Gabor patches. Masking functions were normalized to unmasked detection thresholds and fitted by a two-stage model of contrast gain control (T. S. Meese, M. A. Georgeson, & D. H. Baker, 2006) that was developed to accommodate XOS. The weight of monocular suppression was a power function of the scalar quantity 'speed' (temporal-frequency/spatial-frequency). This weight can be expressed as the ratio of non-oriented magno- and parvo-like mechanisms, permitting a fast-acting, early locus, as benefits the urgency for action associated with high retinal speeds. In contrast, dichoptic-masking functions superimposed. Overall, this (i) provides further evidence for dissociation between the two forms of XOS in humans, and (ii) indicates that the monocular and interocular varieties of XOS are space/time scale-dependent and scale-invariant, respectively. This suggests an image-processing role for interocular XOS that is tailored to natural image statistics-very different from that of the scale-dependent (speed-dependent) monocular variety."
},
{
"pmid": "19289828",
"title": "Natural images dominate in binocular rivalry.",
"abstract": "Ecological approaches to perception have demonstrated that information encoding by the visual system is informed by the natural environment, both in terms of simple image attributes like luminance and contrast, and more complex relationships corresponding to Gestalt principles of perceptual organization. Here, we ask if this optimization biases perception of visual inputs that are perceptually bistable. Using the binocular rivalry paradigm, we designed stimuli that varied in either their spatiotemporal amplitude spectra or their phase spectra. We found that noise stimuli with \"natural\" amplitude spectra (i.e., amplitude content proportional to 1/f, where f is spatial or temporal frequency) dominate over those with any other systematic spectral slope, along both spatial and temporal dimensions. This could not be explained by perceived contrast measurements, and occurred even though all stimuli had equal energy. Calculating the effective contrast following attenuation by a model contrast sensitivity function suggested that the strong contrast dependency of rivalry provides the mechanism by which binocular vision is optimized for viewing natural images. We also compared rivalry between natural and phase-scrambled images and found a strong preference for natural phase spectra that could not be accounted for by observer biases in a control task. We propose that this phase specificity relates to contour information, and arises either from the activity of V1 complex cells, or from later visual areas, consistent with recent neuroimaging and single-cell work. Our findings demonstrate that human vision integrates information across space, time, and phase to select the input most likely to hold behavioral relevance."
},
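Several entries in this list, like the host article itself, hinge on noise sequences whose temporal amplitude spectra fall off as 1/f to some power (0 for white, 1 for pink, 2 for brown). A common way to synthesise such a sequence is to shape white noise in the Fourier domain; the sketch below is a generic recipe under that assumption, not the stimulus-generation code of any of the cited studies.

```python
import numpy as np

def spectral_noise(n_samples, alpha=1.0, seed=None):
    """Gaussian noise whose amplitude spectrum falls as 1/f**alpha
    (alpha = 0: white, 1: pink, 2: brown)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-alpha)          # leave the DC term untouched
    shaped = np.fft.irfft(spectrum * scale, n=n_samples)
    return shaped / shaped.std()               # normalise to unit variance

pink = spectral_noise(4096, alpha=1.0, seed=1)  # a 1/f ("natural") temporal sequence
```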
{
"pmid": "19124036",
"title": "On the relation between dichoptic masking and binocular rivalry.",
"abstract": "When our two eyes view incompatible images, the brain invokes suppressive processes to inhibit one image, and favor the other. Two phenomena are typically observed: dichoptic masking (reduced sensitivity to one image) for brief presentations, and binocular rivalry (alternation between the two images), over longer exposures. However, it is not clear if these two phenomena arise from a common suppressive process. We investigated this by measuring both threshold elevation in simultaneous dichoptic masking and mean percept durations in rivalry, whilst varying relative stimulus orientation. Masking and rivalry showed significant correlations, such that strong masking was associated with long dominance durations. A second experiment suggested that individual differences across both measures are also correlated. These findings are consistent with varying the magnitude of interocular suppression in computational models of both rivalry and masking, and imply the existence of a common suppressive process. Since dichoptic masking has been localised to the monocular neurons of V1, this is a plausible first stage of binocular rivalry."
},
{
"pmid": "20053080",
"title": "Orientation-tuned suppression in binocular rivalry reveals general and specific components of rivalry suppression.",
"abstract": "During binocular rivalry (BR), conflicting monocular images are alternately suppressed from awareness. During suppression of an image, contrast sensitivity for probes is reduced by approximately 0.3-0.5 log units relative to when the image is in perceptual dominance. Previous studies on rivalry suppression have led to controversies concerning the nature and extent of suppression during BR. We tested for feature-specific suppression using orthogonal rivaling gratings and measuring contrast sensitivity to small grating probes at a range of orientations in a 2AFC orientation discrimination task. Results indicate that suppression is not uniform across orientations: suppression was much greater for orientations close to that of the suppressed grating. The higher suppression was specific to a narrow range around the suppressed rival grating, with a tuning similar to V1 orientation bandwidths. A similar experiment tested for spatial frequency tuning and found that suppression was stronger for frequencies close to that of the suppressed grating. Interestingly, no tuned suppression was observed when a flicker-and-swap paradigm was used, suggesting that tuned suppression occurs only for lower-level, interocular rivalry. Together, the results suggest there are two components to rivalry suppression: a general feature-invariant component and an additional component specifically tuned to the rivaling features."
},
{
"pmid": "21920853",
"title": "Suppressed images selectively affect the dominant percept during binocular rivalry.",
"abstract": "During binocular rivalry, perception alternates between dissimilar images that are presented dichoptically. It has been argued that perception during the dominance phase of rivalry is unaffected by the suppressed image. Recent evidence suggests, however, that the suppressed image does affect perception of the dominant image, yet the extent and nature of this interaction remain elusive. We hypothesize that this interaction depends on the difference in feature content between the rivaling images. Here, we investigate how sensitivity to probes presented in the image that is currently dominant in perception is affected by the suppressed image. Observers performed a 2AFC discrimination task on oriented probes (Experiment 1) or probes with different motion directions (Experiment 2). Our results show that performance on both orientation and motion direction discrimination was affected by the content of the suppressed image. The strength of interference depended specifically on the difference in feature content (e.g., the difference in orientation) between the probe and the suppressed image. Moreover, the pattern of interference by the suppressed image is qualitatively similar to the situation where this image and the probe are simultaneously visible. We conclude that perception during the dominance phase of rivalry is affected by a suppressed image as if it were visible."
},
{
"pmid": "20598538",
"title": "Visual sensitivity underlying changes in visual consciousness.",
"abstract": "When viewing a different stimulus with each eye, we experience the remarkable phenomenon of binocular rivalry: alternations in consciousness between the stimuli [1, 2]. According to a popular theory first proposed in 1901, neurons encoding the two stimuli engage in reciprocal inhibition [3-8] so that those processing one stimulus inhibit those processing the other, yielding consciousness of one dominant stimulus at any moment and suppressing the other. Also according to the theory, neurons encoding the dominant stimulus adapt, weakening their activity and the inhibition they can exert, whereas neurons encoding the suppressed stimulus recover from adaptation until the balance of activity reverses, triggering an alternation in consciousness. Despite its popularity, this theory has one glaring inconsistency with data: during an episode of suppression, visual sensitivity to brief probe stimuli in the dominant eye should decrease over time and should increase in the suppressed eye, yet sensitivity appears to be constant [9, 10]. Using more appropriate probe stimuli (experiment 1) in conjunction with a new method (experiment 2), we found that sensitivities in dominance and suppression do show the predicted complementary changes."
},
{
"pmid": "17209732",
"title": "The time course of binocular rivalry reveals a fundamental role of noise.",
"abstract": "When our two eyes view incongruent images, we experience binocular rivalry: An ongoing cycle of dominance periods of either image and transition periods when both are visible. Two key forces underlying this process are adaptation of and inhibition between the images' neural representations. Models based on these factors meet the constraints posed by data on dominance periods, but these are not very stringent. We extensively studied contrast dependence of dominance and transition durations and that of the occurrence of return transitions: Occasions when an eye loses and regains dominance without intervening dominance of the other eye. We found that dominance durations and the incidence of return transitions depend similarly on contrast; transition durations show a different dependence. Regarding dominance durations, we show that the widely accepted rule known as Levelt's second proposition is only valid in a limited contrast range; outside this range, the opposite of the proposition is true. Our data refute current models, based solely on adaptation and inhibition, as these cannot explain the long and reversible transitions that we find. These features indicate that noise is a crucial force in rivalry, frequently dominating the deterministic forces."
},
{
"pmid": "7700878",
"title": "Binocular rivalry is not chaotic.",
"abstract": "Time series of the durations each eye was dominant during binocular rivalry were obtained psychophysically. The oscillations showed an adaptation effect with mean and standard deviations of rivalry dominance durations increasing as a square root function of time over the course of a trial. The data were corrected for this non-stationarity. Dominance durations had a log-normal probability distribution and the autocorrelation function revealed no short term correlations in the time series. In an attempt to distinguish whether the variability of durations was due to a deterministic, low-dimensional chaotic attractor or to a stochastic process, the data were subjected to two tests. The first was calculation of correlation dimensions and the second was nonlinear forecasting of the time series. Both tests included comparisons with randomized 'surrogate data' as controls. In neither case was there a large difference between test results for actual data and surrogate data. We conclude that chaos is not a major factor underlying variability in binocular rivalry."
},
{
"pmid": "17615138",
"title": "Noise-induced alternations in an attractor network model of perceptual bistability.",
"abstract": "When a stimulus supports two distinct interpretations, perception alternates in an irregular manner between them. What causes the bistable perceptual switches remains an open question. Most existing models assume that switches arise from a slow fatiguing process, such as adaptation or synaptic depression. We develop a new, attractor-based framework in which alternations are induced by noise and are absent without it. Our model goes beyond previous energy-based conceptualizations of perceptual bistability by constructing a neurally plausible attractor model that is implemented in both firing rate mean-field and spiking cell-based networks. The model accounts for known properties of bistable perceptual phenomena, most notably the increase in alternation rate with stimulation strength observed in binocular rivalry. Furthermore, it makes a novel prediction about the effect of changing stimulus strength on the activity levels of the dominant and suppressed neural populations, a prediction that could be tested with functional MRI or electrophysiological recordings. The neural architecture derived from the energy-based model readily generalizes to several competing populations, providing a natural extension for multistability phenomena."
},
{
"pmid": "19125318",
"title": "Balance between noise and adaptation in competition models of perceptual bistability.",
"abstract": "Perceptual bistability occurs when a physical stimulus gives rise to two distinct interpretations that alternate irregularly. Noise and adaptation processes are two possible mechanisms for switching in neuronal competition models that describe the alternating behaviors. Either of these processes, if strong enough, could alone cause the alternations in dominance. We examined their relative role in producing alternations by studying models where by smoothly varying the parameters, one can change the rhythmogenesis mechanism from being adaptation-driven to noise-driven. In consideration of the experimental constraints on the statistics of the alternations (mean and shape of the dominance duration distribution and correlations between successive durations) we ask whether we can rule out one of the mechanisms. We conclude that in order to comply with the observed mean of the dominance durations and their coefficient of variation, the models must operate within a balance between the noise and adaptation strength-both mechanisms are involved in producing alternations, in such a way that the system operates near the boundary between being adaptation-driven and noise-driven."
},
{
"pmid": "16489854",
"title": "The human eye is an example of robust optical design.",
"abstract": "In most eyes, in the fovea and at best focus, the resolution capabilities of the eye's optics and the retinal mosaic are remarkably well adapted. Although there is a large individual variability, the average magnitude of the high order aberrations is similar in groups of eyes with different refractive errors. This is surprising because these eyes are comparatively different in shape: Myopic eyes are longer whereas hyperopic eyes are shorter. In most young eyes, the amount of aberrations for the isolated cornea is larger than for the complete eye, indicating that the internal ocular optics (mainly the crystalline lens) play a significant role in compensating for the corneal aberrations, thereby producing an improved retinal image. In this paper, we show that this compensation is larger in the less optically centered eyes that mostly correspond to hyperopic eyes. This suggests a type of mechanism in the eye's design that is the most likely responsible for this compensation. Spherical aberration of the cornea is partially compensated by that of the lens in most eyes. Lateral coma is also compensated mainly in hyperopic eyes. We found that the distribution of aberrations between the cornea and lens appears to allow the optical properties of the eye to be relatively insensitive to variations arising from eye growth or exact centration and alignment of the eye's optics relative to the fovea. These results may suggest the presence of an auto-compensation mechanism that renders the eye's optics robust despite large variation in the ocular shape and geometry."
},
{
"pmid": "18234273",
"title": "Hysteresis effects in stereopsis and binocular rivalry.",
"abstract": "Neural hysteresis plays a fundamental role in stereopsis and reveals the existence of positive feedback at the cortical level [Wilson, H. R., & Cowan, J. D. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 13(2), 55-80]. We measured hysteresis as a function of orientation disparity in tilted gratings in which a transition is perceived between stereopsis and binocular rivalry. The patterns consisted of sinusoidal gratings with orientation disparities (0 degrees, 1 degrees, 2 degrees,..., 40 degrees) resulting in various degrees of tilt. A movie of these 41 pattern pairs was shown at a rate of 0.5, 1 or 2 pattern pairs per second, in forward or reverse order. Two transition points were measured: the point at which the single tilted grating appeared to split into two rivalrous gratings (T1), and the point at which two rivalrous gratings appeared to merge into a single tilted grating (T2). The transitions occurred at different orientation disparities (T1=25.4 degrees, T2=17.0 degrees) which was consistent with hysteresis and far exceeded the difference which could be attributed to reaction time. The results are consistent with a cortical model which includes positive feedback and recurrent inhibition between neural units representing different eyes and orientations."
},
{
"pmid": "11992115",
"title": "Stable perception of visually ambiguous patterns.",
"abstract": "During the viewing of certain patterns, widely known as ambiguous or puzzle figures, perception lapses into a sequence of spontaneous alternations, switching every few seconds between two or more visual interpretations of the stimulus. Although their nature and origin remain topics of debate, these stochastic switches are generally thought to be the automatic and inevitable consequence of viewing a pattern without a unique solution. We report here that in humans such perceptual alternations can be slowed, and even brought to a standstill, if the visual stimulus is periodically removed from view. We also show, with a visual illusion, that this stabilizing effect hinges on perceptual disappearance rather than on actual removal of the stimulus. These findings indicate that uninterrupted subjective perception of an ambiguous pattern is required for the initiation of the brain-state changes underlying multistable vision."
},
{
"pmid": "3404312",
"title": "Visual signal detection. IV. Observer inconsistency.",
"abstract": "Historically, human signal-detection responses have been assumed to be governed by external determinants (nature of the signal, the noise, and the task) and internal determinants. Variability in the internal determinants is commonly attributed to internal noise (often vaguely defined). We present a variety of experimental results that demonstrate observer inconsistency in performing noise-limited visual detection and discrimination tasks with repeated presentation of images. Our results can be interpreted by using a model that includes an internal-noise component that is directly proportional to image noise. This so-called induced internal-noise component dominates when external noise is easily visible. We demonstrate that decision-variable fluctuations lead to this type of internal noise. Given this induced internal-noise proportionality (sigma i/sigma 0 = 0.75 +/- 0.1), the upper limit to human visual signal-detection efficiency is 64% +/- 6%. This limit is consistent with a variety of results presented in earlier papers in this series."
},
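The 64% efficiency limit quoted in the entry above follows directly from the reported internal-to-external noise ratio, under the standard assumptions that statistical efficiency is the squared ratio of human to ideal d' and that induced internal noise adds in quadrature with the external noise:

```latex
F = \left(\frac{d'_{\mathrm{human}}}{d'_{\mathrm{ideal}}}\right)^{2}
  = \frac{\sigma_0^{2}}{\sigma_0^{2} + \sigma_i^{2}}
  = \frac{1}{1 + (\sigma_i/\sigma_0)^{2}}
  = \frac{1}{1 + 0.75^{2}} \approx 0.64
```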
{
"pmid": "19956332",
"title": "Stochastic variations in sensory awareness are driven by noisy neuronal adaptation: evidence from serial correlations in perceptual bistability.",
"abstract": "When the sensory system is subjected to ambiguous input, perception alternates between interpretations in a seemingly random fashion. Although neuronal noise obviously plays a role, the neural mechanism for the generation of randomness at the slow time scale of the percept durations (multiple seconds) is unresolved. Here significant nonzero serial correlations are reported in series of visual percept durations (to the author's knowledge for the first time accounting for duration impurities caused by reaction time, drift, and incomplete percepts). Serial correlations for perceptual rivalry using structure-from-motion ambiguity were smaller than for binocular rivalry using orthogonal gratings. A spectrum of computational models is considered, and it is concluded that noise in adaptation of percept-related neurons causes the serial correlations. This work bridges, in a physiologically plausible way, widely appreciated deterministic modeling and randomness in experimental observations of visual rivalry."
},
{
"pmid": "6523751",
"title": "Eye movements, afterimages and monocular rivalry.",
"abstract": "The eye-movement/afterimage theory of \"monocular rivalry\" (MR) between gratings was tested and strongly supported. In three experiments perceptual dominance of vertical or horizontal components of the pattern and fluctuations in perceived contrast of a single grating were shown to depend on the nature of the preceding shift in fixation position in the manner predicted by the theory. In a fourth experiment the angular selectivity of these fluctuations was eliminated, as predicted, when appropriate eye movements were made. Fixation-contingent fluctuations became equally strong for 15 degrees and 90 degrees angles. Taken together with data on afterimages, the results appear to resolve most of the problems recently raised against the theory."
},
{
"pmid": "29225766",
"title": "On the Discovery of Monocular Rivalry by Tscherning in 1898: Translation and Review.",
"abstract": "Monocular rivalry was named by Breese in 1899. He made prolonged observation of superimposed orthogonal gratings; they fluctuated in clarity with either one or the other grating occasionally being visible alone. A year earlier, Tscherning observed similar fluctuations with a grid of vertical and horizontal lines and with other stimuli; we draw attention to his prior account. Monocular rivalry has since been shown to occur with a wide variety of superimposed patterns with several independent rediscoveries of it. We also argue that Helmholtz described some phenomenon other than monocular rivalry in 1867."
},
{
"pmid": "23024357",
"title": "Zero-dimensional noise: the best mask you never saw.",
"abstract": "The transmission of weak signals through the visual system is limited by internal noise. Its level can be estimated by adding external noise, which increases the variance within the detecting mechanism, causing masking. But experiments with white noise fail to meet three predictions: (a) noise has too small an influence on the slope of the psychometric function, (b) masking occurs even when the noise sample is identical in each two-alternative forced-choice (2AFC) interval, and (c) double-pass consistency is too low. We show that much of the energy of 2D white noise masks extends well beyond the pass-band of plausible detecting mechanisms and that this suppresses signal activity. These problems are avoided by restricting the external noise energy to the target mechanisms by introducing a pedestal with a mean contrast of 0% and independent contrast jitter in each 2AFC interval (termed zero-dimensional [0D] noise). We compared the jitter condition to masking from 2D white noise in double-pass masking and (novel) contrast matching experiments. Zero-dimensional noise produced the strongest masking, greatest double-pass consistency, and no suppression of perceived contrast, consistent with a noisy ideal observer. Deviations from this behavior for 2D white noise were explained by cross-channel suppression with no need to appeal to induced internal noise or uncertainty. We conclude that (a) results from previous experiments using white pixel noise should be re-evaluated and (b) 0D noise provides a cleaner method for investigating internal variability than pixel noise. Ironically then, the best external noise stimulus does not look noisy."
},
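The zero-dimensional-noise entry above, like the host article, relies on double-pass consistency: the agreement between responses given to two presentations of an identical external noise sequence. A minimal sketch of that statistic, assuming responses are already paired trial by trial:

```python
import numpy as np

def double_pass_consistency(responses_pass1, responses_pass2):
    """Proportion of trials on which the observer made the same response to
    the identical stimulus sequence shown in two separate passes."""
    r1, r2 = np.asarray(responses_pass1), np.asarray(responses_pass2)
    if r1.shape != r2.shape:
        raise ValueError("the two passes must contain the same trials")
    return float(np.mean(r1 == r2))

print(double_pass_consistency([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]))  # 0.8
```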
{
"pmid": "6740959",
"title": "Binocular contrast summation--II. Quadratic summation.",
"abstract": "Quadratic summation is presented as a rule that describes binocular contrast summation. The rule asserts that for left-eye and right-eye contrasts CL and CR, there is an effective binocular contrast C given by the formula: (formula; see text) Pairs of left-eye and right-eye stimuli that produce equal values of C are equivalent. Quadratic summation is applied to the results of experiments in which stimuli presented to the two eyes differ only in contrast. It provides a good, first-order account of binocular summation in contrast detection, contrast discrimination, dichoptic masking, contrast matching and reaction time studies. A binocular energy-detector model is presented as a basis for quadratic summation."
},
{
"pmid": "17209731",
"title": "Binocular contrast vision at and above threshold.",
"abstract": "A fundamental problem for any visual system with binocular overlap is the combination of information from the two eyes. Electrophysiology shows that binocular integration of luminance contrast occurs early in visual cortex, but a specific systems architecture has not been established for human vision. Here, we address this by performing binocular summation and monocular, binocular, and dichoptic masking experiments for horizontal 1 cycle per degree test and masking gratings. These data reject three previously published proposals, each of which predict too little binocular summation and insufficient dichoptic facilitation. However, a simple development of one of the rejected models (the twin summation model) and a completely new model (the two-stage model) provide very good fits to the data. Two features common to both models are gently accelerating (almost linear) contrast transduction prior to binocular summation and suppressive ocular interactions that contribute to contrast gain control. With all model parameters fixed, both models correctly predict (1) systematic variation in psychometric slopes, (2) dichoptic contrast matching, and (3) high levels of binocular summation for various levels of binocular pedestal contrast. A review of evidence from elsewhere leads us to favor the two-stage model."
},
{
"pmid": "14629871",
"title": "Lateral neural model of binocular rivalry.",
"abstract": "This article introduces a two-dimensionally extended, neuron-based model for binocular rivalry. The basic block of the model is a certain type of astable multivibrator comprising excitatory and inhibitory neurons. Many of these blocks are laterally coupled on a medium range to provide a two-dimensional layer. Our model, like others, needs noise to reproduce typical stochastic oscillations. Due to its spatial extension, the noise has to be laterally correlated. When the contrast ratio of the pictures varies, their share of the perception time changes in a way that is known from comparable experimental data (Levelt, 1965; Mueller & Blake, 1989). This is a result of the lateral coupling and not a property of the single model block. The presentation of simple and suitable inhomogeneous stimuli leads to an easily describable perception of periodically moving pictures like propagating fronts or breathing spots. This suggests new experiments. Under certain conditions, a bifurcation from static to moving perceptions is predicted and may be checked and employed by future experiments. Recent \"paradox\" (Logothetis, 1999) observations of two different neuron classes in cortical areas MT (Logothetis & Schall, 1989) and V4 (Leopold & Logothetis, 1996), one that behaves alike under rivaling and nonrivaling conditions and another that drastically changes its behavior, are interpreted as being related to separate inhibitor neurons."
},
{
"pmid": "3430225",
"title": "Relations between the statistics of natural images and the response properties of cortical cells.",
"abstract": "The relative efficiency of any particular image-coding scheme should be defined only in relation to the class of images that the code is likely to encounter. To understand the representation of images by the mammalian visual system, it might therefore be useful to consider the statistics of images from the natural environment (i.e., images with trees, rocks, bushes, etc). In this study, various coding schemes are compared in relation to how they represent the information in such natural images. The coefficients of such codes are represented by arrays of mechanisms that respond to local regions of space, spatial frequency, and orientation (Gabor-like transforms). For many classes of image, such codes will not be an efficient means of representing information. However, the results obtained with six natural images suggest that the orientation and the spatial-frequency tuning of mammalian simple cells are well suited for coding the information in such images if the goal of the code is to convert higher-order redundancy (e.g., correlation between the intensities of neighboring pixels) into first-order redundancy (i.e., the response distribution of the coefficients). Such coding produces a relatively high signal-to-noise ratio and permits information to be transmitted with only a subset of the total number of cells. These results support Barlow's theory that the goal of natural vision is to represent the information in the natural environment with minimal redundancy."
},
{
"pmid": "17705683",
"title": "Visual perception and the statistical properties of natural scenes.",
"abstract": "The environments in which we live and the tasks we must perform to survive and reproduce have shaped the design of our perceptual systems through evolution and experience. Therefore, direct measurement of the statistical regularities in natural environments (scenes) has great potential value for advancing our understanding of visual perception. This review begins with a general discussion of the natural scene statistics approach, of the different kinds of statistics that can be measured, and of some existing measurement techniques. This is followed by a summary of the natural scene statistics measured over the past 20 years. Finally, there is a summary of the hypotheses, models, and experiments that have emerged from the analysis of natural scene statistics."
},
{
"pmid": "11520932",
"title": "Natural image statistics and neural representation.",
"abstract": "It has long been assumed that sensory neurons are adapted, through both evolutionary and developmental processes, to the statistical properties of the signals to which they are exposed. Attneave (1954)Barlow (1961) proposed that information theory could provide a link between environmental statistics and neural responses through the concept of coding efficiency. Recent developments in statistical modeling, along with powerful computational tools, have enabled researchers to study more sophisticated statistical models for visual images, to validate these models empirically against large sets of data, and to begin experimentally testing the efficient coding hypothesis for both individual neurons and populations of neurons."
},
{
"pmid": "24190908",
"title": "Perceived contrast in complex images.",
"abstract": "To understand how different spatial frequencies contribute to the overall perceived contrast of complex, broadband photographic images, we adapted the classification image paradigm. Using natural images as stimuli, we randomly varied relative contrast amplitude at different spatial frequencies and had human subjects determine which images had higher contrast. Then, we determined how the random variations corresponded with the human judgments. We found that the overall contrast of an image is disproportionately determined by how much contrast is between 1 and 6 c/°, around the peak of the contrast sensitivity function (CSF). We then employed the basic components of contrast psychophysics modeling to show that the CSF alone is not enough to account for our results and that an increase in gain control strength toward low spatial frequencies is necessary. One important consequence of this is that contrast constancy, the apparent independence of suprathreshold perceived contrast and spatial frequency, will not hold during viewing of natural images. We also found that images with darker low-luminance regions tended to be judged as having higher overall contrast, which we interpret as the consequence of darker local backgrounds resulting in higher band-limited contrast response in the visual system."
},
{
"pmid": "28278313",
"title": "Distribution of content in recently-viewed scenes whitens perception.",
"abstract": "Anisotropies in visual perception have often been presumed to reflect an evolutionary adaptation to an environment with a particular anisotropy. Here, we adapt observers to globally-atypical environments presented in virtual reality to assess the malleability of this well-known perceptual anisotropy. Results showed that the typical bias in orientation perception was in fact altered as a result of recent experience. Application of Bayesian modeling indicates that these global changes of the recently-viewed environment implicate a Bayesian prior matched to the recently experienced environment. These results suggest that biases in orientation perception are fluid and predictable, and that humans adapt to orientation biases in their visual environment \"on the fly\" to optimize perceptual encoding of content in the recently-viewed visual world."
},
{
"pmid": "11477428",
"title": "Natural signal statistics and sensory gain control.",
"abstract": "We describe a form of nonlinear decomposition that is well-suited for efficient encoding of natural signals. Signals are initially decomposed using a bank of linear filters. Each filter response is then rectified and divided by a weighted sum of rectified responses of neighboring filters. We show that this decomposition, with parameters optimized for the statistics of a generic ensemble of natural images or sounds, provides a good characterization of the nonlinear response properties of typical neurons in primary visual cortex or auditory nerve, respectively. These results suggest that nonlinear response properties of sensory neurons are not an accident of biological implementation, but have an important functional role."
},
{
"pmid": "10615461",
"title": "Binocular and monocular detection of Gabor patches in binocular two-dimensional noise.",
"abstract": "Contrast thresholds for detecting sine-wave Gabor patches in two-dimensional externally added random-pixel noise were measured. Thresholds were obtained for monocular and binocular signals in the presence of spatial correlated (identical) and uncorrelated (independent) noise in the two eyes. Measurements were obtained at four different spectral densities of noise (including zero). Thresholds were higher for monocular stimuli than for binocular, and higher in the presence of correlated noise compared to uncorrelated noise. The magnitude of binocular summation, similar in correlated and uncorrelated noise, decreased with increasing noise strength. The independent contributions of internal noise and sampling efficiency to detection were analysed. Sampling efficiencies were higher for binocular than for monocular viewing for both types of noise, with values being higher with uncorrelated noise. Binocular stimuli showed a lower equivalent noise level compared to the mean monocular case for both types of noise."
},
{
"pmid": "2617860",
"title": "Binocular combination of contrast signals.",
"abstract": "We studied the detectability of dichoptically presented vertical grating patterns that varied in the ratio of the contrasts presented to the two eyes. The resulting threshold data fall on a binocular summation contour well described by a power summation equation with an exponent near 2. We studied the effect of adding one-dimensional visual noise, either correlated or uncorrelated between the eyes, to the grating patterns. The addition of uncorrelated noise elevated thresholds uniformly for all interocular ratios, while correlated noise elevated thresholds for stimuli whose ratios were near 1 more than thresholds for other stimuli. We also examined the effects of monocular adaptation to a high-contrast grating on the form of the summation contour. Such adaptation elevates threshold in a manner that varies continuously with the interocular contrast ratio of the test targets, and increases the amount of binocular summation. Each of several current models can explain some of our results, but no one of them seems capable of accounting for all three sets of data. We therefore develop a new multiple-channel model, the distribution model, which postulates a family of linear binocular channels that vary in their sensitivities to the two monocular inputs. This model can account for our data and those of others concerning binocular summation, masking, adaptation and interocular transfer. We conclude that there exists a system of ocular dominance channels in the human visual system."
},
{
"pmid": "26982370",
"title": "Binocular contrast discrimination needs monocular multiplicative noise.",
"abstract": "The effects of signal and noise on contrast discrimination are difficult to separate because of a singularity in the signal-detection-theory model of two-alternative forced-choice contrast discrimination (Katkov, Tsodyks, & Sagi, 2006). In this article, we show that it is possible to eliminate the singularity by combining that model with a binocular combination model to fit monocular, dichoptic, and binocular contrast discrimination. We performed three experiments using identical stimuli to measure the perceived phase, perceived contrast, and contrast discrimination of a cyclopean sine wave. In the absence of a fixation point, we found a binocular advantage in contrast discrimination both at low contrasts (<4%), consistent with previous studies, and at high contrasts (≥34%), which has not been previously reported. However, control experiments showed no binocular advantage at high contrasts in the presence of a fixation point or for observers without accommodation. We evaluated two putative contrast-discrimination mechanisms: a nonlinear contrast transducer and multiplicative noise (MN). A binocular combination model (the DSKL model; Ding, Klein, & Levi, 2013b) was first fitted to both the perceived-phase and the perceived-contrast data sets, then combined with either the nonlinear contrast transducer or the MN mechanism to fit the contrast-discrimination data. We found that the best model combined the DSKL model with early MN. Model simulations showed that, after going through interocular suppression, the uncorrelated noise in the two eyes became anticorrelated, resulting in less binocular noise and therefore a binocular advantage in the discrimination task. Combining a nonlinear contrast transducer or MN with a binocular combination model (DSKL) provides a powerful method for evaluating the two putative contrast-discrimination mechanisms."
},
{
"pmid": "18547600",
"title": "Contrast masking in strabismic amblyopia: attenuation, noise, interocular suppression and binocular summation.",
"abstract": "To investigate amblyopic contrast vision at threshold and above we performed pedestal-masking (contrast discrimination) experiments with a group of eight strabismic amblyopes using horizontal sinusoidal gratings (mainly 3c/deg) in monocular, binocular and dichoptic configurations balanced across eye (i.e. five conditions). With some exceptions in some observers, the four main results were as follows. (1) For the monocular and dichoptic conditions, sensitivity was less in the amblyopic eye than in the good eye at all mask contrasts. (2) Binocular and monocular dipper functions superimposed in the good eye. (3) Monocular masking functions had a normal dipper shape in the good eye, but facilitation was diminished in the amblyopic eye. (4) A less consistent result was normal facilitation in dichoptic masking when testing the good eye, but a loss of this when testing the amblyopic eye. This pattern of amblyopic results was replicated in a normal observer by placing a neutral density filter in front of one eye. The two-stage model of binocular contrast gain control [Meese, T.S., Georgeson, M.A. & Baker, D.H. (2006). Binocular contrast vision at and above threshold. Journal of Vision 6, 1224-1243.] was 'lesioned' in several ways to assess the form of the amblyopic deficit. The most successful model involves attenuation of signal and an increase in noise in the amblyopic eye, and intact stages of interocular suppression and binocular summation. This implies a behavioural influence from monocular noise in the amblyopic visual system as well as in normal observers with an ND filter over one eye."
},
{
"pmid": "30822470",
"title": "Internal noise in contrast discrimination propagates forwards from early visual cortex.",
"abstract": "Human contrast discrimination performance is limited by transduction nonlinearities and variability of the neural representation (noise). Whereas the nonlinearities have been well-characterised, there is less agreement about the specifics of internal noise. Psychophysical models assume that it impacts late in sensory processing, whereas neuroimaging and intracranial electrophysiology studies suggest that the noise is much earlier. We investigated whether perceptually-relevant internal noise arises in early visual areas or later decision making areas. We recorded EEG and MEG during a two-interval-forced-choice contrast discrimination task and used multivariate pattern analysis to decode target/non-target and selected/non-selected intervals from evoked responses. We found that perceptual decisions could be decoded from both EEG and MEG signals, even when the stimuli in both intervals were physically identical. Above-chance decision classification started <100 ms after stimulus onset, suggesting that neural noise affects sensory signals early in the visual pathway. Classification accuracy increased over time, peaking at >500 ms. Applying multivariate analysis to separate anatomically-defined brain regions in MEG source space, we found that occipital regions were informative early on but then information spreads forwards across parietal and frontal regions. This is consistent with neural noise affecting sensory processing at multiple stages of perceptual decision making. We suggest how early sensory noise might be resolved with Birdsall's linearisation, in which a dominant noise source obscures subsequent nonlinearities, to allow the visual system to preserve the wide dynamic range of early areas whilst still benefitting from contrast-invariance at later stages. A preprint of this work is available at: https://doi.org/10.1101/364612."
},
{
"pmid": "23337440",
"title": "The statistical distribution of noisy transmission in human sensors.",
"abstract": "OBJECTIVE\nBrains, like other physical devices, are inherently noisy. This source of variability is large, to the extent that internal noise often impacts human sensory processing more than externally induced (stimulus-driven) perturbations. Despite the fundamental nature of this phenomenon, its statistical distribution remains unknown: for the past 40 years it has been assumed Gaussian, but the applicability (or lack thereof) of this assumption has not been checked.\n\n\nAPPROACH\nWe obtained detailed measurements of this process by exploiting an integrated approach that combines experimental, theoretical and computational tools from bioengineering applications of system identification and reverse correlation methodologies.\n\n\nMAIN RESULTS\nThe resulting characterization reveals that the underlying distribution is in fact not Gaussian, but well captured by the Laplace (double-exponential) distribution.\n\n\nSIGNIFICANCE\nPotentially relevant to this result is the observation that image contrast follows leptokurtic distributions in natural scenes, suggesting that the properties of internal noise in human sensors may reflect environmental statistics."
},
{
"pmid": "26024455",
"title": "Connecting psychophysical performance to neuronal response properties I: Discrimination of suprathreshold stimuli.",
"abstract": "One of the major goals of sensory neuroscience is to understand how an organism's perceptual abilities relate to the underlying physiology. To this end, we derived equations to estimate the best possible psychophysical discrimination performance, given the properties of the neurons carrying the sensory code.We set up a generic sensory coding model with neurons characterized by their tuning function to the stimulus and the random process that generates spikes. The tuning function was a Gaussian function or a sigmoid (Naka-Rushton) function.Spikes were generated using Poisson spiking processes whose rates were modulated by a multiplicative, gamma-distributed gain signal that was shared between neurons. This doubly stochastic process generates realistic levels of neuronal variability and a realistic correlation structure within the population. Using Fisher information as a close approximation of the model's decoding precision, we derived equations to predict the model's discrimination performance from the neuronal parameters. We then verified the accuracy of our equations using Monte Carlo simulations. Our work has two major benefits. Firstly, we can quickly calculate the performance of physiologically plausible population-coding models by evaluating simple equations, which makes it easy to fit the model to psychophysical data. Secondly, the equations revealed some remarkably straightforward relationships between psychophysical discrimination performance and the parameters of the neuronal population, giving deep insights into the relationships between an organism's perceptual abilities and the properties of the neurons on which those abilities depend."
},
{
"pmid": "24777419",
"title": "Partitioning neuronal variability.",
"abstract": "Responses of sensory neurons differ across repeated measurements. This variability is usually treated as stochasticity arising within neurons or neural circuits. However, some portion of the variability arises from fluctuations in excitability due to factors that are not purely sensory, such as arousal, attention and adaptation. To isolate these fluctuations, we developed a model in which spikes are generated by a Poisson process whose rate is the product of a drive that is sensory in origin and a gain summarizing stimulus-independent modulatory influences on excitability. This model provides an accurate account of response distributions of visual neurons in macaque lateral geniculate nucleus and cortical areas V1, V2 and MT, revealing that variability originates in large part from excitability fluctuations that are correlated over time and between neurons, and that increase in strength along the visual pathway. The model provides a parsimonious explanation for observed systematic dependencies of response variability and covariability on firing rate."
},
{
"pmid": "20471349",
"title": "The temporal structures and functional significance of scale-free brain activity.",
"abstract": "Scale-free dynamics, with a power spectrum following P proportional to f(-beta), are an intrinsic feature of many complex processes in nature. In neural systems, scale-free activity is often neglected in electrophysiological research. Here, we investigate scale-free dynamics in human brain and show that it contains extensive nested frequencies, with the phase of lower frequencies modulating the amplitude of higher frequencies in an upward progression across the frequency spectrum. The functional significance of scale-free brain activity is indicated by task performance modulation and regional variation, with beta being larger in default network and visual cortex and smaller in hippocampus and cerebellum. The precise patterns of nested frequencies in the brain differ from other scale-free dynamics in nature, such as earth seismic waves and stock market fluctuations, suggesting system-specific generative mechanisms. Our findings reveal robust temporal structures and behavioral significance of scale-free brain activity and should motivate future study on its physiological mechanisms and cognitive implications."
},
{
"pmid": "11932559",
"title": "A spiking neuron model for binocular rivalry.",
"abstract": "We present a biologically plausible model of binocular rivalry consisting of a network of Hodgkin-Huxley type neurons. Our model accounts for the experimentally and psychophysically observed phenomena: (1) it reproduces the distribution of dominance durations seen in both humans and primates, (2) it exhibits a lack of correlation between lengths of successive dominance durations, (3) variation of stimulus strength to one eye influences only the mean dominance duration of the contralateral eye, not the mean dominance duration of the ipsilateral eye, (4) increasing both stimuli strengths in parallel decreases the mean dominance durations. We have also derived a reduced population rate model from our spiking model from which explicit expressions for the dependence of the dominance durations on input strengths are analytically calculated. We also use this reduced model to derive an expression for the distribution of dominance durations seen within an individual."
}
] |
Frontiers in Neurorobotics | 31214008 | PMC6554328 | 10.3389/fnbot.2019.00022 | Fast and Flexible Multi-Step Cloth Manipulation Planning Using an Encode-Manipulate-Decode Network (EM*D Net) | We propose a deep neural network architecture, the Encode-Manipulate-Decode (EM*D) net, for rapid manipulation planning on deformable objects. We demonstrate its effectiveness on simulated cloth. The net consists of 3D convolutional encoder and decoder modules that map cloth states to and from latent space, with a “manipulation module” in between that learns a forward model of the cloth's dynamics w.r.t. the manipulation repertoire, in latent space. The manipulation module's architecture is specialized for its role as a forward model, iteratively modifying a state representation by means of residual connections and repeated input at every layer. We train the network to predict the post-manipulation cloth state from a pre-manipulation cloth state and a manipulation input. By training the network end-to-end, we force the encoder and decoder modules to learn a latent state representation that facilitates modification by the manipulation module. We show that the network can achieve good generalization from a training dataset of 6,000 manipulation examples. Comparative experiments without the architectural specializations of the manipulation module show reduced performance, confirming the benefits of our architecture. Manipulation plans are generated by performing error back-propagation w.r.t. the manipulation inputs. Recurrent use of the manipulation network during planning allows for generation of multi-step plans. We show results for plans of up to three manipulations, demonstrating generally good approximation of the goal state. Plan generation takes <2.5 s for a three-step plan and is found to be robust to cloth self-occlusion, supporting the approach' viability for practical application. | Related Work in Model-Based LearningThere is increasing evidence from neuroscience that humans learn, in part, by acquiring forward models (Gläscher et al., 2010; Liljeholm et al., 2013; Lee et al., 2014). Human ability to generalize implicit knowledge of cloth dynamics to novel circumstances suggests that we acquire forward models of these dynamics. Forward models are commonly used in model-based control and planning, but in the case of cloth manipulation planning, the use of explicit forward models (i.e., physical simulation) is problematic due to computational cost and the difficulty of obtaining an accurate model, as discussed above. However, it has been demonstrated that neural networks can be trained as forward models. Of particular relevance here is (Wahlström et al., 2015) for the use of a neural network trained as a forward model in latent space. The proposed model takes high-dimensional observations (images) of a low-dimensional control task as inputs, maps these observations into low-dimensional latent representations (by means of PCA followed by an encoder network), feeds these through a network functioning as a forward model, and then maps the outputs of this network to high-dimensional predictions of future states. This model is then used to search for control signals that bring about a fixed goal.Also related is (Watter et al., 2015). Here too, an encoder network is used to map high-dimensional observations to low-dimensional latent representations. The forward model takes the form of linear transformations in latent space (although a non-linear variant is considered as well). 
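To make the shared idea behind these latent forward models concrete, the sketch below shows, in PyTorch-style Python, an encoder that maps an observation to a latent code, a small network that predicts the next latent state from the current code and an action, and a planner that optimizes the action by gradient descent through the frozen networks, in the spirit of the error back-propagation planning described in the abstract above. This is a hypothetical, minimal illustration written for this summary: the class names, dimensions, and training details are invented here and do not come from the EM*D net or from the cited papers.

```python
# Hypothetical sketch: latent forward model + gradient-based action search.
# All names and dimensions are invented for illustration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a high-dimensional observation to a low-dimensional latent code."""
    def __init__(self, obs_dim=1024, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, obs):
        return self.net(obs)

class LatentForwardModel(nn.Module):
    """Predicts the next latent state from the current latent state and an action."""
    def __init__(self, latent_dim=32, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + action_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

def plan_action(encoder, model, obs, goal_obs, action_dim=4, steps=200, lr=0.1):
    """Searches for one action whose predicted latent outcome is close to the goal,
    by gradient descent on the action input while the networks stay frozen."""
    for p in list(encoder.parameters()) + list(model.parameters()):
        p.requires_grad_(False)
    z0 = encoder(obs)
    z_goal = encoder(goal_obs)
    action = torch.zeros(1, action_dim, requires_grad=True)
    opt = torch.optim.Adam([action], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(z0, action) - z_goal) ** 2).mean()
        loss.backward()
        opt.step()
    return action.detach()

if __name__ == "__main__":
    enc, fwd = Encoder(), LatentForwardModel()
    obs, goal = torch.randn(1, 1024), torch.randn(1, 1024)
    print(plan_action(enc, fwd, obs, goal))
```

In a real system the encoder, decoder, and forward model would first be trained end-to-end on recorded state transitions; the sketch only illustrates the planning-by-back-propagation step, and a multi-step plan would be obtained by applying the forward model recurrently and optimizing a sequence of actions.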
We return to these and other related neural network studies in the discussion section. In the context of cloth manipulation, use of a neural network as forward model allows us to side-step the computational cost of explicit simulation (replacing it with forward propagation through the network), as well as the burden of acquiring an accurate model of a given cloth item (instead, the forward model is learned from data). | [
"27187944",
"20510862",
"24507199",
"23884955"
] | [
{
"pmid": "27187944",
"title": "Learning to Generate Chairs, Tables and Cars with Convolutional Networks.",
"abstract": "We train generative 'up-convolutional' neural networks which are able to generate images of objects given object style, viewpoint, and color. We train the networks on rendered 3D models of chairs, tables, and cars. Our experiments show that the networks do not merely learn all images by heart, but rather find a meaningful representation of 3D models allowing them to assess the similarity of different models, interpolate between given views to generate the missing ones, extrapolate views, and invent new objects not present in the training set by recombining training instances, or even two different object classes. Moreover, we show that such generative networks can be used to find correspondences between different objects from the dataset, outperforming existing approaches on this task."
},
{
"pmid": "20510862",
"title": "States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning.",
"abstract": "Reinforcement learning (RL) uses sequential experience with situations (\"states\") and outcomes to assess actions. Whereas model-free RL uses this experience directly, in the form of a reward prediction error (RPE), model-based RL uses it indirectly, building a model of the state transition and outcome structure of the environment, and evaluating actions by searching this model. A state prediction error (SPE) plays a central role, reporting discrepancies between the current model and the observed state transitions. Using functional magnetic resonance imaging in humans solving a probabilistic Markov decision task, we found the neural signature of an SPE in the intraparietal sulcus and lateral prefrontal cortex, in addition to the previously well-characterized RPE in the ventral striatum. This finding supports the existence of two unique forms of learning signal in humans, which may form the basis of distinct computational strategies for guiding behavior."
},
{
"pmid": "24507199",
"title": "Neural computations underlying arbitration between model-based and model-free learning.",
"abstract": "There is accumulating neural evidence to support the existence of two distinct systems for guiding action selection, a deliberative \"model-based\" and a reflexive \"model-free\" system. However, little is known about how the brain determines which of these systems controls behavior at one moment in time. We provide evidence for an arbitration mechanism that allocates the degree of control over behavior by model-based and model-free systems as a function of the reliability of their respective predictions. We show that the inferior lateral prefrontal and frontopolar cortex encode both reliability signals and the output of a comparison between those signals, implicating these regions in the arbitration process. Moreover, connectivity between these regions and model-free valuation areas is negatively modulated by the degree of model-based control in the arbitrator, suggesting that arbitration may work through modulation of the model-free valuation system when the arbitrator deems that the model-based system should drive behavior."
},
{
"pmid": "23884955",
"title": "Neural correlates of the divergence of instrumental probability distributions.",
"abstract": "Flexible action selection requires knowledge about how alternative actions impact the environment: a \"cognitive map\" of instrumental contingencies. Reinforcement learning theories formalize this map as a set of stochastic relationships between actions and states, such that for any given action considered in a current state, a probability distribution is specified over possible outcome states. Here, we show that activity in the human inferior parietal lobule correlates with the divergence of such outcome distributions-a measure that reflects whether discrimination between alternative actions increases the controllability of the future-and, further, that this effect is dissociable from those of other information theoretic and motivational variables, such as outcome entropy, action values, and outcome utilities. Our results suggest that, although ultimately combined with reward estimates to generate action values, outcome probability distributions associated with alternative actions may be contrasted independently of valence computations, to narrow the scope of the action selection problem."
}
] |
Frontiers in Neuroinformatics | 31214007 | PMC6558144 | 10.3389/fninf.2019.00041 | Exploiting Multi-Level Parallelism for Stitching Very Large Microscopy Images | Due to the limited field of view of the microscopes, acquisitions of macroscopic specimens require many parallel image stacks to cover the whole volume of interest. Overlapping regions are introduced among stacks in order to make automatic alignment possible by means of a 3D stitching tool. Since state-of-the-art microscopes coupled with chemical clearing procedures can generate 3D images whose size exceeds one Terabyte, parallelization is required to keep stitching time within acceptable limits. In the present paper we discuss how multi-level parallelization reduces the execution times of TeraStitcher, a tool designed to deal with very large images. Two algorithms performing dataset partition for efficient parallelization in a transparent way are presented, together with experimental results proving the effectiveness of the approach, which achieves a speedup close to 300× when both coarse- and fine-grained parallelism are exploited. Multi-level parallelization of TeraStitcher led to a significant reduction of processing times with no changes in the user interface, and with no additional effort required for the maintenance of code. | 2. Related Work and BackgroundSeveral tools have been developed in the last ten years for stitching microscopy images; however, most of them are not adequate to stitch 3D Teravoxel-sized datasets because they were designed under different assumptions (Emmenlauer et al., 2009; Preibisch et al., 2009; Yu and Peng, 2011; Chalfoun et al., 2017). For example, MIST (Chalfoun et al., 2017) has recently been proposed as a tool for rapid and accurate stitching of large 2D time-lapse mosaics. Although it has been designed to deal with very large datasets and it exploits different sources of parallelism to improve stitching performance, the tool can handle only 2D images, each with a typical size of a few Gigabytes. Recently, Imaris has announced a standalone commercial application capable of precisely aligning and fusing 2D, 3D, or 4D Terabyte-sized images (Bitplane, 2018). Although it is very likely that their tool uses at least multi-threading to efficiently exploit modern multi-core architectures, no information about its real capabilities and performance is available. To the best of our knowledge, the only noncommercial tool designed to handle Terabyte-sized 3D images is BigStitcher (Hörl et al., 2018), a recently released evolution of the tool described in Preibisch et al. (2009) and distributed as a plugin of Fiji (Schindelin et al., 2012). BigStitcher provides several functionalities besides stitching. It handles and reconstructs large multi-tile, multi-view acquisitions, compensating all major optical effects. It also uses parallelism at the thread level to speed up the stitching process. As already stated, TeraStitcher is able to stitch very large images (Bria and Iannello, 2012). It performs stitching in six steps: (i) import of the unstitched dataset; (ii) pairwise tiles displacement computation; (iii) displacement projection; (iv) displacement thresholding; (v) optimal tiles placement; and (vi) tiles merging and final multiresolution image generation. To improve flexibility, steps (i–v) generate an xml file representing, in a compact and structured form, the input of the next step.
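Before these steps are described in more detail below, a small illustration of the core of the alignment step (ii) may be useful: for each pair of adjacent tiles, the displacement is estimated as the shift that maximizes the normalized cross-correlation (NCC) between projections of the two overlapping regions. The following Python sketch is a hypothetical, single-threaded illustration of that computation on two 2D maximum-intensity projections; it is not TeraStitcher's actual code, and the function names, array shapes, and search range are invented for this example.

```python
# Hypothetical sketch of NCC-based displacement estimation between two
# overlapping tile projections (not TeraStitcher's implementation).
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally shaped patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def ncc_map(mip_ref, mip_mov, max_shift=10):
    """Evaluates the NCC for every integer shift of mip_mov within
    +/- max_shift pixels in y and x, over the common (cropped) region."""
    h, w = mip_ref.shape
    result = np.full((2 * max_shift + 1, 2 * max_shift + 1), -1.0)
    for i, dy in enumerate(range(-max_shift, max_shift + 1)):
        for j, dx in enumerate(range(-max_shift, max_shift + 1)):
            y0, y1 = max(0, dy), min(h, h + dy)
            x0, x1 = max(0, dx), min(w, w + dx)
            ref = mip_ref[y0:y1, x0:x1]
            mov = mip_mov[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
            result[i, j] = ncc(ref, mov)
    return result

def best_displacement(mip_ref, mip_mov, max_shift=10):
    """Returns the (dy, dx) translation of mip_mov that best aligns it with mip_ref."""
    m = ncc_map(mip_ref, mip_mov, max_shift)
    i, j = np.unravel_index(np.argmax(m), m.shape)
    return int(i) - max_shift, int(j) - max_shift

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    mov = np.roll(ref, (3, -2), axis=(0, 1))  # mov is ref shifted by (3, -2)
    # With this sign convention the estimated correction is (-3, 2).
    print(best_displacement(ref, mov, max_shift=5))
```

The cost of this computation grows with the size of the projections and with the extent of the shift search, and it is repeated for every pair of adjacent tiles (and, in the tool, for several sub-stacks per tile), which is why the alignment step is a natural target for the parallelization discussed in this paper.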
This xml-based design enables running the steps separately, manually intervening to correct errors of single steps, and changing the implementation of one step without affecting the others. While the interested reader may find in Bria and Iannello (2012) a detailed description of each step, here we focus only on implementation issues related to the parallelization of displacement computation and tile merging, which are by far the most time-consuming steps in the stitching pipeline and which motivated our parallelization work. Pairwise tiles displacement computation (alignment step in the following) aims at correcting the small alignment errors between adjacent tiles introduced by the microscope motorized stages. To correct the alignments, TeraStitcher uses an algorithm based on a Maximum Intensity Projection (MIP) of the overlapping area between any two adjacent tiles, and on the search for a maximum of the Normalized Cross Correlation (NCC) among pairs of homologous projections from both tiles (see Figure 1). Finding the maximum of the NCC is by far the most computationally intensive part of the pairwise tiles displacement computation: it requires moving one of the two MIPs with respect to the other in any direction in order to compute a map of NCC values. The number of floating-point operations needed to compute that map depends on the size of the MIPs and of the computed NCC map. The workload associated with the pairwise tiles displacement computation is therefore an increasing function of: (i) the number of adjacent tiles; (ii) the size of the overlapping region between adjacent tiles; (iii) the size of the NCC map to be computed; and (iv) the number of sub-stacks into which each tile is partitioned. In other words, the workload grows not only with the overall size of the acquired image, as is intuitive, but also with the resolution of the microscope, since a larger map in terms of pixels has to be computed to correct alignment errors if the voxel size decreases. [Figure 1: An NCC map is computed for homologous projections (MIPs) of the two overlapping (blue and red) regions of adjacent tiles.] Merging and multiresolution image generation (fusion step in the following) aims at creating a stitched image, i.e., a single seamless image without overlapping regions, in a form suitable for further processing. Indeed, one nice feature of TeraStitcher is that it enables the generation of multiple copies of the stitched image at decreasing resolutions and sizes, to simplify some types of manipulations when the highest-resolution image is very large. Each low-resolution image is obtained by properly combining nearby voxels and halving the size of the higher-resolution image. We minimize the memory occupancy and I/O operations that dominate this step by reading limited portions of the input dataset one at a time and generating all the requested resolutions of each portion before loading the next one. | [
"26601011",
"23181553",
"26914202",
"28694478",
"23575631",
"17384643",
"19196411",
"19346324",
"22743772",
"23037106"
] | [
{
"pmid": "26601011",
"title": "Label-free near-infrared reflectance microscopy as a complimentary tool for two-photon fluorescence brain imaging.",
"abstract": "In vivo two-photon imaging combined with targeted fluorescent indicators is currently extensively used for attaining critical insights into brain functionality and structural plasticity. Additional information might be gained from back-scattered photons from the near-infrared (NIR) laser without introducing any exogenous labelling. Here, we describe a complimentary and versatile approach that, by collecting the reflected NIR light, provides structural details on axons and blood vessels in the brain, both in fixed samples and in live animals under a cranial window. Indeed, by combining NIR reflectance and two-photon imaging of a slice of hippocampus from a Thy1-GFPm mouse, we show the presence of randomly oriented axons intermingled with sparsely fluorescent neuronal processes. The back-scattered photons guide the contextualization of the fluorescence structure within brain atlas thanks to the recognition of characteristic hippocampal structures. Interestingly, NIR reflectance microscopy allowed the label-free detection of axonal elongations over the superficial layers of mouse cortex under a cranial window in vivo. Finally, blood flow can be measured in live preparations, thus validating label free NIR reflectance as a tool for monitoring hemodynamic fluctuations. The prospective versatility of this label-free technique complimentary to two-photon fluorescence microscopy is demonstrated in a mouse model of photothrombotic stroke in which the axonal degeneration and blood flow remodeling can be investigated."
},
{
"pmid": "23181553",
"title": "TeraStitcher - a tool for fast automatic 3D-stitching of teravoxel-sized microscopy images.",
"abstract": "BACKGROUND\nFurther advances in modern microscopy are leading to teravoxel-sized tiled 3D images at high resolution, thus increasing the dimension of the stitching problem of at least two orders of magnitude. The existing software solutions do not seem adequate to address the additional requirements arising from these datasets, such as the minimization of memory usage and the need to process just a small portion of data.\n\n\nRESULTS\nWe propose a free and fully automated 3D Stitching tool designed to match the special requirements coming out of teravoxel-sized tiled microscopy images that is able to stitch them in a reasonable time even on workstations with limited resources. The tool was tested on teravoxel-sized whole mouse brain images with micrometer resolution and it was also compared with the state-of-the-art stitching tools on megavoxel-sized publicy available datasets. This comparison confirmed that the solutions we adopted are suited for stitching very large images and also perform well on datasets with different characteristics. Indeed, some of the algorithms embedded in other stitching tools could be easily integrated in our framework if they turned out to be more effective on other classes of images. To this purpose, we designed a software architecture which separates the strategies that use efficiently memory resources from the algorithms which may depend on the characteristics of the acquired images.\n\n\nCONCLUSIONS\nTeraStitcher is a free tool that enables the stitching of Teravoxel-sized tiled microscopy images even on workstations with relatively limited resources of memory (<8 GB) and processing power. It exploits the knowledge of approximate tile positions and uses ad-hoc strategies and algorithms designed for such very large datasets. The produced images can be saved into a multiresolution representation to be efficiently retrieved and processed. We provide TeraStitcher both as standalone application and as plugin of the free software Vaa3D."
},
{
"pmid": "28694478",
"title": "MIST: Accurate and Scalable Microscopy Image Stitching Tool with Stage Modeling and Error Minimization.",
"abstract": "Automated microscopy can image specimens larger than the microscope's field of view (FOV) by stitching overlapping image tiles. It also enables time-lapse studies of entire cell cultures in multiple imaging modalities. We created MIST (Microscopy Image Stitching Tool) for rapid and accurate stitching of large 2D time-lapse mosaics. MIST estimates the mechanical stage model parameters (actuator backlash, and stage repeatability 'r') from computed pairwise translations and then minimizes stitching errors by optimizing the translations within a (4r)2 square area. MIST has a performance-oriented implementation utilizing multicore hybrid CPU/GPU computing resources, which can process terabytes of time-lapse multi-channel mosaics 15 to 100 times faster than existing tools. We created 15 reference datasets to quantify MIST's stitching accuracy. The datasets consist of three preparations of stem cell colonies seeded at low density and imaged with varying overlap (10 to 50%). The location and size of 1150 colonies are measured to quantify stitching accuracy. MIST generated stitched images with an average centroid distance error that is less than 2% of a FOV. The sources of these errors include mechanical uncertainties, specimen photobleaching, segmentation, and stitching inaccuracies. MIST produced higher stitching accuracy than three open-source tools. MIST is available in ImageJ at isg.nist.gov."
},
{
"pmid": "23575631",
"title": "Structural and molecular interrogation of intact biological systems.",
"abstract": "Obtaining high-resolution information from a complex system, while maintaining the global perspective needed to understand system function, represents a key challenge in biology. Here we address this challenge with a method (termed CLARITY) for the transformation of intact tissue into a nanoporous hydrogel-hybridized form (crosslinked to a three-dimensional network of hydrophilic polymers) that is fully assembled but optically transparent and macromolecule-permeable. Using mouse brains, we show intact-tissue imaging of long-range projections, local circuit wiring, cellular relationships, subcellular structures, protein complexes, nucleic acids and neurotransmitters. CLARITY also enables intact-tissue in situ hybridization, immunohistochemistry with multiple rounds of staining and de-staining in non-sectioned tissue, and antibody labelling throughout the intact adult mouse brain. Finally, we show that CLARITY enables fine structural analysis of clinical samples, including non-sectioned human tissue from a neuropsychiatric-disease setting, establishing a path for the transmutation of human tissue into a stable, intact and accessible form suitable for probing structural and molecular underpinnings of physiological function and disease."
},
{
"pmid": "17384643",
"title": "Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain.",
"abstract": "Visualizing entire neuronal networks for analysis in the intact brain has been impossible up to now. Techniques like computer tomography or magnetic resonance imaging (MRI) do not yield cellular resolution, and mechanical slicing procedures are insufficient to achieve high-resolution reconstructions in three dimensions. Here we present an approach that allows imaging of whole fixed mouse brains. We modified 'ultramicroscopy' by combining it with a special procedure to clear tissue. We show that this new technique allows optical sectioning of fixed mouse brains with cellular resolution and can be used to detect single GFP-labeled neurons in excised mouse hippocampi. We obtained three-dimensional (3D) images of dendritic trees and spines of populations of CA1 neurons in isolated hippocampi. Also in fruit flies and in mouse embryos, we were able to visualize details of the anatomy by imaging autofluorescence. Our method is ideally suited for high-throughput phenotype screening of transgenic mice and thus will benefit the investigation of disease models."
},
{
"pmid": "19196411",
"title": "XuvTools: free, fast and reliable stitching of large 3D datasets.",
"abstract": "Current biomedical research increasingly requires imaging large and thick 3D structures at high resolution. Prominent examples are the tracking of fine filaments over long distances in brain slices, or the localization of gene expression or cell migration in whole animals like Caenorhabditis elegans or zebrafish. To obtain both high resolution and a large field of view (FOV), a combination of multiple recordings ('tiles') is one of the options. Although hardware solutions exist for fast and reproducible acquisition of multiple 3D tiles, generic software solutions are missing to assemble ('stitch') these tiles quickly and accurately. In this paper, we present a framework that achieves fully automated recombination of tiles recorded at arbitrary positions in 3D space, as long as some small overlap between tiles is provided. A fully automated 3D correlation between all tiles is achieved such that no manual interaction or prior knowledge about tile positions is needed. We use (1) phase-only correlation in a multi-scale approach to estimate the coarse positions, (2) normalized cross-correlation of small patches extracted at salient points to obtain the precise matches, (3) find the globally optimal placement for all tiles by a singular value decomposition and (4) accomplish a nearly seamless stitching by a bleaching correction at the tile borders. If the dataset contains multiple channels, all channels are used to obtain the best matches between tiles. For speedup we employ a heuristic method to prune unneeded correlations, and compute all correlations via the fast Fourier transform (FFT), thereby achieving very good runtime performance. We demonstrate the successful application of the proposed framework to a wide range of different datasets from whole zebrafish embryos and C. elegans, mouse and rat brain slices and fine plant hairs (trichome). Further, we compare our stitching results to those of other commercially and freely available software solutions. The algorithms presented are being made available freely as an open source toolset 'XuvTools' at the corresponding author's website (http://lmb.informatik.uni-freiburg.de/people/ronneber), licensed under the GNU General Public License (GPL) v2. Binaries are provided for Linux and Microsoft Windows. The toolset is written in templated C++, such that it can operate on datasets with any bit-depth. Due to the consequent use of 64bit addressing, stacks of arbitrary size (i.e. larger than 4 GB) can be stitched. The runtime on a standard desktop computer is in the range of a few minutes. A user friendly interface for advanced manual interaction and visualization is also available."
},
{
"pmid": "19346324",
"title": "Globally optimal stitching of tiled 3D microscopic image acquisitions.",
"abstract": "MOTIVATION\nModern anatomical and developmental studies often require high-resolution imaging of large specimens in three dimensions (3D). Confocal microscopy produces high-resolution 3D images, but is limited by a relatively small field of view compared with the size of large biological specimens. Therefore, motorized stages that move the sample are used to create a tiled scan of the whole specimen. The physical coordinates provided by the microscope stage are not precise enough to allow direct reconstruction (Stitching) of the whole image from individual image stacks.\n\n\nRESULTS\nTo optimally stitch a large collection of 3D confocal images, we developed a method that, based on the Fourier Shift Theorem, computes all possible translations between pairs of 3D images, yielding the best overlap in terms of the cross-correlation measure and subsequently finds the globally optimal configuration of the whole group of 3D images. This method avoids the propagation of errors by consecutive registration steps. Additionally, to compensate the brightness differences between tiles, we apply a smooth, non-linear intensity transition between the overlapping images. Our stitching approach is fast, works on 2D and 3D images, and for small image sets does not require prior knowledge about the tile configuration.\n\n\nAVAILABILITY\nThe implementation of this method is available as an ImageJ plugin distributed as a part of the Fiji project (Fiji is just ImageJ: http://pacific.mpi-cbg.de/)."
},
{
"pmid": "22743772",
"title": "Fiji: an open-source platform for biological-image analysis.",
"abstract": "Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities."
},
{
"pmid": "23037106",
"title": "Confocal light sheet microscopy: micron-scale neuroanatomy of the entire mouse brain.",
"abstract": "Elucidating the neural pathways that underlie brain function is one of the greatest challenges in neuroscience. Light sheet based microscopy is a cutting edge method to map cerebral circuitry through optical sectioning of cleared mouse brains. However, the image contrast provided by this method is not sufficient to resolve and reconstruct the entire neuronal network. Here we combined the advantages of light sheet illumination and confocal slit detection to increase the image contrast in real time, with a frame rate of 10 Hz. In fact, in confocal light sheet microscopy (CLSM), the out-of-focus and scattered light is filtered out before detection, without multiple acquisitions or any post-processing of the acquired data. The background rejection capabilities of CLSM were validated in cleared mouse brains by comparison with a structured illumination approach. We show that CLSM allows reconstructing macroscopic brain volumes with sub-cellular resolution. We obtained a comprehensive map of Purkinje cells in the cerebellum of L7-GFP transgenic mice. Further, we were able to trace neuronal projections across brain of thy1-GFP-M transgenic mice. The whole-brain high-resolution fluorescence imaging assured by CLSM may represent a powerful tool to navigate the brain through neuronal pathways. Although this work is focused on brain imaging, the macro-scale high-resolution tomographies affordable with CLSM are ideally suited to explore, at micron-scale resolution, the anatomy of different specimens like murine organs, embryos or flies."
}
] |
Frontiers in Psychology | 31214097 | PMC6558186 | 10.3389/fpsyg.2019.01309 | What Entrepreneurial Followers at the Start-Up Stage Need From Entrepreneurship Cultivation: Evidence From Western China | Entrepreneurial followers are defined as the crucial members of a specific entrepreneurial team and do not include the leader or normal employees in the present paper. This population can be viewed as indispensable factors in the success of entrepreneurship, especially in the start-up stage. In addition, according to the following time, they can be divided into two groups, namely long-term entrepreneurial followers and short-term entrepreneurial followers. However, studies focusing on entrepreneurship cultivation for entrepreneurial followers are relatively few. The main purpose of this paper is to determine the needs of Chinese entrepreneurial followers in entrepreneurship cultivation from the early stage of entrepreneurship. In this paper, a sample of 200 long-term entrepreneurial followers from Tianfu New Area in China was investigated. To enable the researchers to explore the unique opinions of entrepreneurial followers, a mixed data collection approach that combined interviews and questionnaires was chosen in this study. The results revealed following findings: (a) high levels of social capital, good entrepreneurial opportunities and projects, and highly cooperative teams were viewed as the most important factors for entrepreneurship by entrepreneurial followers in China; (b) most entrepreneurial followers believed that the primary difficulty in the cultivation process was the inefficiency in talent training mechanism; and (c) nearly 40% of samples suggested that the cultivation and enhancement of local talents should be firstly carried out by the Chinese government, indicating a gap between the supporting force for local and returned talents in China. In addition, various types of incentive policies and good environments for talent growth were also considered as important suggestions by entrepreneurial followers. We found that unlike entrepreneurial leaders, entrepreneurial followers focus more on income expectation, and personal development rather than supporting the development of companies in China. These findings should be viewed as priorities when enhancing current entrepreneurship cultivation in China. | Related WorksPrevious studies indicated that the entrepreneurship is important for social, national, and industrial development (Shane and Venkataraman, 2000; Raposo and Do, 2011; Zhang et al., 2014). For instance, Zhang et al. (2014) stated that entrepreneurship contributes to the incubation of technological innovation, increases economic efficiency, and creates new jobs. Raposo and Do (2011) argued that the improvement of innovation performance and the well-being of citizens are also considerable factors that contributed to a high level of entrepreneurial activity. Shane and Venkataraman (2000) suggested that entrepreneurship is a good way of showing new technical information that is embodied in products and services. 
In addition, various aspects such as age (Bohlmann et al., 2017), gender (Zampetakis et al., 2017), personality (Vries, 2010; Hu et al., 2018), cognitive style (Kickul et al., 2010), decision-making abilities of entrepreneurs (Liu, 2018), and the optimization of the allocation of entrepreneurial resources (Dunkelberg et al., 2013) are recent research topics found in the literature. Much evidence has been published in existing research to support that entrepreneurship capabilities can be cultivated and are not fixed personal features. For instance, Karimi et al. (2016) stated that effective cultivation can foster entrepreneurial competences. In addition, as Sánchez (2011), Chia (2010), and Piperopoulos and Dimov (2014) suggested, despite the knowledge and skills necessary to begin and run a business, the improvement of certain beliefs, values, and attitudes is the main achievement of entrepreneurial cultivation. With respect to these improvements from a cultivation perspective, governmental influence is frequently considered, especially in relation to educational and industrial policies (Verheul et al., 2002; Sánchez, 2011). Raposo and Do (2011) claimed that policy can affect entrepreneurship in two ways: directly, through special measures, and indirectly, through generic measures. Pittaway and Cope (2016) cited political issues as the first macro level of entrepreneurship cultivation, and they believed that the second macro level should refer to general enterprise infrastructure. To be more specific, these themes can be viewed as the input and output of the domain of entrepreneurship cultivation, respectively. From a framework perspective, Jamieson (1984) divided entrepreneurship cultivation into three categories by considering different types of cultivation—namely, cultivation about enterprise, which aims to educate students on awareness creation from a theoretical perspective; cultivation for enterprise, which focuses on encouraging participants to begin their own business; and cultivation in enterprise, which includes but is not limited to management training and business expansion. The first category refers to the level of universities or higher-learning institutions that have attracted the attention of the majority of researchers who focus on entrepreneurship cultivation (e.g., Saboe et al., 2002; Rindova et al., 2012; Galloway and Brown, 2013; Küttim et al., 2014; Din et al., 2016). Such a high level of research interest is mainly due to the general consensus that youths are the most important participants in entrepreneurship; there is a significantly positive relationship between the effectiveness of education programs in universities and learning institutions and youths' intentions of entrepreneurship. Din et al. (2016) stated that many graduates prefer to find positions in public and private sectors with high levels of competitiveness and income rather than becoming entrepreneurs due to their lack of knowledge, awareness, and skills. This is primarily owing to the deficiency of entrepreneurship educational programs in universities. Moreover, some scholars even recommend that entrepreneurship education should be implemented earlier (Sexton and Landström, 2000). In addition, with respect to the third category, the educated population often comprises small-business owners who have achieved some success. In other words, the point of concern in this category is not whether the entrepreneurs could start a business, but how to run the business more successfully. 
Therefore, existing studies related to this aspect could be classified into the area of small-business management (Gorman et al., 1997; Neck and Greene, 2011). However, despite the inspiration that was cultivated in universities and learning institutions and the management capabilities gained in enterprise-training activities, how to carry out the practice in reality is a considerably important point to be discussed, especially at the start-up stage. According to Raposo and Do (2011), this is a typical example of the second category, which refers to the hardest time in entrepreneurship. In general, start-up companies simply refer to young innovative companies (Zaech and Baldegger, 2017). In this view, the age of a company has proved to be a widely accepted assessment criterion to judge whether it is going through the start-up stage (Pellegrino et al., 2012). According to Villéger (2018), the maximum age used to define start-up companies varies from 5 to 12 years. Additionally, growth, organizational flexibility, and limited human and financial resources are also typical characteristics of start-up companies that have been identified by existing research (Liao and Welsch, 2003; Peterson et al., 2009). In addition to the temporal perspective, many scholars have researched entrepreneurship cultivation from a spatial point of view, in which the United States (Worsham and Dees, 2012; Elert et al., 2015; Guo et al., 2016) and the United Kingdom (Matlay, 2009; Henry and Treanor, 2010; Dabic et al., 2016) are frequently discussed. In addition, other developed European countries, such as France (Klapper, 2004; Kövesi, 2017), Germany (Klandt, 2004), and Sweden (Dahlstedt and Fejes, 2017), have also been targeted by existing studies. However, attention to developing countries, such as China, is relatively limited in the English-language literature, and where developing countries were discussed, it was mostly in consideration of university students only. For instance, Wu and Wu (2008) investigated the relationship between Chinese university students' higher educational backgrounds and their entrepreneurial intentions. Zhou and Xu (2012) evaluated the state of entrepreneurship education from a student level in China and compared it to the United States. Li et al. (2013) discussed the critical factors in Chinese higher-educational institutions that may shape the directions of entrepreneurship education. Tian et al. (2016) conducted a knowledge map for studies related to entrepreneurship education in China from 2004 to 2013. Xu et al. (2016) reviewed entrepreneurship education in Chinese secondary schools. It is interesting to note that China did not begin any entrepreneurship education programs until 2002, when the Ministry of Education published a pilot project for entrepreneurship education, and the effectiveness of this project is considerable. According to a global entrepreneurship survey, China jumped from eleventh place in 2002 to second place in 2012 in the entrepreneurship composite index rankings of more than 60 countries and regions (Li et al., 2013). Therefore, more in-depth research that targets the development process and current state of Chinese entrepreneurship cultivation is of great significance from a global perspective. 
Furthermore, as mentioned above, a comprehensive investigation should be implemented, with empirical evidence used to identify useful educational factors that ensure success for entrepreneurs at the start-up stage. With regard to entrepreneurial followers, some scholars have implemented empirical studies that focus on various related issues (Beckman and Burton, 2008; Jin et al., 2016; Forsström-Tuominen et al., 2017). For instance, Jin et al. (2016) conducted a meta-analysis to investigate the relationship between the composition features of entrepreneurial teams and new venture performance. New venture performance can generally be defined as the development and growth of companies at the start-up stage (Klotz et al., 2014). Jin et al. (2016) suggested that the individual ability of entrepreneurial followers could contribute to a higher level of new venture performance. Additionally, Forsström-Tuominen et al. (2017) applied a qualitative multiple-case study to analyze the initiation and formation of entrepreneurial teams in the start-up stage, based on individual- and group-interview data from four high-tech teams. They found that in addition to economic and technical issues, various social and psychological aspects, such as collective encouragement, could be viewed as another important impetus to initiate an entrepreneurial team. It seems that, to a large extent, there would be no entrepreneurship without a team. This argument emphasizes the significant value of entrepreneurial members in early stage entrepreneurship. However, the definition of entrepreneurial followers still remains unclear. Instead, most existing studies mixed entrepreneurial leaders and followers, especially those focused on entrepreneurship cultivation (Dabic et al., 2016; Din et al., 2016). Such a practice is overly broad and inevitably leads to a reduction in pertinence. Therefore, the present paper provided the definition of entrepreneurial follower (see the section “Introduction”) in order to identify the main differences between entrepreneurial leaders and followers, and accordingly find the needs of entrepreneurial followers in entrepreneurship cultivation during the start-up stage. Moreover, according to the length of time they have followed the leader, we argued that entrepreneurial followers can be divided into two categories: long-term entrepreneurial followers and short-term entrepreneurial followers. In comparison, the former show greater loyalty and autonomy, and share a stronger foundation of interests or beliefs with the leader. In addition, the behaviors of the leader are also more likely to be shaped and altered by the former because of the long-term, close personal, social, and working relationships between them. Many historical examples have proved the importance of long-term entrepreneurial followers for the success of entrepreneurship. As Cooney (2005) stated, when one considers the success of Apple, Steve Jobs may immediately spring to mind for most people. However, such great success could not have been achieved without Steve Wozniak, who invented the model for the first personal computer, or Mike Markkula, who provided access to venture capital. 
In addition, with respect to the Alibaba Group, a world-famous Chinese company, apart from the personal capacity of Ma Yun, who is the founder and executive chairman, the success at the start-up stage of venturing cannot be separated from the role of his followers, including but not limited to Jianhang Jin, who was responsible for marketing, and Yongming Wu, who provided technical support. | [
"29250004",
"30595721",
"29962985",
"25135594",
"29880664",
"26715429",
"18476868",
"7613329",
"21774900",
"28386244"
] | [
{
"pmid": "29250004",
"title": "A Lifespan Perspective on Entrepreneurship: Perceived Opportunities and Skills Explain the Negative Association between Age and Entrepreneurial Activity.",
"abstract": "Researchers and practitioners are increasingly interested in entrepreneurship as a means to fight youth unemployment and to improve financial stability at higher ages. However, only few studies so far have examined the association between age and entrepreneurial activity. Based on theories from the lifespan psychology literature and entrepreneurship, we develop and test a model in which perceived opportunities and skills explain the relationship between age and entrepreneurial activity. We analyzed data from the 2013 Global Entrepreneurship Monitor (GEM), while controlling for gender and potential variation between countries. Results showed that age related negatively to entrepreneurial activity, and that perceived opportunities and skills for entrepreneurship mediated this relationship. Overall, these findings suggest that entrepreneurship research should treat age as a substantial variable."
},
{
"pmid": "30595721",
"title": "Envisioning the use of online tests in assessing twenty-first century learning: a literature review.",
"abstract": "The digital world brings with it more and more opportunities to be innovative around assessment. With a variety of digital tools and the pervasive availability of information anywhere anytime, there is a tremendous capacity to creatively employ a diversity of assessment approaches to support and evaluate student learning in higher education. The challenge in a digital world is to harness the possibilities afforded by technology to drive and assess deep learning that prepares graduates for a changing and uncertain future. One widespread method of online assessment used in higher education is online tests. The increase in the use of online tests necessitates an investigation into their role in evaluating twenty-first century learning. This paper draws on the literature to explore the role of online tests in higher education, particularly their relationship to student learning in a digital and changing world, and the issues and challenges they present. We conclude that online tests, when used effectively, can be valuable in the assessment of twenty-first century learning and we synthesise the literature to extract principles for the optimisation of online tests in a digital age."
},
{
"pmid": "29962985",
"title": "Creativity, Proactive Personality, and Entrepreneurial Intention: The Role of Entrepreneurial Alertness.",
"abstract": "This study examines the extent to which entrepreneurial alertness mediates the effects of students' proactive personalities and creativity on entrepreneurial intention. Drawing on a field survey of 735 Chinese undergraduates at 26 universities, this study provides evidence for the argument that entrepreneurial alertness has a fully mediation effect on the relationship between creativity, a proactive personality, and entrepreneurial intention. The findings shed light on the mechanisms that underpin entrepreneurial alertness and contribute to the literature on key elements of the entrepreneurial process."
},
{
"pmid": "25135594",
"title": "Do men need empowering too? A systematic review of entrepreneurial education and microenterprise development on health disparities among inner-city black male youth.",
"abstract": "Economic strengthening through entrepreneurial and microenterprise development has been shown to mitigate poverty-based health disparities in developing countries. Yet, little is known regarding the impact of similar approaches on disadvantaged U.S. populations, particularly inner-city African-American male youth disproportionately affected by poverty, unemployment, and adverse health outcomes. A systematic literature review was conducted to guide programming and research in this area. Eligible studies were those published in English from 2003 to 2014 which evaluated an entrepreneurial and microenterprise initiative targeting inner-city youth, aged 15 to 24, and which did not exclude male participants. Peer-reviewed publications were identified from two electronic bibliographic databases. A manual search was conducted among web-based gray literature and registered trials not yet published. Among the 26 papers retrieved for review, six met the inclusion criteria and were retained for analysis. None of the 16 registered microenterprise trials were being conducted among disadvantaged populations in the U.S. The available literature suggests that entrepreneurial and microenterprise programs can positively impact youth's economic and psychosocial functioning and result in healthier decision-making. Young black men specifically benefited from increased autonomy, engagement, and risk avoidance. However, such programs are vastly underutilized among U.S. minority youth, and the current evidence is insufficiently descriptive or rigorous to draw definitive conclusions. Many programs described challenges in securing adequate resources, recruiting minority male youth, and sustaining community buy-in. There is an urgent need to increase implementation and evaluation efforts, using innovative and rigorous designs, to improve the low status of greater numbers of African-American male youth."
},
{
"pmid": "26715429",
"title": "Academic leagues: a Brazilian way to teach about cancer in medical universities.",
"abstract": "BACKGROUND\nPerformance of qualified professionals committed to cancer care on a global scale is critical. Nevertheless there is a deficit in Cancer Education in Brazilian medical schools (MS). Projects called Academic Leagues (AL) have been gaining attention. However, there are few studies on this subject. AL arise from student initiative, arranged into different areas, on focus in general knowledge, universal to any medical field. They are not obligatory and students are responsible for the organizing and planning processes of AL, so participation highlights the motivation to active pursuit of knowledge. The objective of this study was to explore the relevance of AL, especially on the development of important skills and attitudes for medical students.\n\n\nMETHODS\nA survey was undertaken in order to assess the number of AL Brazilian MS. After nominal list, a grey literature review was conducted to identify those with AL and those with Oncology AL.\n\n\nRESULTS\nOne hundred eighty of the 234 MS were included. Only 4 MS selected held no information about AL and 74.4 % of them had AL in Oncology. The majority had records in digital media. The number of AL was proportional to the distribution of MS across the country, which was related to the number of inhabitants.\n\n\nCONCLUSIONS\nThe real impact and the potential of these projects can be truly understand by a qualitative analysis. AL are able to develop skills and competencies that are rarely stimulated whilst studying in traditional curriculum. This has positive effects on professional training, community approach through prevention strategies, and development on a personal level permitting a dynamic, versatile and attentive outlook to their social role. Besides stimulating fundamental roles to medical practice, students that participate in AL acquire knowledge and develop important skills such as management and leadership, entrepreneurship, innovation, health education, construction of citizenship. Oncology AL encourage more skilled care to patients and more effective policies for cancer control."
},
{
"pmid": "18476868",
"title": "Microalgal triacylglycerols as feedstocks for biofuel production: perspectives and advances.",
"abstract": "Microalgae represent an exceptionally diverse but highly specialized group of micro-organisms adapted to various ecological habitats. Many microalgae have the ability to produce substantial amounts (e.g. 20-50% dry cell weight) of triacylglycerols (TAG) as a storage lipid under photo-oxidative stress or other adverse environmental conditions. Fatty acids, the building blocks for TAGs and all other cellular lipids, are synthesized in the chloroplast using a single set of enzymes, of which acetyl CoA carboxylase (ACCase) is key in regulating fatty acid synthesis rates. However, the expression of genes involved in fatty acid synthesis is poorly understood in microalgae. Synthesis and sequestration of TAG into cytosolic lipid bodies appear to be a protective mechanism by which algal cells cope with stress conditions, but little is known about regulation of TAG formation at the molecular and cellular level. While the concept of using microalgae as an alternative and renewable source of lipid-rich biomass feedstock for biofuels has been explored over the past few decades, a scalable, commercially viable system has yet to emerge. Today, the production of algal oil is primarily confined to high-value specialty oils with nutritional value, rather than commodity oils for biofuel. This review provides a brief summary of the current knowledge on oleaginous algae and their fatty acid and TAG biosynthesis, algal model systems and genomic approaches to a better understanding of TAG production, and a historical perspective and path forward for microalgae-based biofuel research and commercialization."
},
{
"pmid": "7613329",
"title": "Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research.",
"abstract": "Qualitative research methods have a long history in the social sciences and deserve to be an essential component in health and health services research. Qualitative and quantitative approaches to research tend to be portrayed as antithetical; the aim of this series of papers is to show the value of a range of qualitative techniques and how they can complement quantitative research."
},
{
"pmid": "21774900",
"title": "Entrepreneurship education: relationship between education and entrepreneurial activity.",
"abstract": "The importance of entrepreneurial activity for the economic growth of countries is now well established. The relevant literature suggests important links between education, venture creation and entrepreneurial performance, as well as between entrepreneurial education and entrepreneurial activity. The primary purpose of this paper is to provide some insights about entrepreneurship education. The meaning of entrepreneurship education is explained, and the significant increase of these educational programmes is highlighted. Literature has been suggesting that the most suitable indicator to evaluate the results of entrepreneurship education is the rate of new business creation. However, some studies indicate that the results of such programmes are not immediate. Therefore, many researchers try to understand the precursors of venture creation, concluding that is necessary to carry out longitudinal studies. Based on an overview of the research published about the existing linkage of entrepreneurship education and entrepreneurial activity, the main topics studied by different academics are addressed. For the authors, the positive impact of entrepreneurship education puts a double challenge on governments in the future: the increased need of financial funds to support entrepreneurship education and the choice of the correct educational programme."
},
{
"pmid": "28386244",
"title": "Gender-based Differential Item Functioning in the Application of the Theory of Planned Behavior for the Study of Entrepreneurial Intentions.",
"abstract": "Over the past years the percentage of female entrepreneurs has increased, yet it is still far below of that for males. Although various attempts have been made to explain differences in mens' and women's entrepreneurial attitudes and intentions, the extent to which those differences are due to self-report biases has not been yet considered. The present study utilized Differential Item Functioning (DIF) to compare men and women's reporting on entrepreneurial intentions. DIF occurs in situations where members of different groups show differing probabilities of endorsing an item despite possessing the same level of the ability that the item is intended to measure. Drawing on the theory of planned behavior (TPB), the present study investigated whether constructs such as entrepreneurial attitudes, perceived behavioral control, subjective norms and intention would show gender differences and whether these gender differences could be explained by DIF. Using DIF methods on a dataset of 1800 Greek participants (50.4% female) indicated that differences at the item-level are almost non-existent. Moreover, the differential test functioning (DTF) analysis, which allows assessing the overall impact of DIF effects with all items being taken into account simultaneously, suggested that the effect of DIF across all the items for each scale was negligible. Future research should consider that measurement invariance can be assumed when using TPB constructs for the study of entrepreneurial motivation independent of gender."
}
] |
Micromachines | 31137767 | PMC6562584 | 10.3390/mi10050346 | The Matrix KV Storage System Based on NVM Devices | The storage device based on Nonvolatile Memory (NVM devices) has high read/write speed and an embedded processor. It is a useful way to improve the efficiency of Key-Value (KV) applications. However, it still has some limitations such as limited capacity, poorer computing power compared with the CPU, and complex I/O system software. Thus it is not an effective way to construct a KV storage system with NVM devices directly. We analyze the characteristics of NVM devices and the demands of KV applications to design the matrix KV storage system based on NVM devices. The group collaboration management based on Bloom filters, intragroup optimization based on competition, embedded KV management based on B+-trees, and the new interface of the KV storage system are presented. Then, the embedded processor in the NVM device and the CPU can be comprehensively utilized to construct a matrix KV pair management system. It can improve the storage and management efficiency of massive KV pairs, and it can also support the efficient execution of KV applications. A prototype named MKVS (the matrix KV storage system based on NVM devices) is implemented to test with YCSB (Yahoo! Cloud System Benchmark) and to compare with a current in-memory KV store. The results show that MKVS can improve throughput by 5.98 times, and reduce read latency by 99.7% and write latency by 77.2%. | 2. Related Works There has been much research on how to improve the access speed of a storage system based on NVM devices. A PCIe PCM array named Onyx was implemented and tested [9]. The results showed that it could improve the performance of reads and small writes by ~72–120% compared with a Flash-based SSD. FusionIO extended file system support for atomic writes [10]. It could convert write requests in MySQL to atomic write requests by sending them to NVM devices, and improved the efficiency of transactions by 70%. S. Kannan et al. used NVM devices to store checkpoints locally and remotely [11]. A pre-replication policy was designed to move the checkpoint from memory to NVM devices before checkpointing is started. The efficiency of metadata management largely affects the I/O performance of a file system. In general, metadata are stored with the data of a file in blocks, and a small modification to the metadata leads to an update of the entire block. To take advantage of the high efficiency of NVM devices to optimize metadata access performance, Subramanya R Dulloor designed a lightweight file system called PMFS for NVDIMMs (non-volatile dual in-line memory modules) [12]. It uses cache lines and a 64-byte granularity log to ensure file system consistency, reducing the performance impact of metadata updates while balancing support for existing applications with optimized access performance on NVM devices. Youyou Lu designed Blurred Persistence for transactional persistent memory [13]. It could blur the volatility–persistence boundary to reduce the overhead of transaction support and improve system performance by 56.3% to 143.7%. Wei Q proposed the persistent in-memory metadata management mechanism (PIMM), which reduces SSD I/O traffic by utilizing the persistence and byte-addressability of NVM devices [14]. PIMM separates the data and metadata access paths, storing the data on the SSD at runtime and the metadata on NVM devices. PIMM is prototyped on a real NVDIMM platform. 
Extensive evaluation of the implemented prototype showed that it could reduce the block erases of the SSD by 91% and improve I/O performance under several real workloads. Additionally, much work has been done on the system software overhead and performance loss caused by the access interface. Swanson analyzed the hardware and software overhead of the storage system based on NVM devices [15], pointing out that the current I/O system software stack needs to be reconstructed. The software stack accounts for 18% of the overhead in a traditional architecture, but 63% with PCIe NVM. This software overhead largely prevents the NVM device from delivering its intended gains in bandwidth and latency. The article points to the shortcomings of the traditional block I/O interface and proposes a primitive that batches multiple I/O requests to achieve atomic writes, which can reduce the overhead of applications, file systems, and operating systems. Besides, direct access to NVM devices is also very popular. DEVFS also argued that the storage software stack should be carefully reconsidered when exploiting the characteristics of the storage device [16]. The traditional storage software stack requires the application to trap into the operating system and pass through many layers, such as memory buffers, the cache, the file system, and the block layer. These layers greatly increase access latency, thus reducing the benefit of the NVM device's high I/O speed. Researchers reminded readers that file systems account for a large proportion of the software overhead, so it is important to optimize or redesign current file systems. In the PCIe-NVM prototype system, the file system accounts for 30% of the total latency and reduces performance by 85%. Volos explored interface optimization technology for SCM (storage class memory) [17], and proposed to use hardware access control to avoid the time overhead of context switches between kernel space and user space when accessing the file system, and to spread file system functionality into the application to achieve more flexibility. Although the performance of the system can be greatly improved by adjusting the existing storage I/O stack, programming against the fixed POSIX interface is still too cumbersome and inefficient and is not friendly enough to the programmer. In this regard, research on direct I/O has also been carried out, which allows users to interact directly with memory without modifying file metadata, while reducing the access control overhead for data at the file system level. Hardware-based file system access control was used to separate the access paths for metadata and data [18,19], and direct I/O between user space and the storage devices was used to avoid metadata modification. In addition, in order to take advantage of the byte addressing of NVM devices, it is necessary to pay attention to the granularity of accesses and updates. Mnemosyne is a lightweight access interface for NVDIMMs that solves the problem of how user programs create and manage nonvolatile memory and how to ensure data consistency [20]. Load/store instructions were used to access the NVDIMMs directly [21]. Many studies have focused on how to improve the search efficiency and access performance of KV pairs according to the characteristics of the KV store. Data-intensive storage in the age of big data urgently demands a flexible and efficient KV store, in particular in the field of web services. 
A KV store is responsible for storing large amounts of data and accessing them quickly. A KV store consists of massive numbers of small files, and the access characteristics and the proportion of each operation must be considered when designing the system. Xingbo Wu designed LSM-Trie [22], a prefix tree structure that can effectively manage metadata and reduce the write-time overhead of a KV store. The combination of KV stores and NVM has also been a hot topic. Hybrid storage has been advocated because of the read and write performance gap between NVM devices and DRAM. NVM, especially PCM, has a limited write lifetime. Therefore, there have been many studies on how to optimize NVM devices to reduce the number of writes [23,24,25,26]. For example, Chen proposed a B+ tree with unordered leaf nodes to reduce the write overhead caused by sorting [23]. HiKV realizes the overall optimization of KV operations by using the advantages of hybrid storage and hybrid indexing [27]. In order to take advantage of the byte-addressability of the NVM device, Deukyeon redesigned the B+ tree to overcome the mismatch between the amount of data written and the cache line size. The open source project Pmemkv is a KV database for NVM devices [28]. It uses the linked list and C++ bindings of the Persistent Memory Development Kit (PMDK) libpmemobj library to implement a persistent-memory-aware queue for direct memory access. NVMKV optimizes the KV store based on the internal structure of the NVM device, and implements a lightweight KV store using an FTL-sparse address space, dynamic mapping technology, and transaction consistency, while supporting a highly parallel lock-free mechanism [29] that can almost reach bare device speed. Workload analysis of caches showed that the ratio of get operations to set operations is up to 30:1 in KV stores. This means that concurrency is a critical requirement for KV storage systems. The NVM device has good parallelism. Echo [30] and NVStore [31] use MVCC for concurrency control. Chronos [32] and MICA [33] use partitioning to achieve concurrency control for hash tables. PALM is a lock-free concurrent B+ tree [34]. FPTree uses HTM (hardware transactional memory) to handle the concurrency of internal nodes and fine-grained locks to access leaf nodes concurrently [18]. ALOHA-KV presents a counterexample to show that conflict-free concurrent transactions can reduce time overhead, and designs an epoch-based concurrency control (ECC) mechanism to minimize the overhead caused by synchronization conflicts [35]. | [] | [] |
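To make the group-collaboration idea behind MKVS above more concrete, the sketch below shows one way a host-side router could keep a per-group Bloom filter so that a get() only queries the device groups that might hold the key. This is only an illustrative sketch under stated assumptions: the names (BloomFilter, GroupedKVRouter), the hash-based placement policy, and the plain dicts standing in for the embedded B+-tree indexes on the NVM devices are placeholders, not the actual MKVS design or API.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter recording which keys a device group may hold."""
    def __init__(self, size_bits=8192, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False positives are possible, false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

class GroupedKVRouter:
    """Host-side router: each group stands in for a set of NVM devices with
    an embedded KV index (modelled here as a plain dict)."""
    def __init__(self, num_groups=4):
        self.groups = [dict() for _ in range(num_groups)]
        self.filters = [BloomFilter() for _ in range(num_groups)]

    def put(self, key, value):
        gid = hash(key) % len(self.groups)   # simple placement policy, not MKVS's
        self.groups[gid][key] = value
        self.filters[gid].add(key)

    def get(self, key):
        # Query only the groups whose Bloom filter admits the key.
        for gid, bf in enumerate(self.filters):
            if bf.might_contain(key):
                value = self.groups[gid].get(key)
                if value is not None:
                    return value
        return None

if __name__ == "__main__":
    store = GroupedKVRouter()
    store.put("user:42", b"alice")
    print(store.get("user:42"))   # b'alice'
    print(store.get("user:43"))   # None
```

In MKVS itself, the filters and the embedded B+-tree indexes are presumably maintained with the help of the devices' embedded processors, which is what allows the CPU to avoid broadcasting every lookup to all groups.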
Cells | 31126166 | PMC6562946 | 10.3390/cells8050499 | Multi-Path Dilated Residual Network for Nuclei Segmentation and Detection | As a typical biomedical detection task, nuclei detection has been widely used in human health management, disease diagnosis and other fields. However, the task of cell detection in microscopic images is still challenging because the nuclei are commonly small and dense with many overlapping nuclei in the images. In order to detect nuclei, the most important key step is to segment the cell targets accurately. Based on the Mask RCNN model, we designed a multi-path dilated residual network, and realized a network structure to segment and detect dense small objects, and effectively solved the problem of information loss of small objects in deep neural networks. The experimental results on two typical nuclear segmentation data sets show that our model has better recognition and segmentation capability for dense small targets. | 2. Related Work At present, the active contour model [2], the watershed model [3], and the region growing model [4] are three popular nuclear segmentation models. Among them, the active contour model is used most frequently, mainly because it can fit the boundary of the target well. However, the segmentation quality of the model depends largely on the initial contour given by the detected nucleus position, so the quality of nucleus detection strongly constrains the final segmentation result. Therefore, more research on cell segmentation is needed. In recent years, deep learning methods have been widely used in histopathological image analysis. Ciresan et al. [5] employ a deep convolutional neural network (CNN) to detect mitotic phenomena in mammary tissue pathological images. Ertosun et al. [6] perform automated grading of gliomas on brain histopathological images using a CNN. Xu Jun et al. [7] perform automated segmentation of epithelial and matrix regions on pathological mammary tissue images using a CNN. Sirinukunwattana and Ojala [8,9] use CNNs to automatically segment and classify the benign and malignant glands of colorectal cancer. In 2012, Krizhevsky et al. [10] proposed a network called AlexNet, which automatically extracts deep features of images through a convolutional neural network. AlexNet uses the ReLU activation function instead of the traditional Sigmoid activation function to accelerate the convergence of the model. The powerful feature extraction ability of convolutional neural networks has since been rapidly applied to various fields of computer vision. Object detection based on deep neural networks can be divided into region-extraction methods and regression methods. The region-extraction approach divides the detection task into two sub-problems: candidate region generation, and bounding-box regression together with object recognition within each candidate region. The RCNN object detection algorithm proposed by Girshick et al. [11] in 2014 is the earliest candidate-region-based algorithm; it first generates candidate regions by the selective search method and then classifies them. Selective search divides the image into several smaller regions by a simple region partitioning algorithm, then merges these regions according to certain similarity rules through hierarchical grouping, and finally generates candidate regions. RCNN transforms the detection problem into a classification problem. 
After extracting deep features from each candidate region using a CNN, an SVM is used for classification, which greatly improves detection accuracy. In order to remove the requirement of fixed-size input images in image classification, He et al. [12] proposed SPP-Net. The proposed spatial pyramid pooling layer can be combined with RCNN to improve the performance of the model. It is extremely inefficient for RCNN to use the CNN model to compute the features of each candidate region in turn. Fast RCNN, proposed by R. Girshick [13], finds the corresponding feature regions in the feature map of the CNN output according to the proportions of the candidate regions, which solves the problem of time-consuming repeated feature computation. In addition, Fast RCNN uses a Softmax classifier instead of an SVM classifier. However, Fast RCNN still uses the selective search method of RCNN to generate candidate regions. To solve this problem, the Faster RCNN model proposed by S. Ren et al. [14] introduced a region proposal network (RPN) to generate candidate regions directly. With the RPN, the Faster RCNN model is greatly improved in both detection accuracy and speed. Later research based on the candidate-region method basically adopts a similar framework. The feature pyramid network (FPN) proposed by Lin T Y et al. [15] in 2017 is one of the earliest networks to use multi-scale features and a top-down structure for object detection, improving the detection of small targets through multi-stage feature fusion. Li Y et al. [16] proposed the first fully convolutional end-to-end instance segmentation model. By introducing two score maps, the segmentation and classification tasks are performed in parallel. Mask RCNN [17] is a two-stage instance segmentation algorithm based on candidate regions. It extends the Faster RCNN algorithm by adding a small FCN segmentation network to predict the foreground and background of the target. In the first stage, the whole image is scanned to generate a number of regions that may contain objects. Each candidate region is aligned by ROI Align pooling and mapped to a fixed-size feature. Then the extracted feature is sent to the segmentation branch network, and finally the segmentation results of the instances in the candidate region are obtained. Later on, a series of object segmentation and detection schemes were proposed [18,19,20,21,22,23]; however, they do not target the highly dense and small objects found in microscopic histopathological images. | [
"17906660",
"20172780",
"20872884",
"28154470",
"27614792",
"27295650",
"28287963"
] | [
{
"pmid": "17906660",
"title": "Nuclear E-cadherin and VHL immunoreactivity are prognostic indicators of clear-cell renal cell carcinoma.",
"abstract": "The loss of functional von Hippel-Lindau (VHL) tumor suppressor gene is associated with the development of clear-cell renal cell carcinoma (CC-RCC). Recently, VHL was shown to promote the transcription of E-cadherin, an adhesion molecule whose expression is inversely correlated with the aggressive phenotype of numerous epithelial cancers. Here, we performed immunohistochemistry on CC-RCC tissue microarrays to determine the prognostic value of E-cadherin and VHL with respect to Fuhrman grade and clinical prognosis. Low Fuhrman grade and good prognosis associated with positive VHL and E-cadherin immunoreactivity, whereas poor prognosis and high-grade tumors associated with a lack of E-cadherin and lower frequency of VHL staining. A significant portion of CC-RCC with positive VHL immunostaining correlated with nuclear localization of C-terminally cleaved E-cadherin. DNA sequencing revealed in a majority of nuclear E-cadherin-positive CC-RCC, subtle point mutations, deletions and insertions in VHL. Furthermore, nuclear E-cadherin was not observed in chromophobe or papillary RCC, as well as matched normal kidney tissue. In addition, nuclear E-cadherin localization was recapitulated in CC-RCC xenografts devoid of functional VHL or reconstituted with synthetic mutant VHL grown in SCID mice. These findings provide the first evidence of aberrant nuclear localization of E-cadherin in CC-RCC harboring VHL mutations, and suggest potential prognostic value of VHL and E-cadherin in CC-RCC."
},
{
"pmid": "20172780",
"title": "Expectation-maximization-driven geodesic active contour with overlap resolution (EMaGACOR): application to lymphocyte segmentation on breast cancer histopathology.",
"abstract": "The presence of lymphocytic infiltration (LI) has been correlated with nodal metastasis and tumor recurrence in HER2+ breast cancer (BC). The ability to automatically detect and quantify extent of LI on histopathology imagery could potentially result in the development of an image based prognostic tool for human epidermal growth factor receptor-2 (HER2+) BC patients. Lymphocyte segmentation in hematoxylin and eosin (H&E) stained BC histopathology images is complicated by the similarity in appearance between lymphocyte nuclei and other structures (e.g., cancer nuclei) in the image. Additional challenges include biological variability, histological artifacts, and high prevalence of overlapping objects. Although active contours are widely employed in image segmentation, they are limited in their ability to segment overlapping objects and are sensitive to initialization. In this paper, we present a new segmentation scheme, expectation-maximization (EM) driven geodesic active contour with overlap resolution (EMaGACOR), which we apply to automatically detecting and segmenting lymphocytes on HER2+ BC histopathology images. EMaGACOR utilizes the expectation-maximization algorithm for automatically initializing a geodesic active contour (GAC) and includes a novel scheme based on heuristic splitting of contours via identification of high concavity points for resolving overlapping structures. EMaGACOR was evaluated on a total of 100 HER2+ breast biopsy histology images and was found to have a detection sensitivity of over 86% and a positive predictive value of over 64%. By comparison, the EMaGAC model (without overlap resolution) and GAC model yielded corresponding detection sensitivities of 42% and 19%, respectively. Furthermore, EMaGACOR was able to correctly resolve over 90% of overlaps between intersecting lymphocytes. Hausdorff distance (HD) and mean absolute distance (MAD) for EMaGACOR were found to be 2.1 and 0.9 pixels, respectively, and significantly better compared to the corresponding performance of the EMaGAC and GAC models. EMaGACOR is an efficient, robust, reproducible, and accurate segmentation technique that could potentially be applied to other biomedical image analysis problems."
},
{
"pmid": "20872884",
"title": "Constrained watershed method to infer morphology of mammalian cells in microscopic images.",
"abstract": "Precise information about the size, shape, temporal dynamics, and spatial distribution of cells is beneficial for the understanding of cell behavior and may play a key role in drug development, regenerative medicine, and disease research. The traditional method of manual observation and measurement of cells from microscopic images is tedious, expensive, and time consuming. Thus, automated methods are in high demand, especially given the increasing quantity of cell data being collected. In this article, an automated method to measure cell morphology from microscopic images is proposed to outline the boundaries of individual hematopoietic stem cells (HSCs). The proposed method outlines the cell regions using a constrained watershed method which is derived as an inverse problem. The experimental results generated by applying the proposed method to different HSC image sequences showed robust performance to detect and segment individual and dividing cells. The performance of the proposed method for individual cell segmentation for single frame high-resolution images was more than 97%, and decreased slightly to 90% for low-resolution multiframe stitched images."
},
{
"pmid": "28154470",
"title": "A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images.",
"abstract": "Epithelial (EP) and stromal (ST) are two types of tissues in histological images. Automated segmentation or classification of EP and ST tissues is important when developing computerized system for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Networks (DCNN) based feature learning is presented to automatically segment or classify EP and ST regions from digitized tumor tissue microarrays (TMAs). Current approaches are based on handcraft feature representation, such as color, texture, and Local Binary Patterns (LBP) in classifying two regions. Compared to handcrafted feature based approaches, which involve task dependent representation, DCNN is an end-to-end feature extractor that may be directly learned from the raw pixel intensity value of EP and ST tissues in a data driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two types of tissues. In this work we compare DCNN based models with three handcraft feature extraction based approaches on two different datasets which consist of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistological (IHC) stained images of colorectal cancer, respectively. The DCNN based feature learning approach was shown to have a F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on two H&E stained (NKI and VGH) and IHC stained data, respectively. Our DNN based approach was shown to outperform three handcraft feature extraction based approaches in terms of the classification of EP and ST regions."
},
{
"pmid": "27614792",
"title": "Gland segmentation in colon histology images: The glas challenge contest.",
"abstract": "Colorectal adenocarcinoma originating in intestinal glandular structures is the most common form of colon cancer. In clinical practice, the morphology of intestinal glands, including architectural appearance and glandular formation, is used by pathologists to inform prognosis and plan the treatment of individual patients. However, achieving good inter-observer as well as intra-observer reproducibility of cancer grading is still a major challenge in modern pathology. An automated approach which quantifies the morphology of glands is a solution to the problem. This paper provides an overview to the Gland Segmentation in Colon Histology Images Challenge Contest (GlaS) held at MICCAI'2015. Details of the challenge, including organization, dataset and evaluation criteria, are presented, along with the method descriptions and evaluation results from the top performing methods."
},
{
"pmid": "27295650",
"title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.",
"abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
},
{
"pmid": "28287963",
"title": "A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.",
"abstract": "Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images."
}
] |
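As a concrete illustration of the multi-path dilated residual idea described in the Cells entry above, the following PyTorch sketch combines parallel 3x3 convolutions with different dilation rates and a residual connection, so that small, densely packed nuclei are seen at several receptive-field sizes without downsampling. The block name, the choice of dilation rates, and the 1x1 fusion convolution are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MultiPathDilatedBlock(nn.Module):
    """Residual block with parallel 3x3 convolutions at different dilation
    rates; the paths are summed, fused by a 1x1 convolution, and added to a
    skip connection. Illustrative sketch, not the paper's exact design."""

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Each dilated path sees a different receptive field; padding=d keeps
        # the spatial size unchanged, so fine detail is preserved for small nuclei.
        out = sum(path(x) for path in self.paths)
        out = self.bn(self.fuse(out))
        return self.relu(out + x)  # residual connection

if __name__ == "__main__":
    block = MultiPathDilatedBlock(channels=64)
    feat = torch.randn(1, 64, 128, 128)   # e.g. a backbone feature map
    print(block(feat).shape)              # torch.Size([1, 64, 128, 128])
```

A block like this could, in principle, be dropped into the backbone of a Mask RCNN-style detector in place of a plain residual block; how the paper actually wires its multiple paths into the network is described in the original article.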
Scientific Data | 31209213 | PMC6572845 | 10.1038/s41597-019-0103-9 | Multitask learning and benchmarking with clinical time series data | Health care is one of the most exciting frontiers in data mining and machine learning. Successful adoption of electronic health records (EHRs) created an explosion in digital clinical data available for analysis, but progress in machine learning for healthcare research has been difficult to measure because of the absence of publicly available benchmark data sets. To address this problem, we propose four clinical prediction benchmarks using data derived from the publicly available Medical Information Mart for Intensive Care (MIMIC-III) database. These tasks cover a range of clinical problems including modeling risk of mortality, forecasting length of stay, detecting physiologic decline, and phenotype classification. We propose strong linear and neural baselines for all four tasks and evaluate the effect of deep supervision, multitask training and data-specific architectural modifications on the performance of neural models. | Related Work There is an extensive body of research on clinical predictions using deep learning, and we will attempt to highlight only the most representative or relevant work since a full treatment is not possible. Feedforward neural networks nearly always outperform logistic regression and severity of illness scores in modeling mortality risk among hospitalized patients [23–25]. Recently, it was shown that novel neural architectures (including ones based on LSTM) perform well for predicting inpatient mortality, 30-day unplanned readmission, long length-of-stay (binary classification) and diagnoses on general EHR data (not limited to ICU) [26]. The experiments were done on several private datasets. There is a great deal of early research that uses neural networks to predict LOS in hospitalized patients [27,28]. However, rather than regression, much of this work formulates the task as binary classification aimed at identifying patients at risk for long stays [29]. Recently, novel deep learning architectures have been proposed for survival analysis [30,31], a similar time-to-event regression task with right censoring. Phenotyping has been a popular application for deep learning researchers in recent years, though model architecture and problem definition vary widely. Feedforward networks [32,33], LSTM networks [34] and temporal convolutional networks [35] have been used to predict diagnostic codes from clinical time series. In 2016, it was first shown that recurrent neural networks could classify dozens of acute care diagnoses in variable length clinical time series [36]. Multitask learning has its roots in clinical prediction [23]. Several authors formulated phenotyping as multi-label classification, using neural networks to implicitly capture comorbidities in hidden layers [35,36]. Others attempted to jointly solve multiple related clinical tasks, including predicting mortality and length of stay [37]. However, none of this work addressed problem settings where sequential or temporal structure varies across tasks. The closest work in spirit to ours is a paper by Collobert and Weston [38] where a single convolutional network is used to perform a variety of natural language tasks (part-of-speech tagging, named entity recognition, and language modeling) with diverse sequential structure. An earlier version of this work has been available online for two years (arXiv:1703.07771v1). 
The current version adds a more detailed description of the dataset generation process, improves the neural baselines and adds more discussion of the results. Since the release of the preliminary version of the benchmark codebase, several teams have used our dataset generation pipeline (fully or partially). In particular, the pipeline was used for in-hospital mortality prediction [39–44], decompensation prediction [45], length-of-stay prediction [43,45,46], phenotyping [39,40,47] and readmission prediction [48]. Additionally, attention-based RNNs were applied to all our benchmark tasks [49]. In a parallel work, another set of benchmark tasks based on MIMIC-III was introduced that includes multiple versions of in-hospital mortality prediction, length-of-stay and ICD-9 code group predictions, but does not include decompensation prediction [50]. The most critical difference is that in all their prediction tasks the input is either the data of the first 24 or 48 hours, while we do length-of-stay and decompensation prediction at each hour of the stay, and do phenotyping based on the data of the entire stay. We frame the length-of-stay prediction as a classification problem and use Cohen's kappa score as its metric, while they frame it as a regression problem and use the mean squared error as its metric. The metric they use is less indicative of performance given that the distribution of length of stay has a heavy tail. In ICD-9 code group prediction, we have 25 code groups as opposed to their 20 groups. There are many differences in the data processing and feature selection as well. For example, we exclude all ICU stays where the patient is younger than 18, while they exclude patients younger than 15. Moreover, they consider only the first admission of a patient, while we consider all admissions. They have benchmarks for three different feature sets: A, B, and C, while we have only one set of features, which roughly corresponds to their feature set A. The set of baselines is also different. While our work has more LSTM-based baselines, the parallel work has more baselines with traditional machine learning techniques. | [
"25006137",
"16540951",
"23297608",
"17141139",
"26819042",
"25969432",
"19574617",
"17059892",
"1902275",
"27219127",
"11246308",
"23766893",
"7944911",
"7622400",
"8181282",
"23826094",
"29879470",
"6499483",
"11588210",
"20637974",
"12544992",
"16826863",
"9731816",
"26420780",
"27174893",
"27107443"
] | [
{
"pmid": "25006137",
"title": "Big data in health care: using analytics to identify and manage high-risk and high-cost patients.",
"abstract": "The US health care system is rapidly adopting electronic health records, which will dramatically increase the quantity of clinical data that are available electronically. Simultaneously, rapid progress has been made in clinical analytics--techniques for analyzing large quantities of data and gleaning new insights from that analysis--which is part of what is known as big data. As a result, there are unprecedented opportunities to use big data to reduce the costs of health care in the United States. We present six use cases--that is, key examples--where some of the clearest opportunities exist to reduce costs through the use of big data: high-cost patients, readmissions, triage, decompensation (when a patient's condition worsens), adverse events, and treatment optimization for diseases affecting multiple organ systems. We discuss the types of insights that are likely to emerge from clinical analytics, the types of data needed to obtain such insights, and the infrastructure--analytics, algorithms, registries, assessment scores, monitoring devices, and so forth--that organizations will need to perform the necessary analyses and to implement changes that will improve care while reducing costs. Our findings have policy implications for regulatory oversight, ways to address privacy concerns, and the support of research on analytics."
},
{
"pmid": "16540951",
"title": "Acute Physiology and Chronic Health Evaluation (APACHE) IV: hospital mortality assessment for today's critically ill patients.",
"abstract": "OBJECTIVE\nTo improve the accuracy of the Acute Physiology and Chronic Health Evaluation (APACHE) method for predicting hospital mortality among critically ill adults and to evaluate changes in the accuracy of earlier APACHE models.\n\n\nDESIGN\n: Observational cohort study.\n\n\nSETTING\nA total of 104 intensive care units (ICUs) in 45 U.S. hospitals.\n\n\nPATIENTS\nA total of 131,618 consecutive ICU admissions during 2002 and 2003, of which 110,558 met inclusion criteria and had complete data.\n\n\nINTERVENTIONS\nNone.\n\n\nMEASUREMENTS AND MAIN RESULTS\nWe developed APACHE IV using ICU day 1 information and a multivariate logistic regression procedure to estimate the probability of hospital death for randomly selected patients who comprised 60% of the database. Predictor variables were similar to those in APACHE III, but new variables were added and different statistical modeling used. We assessed the accuracy of APACHE IV predictions by comparing observed and predicted hospital mortality for the excluded patients (validation set). We tested discrimination and used multiple tests of calibration in aggregate and for patient subgroups. APACHE IV had good discrimination (area under the receiver operating characteristic curve = 0.88) and calibration (Hosmer-Lemeshow C statistic = 16.9, p = .08). For 90% of 116 ICU admission diagnoses, the ratio of observed to predicted mortality was not significantly different from 1.0. We also used the validation data set to compare the accuracy of APACHE IV predictions to those using APACHE III versions developed 7 and 14 yrs previously. There was little change in discrimination, but aggregate mortality was systematically overestimated as model age increased. When examined across disease, predictive accuracy was maintained for some diagnoses but for others seemed to reflect changes in practice or therapy.\n\n\nCONCLUSIONS\nAPACHE IV predictions of hospital mortality have good discrimination and calibration and should be useful for benchmarking performance in U.S. ICUs. The accuracy of predictive models is dynamic and should be periodically retested. When accuracy deteriorates they should be revised and updated."
},
{
"pmid": "23297608",
"title": "The high cost of low-acuity ICU outliers.",
"abstract": "Direct variable costs were determined on each hospital day for all patients with an intensive care unit (ICU) stay in four Phoenix-area hospital ICUs. Average daily direct variable cost in the four ICUs ranged from $1,436 to $1,759 and represented 69.4 percent and 45.7 percent of total hospital stay cost for medical and surgical patients, respectively. Daily ICU cost and length of stay (LOS) were higher in patients with higher ICU admission acuity of illness as measured by the APACHE risk prediction methodology; 16.2 percent of patients had an ICU stay in excess of six days, and these LOS outliers accounted for 56.7 percent of total ICU cost. While higher-acuity patients were more likely to be ICU LOS outliers, 11.1 percent of low-risk patients were outliers. The low-risk group included 69.4 percent of the ICU population and accounted for 47 percent of all LOS outliers. Low-risk LOS outliers accounted for 25.3 percent of ICU cost and incurred fivefold higher hospital stay costs and mortality rates. These data suggest that severity of illness is an important determinant of daily resource consumption and LOS, regardless of whether the patient arrives in the ICU with high acuity or develops complications that increase acuity. The finding that a substantial number of long-stay patients come into the ICU with low acuity and deteriorate after ICU admission is not widely recognized and represents an important opportunity to improve patient outcomes and lower costs. ICUs should consider adding low-risk LOS data to their quality and financial performance reports."
},
{
"pmid": "17141139",
"title": "Triage in medicine, part I: Concept, history, and types.",
"abstract": "This 2-article series offers a conceptual, historical, and moral analysis of the practice of triage. Part I distinguishes triage from related concepts, reviews the evolution of triage principles and practices, and describes the settings in which triage is commonly practiced. Part II identifies and examines the moral values and principles underlying the practice of triage."
},
{
"pmid": "26819042",
"title": "Mastering the game of Go with deep neural networks and tree search.",
"abstract": "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away."
},
{
"pmid": "25969432",
"title": "Customization of a Severity of Illness Score Using Local Electronic Medical Record Data.",
"abstract": "PURPOSE\nSeverity of illness (SOI) scores are traditionally based on archival data collected from a wide range of clinical settings. Mortality prediction using SOI scores tends to underperform when applied to contemporary cases or those that differ from the case-mix of the original derivation cohorts. We investigated the use of local clinical data captured from hospital electronic medical records (EMRs) to improve the predictive performance of traditional severity of illness scoring.\n\n\nMETHODS\nWe conducted a retrospective analysis using data from the Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database, which contains clinical data from the Beth Israel Deaconess Medical Center in Boston, Massachusetts. A total of 17 490 intensive care unit (ICU) admissions with complete data were included, from 4 different service types: medical ICU, surgical ICU, coronary care unit, and cardiac surgery recovery unit. We developed customized SOI scores trained on data from each service type, using the clinical variables employed in the Simplified Acute Physiology Score (SAPS). In-hospital, 30-day, and 2-year mortality predictions were compared with those obtained from using the original SAPS using the area under the receiver-operating characteristics curve (AUROC) as well as the area under the precision-recall curve (AUPRC). Test performance in different cohorts stratified by severity of organ injury was also evaluated.\n\n\nRESULTS\nMost customized scores (30 of 39) significantly outperformed SAPS with respect to both AUROC and AUPRC. Enhancements over SAPS were greatest for patients undergoing cardiovascular surgery and for prediction of 2-year mortality.\n\n\nCONCLUSIONS\nCustom models based on ICU-specific data provided better mortality prediction than traditional SAPS scoring using the same predictor variables. Our local data approach demonstrates the value of electronic data capture in the ICU, of secondary uses of EMR data, and of local customization of SOI scoring."
},
{
"pmid": "19574617",
"title": "Factorial switching linear dynamical systems applied to physiological condition monitoring.",
"abstract": "Condition monitoring often involves the analysis of systems with hidden factors that switch between different modes of operation in some way. Given a sequence of observations, the task is to infer the filtering distribution of the switch setting at each time step. In this paper, we present factorial switching linear dynamical systems as a general framework for handling such problems. We show how domain knowledge and learning can be successfully combined in this framework, and introduce a new factor (the \"X-factor\") for dealing with unmodeled variation. We demonstrate the flexibility of this type of model by applying it to the problem of monitoring the condition of a premature baby receiving intensive care. The state of health of a baby cannot be observed directly, but different underlying factors are associated with particular patterns of physiological measurements and artifacts. We have explicit knowledge of common factors and use the X-factor to model novel patterns which are clinically significant but have unknown cause. Experimental results are given which show the developed methods to be effective on typical intensive care unit monitoring data."
},
{
"pmid": "17059892",
"title": "The multitasking clinician: decision-making and cognitive demand during and after team handoffs in emergency care.",
"abstract": "Several studies have shown that there is information loss during interruptions, and that multitasking creates higher memory load, both of which contribute to medical error. Nowhere is this more critical than in the emergency department (ED), where the emphasis of clinical decision is on the timely evaluation and stabilization of patients. This paper reports on the nature of multitasking and shift change and its implications for patient safety in an adult ED, using the methods of ethnographic observation and interviews. Data were analyzed using grounded theory to study cognition in the context of the work environment. Analysis revealed that interruptions within the ED were prevalent and diverse in nature. On average, there was an interruption every 9 and 14 min for the attending physicians and the residents, respectively. In addition, the workflow analysis showed gaps in information flow due to multitasking and shift changes. Transfer of information began at the point of hand-offs/shift changes and continued through various other activities, such as documentation, consultation, teaching activities and utilization of computer resources. The results show that the nature of the communication process in the ED is complex and cognitively taxing for the clinicians, which can compromise patient safety. The need to tailor existing generic electronic tools to support adaptive processes like multitasking and handoffs in a time-constrained environment is discussed."
},
{
"pmid": "1902275",
"title": "The relationship between severity of illness and hospital length of stay and mortality.",
"abstract": "To address the question of quantification of severity of illness on a wide scale, the Computerized Severity Index (CSI) was developed by a research team at the Johns Hopkins University. This article describes an initial assessment of some aspects of the validity and reliability of the CSI on a sample of 2,378 patients within 27 high-volume DRGs from five teaching hospitals. The 27 DRGs predicted 27% of the variation in LOS, while DRGs adjusted for Admission CSI scores predicted 38% and DRGs adjusted for Maximum CSI scores throughout the hospital stay predicted 54% of this variation. Thus, the Maximum CSI score increased the predictability of DRGs by 100%. We explored the impact of including a 7-day cutoff criterion along with the Maximum CSI score similar to a criterion used in an alternative severity of illness measure. The DRG/Maximum CSI score's predictive power increased to 63% when the 7-day cutoff was added to the CSI definition. The Admission CSI score was used to predict in-hospital mortality and correlated R = 0.603 with mortality. The reliability of Admission and Maximum CSI data collection was high, with agreement of 95% and kappa statistics of 0.88 and 0.90, respectively."
},
{
"pmid": "27219127",
"title": "MIMIC-III, a freely accessible critical care database.",
"abstract": "MIMIC-III ('Medical Information Mart for Intensive Care') is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework."
},
{
"pmid": "11246308",
"title": "Predicting hospital mortality for patients in the intensive care unit: a comparison of artificial neural networks with logistic regression models.",
"abstract": "OBJECTIVE\nLogistic regression (LR), commonly used for hospital mortality prediction, has limitations. Artificial neural networks (ANNs) have been proposed as an alternative. We compared the performance of these approaches by using stepwise reductions in sample size.\n\n\nDESIGN\nProspective cohort study.\n\n\nSETTING\nSeven intensive care units (ICU) at one tertiary care center.\n\n\nPATIENTS\nPatients were 1,647 ICU admissions for whom first-day Acute Physiology and Chronic Health Evaluation III variables were collected.\n\n\nINTERVENTIONS\nNone.\n\n\nMEASUREMENTS AND MAIN RESULTS\nWe constructed LR and ANN models on a random set of 1,200 admissions (development set) and used the remaining 447 as the validation set. We repeated model construction on progressively smaller development sets (800, 400, and 200 admissions) and retested on the original validation set (n = 447). For each development set, we constructed models from two LR and two ANN architectures, organizing the independent variables differently. With the 1,200-admission development set, all models had good fit and discrimination on the validation set, where fit was assessed by the Hosmer-Lemeshow C statistic (range, 10.6-15.3; p > or = .05) and standardized mortality ratio (SMR) (range, 0.93 [95% confidence interval, 0.79-1.15] to 1.09 [95% confidence interval, 0.89-1.38]), and discrimination was assessed by the area under the receiver operating characteristic curve (range, 0.80-0.84). As development set sample size decreased, model performance on the validation set deteriorated rapidly, although the ANNs retained marginally better fit at 800 (best C statistic was 26.3 [p = .0009] and 13.1 [p = .11] for the LR and ANN models). Below 800, fit was poor with both approaches, with high C statistics (ranging from 22.8 [p <.004] to 633 [p <.0001]) and highly biased SMRs (seven of the eight models below 800 had SMRs of <0.85, with an upper confidence interval of <1). Discrimination ranged from 0.74 to 0.84 below 800.\n\n\nCONCLUSIONS\nWhen sample size is adequate, LR and ANN models have similar performance. However, development sets of < or = 800 were generally inadequate. This is concerning, given typical sample sizes used for individual ICU mortality prediction."
},
{
"pmid": "23766893",
"title": "A Database-driven Decision Support System: Customized Mortality Prediction.",
"abstract": "We hypothesize that local customized modeling will provide more accurate mortality prediction than the current standard approach using existing scoring systems. Mortality prediction models were developed for two subsets of patients in Multi-parameter Intelligent Monitoring for Intensive Care (MIMIC), a public de-identified ICU database, and for the subset of patients ≥80 years old in a cardiac surgical patient registry. Logistic regression (LR), Bayesian network (BN) and artificial neural network (ANN) were employed. The best-fitted models were tested on the remaining unseen data and compared to either the Simplified Acute Physiology Score (SAPS) for the ICU patients, or the EuroSCORE for the cardiac surgery patients. Local customized mortality prediction models performed better as compared to the corresponding current standard severity scoring system for all three subsets of patients: patients with acute kidney injury (AUC = 0.875 for ANN, vs. SAPS, AUC = 0.642), patients with subarachnoid hemorrhage (AUC = 0.958 for BN, vs. SAPS, AUC = 0.84), and elderly patients undergoing open heart surgery (AUC = 0.94 for ANN, vs. EuroSCORE, AUC = 0.648). Rather than developing models with good external validity by including a heterogeneous patient population, an alternative approach would be to build models for specific patient subsets using one's local database."
},
{
"pmid": "7944911",
"title": "Simulated neural networks to predict outcomes, costs, and length of stay among orthopedic rehabilitation patients.",
"abstract": "Our purpose was to develop a set of simulated neural networks that would predict functional outcomes, length of stay, and costs among orthopedic patients admitted to an inpatient rehabilitation hospital. We used retrospective data for a sample of 387 patients between the ages of 60 and 89 who had been admitted to a single rehabilitation facility over a period of 12 months. Using age and data on functional capacity at admission from the Functional Independence Measure, we were successful in constructing networks that were 86%, 87%, and 91% accurate in predicting functional outcome, length of stay, and costs to within +/- 15% of the actual value. In each case the accuracy of the network exceeded that of a multiple regression equation using the same variables. Our results show the feasibility of using simulated neural networks to predict rehabilitation outcomes, and the advantages of neural networks over conventional linear models. Networks of this kind may be of significant value to administrators and clinicians in predicting outcomes and resource usage as rehabilitation hospitals are faced with capitation and prospective payment schemes."
},
{
"pmid": "7622400",
"title": "Artificial neural network predictions of lengths of stay on a post-coronary care unit.",
"abstract": "OBJECTIVE\nTo create and validate a model that predicts length of hospital unit stay.\n\n\nDESIGN\nEx post facto. Seventy-four independent admission variables in 15 general categories were utilized to predict possible stays of 1 to 20 days.\n\n\nSETTING\nLaboratory.\n\n\nSAMPLE\nRecords of patients discharged from a post-coronary care unit in early 1993.\n\n\nRESULTS\nAn artificial neural network was trained on 629 records and tested on an additional 127 records of patients. The absolute disparity between the actual lengths of stays in the test records and the predictions of the network averaged 1.4 days per record, and the actual length of stay was predicted within 1 day 72% of the time.\n\n\nCONCLUSIONS\nThe artificial neural network demonstrated the capacity to utilize common patient admission characteristics to predict lengths of stay. This technology shows promise in aiding timely initiation of treatment and effective resource planning and cost control."
},
{
"pmid": "8181282",
"title": "A comparison of statistical and connectionist models for the prediction of chronicity in a surgical intensive care unit.",
"abstract": "OBJECTIVE\nTo compare statistical and connectionist models for the prediction of chronicity which is influenced by patient disease and external factors.\n\n\nDESIGN\nRetrospective development of predictive criteria and subsequent prospective testing of the same predictive criteria, using multiple logistic regression and three architecturally distinct neural networks; revision of predictive criteria.\n\n\nSETTING\nSurgical intensive care unit (ICU) equipped with a clinical information system in a +/- 1000-bed university hospital.\n\n\nPATIENTS\nFour hundred ninety-one patients with ICU length of stay 3 days who survived at least an additional 4 days.\n\n\nINTERVENTIONS\nNone.\n\n\nMEASUREMENTS AND MAIN RESULTS\nChronicity was defined as a length of stay > 7 days. Neural networks predicted chronicity more reliably than the statistical model regardless of the former's architecture. However, the neural networks' ability to predict this chronicity degraded over time.\n\n\nCONCLUSIONS\nConnectionist models may contribute to the prediction of clinical trajectory, including outcome and resource utilization, in surgical ICUs."
},
{
"pmid": "23826094",
"title": "Computational phenotype discovery using unsupervised feature learning over noisy, sparse, and irregular clinical data.",
"abstract": "Inferring precise phenotypic patterns from population-scale clinical data is a core computational task in the development of precision, personalized medicine. The traditional approach uses supervised learning, in which an expert designates which patterns to look for (by specifying the learning task and the class labels), and where to look for them (by specifying the input variables). While appropriate for individual tasks, this approach scales poorly and misses the patterns that we don't think to look for. Unsupervised feature learning overcomes these limitations by identifying patterns (or features) that collectively form a compact and expressive representation of the source data, with no need for expert input or labeled examples. Its rising popularity is driven by new deep learning methods, which have produced high-profile successes on difficult standardized problems of object recognition in images. Here we introduce its use for phenotype discovery in clinical data. This use is challenging because the largest source of clinical data - Electronic Medical Records - typically contains noisy, sparse, and irregularly timed observations, rendering them poor substrates for deep learning methods. Our approach couples dirty clinical data to deep learning architecture via longitudinal probability densities inferred using Gaussian process regression. From episodic, longitudinal sequences of serum uric acid measurements in 4368 individuals we produced continuous phenotypic features that suggest multiple population subtypes, and that accurately distinguished (0.97 AUC) the uric-acid signatures of gout vs. acute leukemia despite not being optimized for the task. The unsupervised features were as accurate as gold-standard features engineered by an expert with complete knowledge of the domain, the classification task, and the class labels. Our findings demonstrate the potential for achieving computational phenotype discovery at population scale. We expect such data-driven phenotypes to expose unknown disease variants and subtypes and to provide rich targets for genetic association studies."
},
{
"pmid": "29879470",
"title": "Benchmarking deep learning models on large healthcare datasets.",
"abstract": "Deep learning models (aka Deep Neural Networks) have revolutionized many fields including computer vision, natural language processing, speech recognition, and is being increasingly used in clinical healthcare applications. However, few works exist which have benchmarked the performance of the deep learning models with respect to the state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present the benchmarking results for several clinical prediction tasks such as mortality prediction, length of stay prediction, and ICD-9 code group prediction using Deep Learning models, ensemble of machine learning models (Super Learner algorithm), SAPS II and SOFA scores. We used the Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) publicly available dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches especially when the 'raw' clinical time series data is used as input features to the models."
},
{
"pmid": "6499483",
"title": "A simplified acute physiology score for ICU patients.",
"abstract": "We used 14 easily measured biologic and clinical variables to develop a simple scoring system reflecting the risk of death in ICU patients. The simplified acute physiology score (SAPS) was evaluated in 679 consecutive patients admitted to eight multidisciplinary referral ICUs in France. Surgery accounted for 40% of admissions. Data were collected during the first 24 h after ICU admission. SAPS correctly classified patients in groups of increasing probability of death, irrespective of diagnosis, and compared favorably with the acute physiology score (APS), a more complex scoring system which has also been applied to ICU patients. SAPS was a simpler and less time-consuming method for comparative studies and management evaluation between different ICUs."
},
{
"pmid": "11588210",
"title": "Validation of a modified Early Warning Score in medical admissions.",
"abstract": "The Early Warning Score (EWS) is a simple physiological scoring system suitable for bedside application. The ability of a modified Early Warning Score (MEWS) to identify medical patients at risk of catastrophic deterioration in a busy clinical area was investigated. In a prospective cohort study, we applied MEWS to patients admitted to the 56-bed acute Medical Admissions Unit (MAU) of a District General Hospital (DGH). Data on 709 medical emergency admissions were collected during March 2000. Main outcome measures were death, intensive care unit (ICU) admission, high dependency unit (HDU) admission, cardiac arrest, survival and hospital discharge at 60 days. Scores of 5 or more were associated with increased risk of death (OR 5.4, 95%CI 2.8-10.7), ICU admission (OR 10.9, 95%CI 2.2-55.6) and HDU admission (OR 3.3, 95%CI 1.2-9.2). MEWS can be applied easily in a DGH medical admission unit, and identifies patients at risk of deterioration who require increased levels of care in the HDU or ICU. A clinical pathway could be created, using nurse practitioners and/or critical care physicians, to respond to high scores and intervene with appropriate changes in clinical management."
},
{
"pmid": "20637974",
"title": "ViEWS--Towards a national early warning score for detecting adult inpatient deterioration.",
"abstract": "AIM OF STUDY\nTo develop a validated, paper-based, aggregate weighted track and trigger system (AWTTS) that could serve as a template for a national early warning score (EWS) for the detection of patient deterioration.\n\n\nMATERIALS AND METHODS\nUsing existing knowledge of the relationship between physiological data and adverse clinical outcomes, a thorough review of the literature surrounding EWS and physiology, and a previous detailed analysis of published EWSs, we developed a new paper-based EWS - VitalPAC EWS (ViEWS). We applied ViEWS to a large vital signs database (n=198,755 observation sets) collected from 35,585 consecutive, completed acute medical admissions, and also evaluated the comparative performance of 33 other AWTTSs, for a range of outcomes using the area under the receiver-operating characteristics (AUROC) curve.\n\n\nRESULTS\nThe AUROC (95% CI) for ViEWS using in-hospital mortality with 24h of the observation set was 0.888 (0.880-0.895). The AUROCs (95% CI) for the 33 other AWTTSs tested using the same outcome ranged from 0.803 (0.792-0.815) to 0.850 (0.841-0.859). ViEWS performed better than the 33 other AWTTSs for all outcomes tested.\n\n\nCONCLUSIONS\nWe have developed a simple AWTTS - ViEWS - designed for paper-based application and demonstrated that its performance for predicting mortality (within a range of timescales) is superior to all other published AWTTSs that we tested. We have also developed a tool to provide a relative measure of the number of \"triggers\" that would be generated at different values of EWS and permits the comparison of the workload generated by different AWTTSs."
},
{
"pmid": "12544992",
"title": "Early indicators of prolonged intensive care unit stay: impact of illness severity, physician staffing, and pre-intensive care unit length of stay.",
"abstract": "OBJECTIVE\nScoring systems that predict mortality do not necessarily predict prolonged length of stay or costs in the intensive care unit (ICU). Knowledge of characteristics predicting prolonged ICU stay would be helpful, particularly if some factors could be modified. Such factors might include process of care, including active involvement of full-time ICU physicians and length of hospital stay before ICU admission.\n\n\nDESIGN\nDemographic data, clinical diagnosis at ICU admission, Simplified Acute Physiology Score, and organizational characteristics were examined by logistic regression for their effect on ICU and hospital length of stay and weighted hospital days (WHD), a proxy for high cost of care.\n\n\nSETTING\nA total of 34 ICUs at 27 hospitals participating in Project IMPACT during 1998.\n\n\nPATIENTS\nA total of 10,900 critically ill medical, surgical, and trauma patients qualifying for Simplified Acute Physiology Score assessment.\n\n\nINTERVENTIONS\nNone.\n\n\nRESULTS\nOverall, 9.8% of patients had excess WHD, but the percentage varied by diagnosis. Factors predicting high WHD include Simplified Acute Physiology Score survival probability, age of 40 to 80 yrs, presence of infection or mechanical ventilation 24 hrs after admission, male sex, emergency surgery, trauma, presence of critical care fellows, and prolonged pre-ICU hospital stay. Mechanical ventilation at 24 hrs predicts high WHD across diagnostic categories, with a relative risk of between 2.4 and 12.9. Factors protecting against high WHD include do-not-resuscitate order at admission, presence of coma 24 hrs after admission, and active involvement of full-time ICU physicians.\n\n\nCONCLUSIONS\nPatients with high WHD, and thus high costs, can be identified early. Severity of illness only partially explains high WHD. Age is less important as a predictor of high WHD than presence of infection or ventilator dependency at 24 hrs. Both long ward stays before ICU admission and lack of full-time ICU physician involvement in care increase the probability of long ICU stays. These latter two factors are potentially modifiable and deserve prospective study."
},
{
"pmid": "16826863",
"title": "Prediction of in-hospital mortality and length of stay using an early warning scoring system: clinical audit.",
"abstract": "This aim of this study was to assess the impact of the introduction of a standardised early warning scoring system (SEWS) on physiological observations and patient outcomes in unselected acute admissions at point of entry to care. A sequential clinical audit was performed on 848 patients admitted to a combined medical and surgical assessment unit during two separate 11-day periods. Physiological parameters (respiratory rate, oxygen saturation, temperature, blood pressure, heart rate, and conscious level), in-hospital mortality, length of stay, transfer to critical care and staff satisfaction were documented. Documentation of these physiological parameters improved (P<0.001-0.005) with the exception of oxygen saturation (P=0.069). The admission early warning score correlated both with in-hospital mortality (P<0.001) and length of stay (P=0.001). Following the introduction of the scoring system, inpatient mortality decreased (P=0.046). Staff responding to a questionnaire indicated that the scoring system increased awareness of illness severity (80%) and prompted earlier interventions (60%). A standardised early warning scoring system improves documentation of physiological parameters, correlates with in-hospital mortality, and helps predict length of stay."
},
{
"pmid": "9731816",
"title": "Use of an artificial neural network to predict length of stay in acute pancreatitis.",
"abstract": "Length of stay (LOS) predictions in acute pancreatitis could be used to stratify patients with severe acute pancreatitis, make treatment and resource allocation decisions, and for quality assurance. Artificial neural networks have been used to predict LOS in other conditions but not acute pancreatitis. The hypothesis of this study was that a neural network could predict LOS in patients with acute pancreatitis. The medical records of 195 patients admitted with acute pancreatitis were reviewed. A backpropagation neural network was developed to predict LOS >7 days. The network was trained on 156 randomly selected cases and tested on the remaining 39 cases. The neural network had the highest sensitivity (75%) for predicting LOS >7 days. Ranson criteria had the highest specificity (94%) for making this prediction. All methods incorrectly predicted LOS in two patients with severe acute pancreatitis who died early in their hospital course. An artificial neural network can predict LOS >7 days. The network and traditional prognostic indices were least accurate for predicting LOS in patients with severe acute pancreatitis who died early in their hospital course. The neural network has the advantage of making this prediction using admission data."
},
{
"pmid": "26420780",
"title": "The digital revolution in phenotyping.",
"abstract": "Phenotypes have gained increased notoriety in the clinical and biological domain owing to their application in numerous areas such as the discovery of disease genes and drug targets, phylogenetics and pharmacogenomics. Phenotypes, defined as observable characteristics of organisms, can be seen as one of the bridges that lead to a translation of experimental findings into clinical applications and thereby support 'bench to bedside' efforts. However, to build this translational bridge, a common and universal understanding of phenotypes is required that goes beyond domain-specific definitions. To achieve this ambitious goal, a digital revolution is ongoing that enables the encoding of data in computer-readable formats and the data storage in specialized repositories, ready for integration, enabling translational research. While phenome research is an ongoing endeavor, the true potential hidden in the currently available data still needs to be unlocked, offering exciting opportunities for the forthcoming years. Here, we provide insights into the state-of-the-art in digital phenotyping, by means of representing, acquiring and analyzing phenotype data. In addition, we provide visions of this field for future research work that could enable better applications of phenotype data."
},
{
"pmid": "27174893",
"title": "Learning statistical models of phenotypes using noisy labeled training data.",
"abstract": "OBJECTIVE\nTraditionally, patient groups with a phenotype are selected through rule-based definitions whose creation and validation are time-consuming. Machine learning approaches to electronic phenotyping are limited by the paucity of labeled training datasets. We demonstrate the feasibility of utilizing semi-automatically labeled training sets to create phenotype models via machine learning, using a comprehensive representation of the patient medical record.\n\n\nMETHODS\nWe use a list of keywords specific to the phenotype of interest to generate noisy labeled training data. We train L1 penalized logistic regression models for a chronic and an acute disease and evaluate the performance of the models against a gold standard.\n\n\nRESULTS\nOur models for Type 2 diabetes mellitus and myocardial infarction achieve precision and accuracy of 0.90, 0.89, and 0.86, 0.89, respectively. Local implementations of the previously validated rule-based definitions for Type 2 diabetes mellitus and myocardial infarction achieve precision and accuracy of 0.96, 0.92 and 0.84, 0.87, respectively.We have demonstrated feasibility of learning phenotype models using imperfectly labeled data for a chronic and acute phenotype. Further research in feature engineering and in specification of the keyword list can improve the performance of the models and the scalability of the approach.\n\n\nCONCLUSIONS\nOur method provides an alternative to manual labeling for creating training sets for statistical models of phenotypes. Such an approach can accelerate research with large observational healthcare datasets and may also be used to create local phenotype models."
},
{
"pmid": "27107443",
"title": "Electronic medical record phenotyping using the anchor and learn framework.",
"abstract": "BACKGROUND\nElectronic medical records (EMRs) hold a tremendous amount of information about patients that is relevant to determining the optimal approach to patient care. As medicine becomes increasingly precise, a patient's electronic medical record phenotype will play an important role in triggering clinical decision support systems that can deliver personalized recommendations in real time. Learning with anchors presents a method of efficiently learning statistically driven phenotypes with minimal manual intervention.\n\n\nMATERIALS AND METHODS\nWe developed a phenotype library that uses both structured and unstructured data from the EMR to represent patients for real-time clinical decision support. Eight of the phenotypes were evaluated using retrospective EMR data on emergency department patients using a set of prospectively gathered gold standard labels.\n\n\nRESULTS\nWe built a phenotype library with 42 publicly available phenotype definitions. Using information from triage time, the phenotype classifiers have an area under the ROC curve (AUC) of infection 0.89, cancer 0.88, immunosuppressed 0.85, septic shock 0.93, nursing home 0.87, anticoagulated 0.83, cardiac etiology 0.89, and pneumonia 0.90. Using information available at the time of disposition from the emergency department, the AUC values are infection 0.91, cancer 0.95, immunosuppressed 0.90, septic shock 0.97, nursing home 0.91, anticoagulated 0.94, cardiac etiology 0.92, and pneumonia 0.97.\n\n\nDISCUSSION\nThe resulting phenotypes are interpretable and fast to build, and perform comparably to statistically learned phenotypes developed with 5000 manually labeled patients.\n\n\nCONCLUSION\nLearning with anchors is an attractive option for building a large public repository of phenotype definitions that can be used for a range of health IT applications, including real-time decision support."
}
] |
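The record above frames length-of-stay prediction as ordinal classification scored with Cohen's kappa, arguing that mean squared error is dominated by the heavy tail of the length-of-stay distribution. The minimal sketch below illustrates that argument; it assumes NumPy and scikit-learn, uses a linearly weighted kappa as one common variant for ordinal buckets, and the simulated stays and bucket edges are hypothetical rather than taken from either benchmark.

```python
# Minimal sketch (illustration only, not code from either benchmark):
# why a bucketed kappa score can be more informative than MSE when the
# length-of-stay (LOS) distribution has a heavy tail.
# Assumes NumPy and scikit-learn; bucket edges and data are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

rng = np.random.default_rng(0)

# Heavy-tailed "true" LOS in days and a predictor with ~15% relative error.
los_true = rng.lognormal(mean=1.0, sigma=0.8, size=1000)
los_pred = los_true * rng.normal(loc=1.0, scale=0.15, size=1000)

# A handful of long stays that the model misses badly.
los_true[:5] = 60.0
los_pred[:5] = 2.0

# Hypothetical ordinal buckets: fine resolution for short stays, pooled tail.
edges = np.array([1, 2, 3, 4, 5, 6, 7, 8, 14])  # 10 classes: <1 day ... >14 days

def to_bucket(days):
    """Map LOS in days to an ordinal class label in 0..9."""
    return np.digitize(days, edges)

mse = mean_squared_error(los_true, los_pred)
kappa = cohen_kappa_score(to_bucket(los_true), to_bucket(los_pred), weights="linear")

# The five badly missed long stays dominate the MSE but barely move the kappa.
print(f"MSE (regression framing):           {mse:.2f}")
print(f"Linearly weighted kappa (bucketed): {kappa:.3f}")
```

Because the weighted kappa operates on bounded, ordinal bucket labels, a few extreme stays cannot dominate the score the way they dominate the squared-error average.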
Frontiers in Neurorobotics | 31244638 | PMC6581731 | 10.3389/fnbot.2019.00037 | SAE+LSTM: A New Framework for Emotion Recognition From Multi-Channel EEG | EEG-based automatic emotion recognition can help brain-inspired robots improve their interactions with humans. This paper presents a novel framework for emotion recognition using multi-channel electroencephalogram (EEG). The framework consists of a linear EEG mixing model and an emotion timing model. Our proposed framework effectively decomposes the EEG source signals from the collected EEG signals and improves classification accuracy by using the context correlations of the EEG feature sequences. Specifically, a Stacked AutoEncoder (SAE) is used to build and solve the linear EEG mixing model, while the emotion timing model is based on the Long Short-Term Memory Recurrent Neural Network (LSTM-RNN). The framework was implemented on the DEAP dataset for an emotion recognition experiment, where the mean accuracy of emotion recognition reached 81.10% for valence and 74.38% for arousal, verifying the effectiveness of our framework. In the experiments, our framework outperformed the conventional approaches it was compared against for emotion recognition from multi-channel EEG. | 2. Related Work: Several recent studies have addressed emotion recognition using EEG signals. Khosrowabadi et al. presented a biologically inspired feedforward neural network named ERNN to recognize human emotions from EEG. To simulate the short-term memory of emotion, a serial-in/parallel-out shift register memory was used in ERNN to accumulate the EEG signals. Compared with other feature extraction methods and feedforward learning algorithms, ERNN achieved the highest accuracy when using the radial basis function (Khosrowabadi et al., 2014). Soleymani et al. studied how to explore the emotional traces of videos and presented an approach for instantaneously detecting the emotions of video viewers from EEG signals and facial expressions. They utilized LSTM-RNN and continuous conditional random fields (CCRF) to detect emotions automatically and continuously. The results showed that EEG signals and facial expressions carried adequate information for detecting emotions (Soleymani et al., 2016). Li et al. explored the influence of different frequency bands and numbers of EEG channels on emotion recognition. The emotional states were classified into the dimensions of valence and arousal using different combinations of EEG channels. The results showed that the gamma frequency band was preferred and that increasing the number of channels could increase the recognition rate (Li et al., 2018). Independent Component Analysis (ICA) approaches for multi-channel EEG processing are popular, especially for artifact removal and source extraction. You et al. presented a method of blind signal separation (BSS) for multi-channel EEG that combined the Wavelet Transform and ICA. High-frequency noise was removed from the collected EEG using the noise-filtering capability of the wavelet transform, so that ICA could extract the EEG source signals without having to handle noise separation. The experimental results confirmed the effectiveness of this method for the BSS of multi-channel EEG (You et al., 2004). Brunner et al. compared three ICA methods (Infomax, FastICA and SOBI) with other preprocessing methods (CSP) to find out whether and to what extent spatial filtering of EEG data can improve single-trial classification accuracy.
The results showed that Infomax outperformed the other two ICA algorithms (Brunner et al., 2007). Korats et al. compared the source separation performance of four major ICA algorithms (namely FastICA, AMICA, Extended InfoMax, and JADER) and defined a lower bound on the data length needed for robust separation results. AMICA showed an impressive performance with very short data lengths but required a lot of running time. FastICA took very little time but required twice the data length of AMICA (Korats et al., 2012). In recent years, autoencoders have drawn more and more attention in biological signal processing, especially for signal reconstruction and feature extraction. Liu et al. presented a multimodal deep learning approach that constructs affective models on the DEAP and SEED datasets, aiming to enhance model performance and reduce the cost of acquiring physiological signals for real-world applications. Using EEG and eye features, the approach achieved mean accuracies of 91.01% and 83.25% on the SEED and DEAP datasets, respectively. The experimental results demonstrated that the high-level representation features extracted by the BDAE (Bimodal Deep AutoEncoder) network were effective for emotion recognition (Liu et al., 2016). Majumdar et al. proposed an autoencoder-based framework that simultaneously reconstructs and classifies biomedical signals. Using an autoencoder, they proposed a new paradigm for signal reconstruction that has the advantage of not requiring any assumption about the signal as long as there is a sufficient amount of training data. The experimental results showed that the method reconstructed signals better and ran more than an order of magnitude faster than CS (Compressed Sensing)-based methods, making it capable of real-time operation. The method also achieved satisfactory classification performance (Majumdar et al., 2016). In these reviewed studies, EEG-based emotion classification has been studied extensively, and corresponding progress has been made in EEG signal preprocessing, feature extraction, and classifier design. However, the decomposition of EEG signals is still a challenge. The currently dominant ICA methods assume that the source signals constituting the mixed EEG signals are independent of each other and do not follow a normal distribution. The physiological structure of the brain does not support this hypothesis, as the interconnected cerebral cortex gives the EEG signals a natural correlation with one another. Moreover, feature extraction in this area has seldom considered the association and contextual relationships between frames of different EEG signals, which leads to inadequate utilization of the multi-domain information of EEG signals across the spatial, temporal, and frequency domains. In this work, we explore a method for decomposing EEG signals into source signals and adopt the contextual correlation of EEG feature sequences to improve emotion recognition. | [
"18267787",
"24465281",
"16873662",
"12236331",
"24807454",
"7762889",
"29758974",
"28061779",
"20442037",
"15082325",
"29080913",
"29958457",
"7605074",
"24348375",
"20807577",
"28443015",
"24917813"
] | [
{
"pmid": "18267787",
"title": "Learning long-term dependencies with gradient descent is difficult.",
"abstract": "Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered."
},
{
"pmid": "24465281",
"title": "Time domain measures of inter-channel EEG correlations: a comparison of linear, nonparametric and nonlinear measures.",
"abstract": "Correlations between ten-channel EEGs obtained from thirteen healthy adult participants were investigated. Signals were obtained in two behavioral states: eyes open no task and eyes closed no task. Four time domain measures were compared: Pearson product moment correlation, Spearman rank order correlation, Kendall rank order correlation and mutual information. The psychophysiological utility of each measure was assessed by determining its ability to discriminate between conditions. The sensitivity to epoch length was assessed by repeating calculations with 1, 2, 3, …, 8 s epochs. The robustness to noise was assessed by performing calculations with noise corrupted versions of the original signals (SNRs of 0, 5 and 10 dB). Three results were obtained in these calculations. First, mutual information effectively discriminated between states with less data. Pearson, Spearman and Kendall failed to discriminate between states with a 1 s epoch, while a statistically significant separation was obtained with mutual information. Second, at all epoch durations tested, the measure of between-state discrimination was greater for mutual information. Third, discrimination based on mutual information was more robust to noise. The limitations of this study are discussed. Further comparisons should be made with frequency domain measures, with measures constructed with embedded data and with the maximal information coefficient."
},
{
"pmid": "16873662",
"title": "Reducing the dimensionality of data with neural networks.",
"abstract": "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such \"autoencoder\" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
},
{
"pmid": "12236331",
"title": "Large-scale neural correlates of affective picture processing.",
"abstract": "Hemodynamic and electrophysiological studies indicate differential brain response to emotionally arousing, compared to neutral, pictures. The time course and source distribution of electrocortical potentials in response to emotional stimuli, using a high-density electrode (129-sensor) array were examined here. Event-related potentials (ERPs) were recorded while participants viewed pleasant, neutral, and unpleasant pictures. ERP voltages were examined in six time intervals, roughly corresponding to P1, N1, early P3, late P3 and a slow wave window. Differential activity was found for emotional, compared to neutral, pictures at both of the P3 intervals, as well as enhancement of later posterior positivity. Source space projection was performed using a minimum norm procedure that estimates the source currents generating the extracranially measured electrical gradient. Sources of slow wave modulation were located in occipital and posterior parietal cortex, with a right-hemispheric dominance."
},
{
"pmid": "24807454",
"title": "ERNN: a biologically inspired feedforward neural network to discriminate emotion from EEG signal.",
"abstract": "Emotions play an important role in human cognition, perception, decision making, and interaction. This paper presents a six-layer biologically inspired feedforward neural network to discriminate human emotions from EEG. The neural network comprises a shift register memory after spectral filtering for the input layer, and the estimation of coherence between each pair of input signals for the hidden layer. EEG data are collected from 57 healthy participants from eight locations while subjected to audio-visual stimuli. Discrimination of emotions from EEG is investigated based on valence and arousal levels. The accuracy of the proposed neural network is compared with various feature extraction methods and feedforward learning algorithms. The results showed that the highest accuracy is achieved when using the proposed neural network with a type of radial basis function."
},
{
"pmid": "7762889",
"title": "The emotion probe. Studies of motivation and attention.",
"abstract": "Emotions are action dispositions--states of vigilant readiness that vary widely in reported affect, physiology, and behavior. They are driven, however, by only 2 opponent motivational systems, appetitive and aversive--subcortical circuits that mediate reactions to primary reinforcers. Using a large emotional picture library, reliable affective psychophysiologies are shown, defined by the judged valence (appetitive/pleasant or aversive/unpleasant) and arousal of picture percepts. Picture-evoked affects also modulate responses to independently presented startle probe stimuli. In other words, they potentiate startle reflexes during unpleasant pictures and inhibit them during pleasant pictures, and both effects are augmented by high picture arousal. Implications are elucidated for research in basic emotions, psychopathology, and theories of orienting and defense. Conclusions highlight both the approach's constraints and promising paths for future study."
},
{
"pmid": "29758974",
"title": "Emotion recognition from multichannel EEG signals using K-nearest neighbor classification.",
"abstract": "BACKGROUND\nMany studies have been done on the emotion recognition based on multi-channel electroencephalogram (EEG) signals.\n\n\nOBJECTIVE\nThis paper explores the influence of the emotion recognition accuracy of EEG signals in different frequency bands and different number of channels.\n\n\nMETHODS\nWe classified the emotional states in the valence and arousal dimensions using different combinations of EEG channels. Firstly, DEAP default preprocessed data were normalized. Next, EEG signals were divided into four frequency bands using discrete wavelet transform, and entropy and energy were calculated as features of K-nearest neighbor Classifier.\n\n\nRESULTS\nThe classification accuracies of the 10, 14, 18 and 32 EEG channels based on the Gamma frequency band were 89.54%, 92.28%, 93.72% and 95.70% in the valence dimension and 89.81%, 92.24%, 93.69% and 95.69% in the arousal dimension. As the number of channels increases, the classification accuracy of emotional states also increases, the classification accuracy of the gamma frequency band is greater than that of the beta frequency band followed by the alpha and theta frequency bands.\n\n\nCONCLUSIONS\nThis paper provided better frequency bands and channels reference for emotion recognition based on EEG."
},
{
"pmid": "28061779",
"title": "A motion-classification strategy based on sEMG-EEG signal combination for upper-limb amputees.",
"abstract": "BACKGROUND\nMost of the modern motorized prostheses are controlled with the surface electromyography (sEMG) recorded on the residual muscles of amputated limbs. However, the residual muscles are usually limited, especially after above-elbow amputations, which would not provide enough sEMG for the control of prostheses with multiple degrees of freedom. Signal fusion is a possible approach to solve the problem of insufficient control commands, where some non-EMG signals are combined with sEMG signals to provide sufficient information for motion intension decoding. In this study, a motion-classification method that combines sEMG and electroencephalography (EEG) signals were proposed and investigated, in order to improve the control performance of upper-limb prostheses.\n\n\nMETHODS\nFour transhumeral amputees without any form of neurological disease were recruited in the experiments. Five motion classes including hand-open, hand-close, wrist-pronation, wrist-supination, and no-movement were specified. During the motion performances, sEMG and EEG signals were simultaneously acquired from the skin surface and scalp of the amputees, respectively. The two types of signals were independently preprocessed and then combined as a parallel control input. Four time-domain features were extracted and fed into a classifier trained by the Linear Discriminant Analysis (LDA) algorithm for motion recognition. In addition, channel selections were performed by using the Sequential Forward Selection (SFS) algorithm to optimize the performance of the proposed method.\n\n\nRESULTS\nThe classification performance achieved by the fusion of sEMG and EEG signals was significantly better than that obtained by single signal source of either sEMG or EEG. An increment of more than 14% in classification accuracy was achieved when using a combination of 32-channel sEMG and 64-channel EEG. Furthermore, based on the SFS algorithm, two optimized electrode arrangements (10-channel sEMG + 10-channel EEG, 10-channel sEMG + 20-channel EEG) were obtained with classification accuracies of 84.2 and 87.0%, respectively, which were about 7.2 and 10% higher than the accuracy by using only 32-channel sEMG input.\n\n\nCONCLUSIONS\nThis study demonstrated the feasibility of fusing sEMG and EEG signals towards improving motion classification accuracy for above-elbow amputees, which might enhance the control performances of multifunctional myoelectric prostheses in clinical application.\n\n\nTRIAL REGISTRATION\nThe study was approved by the ethics committee of Institutional Review Board of Shenzhen Institutes of Advanced Technology, and the reference number is SIAT-IRB-150515-H0077."
},
{
"pmid": "20442037",
"title": "EEG-based emotion recognition in music listening.",
"abstract": "Ongoing brain activity can be recorded as electroencephalograph (EEG) to discover the links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subject self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. Support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an averaged classification accuracy of 82.29% +/- 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and the parietal lobes, consistent with many of the findings in the literature. This study might lead to a practical system for noninvasive assessment of the emotional states in practical or clinical applications."
},
{
"pmid": "15082325",
"title": "Human emotion and memory: interactions of the amygdala and hippocampal complex.",
"abstract": "The amygdala and hippocampal complex, two medial temporal lobe structures, are linked to two independent memory systems, each with unique characteristic functions. In emotional situations, these two systems interact in subtle but important ways. Specifically, the amygdala can modulate both the encoding and the storage of hippocampal-dependent memories. The hippocampal complex, by forming episodic representations of the emotional significance and interpretation of events, can influence the amygdala response when emotional stimuli are encountered. Although these are independent memory systems, they act in concert when emotion meets memory."
},
{
"pmid": "29080913",
"title": "Towards Efficient Decoding of Multiple Classes of Motor Imagery Limb Movements Based on EEG Spectral and Time Domain Descriptors.",
"abstract": "To control multiple degrees of freedom (MDoF) upper limb prostheses, pattern recognition (PR) of electromyogram (EMG) signals has been successfully applied. This technique requires amputees to provide sufficient EMG signals to decode their limb movement intentions (LMIs). However, amputees with neuromuscular disorder/high level amputation often cannot provide sufficient EMG control signals, and thus the applicability of the EMG-PR technique is limited especially to this category of amputees. As an alternative approach, electroencephalograph (EEG) signals recorded non-invasively from the brain have been utilized to decode the LMIs of humans. However, most of the existing EEG based limb movement decoding methods primarily focus on identifying limited classes of upper limb movements. In addition, investigation on EEG feature extraction methods for the decoding of multiple classes of LMIs has rarely been considered. Therefore, 32 EEG feature extraction methods (including 12 spectral domain descriptors (SDDs) and 20 time domain descriptors (TDDs)) were used to decode multiple classes of motor imagery patterns associated with different upper limb movements based on 64-channel EEG recordings. From the obtained experimental results, the best individual TDD achieved an accuracy of 67.05 ± 3.12% as against 87.03 ± 2.26% for the best SDD. By applying a linear feature combination technique, an optimal set of combined TDDs recorded an average accuracy of 90.68% while that of the SDDs achieved an accuracy of 99.55% which were significantly higher than those of the individual TDD and SDD at p < 0.05. Our findings suggest that optimal feature set combination would yield a relatively high decoding accuracy that may improve the clinical robustness of MDoF neuroprosthesis.\n\n\nTRIAL REGISTRATION\nThe study was approved by the ethics committee of Institutional Review Board of Shenzhen Institutes of Advanced Technology, and the reference number is SIAT-IRB-150515-H0077."
},
{
"pmid": "29958457",
"title": "A Review of Emotion Recognition Using Physiological Signals.",
"abstract": "Emotion recognition based on physiological signals has been a hot topic and applied in many areas such as safe driving, health care and social security. In this paper, we present a comprehensive review on physiological signal-based emotion recognition, including emotion models, emotion elicitation methods, the published emotional physiological datasets, features, classifiers, and the whole framework for emotion recognition based on the physiological signals. A summary and comparation among the recent studies has been conducted, which reveals the current existing problems and the future work has been discussed."
},
{
"pmid": "24348375",
"title": "EEG theta and Mu oscillations during perception of human and robot actions.",
"abstract": "The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore neural basis of action processing on the one hand, and inform the design of social robots on the other."
},
{
"pmid": "20807577",
"title": "A better oscillation detection method robustly extracts EEG rhythms across brain state changes: the human alpha rhythm as a test case.",
"abstract": "Oscillatory activity is a principal mode of operation in the brain. Despite an intense resurgence of interest in the mechanisms and functions of brain rhythms, methods for the detection and analysis of oscillatory activity in neurophysiological recordings are still highly variable across studies. We recently proposed a method for detecting oscillatory activity from time series data, which we call the BOSC (Better OSCillation detection) method. This method produces systematic, objective, and consistent results across frequencies, brain regions and tasks. It does so by modeling the functional form of the background spectrum by fitting the empirically observed spectrum at the recording site. This minimizes bias in oscillation detection across frequency, region and task. Here we show that the method is also robust to dramatic changes in state that are known to influence the shape of the power spectrum, namely, the presence versus absence of the alpha rhythm, and can be applied to independent components, which are thought to reflect underlying sources, in addition to individual raw signals. This suggests that the BOSC method is an effective tool for measuring changes in rhythmic activity in the more common research scenario wherein state is unknown."
},
{
"pmid": "28443015",
"title": "Cross-Subject EEG Feature Selection for Emotion Recognition Using Transfer Recursive Feature Elimination.",
"abstract": "Using machine-learning methodologies to analyze EEG signals becomes increasingly attractive for recognizing human emotions because of the objectivity of physiological data and the capability of the learning principles on modeling emotion classifiers from heterogeneous features. However, the conventional subject-specific classifiers may induce additional burdens to each subject for preparing multiple-session EEG data as training sets. To this end, we developed a new EEG feature selection approach, transfer recursive feature elimination (T-RFE), to determine a set of the most robust EEG indicators with stable geometrical distribution across a group of training subjects and a specific testing subject. A validating set is introduced to independently determine the optimal hyper-parameter and the feature ranking of the T-RFE model aiming at controlling the overfitting. The effectiveness of the T-RFE algorithm for such cross-subject emotion classification paradigm has been validated by DEAP database. With a linear least square support vector machine classifier implemented, the performance of the T-RFE is compared against several conventional feature selection schemes and the statistical significant improvement has been found. The classification rate and F-score achieve 0.7867, 0.7526, 0.7875, and 0.8077 for arousal and valence dimensions, respectively, and outperform several recent reported works on the same database. In the end, the T-RFE based classifier is compared against two subject-generic classifiers in the literature. The investigation of the computational time for all classifiers indicates the accuracy improvement of the T-RFE is at the cost of the longer training time."
},
{
"pmid": "24917813",
"title": "Predictable internal brain dynamics in EEG and its relation to conscious states.",
"abstract": "Consciousness is a complex and multi-faceted phenomenon defying scientific explanation. Part of the reason why this is the case is due to its subjective nature. In our previous computational experiments, to avoid such a subjective trap, we took a strategy to investigate objective necessary conditions of consciousness. Our basic hypothesis was that predictive internal dynamics serves as such a condition. This is in line with theories of consciousness that treat retention (memory), protention (anticipation), and primary impression as the tripartite temporal structure of consciousness. To test our hypothesis, we analyzed publicly available sleep and awake electroencephalogram (EEG) data. Our results show that EEG signals from awake or rapid eye movement (REM) sleep states have more predictable dynamics compared to those from slow-wave sleep (SWS). Since awakeness and REM sleep are associated with conscious states and SWS with unconscious or less consciousness states, these results support our hypothesis. The results suggest an intricate relationship among prediction, consciousness, and time, with potential applications to time perception and neurorobotics."
}
] |
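The reference abstracts in the record above repeatedly describe one common EEG classification pipeline: decompose each channel into frequency bands (for example with a discrete wavelet transform), compute energy and entropy per band, and feed the resulting features to a classifier such as K-nearest neighbors. The following is a minimal sketch of that kind of pipeline, not the cited authors' code; the wavelet choice ('db4'), the number of decomposition levels, the feature layout, and the toy data are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def band_features(signal, wavelet="db4", level=4):
    """Energy and Shannon entropy per wavelet sub-band (assumed feature layout)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA4, cD4, cD3, cD2, cD1]
    feats = []
    for c in coeffs:
        energy = np.sum(c ** 2)
        p = (c ** 2) / (energy + 1e-12)            # normalized sub-band power
        entropy = -np.sum(p * np.log2(p + 1e-12))  # Shannon entropy of the sub-band
        feats.extend([energy, entropy])
    return np.array(feats)

def trial_features(trial):
    """trial: array of shape (n_channels, n_samples) -> concatenated channel features."""
    return np.concatenate([band_features(ch) for ch in trial])

# Toy example with random data standing in for preprocessed EEG trials.
rng = np.random.default_rng(0)
X = np.stack([trial_features(rng.standard_normal((10, 512))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # e.g., low/high valence labels

clf = KNeighborsClassifier(n_neighbors=5).fit(X[:30], y[:30])
print("toy accuracy:", clf.score(X[30:], y[30:]))
```

On real recordings the trials would come from a preprocessed dataset such as DEAP rather than random noise, and the channel subset and band selection would follow the study design described in the abstracts.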
JMIR mHealth and uHealth | 31199304 | PMC6592513 | 10.2196/12013 | Development of a Sensor-Based Behavioral Monitoring Solution to Support Dementia Care | Background: Mobile and wearable technology presents exciting opportunities for monitoring behavior using widely available sensor data. This could support clinical research and practice aimed at improving quality of life among the growing number of people with dementia. However, it requires suitable tools for measuring behavior in a natural real-life setting that can be easily implemented by others. Objective: The objectives of this study were to develop and test a set of algorithms for measuring mobility and activity and to describe a technical setup for collecting the sensor data that these algorithms require using off-the-shelf devices. Methods: A mobility measurement module was developed to extract travel trajectories and home location from raw GPS (global positioning system) data and to use this information to calculate a set of spatial, temporal, and count-based mobility metrics. Activity measurement comprises activity bout extraction from recognized activity data and daily step counts. Location, activity, and step count data were collected using smartwatches and mobile phones, relying on open-source resources as far as possible for accessing data from device sensors. The behavioral monitoring solution was evaluated among 5 healthy subjects who simultaneously logged their movements for 1 week. Results: The evaluation showed that the behavioral monitoring solution successfully measures travel trajectories and mobility metrics from location data and extracts multimodal activity bouts during travel between locations. While step count could be used to indicate overall daily activity level, a concern was raised regarding device validity for step count measurement, which was substantially higher from the smartwatches than the mobile phones. Conclusions: This study contributes to clinical research and practice by providing a comprehensive behavioral monitoring solution for use in a real-life setting that can be replicated for a range of applications where knowledge about individual mobility and activity is relevant. | Related Work and Open Challenges: Measurement of mobility and physical activity has traditionally been performed using surveys. This approach is limited by its reliance on patients’ memory and subjective perceptions of values such as the distances they cover or time spent active each day, which is especially problematic among people with cognitive impairment. Surveys require input from both patients and health care professionals and thus tend to be restricted to discrete measurements at widely spaced intervals with no information about changes that occur daily or even weekly or monthly. The last decade has seen significant progress toward sensor-based behavior measurement, including among the elderly and cognitively impaired. Mobility and activity features have been calculated using specialized global positioning system (GPS) kits and ankle-worn accelerometers [4,8,11,12]. Although these works offer valuable contributions toward sensor-based behavioral monitoring, the use of specialized systems or strict protocols regarding device placement to measure behavior under experimental conditions is unrealistic for long-term everyday use and therefore difficult to replicate in a real-world setting. This motivates a growing interest in leveraging the wide availability and acceptance of today’s personal devices [8,13-15].
Smartphones and wearables have successfully been applied to measure activity among older adults under free-living conditions [13], daily step count and distance covered among people with dementia [14], and life space among people with Parkinson disease and mild-to-moderate Alzheimer disease [8,15]. System design considerations for real-world use are addressed in a previous study [14], which demonstrates how adequate data are recorded over an extended period (5 months) to reveal behavioral patterns. In some studies [8,13,15], the sensor-based approach is evaluated by comparing measures between experimental and control groups, indicating that significant change in sensor-based behavioral measures might be detected with disease onset/progression; however, no comparison is made with manually reported data. The behavioral measures used vary as follows: activity measures range from daily steps to more detailed descriptions of active and sedentary states; and life space measures range from basic distances to trips or time frames away from home but without extraction of travel trajectories in their estimation. Instead, a threshold distance from home is used to determine whether points are at or away from home. A high threshold (such as 500 m in one study [15]) may not detect trips within the subject’s neighborhood. Even with a lower threshold (such as 25 m in another study [8]), it is not possible to infer how many places the subject visited if they did not travel home between places, or whether they are continuously moving (eg, going for a long walk) compared with staying at a single location (eg, visiting a friend, in hospital). Without reference data such as self-reports, it is difficult to evaluate the performance of such methods (a minimal sketch of this home-distance thresholding appears after this record). This study therefore aimed to advance progress toward mobile/wearable technology–based behavioral monitoring by building upon noted strengths regarding real-world suitability, extending mobility measurement to incorporate GPS trajectory extraction, and providing evidence comparing sensor-derived measures with reference data in the form of self-reports. | [
"30033050",
"28546139",
"23548944",
"20145017",
"22874772",
"24356464",
"23710796",
"28735855",
"25548086",
"28974482",
"28865985",
"25100206",
"22163393",
"22255662",
"25495710",
"24768430",
"24652916",
"25881662",
"24770359",
"14687391",
"20101923"
] | [
{
"pmid": "28546139",
"title": "Cognitive Testing in People at Increased Risk of Dementia Using a Smartphone App: The iVitality Proof-of-Principle Study.",
"abstract": "BACKGROUND\nSmartphone-assisted technologies potentially provide the opportunity for large-scale, long-term, repeated monitoring of cognitive functioning at home.\n\n\nOBJECTIVE\nThe aim of this proof-of-principle study was to evaluate the feasibility and validity of performing cognitive tests in people at increased risk of dementia using smartphone-based technology during a 6 months follow-up period.\n\n\nMETHODS\nWe used the smartphone-based app iVitality to evaluate five cognitive tests based on conventional neuropsychological tests (Memory-Word, Trail Making, Stroop, Reaction Time, and Letter-N-Back) in healthy adults. Feasibility was tested by studying adherence of all participants to perform smartphone-based cognitive tests. Validity was studied by assessing the correlation between conventional neuropsychological tests and smartphone-based cognitive tests and by studying the effect of repeated testing.\n\n\nRESULTS\nWe included 151 participants (mean age in years=57.3, standard deviation=5.3). Mean adherence to assigned smartphone tests during 6 months was 60% (SD 24.7). There was moderate correlation between the firstly made smartphone-based test and the conventional test for the Stroop test and the Trail Making test with Spearman ρ=.3-.5 (P<.001). Correlation increased for both tests when comparing the conventional test with the mean score of all attempts a participant had made, with the highest correlation for Stroop panel 3 (ρ=.62, P<.001). Performance on the Stroop and the Trail Making tests improved over time suggesting a learning effect, but the scores on the Letter-N-back, the Memory-Word, and the Reaction Time tests remained stable.\n\n\nCONCLUSIONS\nRepeated smartphone-assisted cognitive testing is feasible with reasonable adherence and moderate relative validity for the Stroop and the Trail Making tests compared with conventional neuropsychological tests. Smartphone-based cognitive testing seems promising for large-scale data-collection in population studies."
},
{
"pmid": "23548944",
"title": "Mobility, disability, and social engagement in older adults.",
"abstract": "OBJECTIVE\nTo examine cross sectional associations between mobility with or without disability and social engagement in a community-based sample of older adults.\n\n\nMETHODS\nSocial engagement of participants (n = 676) was outside the home (participation in organizations and use of senior centers) and in home (talking by phone and use of Internet). Logistic or proportional odds models evaluated the association between social engagement and position in the disablement process (no mobility limitations, mobility limitations/no disability, and mobility limitations/disability).\n\n\nRESULTS\nLow mobility was associated with lower level of social engagement of all forms (Odds ratio (OR) = 0.59, confidence intervals (CI): 0.41-0.85 for organizations; OR = 0.67, CI: 0.42-1.06 for senior center; OR = 0.47, CI: 0.32-0.70 for phone; OR = 0.38, CI: 0.23-0.65 for Internet). For social engagement outside the home, odds of engagement were further reduced for individuals with disability.\n\n\nDISCUSSION\nLow mobility is associated with low social engagement even in the absence of disability; associations with disability differed by type of social engagement."
},
{
"pmid": "20145017",
"title": "Mobility in older adults: a comprehensive framework.",
"abstract": "Mobility is fundamental to active aging and is intimately linked to health status and quality of life. Although there is widespread acceptance regarding the importance of mobility in older adults, there have been few attempts to comprehensively portray mobility, and research has to a large extent been discipline specific. In this article, a new theoretical framework for mobility is presented with the goals of raising awareness of the complexity of factors that influence mobility and stimulating new integrative and interdisciplinary research ideas. Mobility is broadly defined as the ability to move oneself (e.g., by walking, by using assistive devices, or by using transportation) within community environments that expand from one's home, to the neighborhood, and to regions beyond. The concept of mobility is portrayed through 5 fundamental categories of determinants (cognitive, psychosocial, physical, environmental, and financial), with gender, culture, and biography (personal life history) conceptualized as critical cross-cutting influences. Each category of determinants consists of an increasing number of factors, demonstrating greater complexity, as the mobility environment expands farther from the home. The framework illustrates how mobility impairments can lead to limitations in accessing different life-spaces and stresses the associations among determinants that influence mobility. By bridging disciplines and representing mobility in an inclusive manner, the model suggests that research needs to be more interdisciplinary and current mobility findings should be interpreted more comprehensively, and new more complex strategies should be developed to address mobility concerns."
},
{
"pmid": "22874772",
"title": "Caregiving burden and out-of-home mobility of cognitively impaired care-recipients based on GPS tracking.",
"abstract": "BACKGROUND\nOut-of-home mobility refers to the realization of trips outside the home, by foot or by other means of transportation. Although out-of-home mobility is important for the well-being of older people with cognitive impairment, its importance for their caregivers is not clear. This study aims to clarify the relationship between caregiving burden and out-of-home mobility of care-recipients using Global Positioning Systems (GPS) technology.\n\n\nMETHODS\nSeventy-six dyads (care-recipients and caregivers) were recruited from a psychogeriatric center, where they underwent cognitive assessment, followed by psychosocial interviews at home. Care-recipients received GPS tracking kits to carry for a period of four weeks, whenever they left home. Mobility data and diagnostic and psychosocial data were examined in relation to caregiver burden.\n\n\nRESULTS\nThe strongest predictors of burden were care-recipients' lower cognitive status and more time spent walking out-of-home. An interaction was found between cognitive status and time spent walking in relation to caregiver burden. The relationship between walking and burden was stronger among caregivers of care-recipients with dementia than caregivers of care-recipients with no cognitive impairment or mild cognitive impairment. Care-recipients' behavioral and emotional states were also positively related to caregiver burden.\n\n\nCONCLUSIONS\nThe findings stress the importance of maintaining older persons' out-of-home mobility during cognitive decline."
},
{
"pmid": "24356464",
"title": "Measuring life space in older adults with mild-to-moderate Alzheimer's disease using mobile phone GPS.",
"abstract": "BACKGROUND\nAs an indicator of physical and cognitive functioning in community-dwelling older adults, there is increasing interest in measuring life space, defined as the geographical area a person covers in daily life. Typically measured through questionnaires, life space can be challenging to assess in amnestic dementia associated with Alzheimer's disease (AD). While global positioning system (GPS) technology has been suggested as a potential solution, there remains a lack of data validating GPS-based methods to measure life space in cognitively impaired populations.\n\n\nOBJECTIVE\nThe purpose of the study was to evaluate the construct validity of a GPS system to provide quantitative measurements of global movement for individuals with mild-to-moderate AD.\n\n\nMETHODS\nNineteen community-dwelling older adults with mild-to-moderate AD (Mini-Mental State Examination score 14-28, age 70.7 ± 2.2 years) and 33 controls (CTL; age 74.0 ± 1.2 years) wore a GPS-enabled mobile phone during the day for 3 days. Measures of geographical territory (area, perimeter, mean distance from home, and time away from home) were calculated from the GPS log. Following a log-transformation to produce symmetrical distributions, group differences were tested using two-sample t tests. Construct validity of the GPS measures was tested by examining the correlation between the GPS measures and indicators of physical function [steps/day, gait velocity, and Disability Assessment for Dementia (DAD)] and affective state (Apathy Evaluation Scale and Geriatric Depression Scale). Multivariate regression was performed to evaluate the relative strength of significantly correlated factors.\n\n\nRESULTS\nGPS-derived area (p < 0.01), perimeter (p < 0.01), and mean distance from home (p < 0.05) were smaller in the AD group compared to CTL. The correlation analysis found significant associations of the GPS measures area and perimeter with all measures of physical function (steps/day, DAD, and gait velocity; p < 0.01), symptoms of apathy (p < 0.01), and depression (p < 0.05). Multivariate regression analysis indicated that gait velocity and dependence were the strongest variables associated with GPS measures.\n\n\nCONCLUSION\nThis study demonstrated that GPS-derived area and perimeter: (1) distinguished mild-to-moderate AD patients from CTL and (2) were strongly correlated with physical function and affective state. These findings confirm the ability of GPS technology to assess life space behaviour and may be particularly valuable to continuously monitor functional decline associated with neurodegenerative disease, such as AD."
},
{
"pmid": "23710796",
"title": "Goal-oriented cognitive rehabilitation in early-stage dementia: study protocol for a multi-centre single-blind randomised controlled trial (GREAT).",
"abstract": "BACKGROUND\nPreliminary evidence suggests that goal-oriented cognitive rehabilitation (CR) may be a clinically effective intervention for people with early-stage Alzheimer's disease, vascular or mixed dementia and their carers. This study aims to establish whether CR is a clinically effective and cost-effective intervention for people with early-stage dementia and their carers.\n\n\nMETHODS/DESIGN\nIn this multi-centre, single-blind randomised controlled trial, 480 people with early-stage dementia, each with a carer, will be randomised to receive either treatment as usual or cognitive rehabilitation (10 therapy sessions over 3 months, followed by 4 maintenance sessions over 6 months). We will compare the effectiveness of cognitive rehabilitation with that of treatment as usual with regard to improving self-reported and carer-rated goal performance in areas identified as causing concern by people with early-stage dementia; improving quality of life, self-efficacy, mood and cognition of people with early-stage dementia; and reducing stress levels and ameliorating quality of life for carers of participants with early-stage dementia. The incremental cost-effectiveness of goal-oriented cognitive rehabilitation compared to treatment as usual will also be examined.\n\n\nDISCUSSION\nIf the study confirms the benefits and cost-effectiveness of cognitive rehabilitation, it will be important to examine how the goal-oriented cognitive rehabilitation approach can most effectively be integrated into routine health-care provision. Our aim is to provide training and develop materials to support the implementation of this approach following trial completion."
},
{
"pmid": "25548086",
"title": "Out-of-home behavior and cognitive impairment in older adults: findings of the SenTra Project.",
"abstract": "This study explores differences in the out-of-home behavior of community-dwelling older adults with different cognitive impairment. Three levels of complexity of out-of-home behavior were distinguished: (a) mostly automatized walking behavior (low complexity), (b) global out-of-home mobility (medium complexity), and (c) defined units of concrete out-of-home activities, particularly cognitively demanding activities (high complexity). A sample of 257 older adults aged 59 to 91 years (M = 72.9 years, SD = 6.4 years) included 35 persons with early-stage Alzheimer's disease (AD), 76 persons with mild cognitive impairment (MCI), and 146 cognitively healthy persons (CH). Mobility data were gathered by using a GPS tracking device as well as by questionnaire. Predicting cognitive impairment status by out-of-home behavior and a range of confounders by means of multinomial logistic regression revealed that only cognitively demanding activities showed at least a marginally significant difference between MCI and CH and were highly significant between AD and CH."
},
{
"pmid": "28974482",
"title": "Mobile Phone-Based Measures of Activity, Step Count, and Gait Speed: Results From a Study of Older Ambulatory Adults in a Naturalistic Setting.",
"abstract": "BACKGROUND\nCellular mobile telephone technology shows much promise for delivering and evaluating healthcare interventions in cost-effective manners with minimal barriers to access. There is little data demonstrating that these devices can accurately measure clinically important aspects of individual functional status in naturalistic environments outside of the laboratory.\n\n\nOBJECTIVE\nThe objective of this study was to demonstrate that data derived from ubiquitous mobile phone technology, using algorithms developed and previously validated by our lab in a controlled setting, can be employed to continuously and noninvasively measure aspects of participant (subject) health status including step counts, gait speed, and activity level, in a naturalistic community setting. A second objective was to compare our mobile phone-based data against current standard survey-based gait instruments and clinical physical performance measures in order to determine whether they measured similar or independent constructs.\n\n\nMETHODS\nA total of 43 ambulatory, independently dwelling older adults were recruited from Nebraska Medicine, including 25 (58%, 25/43) healthy control individuals from our Engage Wellness Center and 18 (42%, 18/43) functionally impaired, cognitively intact individuals (who met at least 3 of 5 criteria for frailty) from our ambulatory Geriatrics Clinic. The following previously-validated surveys were obtained on study day 1: (1) Late Life Function and Disability Instrument (LLFDI); (2) Survey of Activities and Fear of Falling in the Elderly (SAFFE); (3) Patient Reported Outcomes Measurement Information System (PROMIS), short form version 1.0 Physical Function 10a (PROMIS-PF); and (4) PROMIS Global Health, short form version 1.1 (PROMIS-GH). In addition, clinical physical performance measurements of frailty (10 foot Get up and Go, 4 Meter walk, and Figure-of-8 Walk [F8W]) were also obtained. These metrics were compared to our mobile phone-based metrics collected from the participants in the community over a 24-hour period occurring within 1 week of the initial assessment.\n\n\nRESULTS\nWe identified statistically significant differences between functionally intact and frail participants in mobile phone-derived measures of percent activity (P=.002, t test), active versus inactive status (P=.02, t test), average step counts (P<.001, repeated measures analysis of variance [ANOVA]) and gait speed (P<.001, t test). In functionally intact individuals, the above mobile phone metrics assessed aspects of functional status independent (Bland-Altman and correlation analysis) of both survey- and/or performance battery-based functional measures. In contrast, in frail individuals, the above mobile phone metrics correlated with submeasures of both SAFFE and PROMIS-GH.\n\n\nCONCLUSIONS\nContinuous mobile phone-based measures of participant community activity and mobility strongly differentiate between persons with intact functional status and persons with a frailty phenotype. These measures assess dimensions of functional status independent of those measured using current validated questionnaires and physical performance assessments to identify functional compromise. Mobile phone-based gait measures may provide a more readily accessible and less-time consuming measure of gait, while further providing clinicians with longitudinal gait measures that are currently difficult to obtain."
},
{
"pmid": "28865985",
"title": "Extended, continuous measures of functional status in community dwelling persons with Alzheimer's and related dementia: Infrastructure, performance, tradeoffs, preliminary data, and promise.",
"abstract": "BACKGROUND\nThe past decades have seen phenomenal growth in the availability of inexpensive and powerful personal computing devices. Efforts to leverage these devices to improve health care outcomes promise to remake many aspects of healthcare delivery, but remain in their infancy.\n\n\nNEW METHOD\nWe describe the development of a mobile health platform designed for daily measures of functional status in ambulatory, community dwelling subjects, including those who have Alzheimer's disease or related neurodegenerative disorders. Using Smartwatches and Smartphones we measure subject overall activity and outdoor location (to derive their lifespace). These clinically-relevant measures allow us to track a subject's functional status in their natural environment over prolonged periods of time without repeated visits to healthcare providers. Functional status metrics are integrated with medical information and caregiver reports, which are used by a caregiving team to guide referrals for physician/APRN/NP care. COMPARISON: with Existing Methods We describe the design tradeoffs involved in all aspects of our current system architecture, focusing on decisions with significant impact on system cost, performance, scalability, and user-adherence.\n\n\nRESULTS\nWe provide real-world data from current subject enrollees demonstrating system accuracy and reliability.\n\n\nCONCLUSIONS\nWe document real-world feasibility in a group of men and women with dementia that Smartwatches/Smartphones can provide long-term, relevant clinical data regarding individual functional status. We describe the underlying considerations of this system so that interested organizations can adapt and scale our approach to their needs. Finally, we provide a potential agenda to guide development of future systems."
},
{
"pmid": "25100206",
"title": "Measuring the lifespace of people with Parkinson's disease using smartphones: proof of principle.",
"abstract": "BACKGROUND\nLifespace is a multidimensional construct that describes the geographic area in which a person lives and conducts their activities, and reflects mobility, health, and well-being. Traditionally, it has been measured by asking older people to self-report the length and frequency of trips taken and assistance required. Global Positioning System (GPS) sensors on smartphones have been used to measure Lifespace of older people, but not with people with Parkinson's disease (PD).\n\n\nOBJECTIVE\nThe objective of this study was to investigate whether GPS data collected via smartphones could be used to indicate the Lifespace of people with PD.\n\n\nMETHODS\nThe dataset was supplied via the Michael J Fox Foundation Data Challenge and included 9 people with PD and 7 approximately matched controls. Participants carried smartphones with GPS sensors over two months. Data analysis compared the PD group and the control group. The impact of symptom severity on Lifespace was also investigated.\n\n\nRESULTS\nVisualization methods for comparing Lifespace were developed including scatterplots and heatmaps. Lifespace metrics for comparison included average daily distance, percentage of time spent at home, and number of trips into the community. There were no significant differences between the PD and the control groups on Lifespace metrics. Visual representations of Lifespace were organized based on the self-reported severity of symptoms, suggesting a trend of decreasing Lifespace with increasing PD symptoms.\n\n\nCONCLUSIONS\nLifespace measured by GPS-enabled smartphones may be a useful concept to measure the progression of PD and the impact of various therapies and rehabilitation programs. Directions for future use of GPS-based Lifespace are provided."
},
{
"pmid": "22163393",
"title": "Wearable systems for monitoring mobility-related activities in chronic disease: a systematic review.",
"abstract": "The use of wearable motion sensing technology offers important advantages over conventional methods for obtaining measures of physical activity and/or physical functioning in individuals with chronic diseases. This review aims to identify the actual state of applying wearable systems for monitoring mobility-related activity in individuals with chronic disease conditions. In this review we focus on technologies and applications, feasibility and adherence aspects, and clinical relevance of wearable motion sensing technology. PubMed (Medline since 1990), PEdro, and reference lists of all relevant articles were searched. Two authors independently reviewed randomised trials systematically. The quality of selected articles was scored and study results were summarised and discussed. 163 abstracts were considered. After application of inclusion criteria and full text reading, 25 articles were taken into account in a full text review. Twelve of these papers evaluated walking with pedometers, seven used uniaxial accelerometers to assess physical activity, six used multiaxial accelerometers, and two papers used a combination approach of a pedometer and a multiaxial accelerometer for obtaining overall activity and energy expenditure measures. Seven studies mentioned feasibility and/or adherence aspects. The number of studies that use movement sensors for monitoring of activity patterns in chronic disease (postural transitions, time spent in certain positions or activities) is nonexistent on the RCT level of study design. Although feasible methods for monitoring human mobility are available, evidence-based clinical applications of these methods in individuals with chronic diseases are in need of further development."
},
{
"pmid": "22255662",
"title": "Wireless inertial measurement unit with GPS (WIMU-GPS)--wearable monitoring platform for ecological assessment of lifespace and mobility in aging and disease.",
"abstract": "This paper proposes an innovative ambulatory mobility and activity monitoring approach based on a wearable datalogging platform that combines inertial sensing with GPS tracking to assess the lifespace and mobility profile of individuals in their home and community environments. The components, I/O architecture, sensors and functions of the WIMU-GPS are presented. Outcome variables that can be measured with it are described and illustrated. Data on the power usage, operating autonomy of the WIMU-GPS and the GPS tracking performances and time to first fix of the unit are presented. The study of lifespace and mobility with the WIMU-GPS can potentially provide unique insights into intrapersonal and environmental factors contributing to mobility restriction. On-going studies are underway to establish the validity and reliability of the WIMU-GPS in characterizing the lifespace and mobility profile of older adults."
},
{
"pmid": "25495710",
"title": "Generating GPS activity spaces that shed light upon the mobility habits of older adults: a descriptive analysis.",
"abstract": "BACKGROUND\nMeasuring mobility is critical for understanding neighborhood influences on older adults' health and functioning. Global Positioning Systems (GPS) may represent an important opportunity to measure, describe, and compare mobility patterns in older adults.\n\n\nMETHODS\nWe generated three types of activity spaces (Standard Deviation Ellipse, Minimum Convex Polygon, Daily Path Area) using GPS data from 95 older adults in Vancouver, Canada. Calculated activity space areas and compactness were compared across sociodemographic and resource characteristics.\n\n\nRESULTS\nArea measures derived from the three different approaches to developing activity spaces were highly correlated. Participants who were younger, lived in less walkable neighborhoods, had a valid driver's license, had access to a vehicle, or had physical support to go outside of their homes had larger activity spaces. Mobility space compactness measures also differed by sociodemographic and resource characteristics.\n\n\nCONCLUSIONS\nThis research extends the literature by demonstrating that GPS tracking can be used as a valuable tool to better understand the geographic mobility patterns of older adults. This study informs potential ways to maintain older adult independence by identifying factors that influence geographic mobility."
},
{
"pmid": "24768430",
"title": "Cognitive status moderates the relationship between out-of-home behavior (OOHB), environmental mastery and affect.",
"abstract": "Studies on the relationship between behavioral competence, such as the competence of exerting out-of-home behavior (OOHB), and well-being in older adults have rarely addressed cognitive status as a potentially moderating factor. We included 35 persons with early-stage dementia of the Alzheimer's type (DAT), 76 individuals with mild cognitive impairment (MCI) and 146 cognitively healthy (CH) study participants (grand mean age: M=72.9 years; SD=6.4 years). OOHB indicators were assessed based on a multi-method assessment strategy, using both GPS (global positioning system) tracking technology and structured self-reports. Environmental mastery and positive as well as negative affect served as well-being indicators and were assessed by established questionnaires. Three theoretically postulated OOHB dimensions of different complexity (out-of-home walking behavior, global out-of-home mobility, and out-of-home activities) were supported by confirmatory factor analysis (CFA). We also found in the DAT group that environmental mastery was substantially and positively related to less complex out-of-home walking behavior, which was not the case in MCI and CH individuals. In contrast, more complex out-of-home activities were associated with higher negative affect in the DAT as well as the MCI group, but not in CH persons. These findings point to the possibility that relationships between OOHB and well-being depend on the congruence between available cognitive resources and the complexity of the OOHB dimension considered."
},
{
"pmid": "24652916",
"title": "Identifying Mobility Types in Cognitively Heterogeneous Older Adults Based on GPS-Tracking: What Discriminates Best?",
"abstract": "Heterogeneity in older adults' mobility and its correlates have rarely been investigated based on objective mobility data and in samples including cognitively impaired individuals. We analyzed mobility profiles within a cognitively heterogeneous sample of N = 257 older adults from Israel and Germany based on GPS tracking technology. Participants were aged between 59 and 91 years (M = 72.9; SD = 6.4) and were either cognitively healthy (CH, n = 146), mildly cognitively impaired (MCI, n = 76), or diagnosed with an early-stage dementia of the Alzheimer's type (DAT, n = 35). Based on cluster analysis, we identified three mobility types (\"Mobility restricted,\" \"Outdoor oriented,\" \"Walkers\"), which could be predicted based on socio-demographic indicators, activity, health, and cognitive impairment status using discriminant analysis. Particularly demented individuals and persons with worse health exhibited restrictions in mobility. Our findings contribute to a better understanding of heterogeneity in mobility in old age."
},
{
"pmid": "25881662",
"title": "Validation of Physical Activity Tracking via Android Smartphones Compared to ActiGraph Accelerometer: Laboratory-Based and Free-Living Validation Studies.",
"abstract": "BACKGROUND\nThere is increasing interest in using smartphones as stand-alone physical activity monitors via their built-in accelerometers, but there is presently limited data on the validity of this approach.\n\n\nOBJECTIVE\nThe purpose of this work was to determine the validity and reliability of 3 Android smartphones for measuring physical activity among midlife and older adults.\n\n\nMETHODS\nA laboratory (study 1) and a free-living (study 2) protocol were conducted. In study 1, individuals engaged in prescribed activities including sedentary (eg, sitting), light (sweeping), moderate (eg, walking 3 mph on a treadmill), and vigorous (eg, jogging 5 mph on a treadmill) activity over a 2-hour period wearing both an ActiGraph and 3 Android smartphones (ie, HTC MyTouch, Google Nexus One, and Motorola Cliq). In the free-living study, individuals engaged in usual daily activities over 7 days while wearing an Android smartphone (Google Nexus One) and an ActiGraph.\n\n\nRESULTS\nStudy 1 included 15 participants (age: mean 55.5, SD 6.6 years; women: 56%, 8/15). Correlations between the ActiGraph and the 3 phones were strong to very strong (ρ=.77-.82). Further, after excluding bicycling and standing, cut-point derived classifications of activities yielded a high percentage of activities classified correctly according to intensity level (eg, 78%-91% by phone) that were similar to the ActiGraph's percent correctly classified (ie, 91%). Study 2 included 23 participants (age: mean 57.0, SD 6.4 years; women: 74%, 17/23). Within the free-living context, results suggested a moderate correlation (ie, ρ=.59, P<.001) between the raw ActiGraph counts/minute and the phone's raw counts/minute and a strong correlation on minutes of moderate-to-vigorous physical activity (MVPA; ie, ρ=.67, P<.001). Results from Bland-Altman plots suggested close mean absolute estimates of sedentary (mean difference=-26 min/day of sedentary behavior) and MVPA (mean difference=-1.3 min/day of MVPA) although there was large variation.\n\n\nCONCLUSIONS\nOverall, results suggest that an Android smartphone can provide comparable estimates of physical activity to an ActiGraph in both a laboratory-based and free-living context for estimating sedentary and MVPA and that different Android smartphones may reliably confer similar estimates."
},
{
"pmid": "24770359",
"title": "Measuring large-scale social networks with high resolution.",
"abstract": "This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years-the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data-types measured, and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of multi-channel high-resolution approach to data collection."
},
{
"pmid": "14687391",
"title": "Measuring life-space mobility in community-dwelling older adults.",
"abstract": "OBJECTIVES\nTo evaluate the validity and reliability of a standardized approach for assessing life-space mobility (the University of Alabama at Birmingham Study of Aging Life-Space Assessment (LSA)) and its ability to detect changes in life-space over time in community-dwelling older adults.\n\n\nDESIGN\nProspective, observational cohort study.\n\n\nSETTING\nFive counties (three rural and two urban) in central Alabama.\n\n\nPARTICIPANTS\nCommunity-dwelling Medicare beneficiaries (N=306; 46% male, 43% African American) who completed in-home baseline interviews and 2-week and 6-month telephone follow-up interviews.\n\n\nMEASUREMENTS\nThe LSA assessed the range, independence, and frequency of movement over the 4 weeks preceding assessments. Correlations between the baseline LSA and measures of physical and mental health (physical performance, activities of daily living, instrumental activities of daily living, a global measure of health (the short form-12 question survey), the Geriatric Depression Scale, and comorbidities) established validity. Follow-up LSA scores established short-term test-retest reliability and the ability of the LSA to detect change.\n\n\nRESULTS\nFor all LSA scoring methods, baseline and 2-week follow-up LSA correlations were greater than 0.86 (95% confidence interval=0.82-0.97). Highest correlations with measures of physical performance and function were noted for the LSA scoring method considering all attributes of mobility. The LSA showed both increases and decreases at 6 months.\n\n\nDISCUSSION\nLife-space correlated with observed physical performance and self-reported function. It was stable over a 2-week period yet showed changes at 6 months."
},
{
"pmid": "20101923",
"title": "Global physical activity questionnaire (GPAQ): nine country reliability and validity study.",
"abstract": "PURPOSE\nInstruments to assess physical activity are needed for (inter)national surveillance systems and comparison.\n\n\nMETHODS\nMale and female adults were recruited from diverse sociocultural, educational and economic backgrounds in 9 countries (total n = 2657). GPAQ and the International Physical Activity Questionnaire (IPAQ) were administered on at least 2 occasions. Eight countries assessed criterion validity using an objective measure (pedometer or accelerometer) over 7 days.\n\n\nRESULTS\nReliability coefficients were of moderate to substantial strength (Kappa 0.67 to 0.73; Spearman's rho 0.67 to 0.81). Results on concurrent validity between IPAQ and GPAQ also showed a moderate to strong positive relationship (range 0.45 to 0.65). Results on criterion validity were in the poor-fair (range 0.06 to 0.35). There were some observed differences between sex, education, BMI and urban/rural and between countries.\n\n\nCONCLUSIONS\nOverall GPAQ provides reproducible data and showed a moderate-strong positive correlation with IPAQ, a previously validated and accepted measure of physical activity. Validation of GPAQ produced poor results although the magnitude was similar to the range reported in other studies. Overall, these results indicate that GPAQ is a suitable and acceptable instrument for monitoring physical activity in population health surveillance systems, although further replication of this work in other countries is warranted."
}
] |
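The related-work discussion in the record above contrasts simple home-distance thresholds (25 m or 500 m) with explicit trajectory extraction. As a rough illustration of the threshold approach being criticized, here is a minimal sketch that flags GPS fixes as at home or away and groups consecutive away fixes into trips; the haversine implementation, the 25 m threshold, and the data layout are assumptions for illustration, not the paper's actual algorithm.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def away_trips(fixes, home, threshold_m=25.0):
    """fixes: [(timestamp, lat, lon), ...] in time order; home: (lat, lon).
    Returns (start_ts, end_ts) spans spent beyond threshold_m from home."""
    trips, start = [], None
    for ts, lat, lon in fixes:
        away = haversine_m(lat, lon, home[0], home[1]) > threshold_m
        if away and start is None:
            start = ts
        elif not away and start is not None:
            trips.append((start, ts))
            start = None
    if start is not None:
        trips.append((start, fixes[-1][0]))
    return trips

# Toy example: two fixes near home, two fixes about 1 km east, then back home.
home = (55.6761, 12.5683)
fixes = [(0, 55.6761, 12.5683), (60, 55.6762, 12.5684), (120, 55.6761, 12.5820),
         (180, 55.6760, 12.5825), (240, 55.6761, 12.5683)]
print(away_trips(fixes, home))  # -> [(120, 240)]
```

As the record's related-work text notes, such a scheme cannot tell how many distinct places were visited or whether the person kept moving while away, which is what motivates the trajectory extraction developed in the paper.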
Scientific Reports | 31239499 | PMC6592954 | 10.1038/s41598-019-45708-9 | A 3-D Projection Model for X-ray Dark-field Imaging | The X-ray dark-field signal can be measured with a grating-based Talbot-Lau interferometer. It measures small angle scattering of micrometer-sized oriented structures. Interestingly, the signal is a function not only of the material, but also of the relative orientation of the sample, the X-ray beam direction, and the direction of the interferometer sensitivity. This property is very interesting for potential tomographically reconstructing structures below the imaging resolution. However, tomographic reconstruction itself is a substantial challenge. A key step of the reconstruction algorithm is the inversion of a forward projection model. In this work, we propose a very general 3-D projection model. We derive the projection model under the assumption that the observed scatter distribution has a Gaussian shape. We theoretically show the consistency of our model with existing, more constrained 2-D models. Furthermore, we experimentally show the compatibility of our model with simulations and real dark-field measurements. We believe that this 3-D projection model is an important step towards more flexible trajectories and, by extension, dark-field imaging protocols that are much better applicable in practice. | Related work: X-ray Tomography is performed by rotating either the X-ray setup or the object during the acquisition. This rotation changes the orientation of the object relative to the sensitivity direction. A key difference between traditional X-ray absorption and dark-field is the impact of this relative orientation: X-ray absorption is independent of the relative orientation, while X-ray dark-field depends on it.
This makes a major difference for the choice of reconstruction algorithm. The popular filtered backprojection (FBP) algorithm implicitly assumes that the signal strength is independent of the viewing direction, which does in general not hold for X-ray dark-field imaging.
The tomographic reconstruction, in general, requires the inversion of a projection model. For the angle-dependent dark-field signal, several 2-D projection models were proposed, which are discussed briefly in the following.
Jensen et al.21 first showed the angle dependency of dark-field projections. They rotated the object around the optical axis of the system, and found that the variations in visibility can be described by the first two orders of the Fourier expansion. Shortly afterwards, Revol et al.22 modeled the dark-field scatter by a 2-D Gaussian function and showed that the logarithm of the dark-field signal can be formulated as $$\tilde{V}(\omega )=A+B\cdot \sin^{2}(\omega -\theta ),\qquad (1)$$ where ω is the rotation angle of the fiber around the optical axis, θ is the starting angle of the fiber in the xy-plane (see Fig. 2(a)) and A, B are an isotropic and anisotropic contribution of the scatter, respectively. The projection models21,22 assume that the object is rotated around the optical axis, which limits these models to thin sample layers. Malecki et al.34 investigated the signal formation for the superposition of layers with different fiber orientations. They conclude that the dark-field signal can be represented as the line integral along the beam direction over the anisotropic scattering components.
Figure 2: Sketch of three different 2-D projection models from previous works. The rotation angle is given as ω. The fiber vector is denoted as f, and θ, φ, and Θ denote the fiber angle, respectively. s is the sensitivity direction.
In order to describe the dark-field for thicker objects, Bayer et al.20 proposed another projection model. They showed that the projection of a fibrous structure also depends on the azimuthal angle ϕ. This corresponds to the angle of the fiber projection in the xz plane in Fig. 2(b). They derive the dark-field signal as $$\tilde{V}(\varphi )=A+B\cdot \sin^{2}(\varphi -\omega ).\qquad (2)$$
The third projection model was proposed by Schaff et al.28 and is shown in Fig. 2(c). Here, the grating bars are aligned along the 2-D trajectory, and the dark-field signal is measured along the rotation axis. Schaff et al. approximate this signal as constant with respect to the tomographic rotation, such that the scattering strength only depends on the angle between the fiber and the rotation axis (a small numerical illustration of this sin²-type angular dependence is sketched after the reference list below).
This approximation simplifies the reconstruction, since a normal FBP algorithm can be used. However, for the two other projection models, the resulting signal per voxel varies along the trajectory. 2-D object orientations are in this case reconstructed via iterative reconstruction23–27. Among these works, Bayer et al.23 proposed a method to reconstruct 2-D in-plane orientations of fibers. Hu et al.24 proposed to reconstruct the 3-D orientation by combining two 2-D in-plane scans with different trajectories. X-ray tensor tomography has been proposed by Malecki et al.25, Vogel et al.26, and Wieczorek et al.27 by combining multiple 2-D planes.
Since all projection models describe the dark-field only as a function of one angle, it is only possible to reconstruct a 2-D slice. The reconstruction of the full 3-D distribution of oriented materials requires the combination of scans from several trajectories, which overall leads to quite complex acquisition protocols. Malecki et al.25 reconstructed a scattering tensor by using the model from Revol et al.22 and rotated the sample into a finite number of scattering directions. Hu et al.24 used the model by Bayer et al.20,23 and used two 2-D reconstructions to compute the 3-D fiber direction, while Schaff et al.28 fit a 3-D ellipse to individually reconstructed 2-D slices.
Previous works take different approaches to describe the 3-D nature of X-ray dark-field, ranging from Gaussian distributions21 over a Cartesian basis26 to a spherical harmonics basis27. However, to our knowledge, there exists to date no direct 3-D reconstruction algorithm. One of the reasons for this may be the fact that a reconstruction method requires the inversion of a projection model, which to our knowledge has not been defined yet in 3-D.
The definition of a 3-D model makes it possible to use 3-D dark-field trajectories. For example, the helix is a popular 3-D trajectory with favorable properties in traditional absorption tomography. In this case, Tuy’s condition for absorption imaging can be applied, and the completeness of such a trajectory can be shown35. In principle, a similar approach can be pursued for dark-field tomography if a well-described 3-D trajectory is available. As long as only 2-D trajectories can be used, the best known acquisition schemes that fully measure the scattering orientations are still quite complex36. | [
"18204454",
"19403849",
"27072871",
"25761095",
"29422512",
"28341830",
"23552903",
"25873414",
"20721081",
"24105538",
"25136091",
"26193497",
"28607346",
"21158316",
"22225312",
"23637802",
"20808030",
"27277024",
"23696682",
"25321796"
] | [
{
"pmid": "18204454",
"title": "Hard-X-ray dark-field imaging using a grating interferometer.",
"abstract": "Imaging with visible light today uses numerous contrast mechanisms, including bright- and dark-field contrast, phase-contrast schemes and confocal and fluorescence-based methods. X-ray imaging, on the other hand, has only recently seen the development of an analogous variety of contrast modalities. Although X-ray phase-contrast imaging could successfully be implemented at a relatively early stage with several techniques, dark-field imaging, or more generally scattering-based imaging, with hard X-rays and good signal-to-noise ratio, in practice still remains a challenging task even at highly brilliant synchrotron sources. In this letter, we report a new approach on the basis of a grating interferometer that can efficiently yield dark-field scatter images of high quality, even with conventional X-ray tube sources. Because the image contrast is formed through the mechanism of small-angle scattering, it provides complementary and otherwise inaccessible structural information about the specimen at the micrometre and submicrometre length scale. Our approach is fully compatible with conventional transmission radiography and a recently developed hard-X-ray phase-contrast imaging scheme. Applications to X-ray medical imaging, industrial non-destructive testing and security screening are discussed."
},
{
"pmid": "19403849",
"title": "Fourier X-ray scattering radiography yields bone structural information.",
"abstract": "PURPOSE\nTo characterize certain aspects of the microscopic structures of cortical and trabecular bone by using Fourier x-ray scattering imaging.\n\n\nMATERIALS AND METHODS\nProtocols approved by the National Institutes of Health Animal Care and Use Committee were used to examine ex vivo the hind limb of a rat and the toe of a pig. The Fourier x-ray scattering imaging technique involves the use of a grid mask to modulate the cone beam and Fourier spectral filters to isolate the harmonic images. The technique yields attenuation, scattering, and phase-contrast (PC) images from a single exposure. In the rat tibia cortical bone, the scattering signals from two orthogonal grid orientations were compared by using Wilcoxon signed rank tests. In the pig toe, the heterogeneity of scattering and PC signals was compared between trabecular and compact bone regions of uniform attenuation by using F tests.\n\n\nRESULTS\nIn cortical bone, the scattering signal was significantly higher (P < 10(-15)) when the grid was parallel to the periosteal surface. Trabecular bone, as compared with cortical bone, appeared highly heterogeneous on the scattering (P < 10(-34)) and PC (P < 10(-27)) images.\n\n\nCONCLUSION\nThe ordered alignment of the mineralized collagen fibrils in compact bone was reflected in the anisotropic scattering signal in this bone. In trabecular bone, the porosity of the mineralized matrix accounted for the granular pattern seen on the scattering and PC images."
},
{
"pmid": "27072871",
"title": "Visualization of neonatal lung injury associated with mechanical ventilation using x-ray dark-field radiography.",
"abstract": "Mechanical ventilation (MV) and supplementation of oxygen-enriched gas, often needed in postnatal resuscitation procedures, are known to be main risk factors for impaired pulmonary development in the preterm and term neonates. Unfortunately, current imaging modalities lack in sensitivity for the detection of early stage lung injury. The present study reports a new imaging approach for diagnosis and staging of early lung injury induced by MV and hyperoxia in neonatal mice. The imaging method is based on the Talbot-Lau x-ray grating interferometry that makes it possible to quantify the x-ray small-angle scattering on the air-tissue interfaces. This so-called dark-field signal revealed increasing loss of x-ray small-angle scattering when comparing images of neonatal mice undergoing hyperoxia and MV-O2 with animals kept at room air. The changes in the dark field correlated well with histologic findings and provided superior differentiation than conventional x-ray imaging and lung function testing. The results suggest that x-ray dark-field radiography is a sensitive tool for assessing structural changes in the developing lung. In the future, with further technical developments x-ray dark-field imaging could be an important tool for earlier diagnosis and sensitive monitoring of lung injury in neonates requiring postnatal oxygen or ventilator therapy."
},
{
"pmid": "25761095",
"title": "In Vivo Dark-Field Radiography for Early Diagnosis and Staging of Pulmonary Emphysema.",
"abstract": "OBJECTIVES\nThe aim of this study was to evaluate the suitability of in vivo x-ray dark-field radiography for early-stage diagnosis of pulmonary emphysema in mice. Furthermore, we aimed to analyze how the dark-field signal correlates with morphological changes of lung architecture at distinct stages of emphysema.\n\n\nMATERIALS AND METHODS\nFemale 8- to 10-week-old C57Bl/6N mice were used throughout all experiments. Pulmonary emphysema was induced by orotracheal injection of porcine pancreatic elastase (80-U/kg body weight) (n = 30). Control mice (n = 11) received orotracheal injection of phosphate-buffered saline. To monitor the temporal patterns of emphysema development over time, the mice were imaged 7, 14, or 21 days after the application of elastase or phosphate-buffered saline. X-ray transmission and dark-field images were acquired with a prototype grating-based small-animal scanner. In vivo pulmonary function tests were performed before killing the animals. In addition, lungs were obtained for detailed histopathological analysis, including mean cord length (MCL) quantification as a parameter for the assessment of emphysema. Three blinded readers, all of them experienced radiologists and familiar with dark-field imaging, were asked to grade the severity of emphysema for both dark-field and transmission images.\n\n\nRESULTS\nHistopathology and MCL quantification confirmed the introduction of different stages of emphysema, which could be clearly visualized and differentiated on the dark-field radiograms, whereas early stages were not detected on transmission images. The correlation between MCL and dark-field signal intensities (r = 0.85) was significantly higher than the correlation between MCL and transmission signal intensities (r = 0.37). The readers' visual ratings for dark-field images correlated significantly better with MCL (r = 0.85) than visual ratings for transmission images (r = 0.36). Interreader agreement and the diagnostic accuracy of both quantitative and visual assessment were significantly higher for dark-field imaging than those for conventional transmission images.\n\n\nCONCLUSIONS\nX-ray dark-field radiography can reliably visualize different stages of emphysema in vivo and demonstrates significantly higher diagnostic accuracy for early stages of emphysema than conventional attenuation-based radiography."
},
{
"pmid": "29422512",
"title": "Depiction of pneumothoraces in a large animal model using x-ray dark-field radiography.",
"abstract": "The aim of this study was to assess the diagnostic value of x-ray dark-field radiography to detect pneumothoraces in a pig model. Eight pigs were imaged with an experimental grating-based large-animal dark-field scanner before and after induction of a unilateral pneumothorax. Image contrast-to-noise ratios between lung tissue and the air-filled pleural cavity were quantified for transmission and dark-field radiograms. The projected area in the object plane of the inflated lung was measured in dark-field images to quantify the collapse of lung parenchyma due to a pneumothorax. Means and standard deviations for lung sizes and signal intensities from dark-field and transmission images were tested for statistical significance using Student's two-tailed t-test for paired samples. The contrast-to-noise ratio between the air-filled pleural space of lateral pneumothoraces and lung tissue was significantly higher in the dark-field (3.65 ± 0.9) than in the transmission images (1.13 ± 1.1; p = 0.002). In case of dorsally located pneumothoraces, a significant decrease (-20.5%; p > 0.0001) in the projected area of inflated lung parenchyma was found after a pneumothorax was induced. Therefore, the detection of pneumothoraces in x-ray dark-field radiography was facilitated compared to transmission imaging in a large animal model."
},
{
"pmid": "28341830",
"title": "X-ray Dark-field Radiography - In-Vivo Diagnosis of Lung Cancer in Mice.",
"abstract": "Accounting for about 1.5 million deaths annually, lung cancer is the prevailing cause of cancer deaths worldwide, mostly associated with long-term smoking effects. Numerous small-animal studies are performed currently in order to better understand the pathogenesis of the disease and to develop treatment strategies. Within this letter, we propose to exploit X-ray dark-field imaging as a novel diagnostic tool for the detection of lung cancer on projection radiographs. Here, we demonstrate in living mice bearing lung tumors, that X-ray dark-field radiography provides significantly improved lung tumor detection rates without increasing the number of false-positives, especially in the case of small and superimposed nodules, when compared to conventional absorption-based imaging. While this method still needs to be adapted to larger mammals and finally humans, the technique presented here can already serve as a valuable tool in evaluating novel lung cancer therapies, tested in mice and other small animal models."
},
{
"pmid": "23552903",
"title": "On a dark-field signal generated by micrometer-sized calcifications in phase-contrast mammography.",
"abstract": "We show that a distribution of micrometer-sized calcifications in the human breast which are not visible in clinical x-ray mammography at diagnostic dose levels can produce a significant dark-field signal in a grating-based x-ray phase-contrast imaging setup with a tungsten anode x-ray tube operated at 40 kVp. A breast specimen with invasive ductal carcinoma was investigated immediately after surgery by Talbot-Lau x-ray interferometry with a design energy of 25 keV. The sample contained two tumors which were visible in ultrasound and contrast-agent enhanced MRI but invisible in clinical x-ray mammography, in specimen radiography and in the attenuation images obtained with the Talbot-Lau interferometer. One of the tumors produced significant dark-field contrast with an exposure of 0.85 mGy air-kerma. Staining of histological slices revealed sparsely distributed grains of calcium phosphate with sizes varying between 1 and 40 μm in the region of this tumor. By combining the histological investigations with an x-ray wave-field simulation we demonstrate that a corresponding distribution of grains of calcium phosphate in the form of hydroxylapatite has the ability to produce a dark-field signal which would-to a substantial degree-explain the measured dark-field image. Thus we have found the appearance of new information (compared to attenuation and differential phase images) in the dark-field image. The second tumor in the same sample did not contain a significant fraction of these very fine calcification grains and was invisible in the dark-field image. We conclude that some tumors which are invisible in x-ray absorption mammography might be detected in the x-ray dark-field image at tolerable dose levels."
},
{
"pmid": "25873414",
"title": "Non-invasive differentiation of kidney stone types using X-ray dark-field radiography.",
"abstract": "Treatment of renal calculi is highly dependent on the chemical composition of the stone in question, which is difficult to determine using standard imaging techniques. The objective of this study is to evaluate the potential of scatter-sensitive X-ray dark-field radiography to differentiate between the most common types of kidney stones in clinical practice. Here, we examine the absorption-to-scattering ratio of 118 extracted kidney stones with a laboratory Talbot-Lau Interferometer. Depending on their chemical composition, microscopic growth structure and morphology the various types of kidney stones show strongly varying, partially opposite contrasts in absorption and dark-field imaging. By assessing the microscopic calculi morphology with high resolution micro-computed tomography measurements, we illustrate the dependence of dark-field signal strength on the respective stone type. Finally, we utilize X-ray dark-field radiography as a non-invasive, highly sensitive (100%) and specific (97%) tool for the differentiation of calcium oxalate, uric acid and mixed types of stones, while additionally improving the detectability of radio-lucent calculi. We prove clinical feasibility of the here proposed method by accurately classifying renal stones, embedded within a fresh pig kidney, using dose-compatible measurements and a quick and simple visual inspection."
},
{
"pmid": "20721081",
"title": "On the origin of visibility contrast in x-ray Talbot interferometry.",
"abstract": "The reduction in visibility in x-ray grating interferometry based on the Talbot effect is formulated by the autocorrelation function of spatial fluctuations of a wavefront due to unresolved micron-size structures in samples. The experimental results for microspheres and melamine sponge were successfully explained by this formula with three parameters characterizing the wavefront fluctuations: variance, correlation length, and the Hurst exponent. The ultra-small-angle x-ray scattering of these samples was measured, and the scattering profiles were consistent with the formulation. Furthermore, we discuss the relation between the three parameters and the features of the micron-sized structures. The visibility-reduction contrast observed by x-ray grating interferometry can thus be understood in relation to the structural parameters of the microstructures."
},
{
"pmid": "24105538",
"title": "Projection angle dependence in grating-based X-ray dark-field imaging of ordered structures.",
"abstract": "Over the recent years X-ray differential phase-contrast imaging was developed for the hard X-ray regime as produced from laboratory X-ray sources. The technique uses a grating-based Talbot-Lau interferometer and was shown to yield image contrast gain, which makes it very interesting to the fields of medical imaging and non-destructive testing, respectively. In addition to X-ray attenuation contrast, the differential phase-contrast and dark-field images provide different structural information about a specimen. For the dark-field even at length scales much smaller than the spatial resolution of the imaging system. Physical interpretation of the dark-field information as present in radiographic and tomographic (CT) images requires a detailed look onto the geometric orientation between specimen and the setup. During phase-stepping the drop in intensity modulation, due to local scattering effects within the specimen is reproduced in the dark-field signal. This signal shows strong dependencies on micro-porosity and micro-fibers if these are numerous enough in the object. Since a grating-interferometer using a common unidirectional line grating is sensitive to X-ray scattering in one plane only, the dark-field image is influenced by the fiber orientations with respect to the grating bars, which can be exploited to obtain anisotropic structural information. With this contribution, we attempt to extend existing models for 2D projections to 3D data by analyzing dark-field contrast tomography of anisotropically structured materials such as carbon fiber reinforced carbon (CFRC)."
},
{
"pmid": "25136091",
"title": "Reconstruction of scalar and vectorial components in X-ray dark-field tomography.",
"abstract": "Grating-based X-ray dark-field imaging is a novel technique for obtaining image contrast for object structures at size scales below setup resolution. Such an approach appears particularly beneficial for medical imaging and nondestructive testing. It has already been shown that the dark-field signal depends on the direction of observation. However, up to now, algorithms for fully recovering the orientation dependence in a tomographic volume are still unexplored. In this publication, we propose a reconstruction method for grating-based X-ray dark-field tomography, which models the orientation-dependent signal as an additional observable from a standard tomographic scan. In detail, we extend the tomographic volume to a tensorial set of voxel data, containing the local orientation and contributions to dark-field scattering. In our experiments, we present the first results of several test specimens exhibiting a heterogeneous composition in microstructure, which demonstrates the diagnostic potential of the method."
},
{
"pmid": "26193497",
"title": "Constrained X-ray tensor tomography reconstruction.",
"abstract": "Quite recently, a method has been presented to reconstruct X-ray scattering tensors from projections obtained in a grating interferometry setup. The original publications present a rather specialised approach, for instance by suggesting a single SART-based solver. In this work, we propose a novel approach to solving the inverse problem, allowing the use of other algorithms than SART (like conjugate gradient), a faster tensor recovery, and an intuitive visualisation. Furthermore, we introduce constraint enforcement for X-ray tensor tomography (cXTT) and demonstrate that this yields visually smoother results in comparison to the state-of-art approach, similar to regularisation."
},
{
"pmid": "28607346",
"title": "Non-iterative Directional Dark-field Tomography.",
"abstract": "Dark-field imaging is a scattering-based X-ray imaging method that can be performed with laboratory X-ray tubes. The possibility to obtain information about unresolvable structures has already seen a lot of interest for both medical and material science applications. Unlike conventional X-ray attenuation, orientation dependent changes of the dark-field signal can be used to reveal microscopic structural orientation. To date, reconstruction of the three-dimensional dark-field signal requires dedicated, highly complex algorithms and specialized acquisition hardware. This severely hinders the possible application of orientation-dependent dark-field tomography. In this paper, we show that it is possible to perform this kind of dark-field tomography with common Talbot-Lau interferometer setups by reducing the reconstruction to several smaller independent problems. This allows for the reconstruction to be performed with commercially available software and our findings will therefore help pave the way for a straightforward implementation of orientation-dependent dark-field tomography."
},
{
"pmid": "21158316",
"title": "A grating-based single-shot x-ray phase contrast and diffraction method for in vivo imaging.",
"abstract": "PURPOSE\nThe purpose of this study is to develop a single-shot version of the grating-based phase contrast x-ray imaging method and demonstrate its capability of in vivo animal imaging. Here, the authors describe the principle and experimental results. They show the source of artifacts in the phase contrast signal and optimal designs that minimize them. They also discuss its current limitations and ways to overcome them.\n\n\nMETHODS\nA single lead grid was inserted midway between an x-ray tube and an x-ray camera in the planar radiography setting. The grid acted as a transmission grating and cast periodic dark fringes on the camera. The camera had sufficient spatial resolution to resolve the fringes. Refraction and diffraction in the imaged object manifested as position shifts and amplitude attenuation of the fringes, respectively. In order to quantify these changes precisely without imposing a fixed geometric relationship between the camera pixel array and the fringes, a spatial harmonic method in the Fourier domain was developed. The level of the differential phase (refraction) contrast as a function of hardware specifications and device geometry was derived and used to guide the optimal placement of the grid and object. Both ex vivo and in vivo images of rodent extremities were collected to demonstrate the capability of the method. The exposure time using a 50 W tube was 28 s.\n\n\nRESULTS\nDifferential phase contrast images of glass beads acquired at various grid and object positions confirmed theoretical predictions of how phase contrast and extraneous artifacts vary with the device geometry. In anesthetized rats, a single exposure yielded artifact-free images of absorption, differential phase contrast, and diffraction. Differential phase contrast was strongest at bone-soft tissue interfaces, while diffraction was strongest in bone.\n\n\nCONCLUSIONS\nThe spatial harmonic method allowed us to obtain absorption, differential phase contrast, and diffraction images, all from a single raw image and is feasible in live animals. Because the sensitivity of the method scales with the density of the gratings, custom microfabricated gratings should be superior to off-the-shelf lead grids."
},
{
"pmid": "22225312",
"title": "Multicontrast x-ray computed tomography imaging using Talbot-Lau interferometry without phase stepping.",
"abstract": "PURPOSE\nThe purpose of this work is to demonstrate that multicontrast computed tomography (CT) imaging can be performed using a Talbot-Lau interferometer without phase stepping, thus allowing for an acquisition scheme like that used for standard absorption CT.\n\n\nMETHODS\nRather than using phase stepping to extract refraction, small-angle scattering (SAS), and absorption signals, the two gratings of a Talbot-Lau interferometer were rotated slightly to generate a moiré pattern on the detector. A Fourier analysis of the moiré pattern was performed to obtain separate projection images of each of the three contrast signals, all from the same single-shot of x-ray exposure. After the signals were extracted from the detector data for all view angles, image reconstruction was performed to obtain absorption, refraction, and SAS CT images. A physical phantom was scanned to validate the proposed data acquisition method. The results were compared with a phantom scan using the standard phase stepping approach.\n\n\nRESULTS\nThe reconstruction of each contrast mechanism produced the expected results. Signal levels and contrasts match those obtained using the phase stepping technique.\n\n\nCONCLUSIONS\nAbsorption, refraction, and SAS CT imaging can be achieved using the Talbot-Lau interferometer without the additional overhead of long scan time and phase stepping."
},
{
"pmid": "23637802",
"title": "Coherent superposition in grating-based directional dark-field imaging.",
"abstract": "X-ray dark-field scatter imaging allows to gain information on the average local direction and anisotropy of micro-structural features in a sample well below the actual detector resolution. For thin samples the morphological interpretation of the signal is straight forward, provided that only one average orientation of sub-pixel features is present in the specimen. For thick samples, however, where the x-ray beam may pass structures of many different orientations and dimensions, this simple assumption in general does not hold and a quantitative description of the resulting directional dark-field signal is required to draw deductions on the morphology. Here we present a description of the signal formation for thick samples with many overlying structures and show its validity in experiment. In contrast to existing experimental work this description follows from theoretical predictions of a numerical study using a Fourier optics approach. One can easily extend this description and perform a quantitative structural analysis of clinical or materials science samples with directional dark-field imaging or even direction-dependent dark-field CT."
},
{
"pmid": "20808030",
"title": "Quantitative x-ray dark-field computed tomography.",
"abstract": "The basic principles of x-ray image formation in radiology have remained essentially unchanged since Röntgen first discovered x-rays over a hundred years ago. The conventional approach relies on x-ray attenuation as the sole source of contrast and draws exclusively on ray or geometrical optics to describe and interpret image formation. Phase-contrast or coherent scatter imaging techniques, which can be understood using wave optics rather than ray optics, offer ways to augment or complement the conventional approach by incorporating the wave-optical interaction of x-rays with the specimen. With a recently developed approach based on x-ray optical gratings, advanced phase-contrast and dark-field scatter imaging modalities are now in reach for routine medical imaging and non-destructive testing applications. To quantitatively assess the new potential of particularly the grating-based dark-field imaging modality, we here introduce a mathematical formalism together with a material-dependent parameter, the so-called linear diffusion coefficient and show that this description can yield quantitative dark-field computed tomography (QDFCT) images of experimental test phantoms."
},
{
"pmid": "27277024",
"title": "A beam hardening and dispersion correction for x-ray dark-field radiography.",
"abstract": "PURPOSE\nX-ray dark-field imaging promises information on the small angle scattering properties even of large samples. However, the dark-field image is correlated with the object's attenuation and phase-shift if a polychromatic x-ray spectrum is used. A method to remove part of these correlations is proposed.\n\n\nMETHODS\nThe experimental setup for image acquisition was modeled in a wave-field simulation to quantify the dark-field signals originating solely from a material's attenuation and phase-shift. A calibration matrix was simulated for ICRU46 breast tissue. Using the simulated data, a dark-field image of a human mastectomy sample was corrected for the finger print of attenuation- and phase-image.\n\n\nRESULTS\nComparing the simulated, attenuation-based dark-field values to a phantom measurement, a good agreement was found. Applying the proposed method to mammographic dark-field data, a reduction of the dark-field background and anatomical noise was achieved. The contrast between microcalcifications and their surrounding background was increased.\n\n\nCONCLUSIONS\nThe authors show that the influence of and dispersion can be quantified by simulation and, thus, measured image data can be corrected. The simulation allows to determine the corresponding dark-field artifacts for a wide range of setup parameters, like tube-voltage and filtration. The application of the proposed method to mammographic dark-field data shows an increase in contrast compared to the original image, which might simplify a further image-based diagnosis."
},
{
"pmid": "23696682",
"title": "Pulmonary emphysema diagnosis with a preclinical small-animal X-ray dark-field scatter-contrast scanner.",
"abstract": "PURPOSE\nTo test the hypothesis that the joint distribution of x-ray transmission and dark-field signals obtained with a compact cone-beam preclinical scanner with a polychromatic source can be used to diagnose pulmonary emphysema in ex vivo murine lungs.\n\n\nMATERIALS AND METHODS\nThe animal care committee approved this study. Three excised murine lungs with pulmonary emphysema and three excised murine control lungs were imaged ex vivo by using a grating-based micro-computed tomographic (CT) scanner. To evaluate the diagnostic value, the natural logarithm of relative transmission and the natural logarithm of dark-field scatter signal were plotted on a per-pixel basis on a scatterplot. Probability density function was fit to the joint distribution by using principle component analysis. An emphysema map was calculated based on the fitted probability density function.\n\n\nRESULTS\nThe two-dimensional scatterplot showed a characteristic difference between control and emphysematous lungs. Control lungs had lower average median logarithmic transmission (-0.29 vs -0.18, P = .1) and lower average dark-field signal (-0.54 vs -0.37, P = .1) than emphysematous lungs. The angle to the vertical axis of the fitted regions also varied significantly (7.8° for control lungs vs 15.9° for emphysematous lungs). The calculated emphysema distribution map showed good agreement with histologic findings.\n\n\nCONCLUSION\nX-ray dark-field scatter images of murine lungs obtained with a preclinical scanner can be used in the diagnosis of pulmonary emphysema.\n\n\nSUPPLEMENTAL MATERIAL\nhttp://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.13122413/-/DC1."
},
{
"pmid": "25321796",
"title": "Simulation framework for coherent and incoherent X-ray imaging and its application in Talbot-Lau dark-field imaging.",
"abstract": "A simulation framework for coherent X-ray imaging, based on scalar diffraction theory, is presented. It contains a core C++ library and an additional Python interface. A workflow is presented to include contributions of inelastic scattering obtained with Monte-Carlo methods. X-ray Talbot-Lau interferometry is the primary focus of the framework. Simulations are in agreement with measurements obtained with such an interferometer. Especially, the dark-field signal of densely packed PMMA microspheres is predicted. A realistic modeling of the microsphere distribution, which is necessary for correct results, is presented. The framework can be used for both setup design and optimization but also to test and improve reconstruction methods."
}
] |
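To make the angular dependence reviewed in the row above concrete, the following minimal Python sketch fits the 2-D projection model of Eq. (1), V~(ω) = A + B·sin²(ω − θ), to synthetic visibility data. It is an illustration only, not code from the paper: the function name dark_field_model, the simulated rotation angles, the noise level, and the parameter values (A = 0.2, B = 0.8, θ = 30°) are all assumptions made for this example.

```python
# Minimal illustration (not from the paper): least-squares fit of the 2-D
# directional dark-field model of Eq. (1) to synthetic log-visibility data.
# A is the isotropic scatter contribution, B the anisotropic one, and theta
# the in-plane fiber angle; all values below are invented for the example.
import numpy as np
from scipy.optimize import curve_fit

def dark_field_model(omega, A, B, theta):
    """Eq. (1): V~(omega) = A + B * sin^2(omega - theta)."""
    return A + B * np.sin(omega - theta) ** 2

# Synthetic "measurement": a fiber at 30 degrees plus Gaussian noise.
rng = np.random.default_rng(seed=0)
omega = np.linspace(0.0, np.pi, 36)            # tomographic rotation angles (rad)
true_signal = dark_field_model(omega, A=0.2, B=0.8, theta=np.radians(30.0))
measured = true_signal + rng.normal(scale=0.02, size=omega.size)

# Fit A, B, theta; note theta is only determined modulo pi because of sin^2.
(A_fit, B_fit, theta_fit), _ = curve_fit(dark_field_model, omega, measured,
                                         p0=[0.1, 0.5, 0.0])
print(f"A = {A_fit:.3f}, B = {B_fit:.3f}, theta = {np.degrees(theta_fit):.1f} deg")
```

A fit of this form recovers only an in-plane orientation; as the related-work text notes, recovering full 3-D fiber orientations requires combining several such 2-D measurements or, as the row's paper proposes, a projection model formulated directly in 3-D.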
Scientific Reports | 31249365 | PMC6597553 | 10.1038/s41598-019-45814-8 | Trader as a new optimization algorithm predicts drug-target interactions efficiently | Several machine learning approaches have been proposed for predicting new benefits of existing drugs. Although these methods have introduced new usages for some medications, more efficient methods could lead to more accurate predictions. To this end, we proposed a novel machine learning method which is based on a new optimization algorithm, named Trader. To show the capabilities of the proposed algorithm, which can be applied to different fields of science, it was compared with ten other state-of-the-art optimization algorithms on standard and advanced benchmark functions. Next, a multi-layer artificial neural network was designed and trained by Trader to predict drug-target interactions (DTIs). Finally, the functionality of the proposed method was investigated on several DTI datasets and compared with other methods. The data obtained by Trader showed that it eliminates the disadvantages of different optimization algorithms, resulting in a better outcome. Further, the proposed machine learning method was found to achieve a significant level of performance compared to other popular and efficient approaches in predicting unknown DTIs. All the implemented source codes are freely available at https://github.com/LBBSoft/Trader. | Related Works: Our proposed method, a combination of an artificial neural network and the Trader optimization algorithm (ANNTR), falls into the data-mining class of drug repositioning and focuses on predicting DTIs. This section reviews the related literature from the data-mining viewpoint. The existing investigations can be categorized into six classes, as follows:
i) Learner-based methods: In these studies, learners such as deep learning6,7, support vector machines8–11, regression algorithms12, k-nearest neighbors13, the rotation forest learner11, and the relevance vector machine14 aim to find the relationship between input and output using labeled datasets. The acquired model is evaluated and then applied to predict unknown DTIs. Since every learner uses a different method for separating samples, their results differ from each other. The biggest weakness of these works is the need to generate negative datasets and to build a model on them: the error rate increases because a drug-target pair placed in the generated negative dataset may in fact interact. To tackle this restriction, one-class classification machine learning approaches can be used15. Although the reported results are acceptable, the accuracy of these methods remains limited. To enhance the prediction accuracy, we introduce an efficient machine learning method based on a new optimization algorithm, called "Trader", together with an artificial neural network.
ii) Network-based methods: These studies formulate drugs and their various targets (genes, proteins, enzymes, metabolic pathways, etc.) as a network and then analyze it to obtain new information. In a series of related works, the designed network is examined by algorithms such as random walk16,17 and random forest18. Unlike the first class of works, which depends on a negative dataset19, the second group only considers the existing information. As a result, the error of the second category is lower than that of the first one. Nevertheless, the performance of the first category is higher than that of the second group.
iii) Prioritization-based methods: These studies calculate drug-drug, network-network, or target-target similarities. Candidate drugs are ranked based on the acquired scores and then suggested for treating diseases. To compute the scores, the chemical information of drugs, the topological information of networks, and the sequence information of targets are examined20. Considering different studies, it can be concluded that similarity is not the only determinant factor in drug repositioning. Hence, the false-positive rates of prioritization-based methods are high. To overcome this restriction, some studies integrate different types of information and then calculate the similarity scores21.
iv) Mathematics- and probability-based methods: These studies formulate the problem as a graph and then mine it to obtain new information22. Such methods run into difficulties when there are orphan nodes in the generated graph. To deal with this constraint, a matrix regularization and factorization method may be useful23.
v) Ensemble-based methods: It has been shown that a proper combination of machine learning methods usually leads to better results in computer science problems. Inspired by this idea, some researchers have predicted DTIs using a combination of the above-mentioned classes24–26. Although these methods enhance the separability power of a drug-target predictor, they increase the error rate and suffer from the disadvantages of the combined methods.
vi) Review-based approaches: A large number of drug-target prediction studies are review articles which have investigated the problem from various viewpoints, such as applied tools27, methods28, databases, and software applications29. These articles usually include a discussion of the advantages and disadvantages of the proposed methods and give some directions to be followed in the future30. | [
"23384594",
"27334201",
"23493874",
"22889966",
"25946000",
"30255785",
"30565313",
"28192537",
"27842479",
"28787438",
"28770474",
"30943889",
"29897326",
"30528728",
"29483097",
"30598065",
"28095781",
"29054256",
"30583320",
"30337070",
"30209997",
"30129407",
"18586719",
"10592173",
"16381955",
"20529913",
"29974489",
"24809305",
"908462",
"18514499",
"25249292"
] | [
{
"pmid": "23384594",
"title": "Structure and dynamics of molecular networks: a novel paradigm of drug discovery: a comprehensive review.",
"abstract": "Despite considerable progress in genome- and proteome-based high-throughput screening methods and in rational drug design, the increase in approved drugs in the past decade did not match the increase of drug development costs. Network description and analysis not only give a systems-level understanding of drug action and disease complexity, but can also help to improve the efficiency of drug design. We give a comprehensive assessment of the analytical tools of network topology and dynamics. The state-of-the-art use of chemical similarity, protein structure, protein-protein interaction, signaling, genetic interaction and metabolic networks in the discovery of drug targets is summarized. We propose that network targeting follows two basic strategies. The \"central hit strategy\" selectively targets central nodes/edges of the flexible networks of infectious agents or cancer cells to kill them. The \"network influence strategy\" works against other diseases, where an efficient reconfiguration of rigid networks needs to be achieved by targeting the neighbors of central nodes/edges. It is shown how network techniques can help in the identification of single-target, edgetic, multi-target and allo-network drug target candidates. We review the recent boom in network methods helping hit identification, lead selection optimizing drug efficacy, as well as minimizing side-effects and drug toxicity. Successful network-based drug development strategies are shown through the examples of infections, cancer, metabolic diseases, neurodegenerative diseases and aging. Summarizing >1200 references we suggest an optimized protocol of network-aided drug development, and provide a list of systems-level hallmarks of drug quality. Finally, we highlight network-related drug development trends helping to achieve these hallmarks by a cohesive, global approach."
},
{
"pmid": "27334201",
"title": "Molecular Docking for Identification of Potential Targets for Drug Repurposing.",
"abstract": "Using existing drugs for new indications (drug repurposing) is an effective method not only to reduce drug development time and costs but also to develop treatments for new disease including those that are rare. In order to discover novel indications, potential target identification is a necessary step. One widely used method to identify potential targets is through molecule docking. It requires no prior information except structure inputs from both the drug and the target, and can identify potential targets for a given drug, or identify potential drugs for a specific target. Though molecular docking is popular for drug development and repurposing, challenges remain for the method. In order to improve the prediction accuracy, optimizing the target conformation, considering the solvents and adding cobinders to the system are possible solutions."
},
{
"pmid": "23493874",
"title": "Network-based drug repositioning.",
"abstract": "Network-based computational biology, with the emphasis on biomolecular interactions and omics-data integration, has had success in drug development and created new directions such as drug repositioning and drug combination. Drug repositioning, i.e., revealing a drug's new roles, is increasingly attracting much attention from the pharmaceutical community to tackle the problems of high failure rate and long-term development in drug discovery. While drug combination or drug cocktails, i.e., combining multiple drugs against diseases, mainly aims to alleviate the problems of the recurrent emergence of drug resistance and also reveal their synergistic effects. In this paper, we unify the two topics to reveal new roles of drug interactions from a network perspective by treating drug combination as another form of drug repositioning. In particular, first, we emphasize that rationally repositioning drugs in the large scale is driven by the accumulation of various high-throughput genome-wide data. These data can be utilized to capture the interplay among targets and biological molecules, uncover the resulting network structures, and further bridge molecular profiles and phenotypes. This motivates many network-based computational methods on these topics. Second, we organize these existing methods into two categories, i.e., single drug repositioning and drug combination, and further depict their main features by three data sources. Finally, we discuss the merits and shortcomings of these methods and pinpoint some future topics in this promising field."
},
{
"pmid": "22889966",
"title": "Applications of Connectivity Map in drug discovery and development.",
"abstract": "Genome-wide expression profiling of gene transcripts has been successfully applied in biomedical discovery for over a decade. Based on the premises of this technology, Connectivity Map provides a data-driven and systematic approach for discovering associations among genes, chemicals and biological conditions such as diseases. Since its first introduction in 2006, the approach has shown emerging promises in uncovering avenues for drug discovery and development such as in identifying and suggesting new indications for existing drugs and elucidating mode of actions for novel chemicals in addition to potentially predicting side effects."
},
{
"pmid": "25946000",
"title": "Drug repositioning for diabetes based on 'omics' data mining.",
"abstract": "Drug repositioning has shorter developmental time, lower cost and less safety risk than traditional drug development process. The current study aims to repurpose marketed drugs and clinical candidates for new indications in diabetes treatment by mining clinical 'omics' data. We analyzed data from genome wide association studies (GWAS), proteomics and metabolomics studies and revealed a total of 992 proteins as potential anti-diabetic targets in human. Information on the drugs that target these 992 proteins was retrieved from the Therapeutic Target Database (TTD) and 108 of these proteins are drug targets with drug projects information. Research and preclinical drug targets were excluded and 35 of the 108 proteins were selected as druggable proteins. Among them, five proteins were known targets for treating diabetes. Based on the pathogenesis knowledge gathered from the OMIM and PubMed databases, 12 protein targets of 58 drugs were found to have a new indication for treating diabetes. CMap (connectivity map) was used to compare the gene expression patterns of cells treated by these 58 drugs and that of cells treated by known anti-diabetic drugs or diabetes risk causing compounds. As a result, 9 drugs were found to have the potential to treat diabetes. Among the 9 drugs, 4 drugs (diflunisal, nabumetone, niflumic acid and valdecoxib) targeting COX2 (prostaglandin G/H synthase 2) were repurposed for treating type 1 diabetes, and 2 drugs (phenoxybenzamine and idazoxan) targeting ADRA2A (Alpha-2A adrenergic receptor) had a new indication for treating type 2 diabetes. These findings indicated that 'omics' data mining based drug repositioning is a potentially powerful tool to discover novel anti-diabetic indications from marketed drugs and clinical candidates. Furthermore, the results of our study could be related to other disorders, such as Alzheimer's disease."
},
{
"pmid": "30255785",
"title": "Deep learning-based transcriptome data classification for drug-target interaction prediction.",
"abstract": "BACKGROUND\nThe ability to predict the interaction of drugs with target proteins is essential to research and development of drug. However, the traditional experimental paradigm is costly, and previous in silico prediction paradigms have been impeded by the wide range of data platforms and data scarcity.\n\n\nRESULTS\nIn this paper, we modeled the prediction of drug-target interactions as a binary classification task. Using transcriptome data from the L1000 database of the LINCS project, we developed a framework based on a deep-learning algorithm to predict potential drug target interactions. Once fully trained, the model achieved over 98% training accuracy. The results of our research demonstrated that our framework could discover more reliable DTIs than found by other methods. This conclusion was validated further across platforms with a high percentage of overlapping interactions.\n\n\nCONCLUSIONS\nOur model's capacity of integrating transcriptome data from drugs and genes strongly suggests the strength of its potential for DTI prediction, thereby improving the drug discovery process."
},
{
"pmid": "30565313",
"title": "Similarity-based machine learning support vector machine predictor of drug-drug interactions with improved accuracies.",
"abstract": "WHAT IS KNOWN AND OBJECTIVE\nDrug-drug interactions (DDI) are frequent causes of adverse clinical drug reactions. Efforts have been directed at the early stage to achieve accurate identification of DDI for drug safety assessments, including the development of in silico predictive methods. In particular, similarity-based in silico methods have been developed to assess DDI with good accuracies, and machine learning methods have been employed to further extend the predictive range of similarity-based approaches. However, the performance of a developed machine learning method is lower than expectations partly because of the use of less diverse DDI training data sets and a less optimal set of similarity measures.\n\n\nMETHOD\nIn this work, we developed a machine learning model using support vector machines (SVMs) based on the literature-reported established set of similarity measures and comprehensive training data sets. The established similarity measures include the 2D molecular structure similarity, 3D pharmacophoric similarity, interaction profile fingerprint (IPF) similarity, target similarity and adverse drug effect (ADE) similarity, which were extracted from well-known databases, such as DrugBank and Side Effect Resource (SIDER). A pairwise kernel was constructed for the known and possible drug pairs based on the five established similarity measures and then used as the input vector of the SVM.\n\n\nRESULT\nThe 10-fold cross-validation studies showed a predictive performance of AUROC >0.97, which is significantly improved compared with the AUROC of 0.67 of an analogously developed machine learning model. Our study suggested that a similarity-based SVM prediction is highly useful for identifying DDI.\n\n\nCONCLUSION\nin silico methods based on multifarious drug similarities have been suggested to be feasible for DDI prediction in various studies. In this way, our pairwise kernel SVM model had better accuracies than some previous works, which can be used as a pharmacovigilance tool to detect potential DDI."
},
{
"pmid": "28192537",
"title": "SELF-BLM: Prediction of drug-target interactions via self-training SVM.",
"abstract": "Predicting drug-target interactions is important for the development of novel drugs and the repositioning of drugs. To predict such interactions, there are a number of methods based on drug and target protein similarity. Although these methods, such as the bipartite local model (BLM), show promise, they often categorize unknown interactions as negative interaction. Therefore, these methods are not ideal for finding potential drug-target interactions that have not yet been validated as positive interactions. Thus, here we propose a method that integrates machine learning techniques, such as self-training support vector machine (SVM) and BLM, to develop a self-training bipartite local model (SELF-BLM) that facilitates the identification of potential interactions. The method first categorizes unlabeled interactions and negative interactions among unknown interactions using a clustering method. Then, using the BLM method and self-training SVM, the unlabeled interactions are self-trained and final local classification models are constructed. When applied to four classes of proteins that include enzymes, G-protein coupled receptors (GPCRs), ion channels, and nuclear receptors, SELF-BLM showed the best performance for predicting not only known interactions but also potential interactions in three protein classes compare to other related studies. The implemented software and supporting data are available at https://github.com/GIST-CSBL/SELF-BLM."
},
{
"pmid": "27842479",
"title": "RFDT: A Rotation Forest-based Predictor for Predicting Drug-Target Interactions Using Drug Structure and Protein Sequence Information.",
"abstract": "BACKGROUND\nIdentification of interaction between drugs and target proteins plays an important role in discovering new drug candidates. However, through the experimental method to identify the drug-target interactions remain to be extremely time-consuming, expensive and challenging even nowadays. Therefore, it is urgent to develop new computational methods to predict potential drugtarget interactions (DTI).\n\n\nMETHODS\nIn this article, a novel computational model is developed for predicting potential drug-target interactions under the theory that each drug-target interaction pair can be represented by the structural properties from drugs and evolutionary information derived from proteins. Specifically, the protein sequences are encoded as Position-Specific Scoring Matrix (PSSM) descriptor which contains information of biological evolutionary and the drug molecules are encoded as fingerprint feature vector which represents the existence of certain functional groups or fragments.\n\n\nRESULTS\nFour benchmark datasets involving enzymes, ion channels, GPCRs and nuclear receptors, are independently used for establishing predictive models with Rotation Forest (RF) model. The proposed method achieved the prediction accuracy of 91.3%, 89.1%, 84.1% and 71.1% for four datasets respectively. In order to make our method more persuasive, we compared our classifier with the state-of-theart Support Vector Machine (SVM) classifier. We also compared the proposed method with other excellent methods.\n\n\nCONCLUSIONS\nExperimental results demonstrate that the proposed method is effective in the prediction of DTI, and can provide assistance for new drug research and development."
},
{
"pmid": "28787438",
"title": "Computational-experimental approach to drug-target interaction mapping: A case study on kinase inhibitors.",
"abstract": "Due to relatively high costs and labor required for experimental profiling of the full target space of chemical compounds, various machine learning models have been proposed as cost-effective means to advance this process in terms of predicting the most potent compound-target interactions for subsequent verification. However, most of the model predictions lack direct experimental validation in the laboratory, making their practical benefits for drug discovery or repurposing applications largely unknown. Here, we therefore introduce and carefully test a systematic computational-experimental framework for the prediction and pre-clinical verification of drug-target interactions using a well-established kernel-based regression algorithm as the prediction model. To evaluate its performance, we first predicted unmeasured binding affinities in a large-scale kinase inhibitor profiling study, and then experimentally tested 100 compound-kinase pairs. The relatively high correlation of 0.77 (p < 0.0001) between the predicted and measured bioactivities supports the potential of the model for filling the experimental gaps in existing compound-target interaction maps. Further, we subjected the model to a more challenging task of predicting target interactions for such a new candidate drug compound that lacks prior binding profile information. As a specific case study, we used tivozanib, an investigational VEGF receptor inhibitor with currently unknown off-target profile. Among 7 kinases with high predicted affinity, we experimentally validated 4 new off-targets of tivozanib, namely the Src-family kinases FRK and FYN A, the non-receptor tyrosine kinase ABL1, and the serine/threonine kinase SLK. Our sub-sequent experimental validation protocol effectively avoids any possible information leakage between the training and validation data, and therefore enables rigorous model validation for practical applications. These results demonstrate that the kernel-based modeling approach offers practical benefits for probing novel insights into the mode of action of investigational compounds, and for the identification of new target selectivities for drug repurposing applications."
},
{
"pmid": "28770474",
"title": "In silico prediction of ROCK II inhibitors by different classification approaches.",
"abstract": "ROCK II is an important pharmacological target linked to central nervous system disorders such as Alzheimer's disease. The purpose of this research is to generate ROCK II inhibitor prediction models by machine learning approaches. Firstly, four sets of descriptors were calculated with MOE 2010 and PaDEL-Descriptor, and optimized by F-score and linear forward selection methods. In addition, four classification algorithms were used to initially build 16 classifiers with k-nearest neighbors [Formula: see text], naïve Bayes, Random forest, and support vector machine. Furthermore, three sets of structural fingerprint descriptors were introduced to enhance the predictive capacity of classifiers, which were assessed with fivefold cross-validation, test set validation and external test set validation. The best two models, MFK + MACCS and MLR + SubFP, have both MCC values of 0.925 for external test set. After that, a privileged substructure analysis was performed to reveal common chemical features of ROCK II inhibitors. Finally, binding modes were analyzed to identify relationships between molecular descriptors and activity, while main interactions were revealed by comparing the docking interaction of the most potent and the weakest ROCK II inhibitors. To the best of our knowledge, this is the first report on ROCK II inhibitors utilizing machine learning approaches that provides a new method for discovering novel ROCK II inhibitors."
},
{
"pmid": "30943889",
"title": "FeatureSelect: a software for feature selection based on machine learning approaches.",
"abstract": "BACKGROUND\nFeature selection, as a preprocessing stage, is a challenging problem in various sciences such as biology, engineering, computer science, and other fields. For this purpose, some studies have introduced tools and softwares such as WEKA. Meanwhile, these tools or softwares are based on filter methods which have lower performance relative to wrapper methods. In this paper, we address this limitation and introduce a software application called FeatureSelect. In addition to filter methods, FeatureSelect consists of optimisation algorithms and three types of learners. It provides a user-friendly and straightforward method of feature selection for use in any kind of research, and can easily be applied to any type of balanced and unbalanced data based on several score functions like accuracy, sensitivity, specificity, etc. RESULTS: In addition to our previously introduced optimisation algorithm (WCC), a total of 10 efficient, well-known and recently developed algorithms have been implemented in FeatureSelect. We applied our software to a range of different datasets and evaluated the performance of its algorithms. Acquired results show that the performances of algorithms are varying on different datasets, but WCC, LCA, FOA, and LA are suitable than others in the overall state. The results also show that wrapper methods are better than filter methods.\n\n\nCONCLUSIONS\nFeatureSelect is a feature or gene selection software application which is based on wrapper methods. Furthermore, it includes some popular filter methods and generates various comparison diagrams and statistical measurements. It is available from GitHub ( https://github.com/LBBSoft/FeatureSelect ) and is free open source software under an MIT license."
},
{
"pmid": "29897326",
"title": "Identification of drug-target interaction by a random walk with restart method on an interactome network.",
"abstract": "BACKGROUND\nIdentification of drug-target interactions acts as a key role in drug discovery. However, identifying drug-target interactions via in-vitro, in-vivo experiments are very laborious, time-consuming. Thus, predicting drug-target interactions by using computational approaches is a good alternative. In recent studies, many feature-based and similarity-based machine learning approaches have shown promising results in drug-target interaction predictions. A previous study showed that accounting connectivity information of drug-drug and protein-protein interactions increase performances of prediction by the concept of 'guilt-by-association'. However, the approach that only considers directly connected nodes often misses the information that could be derived from distance nodes. Therefore, in this study, we yield global network topology information by using a random walk with restart algorithm and apply the global topology information to the prediction model.\n\n\nRESULTS\nAs a result, our prediction model demonstrates increased prediction performance compare to the 'guilt-by-association' approach (AUC 0.89 and 0.67 in the training and independent test, respectively). In addition, we show how weighted features by a random walk with restart yields better performances than original features. Also, we confirmed that drugs and proteins that have high-degree of connectivity on the interactome network yield better performance in our model.\n\n\nCONCLUSIONS\nThe prediction models with weighted features by considering global network topology increased the prediction performances both in the training and testing compared to non-weighted models and previous a 'guilt-by-association method'. In conclusion, global network topology information on protein-protein interaction and drug-drug interaction effects to the prediction performance of drug-target interactions."
},
{
"pmid": "30528728",
"title": "Prediction of drug-target interaction by integrating diverse heterogeneous information source with multiple kernel learning and clustering methods.",
"abstract": "BACKGROUND\nIdentification of potential drug-target interaction pairs is very important for pharmaceutical innovation and drug discovery. Numerous machine learning-based and network-based algorithms have been developed for predicting drug-target interactions. However, large-scale pharmacological, genomic and chemical datum emerged recently provide new opportunity for further heightening the accuracy of drug-target interactions prediction.\n\n\nRESULTS\nIn this work, based on the assumption that similar drugs tend to interact with similar proteins and vice versa, we developed a novel computational method (namely MKLC-BiRW) to predict new drug-target interactions. MKLC-BiRW integrates diverse drug-related and target-related heterogeneous information source by using the multiple kernel learning and clustering methods to generate the drug and target similarity matrices, in which the low similarity elements are set to zero to build the drug and target similarity correction networks. By incorporating these drug and target similarity correction networks with known drug-target interaction bipartite graph, MKLC-BiRW constructs the heterogeneous network on which Bi-random walk algorithm is adopted to infer the potential drug-target interactions.\n\n\nCONCLUSIONS\nCompared with other existing state-of-the-art methods, MKLC-BiRW achieves the best performance in terms of AUC and AUPR. MKLC-BiRW can effectively predict the potential drug-target interactions."
},
{
"pmid": "29483097",
"title": "Patient-Customized Drug Combination Prediction and Testing for T-cell Prolymphocytic Leukemia Patients.",
"abstract": "The molecular pathways that drive cancer progression and treatment resistance are highly redundant and variable between individual patients with the same cancer type. To tackle this complex rewiring of pathway cross-talk, personalized combination treatments targeting multiple cancer growth and survival pathways are required. Here we implemented a computational-experimental drug combination prediction and testing (DCPT) platform for efficient in silico prioritization and ex vivo testing in patient-derived samples to identify customized synergistic combinations for individual cancer patients. DCPT used drug-target interaction networks to traverse the massive combinatorial search spaces among 218 compounds (a total of 23,653 pairwise combinations) and identified cancer-selective synergies by using differential single-compound sensitivity profiles between patient cells and healthy controls, hence reducing the likelihood of toxic combination effects. A polypharmacology-based machine learning modeling and network visualization made use of baseline genomic and molecular profiles to guide patient-specific combination testing and clinical translation phases. Using T-cell prolymphocytic leukemia (T-PLL) as a first case study, we show how the DCPT platform successfully predicted distinct synergistic combinations for each of the three T-PLL patients, each presenting with different resistance patterns and synergy mechanisms. In total, 10 of 24 (42%) of selective combination predictions were experimentally confirmed to show synergy in patient-derived samples ex vivo The identified selective synergies among approved drugs, including tacrolimus and temsirolimus combined with BCL-2 inhibitor venetoclax, may offer novel drug repurposing opportunities for treating T-PLL.Significance: An integrated use of functional drug screening combined with genomic and molecular profiling enables patient-customized prediction and testing of drug combination synergies for T-PLL patients. Cancer Res; 78(9); 2407-18. ©2018 AACR."
},
{
"pmid": "30598065",
"title": "Predicting adverse drug reactions of combined medication from heterogeneous pharmacologic databases.",
"abstract": "BACKGROUND\nEarly and accurate identification of potential adverse drug reactions (ADRs) for combined medication is vital for public health. Existing methods either rely on expensive wet-lab experiments or detecting existing associations from related records. Thus, they inevitably suffer under-reporting, delays in reporting, and inability to detect ADRs for new and rare drugs. The current application of machine learning methods is severely impeded by the lack of proper drug representation and credible negative samples. Therefore, a method to represent drugs properly and to select credible negative samples becomes vital in applying machine learning methods to this problem.\n\n\nRESULTS\nIn this work, we propose a machine learning method to predict ADRs of combined medication from pharmacologic databases by building up highly-credible negative samples (HCNS-ADR). Specifically, we fuse heterogeneous information from different databases and represent each drug as a multi-dimensional vector according to its chemical substructures, target proteins, substituents, and related pathways first. Then, a drug-pair vector is obtained by appending the vector of one drug to the other. Next, we construct a drug-disease-gene network and devise a scoring method to measure the interaction probability of every drug pair via network analysis. Drug pairs with lower interaction probability are preferentially selected as negative samples. Following that, the validated positive samples and the selected credible negative samples are projected into a lower-dimensional space using the principal component analysis. Finally, a classifier is built for each ADR using its positive and negative samples with reduced dimensions. The performance of the proposed method is evaluated on simulative prediction for 1276 ADRs and 1048 drugs, comparing using four machine learning algorithms and with two baseline approaches. Extensive experiments show that the proposed way to represent drugs characterizes drugs accurately. With highly-credible negative samples selected by HCNS-ADR, the four machine learning algorithms achieve significant performance improvements. HCNS-ADR is also shown to be able to predict both known and novel drug-drug-ADR associations, outperforming two other baseline approaches significantly.\n\n\nCONCLUSIONS\nThe results demonstrate that integration of different drug properties to represent drugs are valuable for ADR prediction of combined medication and the selection of highly-credible negative samples can significantly improve the prediction performance."
},
{
"pmid": "28095781",
"title": "Link prediction in drug-target interactions network using similarity indices.",
"abstract": "BACKGROUND\nIn silico drug-target interaction (DTI) prediction plays an integral role in drug repositioning: the discovery of new uses for existing drugs. One popular method of drug repositioning is network-based DTI prediction, which uses complex network theory to predict DTIs from a drug-target network. Currently, most network-based DTI prediction is based on machine learning - methods such as Restricted Boltzmann Machines (RBM) or Support Vector Machines (SVM). These methods require additional information about the characteristics of drugs, targets and DTIs, such as chemical structure, genome sequence, binding types, causes of interactions, etc., and do not perform satisfactorily when such information is unavailable. We propose a new, alternative method for DTI prediction that makes use of only network topology information attempting to solve this problem.\n\n\nRESULTS\nWe compare our method for DTI prediction against the well-known RBM approach. We show that when applied to the MATADOR database, our approach based on node neighborhoods yield higher precision for high-ranking predictions than RBM when no information regarding DTI types is available.\n\n\nCONCLUSION\nThis demonstrates that approaches purely based on network topology provide a more suitable approach to DTI prediction in the many real-life situations where little or no prior knowledge is available about the characteristics of drugs, targets, or their interactions."
},
{
"pmid": "29054256",
"title": "Drug-target interaction prediction: A Bayesian ranking approach.",
"abstract": "BACKGROUND AND OBJECTIVE\nIn silico prediction of drug-target interactions (DTI) could provide valuable information and speed-up the process of drug repositioning - finding novel usage for existing drugs. In our work, we focus on machine learning algorithms supporting drug-centric repositioning approach, which aims to find novel usage for existing or abandoned drugs. We aim at proposing a per-drug ranking-based method, which reflects the needs of drug-centric repositioning research better than conventional drug-target prediction approaches.\n\n\nMETHODS\nWe propose Bayesian Ranking Prediction of Drug-Target Interactions (BRDTI). The method is based on Bayesian Personalized Ranking matrix factorization (BPR) which has been shown to be an excellent approach for various preference learning tasks, however, it has not been used for DTI prediction previously. In order to successfully deal with DTI challenges, we extended BPR by proposing: (i) the incorporation of target bias, (ii) a technique to handle new drugs and (iii) content alignment to take structural similarities of drugs and targets into account.\n\n\nRESULTS\nEvaluation on five benchmark datasets shows that BRDTI outperforms several state-of-the-art approaches in terms of per-drug nDCG and AUC. BRDTI results w.r.t. nDCG are 0.929, 0.953, 0.948, 0.897 and 0.690 for G-Protein Coupled Receptors (GPCR), Ion Channels (IC), Nuclear Receptors (NR), Enzymes (E) and Kinase (K) datasets respectively. Additionally, BRDTI significantly outperformed other methods (BLM-NII, WNN-GIP, NetLapRLS and CMF) w.r.t. nDCG in 17 out of 20 cases. Furthermore, BRDTI was also shown to be able to predict novel drug-target interactions not contained in the original datasets. The average recall at top-10 predicted targets for each drug was 0.762, 0.560, 1.000 and 0.404 for GPCR, IC, NR, and E datasets respectively.\n\n\nCONCLUSIONS\nBased on the evaluation, we can conclude that BRDTI is an appropriate choice for researchers looking for an in silico DTI prediction technique to be used in drug-centric repositioning scenarios. BRDTI Software and supplementary materials are available online at www.ksi.mff.cuni.cz/∼peska/BRDTI."
},
{
"pmid": "30583320",
"title": "[Drug-target protein interaction prediction based on AdaBoost algorithm].",
"abstract": "The drug-target protein interaction prediction can be used for the discovery of new drug effects. Recent studies often focus on the prediction of an independent matrix filling algorithm, which apply a single algorithm to predict the drug-target protein interaction. The single-model matrix-filling algorithms have low accuracy, so it is difficult to obtain satisfactory results in the prediction of drug-target protein interaction. AdaBoost algorithm is a strong multiple classifier combination framework, which is proved by the past researches in classification applications. The drug-target interaction prediction is a matrix filling problem. Therefore, we need to adjust the matrix filling problem to a classification problem before predicting the interaction among drug-target protein. We make full use of the AdaBoost algorithm framework to integrate several weak classifiers to improve performance and make accurate prediction of drug-target protein interaction. Experimental results based on the metric datasets show that our algorithm outperforms the other state-of-the-art approaches and classical methods in accuracy. Our algorithm can overcome the limitations of the single algorithm based on machine learning method, exploit the hidden factors better and improve the accuracy of prediction effectively."
},
{
"pmid": "30337070",
"title": "BE-DTI': Ensemble framework for drug target interaction prediction using dimensionality reduction and active learning.",
"abstract": "BACKGROUND AND OBJECTIVE\nDrug-target interaction prediction plays an intrinsic role in the drug discovery process. Prediction of novel drugs and targets helps in identifying optimal drug therapies for various stringent diseases. Computational prediction of drug-target interactions can help to identify potential drug-target pairs and speed-up the process of drug repositioning. In our present, work we have focused on machine learning algorithms for predicting drug-target interactions from the pool of existing drug-target data. The key idea is to train the classifier using existing DTI so as to predict new or unknown DTI. However, there are various challenges such as class imbalance and high dimensional nature of data that need to be addressed before developing optimal drug-target interaction model.\n\n\nMETHODS\nIn this paper, we propose a bagging based ensemble framework named BE-DTI' for drug-target interaction prediction using dimensionality reduction and active learning to deal with class-imbalanced data. Active learning helps to improve under-sampling bagging based ensembles. Dimensionality reduction is used to deal with high dimensional data.\n\n\nRESULTS\nResults show that the proposed technique outperforms the other five competing methods in 10-fold cross-validation experiments in terms of AUC=0.927, Sensitivity=0.886, Specificity=0.864, and G-mean=0.874.\n\n\nCONCLUSION\nMissing interactions and new interactions are predicted using the proposed framework. Some of the known interactions are removed from the original dataset and their interactions are recalculated to check the accuracy of the proposed framework. Moreover, validation of the proposed approach is performed using the external dataset. All these results show that structurally similar drugs tend to interact with similar targets."
},
{
"pmid": "30209997",
"title": "A Brief Survey of Machine Learning Application in Cancerlectin Identification.",
"abstract": "Proteins with at least one carbohydrate recognition domain are lectins that can identify and reversibly interact with glycan moiety of glycoconjugates or a soluble carbohydrate. It has been proved that lectins can play various vital roles in mediating signal transduction, cell-cell recognition and interaction, immune defense, and so on. Most organisms can synthesize and secret lectins. A portion of lectins closely related to diverse cancers, called cancerlectins, are involved in tumor initiation, growth and recrudescence. Cancerlectins have been investigated for their applications in the laboratory study, clinical diagnosis and therapy, and drug delivery and targeting of cancers. The identification of cancerlectin genes from a lot of lectins is helpful for dissecting cancers. Several cancerlectin prediction tools based on machine learning approaches have been established and have become an excellent complement to experimental methods. In this review, we comprehensively summarize and expound the indispensable materials for implementing cancerlectin prediction models. We hope that this review will contribute to understanding cancerlectins and provide valuable clues for the study of cancerlectins. Novel systems for cancerlectin gene identification are expected to be developed for clinical applications and gene therapy."
},
{
"pmid": "30129407",
"title": "Recent Advances in the Machine Learning-Based Drug-Target Interaction Prediction.",
"abstract": "BACKGROUND\nThe identification of drug-target interactions is a crucial issue in drug discovery. In recent years, researchers have made great efforts on the drug-target interaction predictions, and developed databases, software and computational methods.\n\n\nRESULTS\nIn the paper, we review the recent advances in machine learning-based drug-target interaction prediction. First, we briefly introduce the datasets and data, and summarize features for drugs and targets which can be extracted from different data. Since drug-drug similarity and target-target similarity are important for many machine learning prediction models, we introduce how to calculate similarities based on data or features. Different machine learningbased drug-target interaction prediction methods can be proposed by using different features or information. Thus, we summarize, analyze and compare different machine learning-based prediction methods.\n\n\nCONCLUSION\nThis study provides the guide to the development of computational methods for the drug-target interaction prediction."
},
{
"pmid": "18586719",
"title": "Prediction of drug-target interaction networks from the integration of chemical and genomic spaces.",
"abstract": "MOTIVATION\nThe identification of interactions between drugs and target proteins is a key area in genomic drug discovery. Therefore, there is a strong incentive to develop new methods capable of detecting these potential drug-target interactions efficiently.\n\n\nRESULTS\nIn this article, we characterize four classes of drug-target interaction networks in humans involving enzymes, ion channels, G-protein-coupled receptors (GPCRs) and nuclear receptors, and reveal significant correlations between drug structure similarity, target sequence similarity and the drug-target interaction network topology. We then develop new statistical methods to predict unknown drug-target interaction networks from chemical structure and genomic sequence information simultaneously on a large scale. The originality of the proposed method lies in the formalization of the drug-target interaction inference as a supervised learning problem for a bipartite graph, the lack of need for 3D structure information of the target proteins, and in the integration of chemical and genomic spaces into a unified space that we call 'pharmacological space'. In the results, we demonstrate the usefulness of our proposed method for the prediction of the four classes of drug-target interaction networks. Our comprehensively predicted drug-target interaction networks enable us to suggest many potential drug-target interactions and to increase research productivity toward genomic drug discovery.\n\n\nAVAILABILITY\nSoftwares are available upon request.\n\n\nSUPPLEMENTARY INFORMATION\nDatasets and all prediction results are available at http://web.kuicr.kyoto-u.ac.jp/supp/yoshi/drugtarget/."
},
{
"pmid": "10592173",
"title": "KEGG: kyoto encyclopedia of genes and genomes.",
"abstract": "KEGG (Kyoto Encyclopedia of Genes and Genomes) is a knowledge base for systematic analysis of gene functions, linking genomic information with higher order functional information. The genomic information is stored in the GENES database, which is a collection of gene catalogs for all the completely sequenced genomes and some partial genomes with up-to-date annotation of gene functions. The higher order functional information is stored in the PATHWAY database, which contains graphical representations of cellular processes, such as metabolism, membrane transport, signal transduction and cell cycle. The PATHWAY database is supplemented by a set of ortholog group tables for the information about conserved subpathways (pathway motifs), which are often encoded by positionally coupled genes on the chromosome and which are especially useful in predicting gene functions. A third database in KEGG is LIGAND for the information about chemical compounds, enzyme molecules and enzymatic reactions. KEGG provides Java graphics tools for browsing genome maps, comparing two genome maps and manipulating expression maps, as well as computational tools for sequence comparison, graph comparison and path computation. The KEGG databases are daily updated and made freely available (http://www. genome.ad.jp/kegg/)."
},
{
"pmid": "16381955",
"title": "DrugBank: a comprehensive resource for in silico drug discovery and exploration.",
"abstract": "DrugBank is a unique bioinformatics/cheminformatics resource that combines detailed drug (i.e. chemical) data with comprehensive drug target (i.e. protein) information. The database contains >4100 drug entries including >800 FDA approved small molecule and biotech drugs as well as >3200 experimental drugs. Additionally, >14,000 protein or drug target sequences are linked to these drug entries. Each DrugCard entry contains >80 data fields with half of the information being devoted to drug/chemical data and the other half devoted to drug target or protein data. Many data fields are hyperlinked to other databases (KEGG, PubChem, ChEBI, PDB, Swiss-Prot and GenBank) and a variety of structure viewing applets. The database is fully searchable supporting extensive text, sequence, chemical structure and relational query searches. Potential applications of DrugBank include in silico drug target discovery, drug design, drug docking or screening, drug metabolism prediction, drug interaction prediction and general pharmaceutical education. DrugBank is available at http://redpoll.pharmacy.ualberta.ca/drugbank/."
},
{
"pmid": "20529913",
"title": "Drug-target interaction prediction from chemical, genomic and pharmacological data in an integrated framework.",
"abstract": "MOTIVATION\nIn silico prediction of drug-target interactions from heterogeneous biological data is critical in the search for drugs and therapeutic targets for known diseases such as cancers. There is therefore a strong incentive to develop new methods capable of detecting these potential drug-target interactions efficiently.\n\n\nRESULTS\nIn this article, we investigate the relationship between the chemical space, the pharmacological space and the topology of drug-target interaction networks, and show that drug-target interactions are more correlated with pharmacological effect similarity than with chemical structure similarity. We then develop a new method to predict unknown drug-target interactions from chemical, genomic and pharmacological data on a large scale. The proposed method consists of two steps: (i) prediction of pharmacological effects from chemical structures of given compounds and (ii) inference of unknown drug-target interactions based on the pharmacological effect similarity in the framework of supervised bipartite graph inference. The originality of the proposed method lies in the prediction of potential pharmacological similarity for any drug candidate compounds and in the integration of chemical, genomic and pharmacological data in a unified framework. In the results, we make predictions for four classes of important drug-target interactions involving enzymes, ion channels, GPCRs and nuclear receptors. Our comprehensively predicted drug-target interaction networks enable us to suggest many potential drug-target interactions and to increase research productivity toward genomic drug discovery.\n\n\nSUPPLEMENTARY INFORMATION\nDatasets and all prediction results are available at http://cbio.ensmp.fr/~yyamanishi/pharmaco/.\n\n\nAVAILABILITY\nSoftwares are available upon request."
},
{
"pmid": "29974489",
"title": "Drugs for treating severe hypertension in pregnancy: a network meta-analysis and trial sequential analysis of randomized clinical trials.",
"abstract": "AIMS\nSeveral antihypertensive drugs are used in the treatment of severe hypertension in pregnancy. The present study is a network meta-analysis comparing the efficacy and safety of these drugs.\n\n\nMETHODS\nElectronic databases were searched for randomized clinical trials comparing drugs used in the treatment of severe hypertension in pregnancy. The number of women achieving the target blood pressure (BP) was the primary outcome. Doses required and time taken for achieving the target BP, failure rate, and incidences of maternal tachycardia, palpitation, hypotension, headache, and neonatal death and stillbirth were the secondary outcomes. Mixed treatment comparison pooled estimates were generated using a random-effects model. Odds ratios for the categorical and mean difference for the numerical outcomes were the effect estimates.\n\n\nRESULTS\nFifty-one studies were included in the systematic review and 46 in the meta-analysis. No significant differences in the number of patients achieving target BP was observed between any of the drugs. Diazoxide [-15 (-20.6, -9.4)], nicardipine [-11.8 (-22.3, -1.2)], nifedipine/celastrol [-19.3 (-27.4, -11.1)], nifedipine/vitamin D [-17.1 (-25.7, -9.7)], nifedipine/resveratrol [-13.9 (-22.6, -5.2)] and glyceryl trinitrate [-33.8 (-36.7, -31)] were observed to achieve the target BP (in minutes) more rapidly than hydralazine. Nifedipine required fewer doses than hydralazine for achieving the target BP. Glyceryl trinitrate and labetalol were associated with fewer incidences of tachycardia and palpitation respectively than hydralazine. Trial sequential analysis concluded adequate evidence for hydralazine and nifedipine compared with labetalol. Moderate quality of evidence was observed for direct comparison estimate between labetalol and hydralazine but was either low or very low for other comparisons.\n\n\nCONCLUSION\nThe present evidence suggests similar efficacy between nifedipine, hydralazine and labetalol in the treatment of severe hypertension in pregnancy. Subtle differences may exist in their safety profile. The evidence is inadequate for other drugs."
},
{
"pmid": "24809305",
"title": "Moderate intensity exercise is associated with decreased angiotensin-converting enzyme, increased β2-adrenergic receptor gene expression, and lower blood pressure in middle-aged men.",
"abstract": "PURPOSE\nThe purpose of the current study was to characterize the role of aerobic exercise in the gene expression of the angiotensin-converting enzyme (ACE) and the β2-adrenergic receptor (ADRB2) in untrained men.\n\n\nMETHODS\nTwenty untrained middle-aged men were randomly assigned to exercise (Exe) and control (Con) groups. The Exe group performed aerobic exercises for eight weeks. ACE mRNA and ADRB2 mRNA were determined by PCR.\n\n\nRESULTS\nThe expression of ACE in week 4 and in the Exe group decreased significantly (p < .001). ADRB2 in the Exe group, in week 4 and in week 8, was markedly higher, and blood pressure was significantly lower than in the Con group (p < .001). In the Con group ADRB2 mRNA decreased.\n\n\nCONCLUSION\nThese results suggest that moderate intensity exercise promotes the leukocyte expression of gene markers that may affect blood pressure by improving cardiovascular fitness levels in middle-aged men."
},
{
"pmid": "908462",
"title": "On the mechanism of diazoxide-induced hyperglycemia.",
"abstract": "Infusion of diazoxide (16.5 mg./kg. in 10 minutes) into normal unanesthetized dogs resulted in a prompt hyperglycemia due to increased hepatic glucose production as measured with a 3-3H-glucose primer-infusion technique. Plasma insulin and glucagon were decreased. Glucose uptake failed to increase. Diazoxide administration during period of alpha adrenergic receptor blockade with phentolamine still caused hyperglycemia and increased glucose production. Glucose uptake was inhibited despite adequate plasma insulin. Infusion of somatostatin along with insulin prevented the effects of diazoxide on plasma glucose and glucose production. It is concluded that diazoxide hyperglycemia is not due solely to decreased insulin secretion or increased epinephrine secretion and that glucagon is not a contributory factor. Diazoxide may act directly to increase glucose production and inhibit glucose uptake. Somatostatin appears capable of blocking the effect of diazoxide on glucose production by an unknown mechanism."
},
{
"pmid": "18514499",
"title": "Mast cell inhibition by ketotifen reduces splanchnic inflammatory response in a portal hypertension model in rats.",
"abstract": "Experimental early prehepatic portal hypertension induces an inflammatory exudative response, including an increased infiltration of the intestinal mucosa and the mesenteric lymph nodes by mast cells and a dilation and tortuosity of the branches of the superior mesenteric vein. The aim of this study is to verify that the prophylactic administration of Ketotifen, a stabilizing drug for mast cells, reduces the consequence of splanchnic inflammatory response in prehepatic portal hypertension. Male Wistar rats were used: Sham-operated and with Triple Partial Portal Vein Ligation, which were subcutaneously administered poly(lactide-co-glycolide) acid microspheres with vehicle 24h before the intervention and SO and rats with Triple Partial Portal Vein Ligation, which were administered Ketotifen-loaded microspheres. Around 48h after surgery, the portal pressure was measured; the levels of chymase (Rat Mast Cell Protease-II) were assayed in the superior mesenteric lymph complex and granulated and degranulated mast cells in the ileum and cecum were quantified. Prophylactic administration of Ketotifen reduced portal pressure, the incidence of dilation and tortuosity of the superior mesenteric vein branches, the amount of Rat Mast Cell Protease-II in the superior mesenteric lymph complex and the number of activated mast cells in the cecum of rats with portal hypertension. In summary, the administration of Ketotifen reduces early splanchnic inflammatory reaction in the rat with prehepatic portal hypertension."
},
{
"pmid": "25249292",
"title": "Ocular side effects and trichomegaly of eyelashes induced by erlotinib: a case report and review of the literature.",
"abstract": "Therapeutics belonging to the group of epidermal growth factor inhibitors are currently in widespread use for the treatment of certain malignancies, especially in advanced non-small cell lung cancer. A wide spectrum of the cutaneous side effects of these drugs are well known but the ocular side effects and trichomegaly of eyelashes are rarely reported, particularly for an ophthalmology audience. This report presents a case of erlotinib induced eyelash trichomegaly and the other ocular side effects of this drug in a 74 year-old female patient with metastatic lung adenocarcinoma. Trichomegaly is not a drug-limiting side effect, however long eyelashes often cause eyeball irritation and corneal epithelial defects. Herein, the authors emphasize the importance of recognizing this side effect in order to avoid from severe complications such as corneal ulcers in uncared patients."
}
] |
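Several of the drug–target interaction (DTI) prediction entries in the reference list above (for example, the "guilt-by-association", bipartite-graph, weighted-profile and multiple-kernel approaches) share a common core idea: unmeasured drug–target pairs are scored by propagating known interactions through drug–drug and target–target similarity matrices. The snippet below is a minimal illustrative sketch of that similarity-weighted profile idea in Python/NumPy; the toy matrices, the function name, and the simple equal-weight averaging of the drug-side and target-side scores are assumptions for illustration only and do not reproduce any specific method cited above.

```python
import numpy as np

def weighted_profile_scores(Y, S_drug, S_target):
    """Score all drug-target pairs by similarity-weighted averaging of known
    interactions: unknown pairs inherit evidence from similar drugs (rows)
    and similar targets (columns). Y is an n_drugs x n_targets 0/1 matrix."""
    # Drug-side propagation: each drug's profile is a similarity-weighted
    # mean of all drug profiles in Y.
    drug_side = S_drug @ Y / S_drug.sum(axis=1, keepdims=True)
    # Target-side propagation: analogous averaging over target columns of Y.
    target_side = Y @ S_target / S_target.sum(axis=0, keepdims=True)
    # Combine both views; higher scores suggest more plausible interactions.
    return 0.5 * (drug_side + target_side)

# Toy example with 3 drugs and 4 targets (all values are made up).
Y = np.array([[1, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)        # known interactions
S_drug = np.array([[1.0, 0.2, 0.9],
                   [0.2, 1.0, 0.1],
                   [0.9, 0.1, 1.0]])             # drug-drug similarity
S_target = np.array([[1.0, 0.1, 0.3, 0.8],
                     [0.1, 1.0, 0.2, 0.1],
                     [0.3, 0.2, 1.0, 0.4],
                     [0.8, 0.1, 0.4, 1.0]])      # target-target similarity

scores = weighted_profile_scores(Y, S_drug, S_target)
# Rank the pairs that are not yet known interactions by descending score.
candidates = [(d, t, scores[d, t]) for d in range(Y.shape[0])
              for t in range(Y.shape[1]) if Y[d, t] == 0]
print(sorted(candidates, key=lambda x: -x[2])[:3])
```

In practice the cited methods replace this naive averaging with kernel regression, matrix factorization, random walks with restart, Bayesian ranking, or ensemble classifiers, but the input structure (a known-interaction matrix plus drug and target similarity matrices) is essentially the same.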
Scientific Reports | 31263186 | PMC6603029 | 10.1038/s41598-019-45053-x | Radiomics based likelihood functions for cancer diagnosis | Radiomic features based classifiers and neural networks have shown promising results in tumor classification. The classification performance can be further improved greatly by exploring and incorporating the discriminative features towards cancer into mathematical models. In this research work, we have developed two radiomics driven likelihood models in Computed Tomography(CT) images to classify lung, colon, head and neck cancer. Initially, two diagnostic radiomic signatures were derived by extracting 105 3-D features from 200 lung nodules and by selecting the features with higher average scores from several supervised as well as unsupervised feature ranking algorithms. The signatures obtained from both the ranking approaches were integrated into two mathematical likelihood functions for tumor classification. Validation of the likelihood functions was performed on 265 public data sets of lung, colon, head and neck cancer with high classification rate. The achieved results show robustness of the models and suggest that diagnostic mathematical functions using general tumor phenotype can be successfully developed for cancer diagnosis. | Related WorkRadiomic features are quantitative features which are computed to characterize a disease in the medical images. The role of radiomic features in tumor classification has been researched from the broader perspectives of neural networks and machine learning algorithms. Radiomics based classification using machine learning algorithms is a more popular approach and investigates a set of features helpful towards diagnosis followed by the application of classifiers. In this regard, the relationship between radiomic features and the tumor histology was investigated by Wu et al.8 by applying classifiers of random Forests, naive Bayes, and K-nearest neighbors to the radiomic features. Chen et al.9 proposed a radiomics signature of four Laws features including minimum, energy, skewness and uniformity and employed Sequential Forward Selection (SFS) and Support Vector Machine (SVM) classifiers for nodule classification. A hierarchical clustering method was used by Choi et al.10 to identify bounding box anterior–posterior dimension and the standard deviation of inverse difference moment as the top two distinct features for lung cancer diagnosis.Another progressive approach towards tumor classification is the development of radiomics based efficient neural networks. Liu et al.11 proposed a multi-view convolutional neural networks (MV-CNN) which used multiple views as input channels, to classify the lung nodules in CT images. Causey et al.12 proposed a classification neural network based on deep learning features of a lung nodule in CT images. A computer aided diagnosis system was proposed by Kumar et al.13 which extracted deep features using an auto-encoder coupled with a decision tree classifier to classify the benign and malignant lung nodules.Contribution of the proposed workThe proposed research work contributes radiomics based likelihood functions for the diagnosis of cancer in contrast to the previously proposed classification methods in8–13 which were motivated by machine learning and neural networks. A mathematical solution incorporating radiomics is investigated to address the tumor classification problem. 
The proposed computational approach enables accurate and fast classification of a tumor as malignant or benign in CT images and can be further taken up by advanced mathematical models to gain in-depth insights into the disease. To formulate the likelihood functions, diagnostic radiomic signatures were developed which can efficiently detect lung, colon, head and neck cancer. The radiomic signatures were incorporated into mathematical functions which were in turn employed for tumor classification. The performance of the radiomic signatures suggests that a radiomic signature can successfully classify a tumor based on the general tumor phenotype. In addition, the research work has intuitively ranked the 3-D radiomic features of a tumor according to their diagnostic power towards cancer. Two feature ranking lists were prepared using the average score obtained from seven supervised and six unsupervised ranking algorithms. The presented selection approach resulted in accurate feature ranking as it performed feature ranking using multiple ranking algorithms and assigned each algorithm equal weight towards feature selection. In past studies, feature selection was done by employing a single renowned feature selection algorithm, potentially subjecting the ranking to errors8,10. This is particularly true since there is no study available in the literature regarding the relative performance of contemporary feature selection algorithms. Hence, the choice of feature selection algorithm could affect the feature ranks for cancer diagnosis. The assigned rank scores in our study were validated by integrating the two most highly ranked features into the proposed likelihood functions for cancer diagnosis. | [
"29313949",
"21075475",
"22949379",
"19138936",
"27064691",
"29457229",
"21135434",
"29092951"
] | [
{
"pmid": "29313949",
"title": "Cancer statistics, 2018.",
"abstract": "Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data, available through 2014, were collected by the Surveillance, Epidemiology, and End Results Program; the National Program of Cancer Registries; and the North American Association of Central Cancer Registries. Mortality data, available through 2015, were collected by the National Center for Health Statistics. In 2018, 1,735,350 new cancer cases and 609,640 cancer deaths are projected to occur in the United States. Over the past decade of data, the cancer incidence rate (2005-2014) was stable in women and declined by approximately 2% annually in men, while the cancer death rate (2006-2015) declined by about 1.5% annually in both men and women. The combined cancer death rate dropped continuously from 1991 to 2015 by a total of 26%, translating to approximately 2,378,600 fewer cancer deaths than would have been expected if death rates had remained at their peak. Of the 10 leading causes of death, only cancer declined from 2014 to 2015. In 2015, the cancer death rate was 14% higher in non-Hispanic blacks (NHBs) than non-Hispanic whites (NHWs) overall (death rate ratio [DRR], 1.14; 95% confidence interval [95% CI], 1.13-1.15), but the racial disparity was much larger for individuals aged <65 years (DRR, 1.31; 95% CI, 1.29-1.32) compared with those aged ≥65 years (DRR, 1.07; 95% CI, 1.06-1.09) and varied substantially by state. For example, the cancer death rate was lower in NHBs than NHWs in Massachusetts for all ages and in New York for individuals aged ≥65 years, whereas for those aged <65 years, it was 3 times higher in NHBs in the District of Columbia (DRR, 2.89; 95% CI, 2.16-3.91) and about 50% higher in Wisconsin (DRR, 1.78; 95% CI, 1.56-2.02), Kansas (DRR, 1.51; 95% CI, 1.25-1.81), Louisiana (DRR, 1.49; 95% CI, 1.38-1.60), Illinois (DRR, 1.48; 95% CI, 1.39-1.57), and California (DRR, 1.45; 95% CI, 1.38-1.54). Larger racial inequalities in young and middle-aged adults probably partly reflect less access to high-quality health care. CA Cancer J Clin 2018;68:7-30. © 2018 American Cancer Society."
},
{
"pmid": "21075475",
"title": "Sojourn time and lead time projection in lung cancer screening.",
"abstract": "OBJECTIVES\nWe investigate screening sensitivity, transition probability and sojourn time in lung cancer screening for male heavy smokers using the Mayo Lung Project data. We also estimate the lead time distribution, its property, and the projected effect of taking regular chest X-rays for lung cancer detection.\n\n\nMETHODS\nWe apply the statistical method developed by Wu et al. [1] using the Mayo Lung Project (MLP) data, to make Bayesian inference for the screening test sensitivity, the age-dependent transition probability from disease-free to preclinical state, and the sojourn time distribution, for male heavy smokers in a periodic screening program. We then apply the statistical method developed by Wu et al. [2] using the Bayesian posterior samples from the MLP data to make inference for the lead time, the time of diagnosis advanced by screening for male heavy smokers. The lead time is distributed as a mixture of a point mass at zero and a piecewise continuous distribution, which corresponds to the probability of no-early-detection, and the probability distribution of the early diagnosis time. We present estimates of these two measures for male heavy smokers by simulations.\n\n\nRESULTS\nThe posterior sensitivity is almost symmetric, with posterior mean 0.89, and posterior median 0.91; the 95% highest posterior density (HPD) interval is (0.72, 0.98). The posterior mean sojourn time is 2.24 years, with a posterior median of 2.20 years for male heavy smokers. The 95% HPD interval for the mean sojourn time is (1.57, 3.35) years. The age-dependent transition probability is not a monotone function of age; it has a single maximum at age 68. The mean lead time increases as the screening time interval decreases. The standard error of the lead time also increases as the screening time interval decreases.\n\n\nCONCLUSION\nAlthough the mean sojourn time for male heavy smokers is longer than expected, the predictive estimation of the lead time is much shorter. This may provide policy makers important information on the effectiveness of the chest X-rays and sputum cytology in lung cancer early detection."
},
{
"pmid": "22949379",
"title": "A multifactorial likelihood model for MMR gene variant classification incorporating probabilities based on sequence bioinformatics and tumor characteristics: a report from the Colon Cancer Family Registry.",
"abstract": "Mismatch repair (MMR) gene sequence variants of uncertain clinical significance are often identified in suspected Lynch syndrome families, and this constitutes a challenge for both researchers and clinicians. Multifactorial likelihood model approaches provide a quantitative measure of MMR variant pathogenicity, but first require input of likelihood ratios (LRs) for different MMR variation-associated characteristics from appropriate, well-characterized reference datasets. Microsatellite instability (MSI) and somatic BRAF tumor data for unselected colorectal cancer probands of known pathogenic variant status were used to derive LRs for tumor characteristics using the Colon Cancer Family Registry (CFR) resource. These tumor LRs were combined with variant segregation within families, and estimates of prior probability of pathogenicity based on sequence conservation and position, to analyze 44 unclassified variants identified initially in Australasian Colon CFR families. In addition, in vitro splicing analyses were conducted on the subset of variants based on bioinformatic splicing predictions. The LR in favor of pathogenicity was estimated to be ~12-fold for a colorectal tumor with a BRAF mutation-negative MSI-H phenotype. For 31 of the 44 variants, the posterior probabilities of pathogenicity were such that altered clinical management would be indicated. Our findings provide a working multifactorial likelihood model for classification that carefully considers mode of ascertainment for gene testing."
},
{
"pmid": "19138936",
"title": "A prediction model for lung cancer diagnosis that integrates genomic and clinical features.",
"abstract": "Lung cancer is the leading cause of cancer death due, in part, to lack of early diagnostic tools. Bronchoscopy represents a relatively noninvasive initial diagnostic test in smokers with suspect disease, but it has low sensitivity. We have reported a gene expression profile in cytologically normal large airway epithelium obtained via bronchoscopic brushings, which is a sensitive and specific biomarker for lung cancer. Here, we evaluate the independence of the biomarker from other clinical risk factors and determine the performance of a clinicogenomic model that combines clinical factors and gene expression. Training (n = 76) and test (n = 62) sets consisted of smokers undergoing bronchoscopy for suspicion of lung cancer at five medical centers. Logistic regression models describing the likelihood of having lung cancer using the biomarker, clinical factors, and these data combined were tested using the independent set of patients with nondiagnostic bronchoscopies. The model predictions were also compared with physicians' clinical assessment. The gene expression biomarker is associated with cancer status in the combined clinicogenomic model (P < 0.005). There is a significant difference in performance of the clinicogenomic relative to the clinical model (P < 0.05). In the test set, the clinicogenomic model increases sensitivity and negative predictive value to 100% and results in higher specificity (91%) and positive predictive value (81%) compared with other models. The clinicogenomic model has high accuracy where physician assessment is most uncertain. The airway gene expression biomarker provides information about the likelihood of lung cancer not captured by clinical factors, and the clinicogenomic model has the highest prediction accuracy. These findings suggest that use of the clinicogenomic model may expedite more invasive testing and definitive therapy for smokers with lung cancer and reduce invasive diagnostic procedures for individuals without lung cancer."
},
{
"pmid": "27064691",
"title": "Exploratory Study to Identify Radiomics Classifiers for Lung Cancer Histology.",
"abstract": "BACKGROUND\nRadiomics can quantify tumor phenotypic characteristics non-invasively by applying feature algorithms to medical imaging data. In this study of lung cancer patients, we investigated the association between radiomic features and the tumor histologic subtypes (adenocarcinoma and squamous cell carcinoma). Furthermore, in order to predict histologic subtypes, we employed machine-learning methods and independently evaluated their prediction performance.\n\n\nMETHODS\nTwo independent radiomic cohorts with a combined size of 350 patients were included in our analysis. A total of 440 radiomic features were extracted from the segmented tumor volumes of pretreatment CT images. These radiomic features quantify tumor phenotypic characteristics on medical images using tumor shape and size, intensity statistics, and texture. Univariate analysis was performed to assess each feature's association with the histological subtypes. In our multivariate analysis, we investigated 24 feature selection methods and 3 classification methods for histology prediction. Multivariate models were trained on the training cohort and their performance was evaluated on the independent validation cohort using the area under ROC curve (AUC). Histology was determined from surgical specimen.\n\n\nRESULTS\nIn our univariate analysis, we observed that fifty-three radiomic features were significantly associated with tumor histology. In multivariate analysis, feature selection methods ReliefF and its variants showed higher prediction accuracy as compared to other methods. We found that Naive Baye's classifier outperforms other classifiers and achieved the highest AUC (0.72; p-value = 2.3 × 10(-7)) with five features: Stats_min, Wavelet_HLL_rlgl_lowGrayLevelRunEmphasis, Wavelet_HHL_stats_median, Wavelet_HLL_stats_skewness, and Wavelet_HLH_glcm_clusShade.\n\n\nCONCLUSION\nHistological subtypes can influence the choice of a treatment/therapy for lung cancer patients. We observed that radiomic features show significant association with the lung tumor histology. Moreover, radiomics-based multivariate classifiers were independently validated for the prediction of histological subtypes. Despite achieving lower than optimal prediction accuracy (AUC 0.72), our analysis highlights the impressive potential of non-invasive and cost-effective radiomics for precision medicine. Further research in this direction could lead us to optimal performance and therefore to clinical applicability, which could enhance the efficiency and efficacy of cancer care."
},
{
"pmid": "29457229",
"title": "Radiomics analysis of pulmonary nodules in low-dose CT for early detection of lung cancer.",
"abstract": "PURPOSE\nTo develop a radiomics prediction model to improve pulmonary nodule (PN) classification in low-dose CT. To compare the model with the American College of Radiology (ACR) Lung CT Screening Reporting and Data System (Lung-RADS) for early detection of lung cancer.\n\n\nMETHODS\nWe examined a set of 72 PNs (31 benign and 41 malignant) from the Lung Image Database Consortium image collection (LIDC-IDRI). One hundred three CT radiomic features were extracted from each PN. Before the model building process, distinctive features were identified using a hierarchical clustering method. We then constructed a prediction model by using a support vector machine (SVM) classifier coupled with a least absolute shrinkage and selection operator (LASSO). A tenfold cross-validation (CV) was repeated ten times (10 × 10-fold CV) to evaluate the accuracy of the SVM-LASSO model. Finally, the best model from the 10 × 10-fold CV was further evaluated using 20 × 5- and 50 × 2-fold CVs.\n\n\nRESULTS\nThe best SVM-LASSO model consisted of only two features: the bounding box anterior-posterior dimension (BB_AP) and the standard deviation of inverse difference moment (SD_IDM). The BB_AP measured the extension of a PN in the anterior-posterior direction and was highly correlated (r = 0.94) with the PN size. The SD_IDM was a texture feature that measured the directional variation of the local homogeneity feature IDM. Univariate analysis showed that both features were statistically significant and discriminative (P = 0.00013 and 0.000038, respectively). PNs with larger BB_AP or smaller SD_IDM were more likely malignant. The 10 × 10-fold CV of the best SVM model using the two features achieved an accuracy of 84.6% and 0.89 AUC. By comparison, Lung-RADS achieved an accuracy of 72.2% and 0.77 AUC using four features (size, type, calcification, and spiculation). The prediction improvement of SVM-LASSO comparing to Lung-RADS was statistically significant (McNemar's test P = 0.026). Lung-RADS misclassified 19 cases because it was mainly based on PN size, whereas the SVM-LASSO model correctly classified 10 of these cases by combining a size (BB_AP) feature and a texture (SD_IDM) feature. The performance of the SVM-LASSO model was stable when leaving more patients out with five- and twofold CVs (accuracy 84.1% and 81.6%, respectively).\n\n\nCONCLUSION\nWe developed an SVM-LASSO model to predict malignancy of PNs with two CT radiomic features. We demonstrated that the model achieved an accuracy of 84.6%, which was 12.4% higher than Lung-RADS."
},
{
"pmid": "21135434",
"title": "Feature Selection and Kernel Learning for Local Learning-Based Clustering.",
"abstract": "The performance of the most clustering algorithms highly relies on the representation of data in the input space or the Hilbert space of kernel methods. This paper is to obtain an appropriate data representation through feature selection or kernel learning within the framework of the Local Learning-Based Clustering (LLC) (Wu and Schölkopf 2006) method, which can outperform the global learning-based ones when dealing with the high-dimensional data lying on manifold. Specifically, we associate a weight to each feature or kernel and incorporate it into the built-in regularization of the LLC algorithm to take into account the relevance of each feature or kernel for the clustering. Accordingly, the weights are estimated iteratively in the clustering process. We show that the resulting weighted regularization with an additional constraint on the weights is equivalent to a known sparse-promoting penalty. Hence, the weights of those irrelevant features or kernels can be shrunk toward zero. Extensive experiments show the efficacy of the proposed methods on the benchmark data sets."
},
{
"pmid": "29092951",
"title": "Computational Radiomics System to Decode the Radiographic Phenotype.",
"abstract": "Radiomics aims to quantify phenotypic characteristics on medical imaging through the use of automated algorithms. Radiomic artificial intelligence (AI) technology, either based on engineered hard-coded algorithms or deep learning methods, can be used to develop noninvasive imaging-based biomarkers. However, lack of standardized algorithm definitions and image processing severely hampers reproducibility and comparability of results. To address this issue, we developed PyRadiomics, a flexible open-source platform capable of extracting a large panel of engineered features from medical images. PyRadiomics is implemented in Python and can be used standalone or using 3D Slicer. Here, we discuss the workflow and architecture of PyRadiomics and demonstrate its application in characterizing lung lesions. Source code, documentation, and examples are publicly available at www.radiomics.io With this platform, we aim to establish a reference standard for radiomic analyses, provide a tested and maintained resource, and to grow the community of radiomic developers addressing critical needs in cancer research. Cancer Res; 77(21); e104-7. ©2017 AACR."
}
] |
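The radiomics entry above builds its diagnostic signature by ranking 3-D features with several supervised and unsupervised selectors and keeping the features with the best average score. Below is a minimal sketch of that consensus-ranking step, assuming scikit-learn scorers and a synthetic stand-in for the 200-nodule, 105-feature matrix; the particular scorers, the equal-weight rank averaging, and the choice of the top two features are illustrative assumptions rather than the paper's exact protocol.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif

# Synthetic stand-in for a radiomic feature matrix: 200 nodules x 105 features.
X, y = make_classification(n_samples=200, n_features=105, n_informative=10,
                           random_state=0)

# Two supervised (label-aware) scorers and one unsupervised scorer (variance).
scores = {
    "anova_f": f_classif(X, y)[0],
    "mutual_info": mutual_info_classif(X, y, random_state=0),
    "variance": X.var(axis=0),
}

# Convert each score vector to ranks (1 = most discriminative) and average
# them, giving every ranking method equal weight in the consensus.
rank_matrix = np.vstack([rankdata(-s) for s in scores.values()])
consensus = rank_matrix.mean(axis=0)

# The features with the lowest average rank form the candidate signature.
top_k = np.argsort(consensus)[:2]
print("Top-ranked feature indices:", top_k, "average ranks:", consensus[top_k])
```

Additional rankers (for example ReliefF or clustering-based selectors mentioned in the related abstracts) could be added to the `scores` dictionary without changing the averaging step.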
JMIR mHealth and uHealth | 31215514 | PMC6604512 | 10.2196/14239 | Understanding the Role of Healthy Eating and Fitness Mobile Apps in the Formation of Maladaptive Eating and Exercise Behaviors in Young People | BackgroundHealthy eating and fitness mobile apps are designed to promote healthier living. However, for young people, body dissatisfaction is commonplace, and these types of apps can become a source of maladaptive eating and exercise behaviors. Furthermore, such apps are designed to promote continuous engagement, potentially fostering compulsive behaviors.ObjectiveThe aim of this study was to identify potential risks around healthy eating and fitness app use and negative experience and behavior formation among young people and to inform the understanding around how current commercial healthy eating and fitness apps on the market may, or may not, be exasperating such behaviors.MethodsOur research was conducted in 2 phases. Through a survey (n=106) and 2 workshops (n=8), we gained an understanding of young people’s perceptions of healthy eating and fitness apps and any potential harm that their use might have; we then explored these further through interviews with experts (n=3) in eating disorder and body image. Using insights drawn from this initial phase, we then explored the degree to which leading apps are preventing, or indeed contributing to, the formation of maladaptive eating and exercise behaviors. We conducted a review of the top 100 healthy eating and fitness apps on the Google Play Store to find out whether or not apps on the market have the potential to elicit maladaptive eating and exercise behaviors.ResultsParticipants were aged between 18 and 25 years and had current or past experience of using healthy eating and fitness apps. Almost half of our survey participants indicated that they had experienced some form of negative experiences and behaviors through their app use. Our findings indicate a wide range of concerns around the wider impact of healthy eating and fitness apps on individuals at risk of maladaptive eating and exercise behavior, including (1) guilt formation because of the nature of persuasive models, (2) social isolation as a result of personal regimens around diet and fitness goals, (3) fear of receiving negative responses when targets are not achieved, and (4) feelings of being controlled by the app. The app review identified logging functionalities available across the apps that are used to promote the sustained use of the app. However, a significant number of these functionalities were seen to have the potential to cause negative experiences and behaviors.ConclusionsIn this study, we offer a set of responsibility guidelines for future researchers, designers, and developers of digital technologies aiming to support healthy eating and fitness behaviors. Our study highlights the necessity for careful considerations around the design of apps that promote weight loss or body modification through fitness training, especially when they are used by young people who are vulnerable to the development of poor body image and maladaptive eating and exercise behaviors. | Background and Related WorkBody dissatisfaction, the subjective experience of negative thoughts and feelings toward one’s own body [1], is so prevalent among young people (defined by the United Nations as those aged 15 to 24 years [2]) in modern Western societies that it is regarded as normative discontent [3,4]. 
Body dissatisfaction has been linked with a number of maladaptive eating and exercise behaviors, including restrained eating practices, consuming less fruit and vegetables, low levels of physical activity, excessive exercise, binge-purge cycles, and anabolic steroid use [5,6]. Furthermore, body dissatisfaction is regarded as both an important risk factor for, and symptomatic of, clinical eating disorders, such as anorexia and bulimia [7,8], the majority of which develop during adolescence and early adulthood [9]. The causes of body dissatisfaction and associated maladaptive eating and exercise behaviors are diverse, with research implicating a combination of biological, psychological, and sociocultural factors [7,10,11]. Sociocultural theories emphasize the role of specific agents, such as parents, peers, and the media, in shaping negative attitudes toward the body [12], with body dissatisfaction arising because of perceived pressure from sociocultural agents to conform to an unrealistic, culturally defined body and beauty ideal. For women, this has been described as thin and toned, yet curvaceous with pert breasts and buttocks, whereas for men it is muscular yet lean with little body fat [13]. The complex and unrealistic nature of this ideal makes it impossible for the majority of young people to achieve, leading to negative feelings around their own bodies [7,12]. In turn, these feelings can motivate an engagement in maladaptive eating and exercise behaviors, aimed at changing the body [7,12]. Perpetuating these social and emotional pressures is the fact that many of these behaviors (eg, clean eating, over-exercising, and cutting out food groups) have become the cultural norm, with magazines and celebrities on social media advocating calorie restriction as an everyday part of how we think about food [14]. Following this longstanding cycle of diet culture, parents, who themselves engage in dieting behaviors, can be the ones who convey messages on calorie restriction and good versus bad foods to children from a young age [15]. Thus, to many young people growing up in this environment, these ways of thinking about food and exercise are seen as the norm and are often engaged with regardless of whether the young person is overweight or not [16]. Ironically, calorie restriction has been demonstrated to lead to weight gain and eating disorders over time in young people [17]. In recent years, the emergence and increasing availability of new digital media and technologies have drastically changed the social landscape. As a consequence, theories of body dissatisfaction and maladaptive eating and exercise behaviors have needed to be adapted. Research in this field has typically focused on how social media influences how young people think, feel, and behave with regard to their body. For example, research has highlighted the role of social media image sharing practices in body dissatisfaction [18]; the further normalization of maladaptive body shaping strategies through user-generated social media content [13]; and the use of social media spaces to create communities centered around maladaptive eating and exercise behaviors [19]. | [
"11752484",
"21932970",
"12210660",
"16857537",
"26095891",
"23658086",
"18089156",
"23844558",
"10897085",
"16567152",
"26311205",
"29567619",
"28123997",
"29025694",
"30550507",
"30550506",
"15942543",
"21244144",
"22584372",
"28646889",
"26678569",
"27480144",
"29563080",
"29273575",
"25760773",
"30021706",
"16204405",
"25921657",
"28193554",
"26280376",
"26878220",
"18444705",
"11950103",
"25355131",
"25130682",
"22385782",
"23481424",
"21513547"
] | [
{
"pmid": "11752484",
"title": "Causes of eating disorders.",
"abstract": "Anorexia nervosa and bulimia nervosa have emerged as the predominant eating disorders. We review the recent research evidence pertaining to the development of these disorders, including sociocultural factors (e.g., media and peer influences), family factors (e.g., enmeshment and criticism), negative affect, low self-esteem, and body dissatisfaction. Also reviewed are cognitive and biological aspects of eating disorders. Some contributory factors appear to be necessary for the appearance of eating disorders, but none is sufficient. Eating disorders may represent a way of coping with problems of identity and personal control."
},
{
"pmid": "21932970",
"title": "It's not just a \"woman thing:\" the current state of normative discontent.",
"abstract": "This study assessed \"normative discontent,\" the concept that most women experience weight dissatisfaction, as an emerging societal stereotype for women and men (Rodin, Silberstein, & Streigel-Moore, 1984). Participants (N = 472) completed measures of stereotypes, eating disorders, and body image. Normative discontent stereotypes were pervasive for women and men. Endorsing stereotypes varied by sex and participants' own disturbance, with trends towards eating disorder symptomotology being positively correlated with stereotype endorsement. Individuals with higher levels of body image and eating disturbance may normalize their behavior by perceiving that most people share their experiences. Future research needs to test prevention and intervention strategies that incorporate the discrepancies between body image/eating-related stereotypes and reality with focus on preventing normalization of such experiences."
},
{
"pmid": "12210660",
"title": "Relationship among body image, exercise behavior, and exercise dependence symptoms.",
"abstract": "OBJECTIVE\nThe purpose of the present study was to examine the relationship among body image, exercise behavior, body mass index (BMI), and primary exercise dependence symptoms in physically active individuals.\n\n\nMETHOD\nMale and female university students (N = 474) completed self-report measures of exercise behavior, height, weight, exercise dependence symptoms, social physique anxiety, and body satisfaction.\n\n\nRESULTS\nHierarchical multiple regressions with forced block entry by gender were conducted to examine the effects of exercise behavior, BMI, and exercise dependence symptoms on body satisfaction and social physique anxiety. For females, BMI was the strongest positive predictor of body dissatisfaction and social physique anxiety. For males, exercise behavior was the strongest negative predictor of body dissatisfaction and social physique anxiety.\n\n\nDISCUSSION\nIt was concluded that after controlling for the effects of BMI and exercise behavior, primary exercise dependence symptoms were not strong predictors on body image, especially for females."
},
{
"pmid": "16857537",
"title": "Does body satisfaction matter? Five-year longitudinal associations between body satisfaction and health behaviors in adolescent females and males.",
"abstract": "PURPOSE\nThis study addresses the question, \"Does body satisfaction matter?\" by examining longitudinal associations between body satisfaction and weight-related health-promoting and health-compromising behaviors five years later among adolescents.\n\n\nMETHODS\nProject EAT-II followed an ethnically and socioeconomically diverse sample of 2516 adolescents from 1999 (Time 1) to 2004 (Time 2). Associations between body satisfaction at Time 1 and health behaviors at Time 2 were examined, adjusting for sociodemographic characteristics and Time 1 health behaviors, with and without adjustment for body mass index (BMI).\n\n\nRESULTS\nIn females, lower body satisfaction predicted higher levels of dieting, unhealthy and very unhealthy weight control behaviors and binge eating, and lower levels of physical activity and fruit and vegetable intake. After adjusting for BMI, associations between body satisfaction and dieting, very unhealthy weight control behaviors, and physical activity remained statistically significant. In males, lower body satisfaction predicted higher levels of dieting, healthy, unhealthy, and very unhealthy weight control behaviors, binge eating, and smoking, and lower levels of physical activity. After adjusting for BMI, associations between body satisfaction and dieting, unhealthy weight control behavior, and binge eating remained statistically significant.\n\n\nCONCLUSIONS\nThe study findings indicate that, in general, lower body satisfaction does not serve as a motivator for engaging in healthy weight management behaviors, but rather predicts the use of behaviors that may place adolescents at risk for weight gain and poorer overall health. Interventions with adolescents should strive to enhance body satisfaction and avoid messages likely to lead to decreases in body satisfaction."
},
{
"pmid": "26095891",
"title": "Research Review: What we have learned about the causes of eating disorders - a synthesis of sociocultural, psychological, and biological research.",
"abstract": "BACKGROUND\nEating disorders are severe psychiatric disorders with a complex etiology involving transactions among sociocultural, psychological, and biological influences. Most research and reviews, however, focus on only one level of analysis. To address this gap, we provide a qualitative review and summary using an integrative biopsychosocial approach.\n\n\nMETHODS\nWe selected variables for which there were available data using integrative methodologies (e.g., twin studies, gene-environment interactions) and/or data at the biological and behavioral level (e.g., neuroimaging). Factors that met these inclusion criteria were idealization of thinness, negative emotionality, perfectionism, negative urgency, inhibitory control, cognitive inflexibility, serotonin, dopamine, ovarian hormones. Literature searches were conducted using PubMed. Variables were classified as risk factors or correlates of eating disorder diagnoses and disordered eating symptoms using Kraemer et al.'s (1997) criteria.\n\n\nFINDINGS\nSociocultural idealization of thinness variables (media exposure, pressures for thinness, thin-ideal internalization, thinness expectancies) and personality traits (negative emotionality, perfectionism, negative urgency) attained 'risk status' for eating disorders and/or disordered eating symptoms. Other factors were identified as correlates of eating pathology or were not classified given limited data. Effect sizes for risk factors and correlates were generally small-to-moderate in magnitude.\n\n\nCONCLUSIONS\nMultiple biopsychosocial influences are implicated in eating disorders and/or disordered eating symptoms and several can now be considered established risk factors. Data suggest that psychological and environmental factors interact with and influence the expression of genetic risk to cause eating pathology. Additional studies that examine risk variables across multiple levels of analysis and that consider specific transactional processes amongst variables are needed to further elucidate the intersection of sociocultural, psychological, and biological influences on eating disorders."
},
{
"pmid": "23658086",
"title": "Psychosocial risk factors for eating disorders.",
"abstract": "OBJECTIVE\nOne goal in identifying psychosocial risk factors is to discover opportunities for intervention. The purpose of this review is to examine psychosocial risk factors for disordered eating, placing research findings in the larger context of how etiological models for eating disorders can be transformed into models for intervention.\n\n\nMETHOD\nA qualitative literature review was conducted focusing on psychological and social factors that increase the risk for developing eating disorders, with an emphasis on well-replicated findings from prospective longitudinal studies.\n\n\nRESULTS\nEpidemiological, cross-cultural, and longitudinal studies underscore the importance of the idealization of thinness and resulting weight concerns as psychosocial risk factors for eating disorders. Personality factors such as negative emotionality and perfectionism contribute to the development of eating disorders but may do so indirectly by increasing susceptibility to internalize the thin ideal or by influencing selection of peer environment. During adolescence, peers represent self-selected environments that influence risk.\n\n\nDISCUSSION\nPeer context may represent a key opportunity for intervention, as peer groups represent the nexus in which individual differences in psychological risk factors shape the social environment and social environment shapes psychological risk factors. Thus, peer-based interventions that challenge internalization of the thin ideal can protect against the development of eating pathology."
},
{
"pmid": "18089156",
"title": "An evaluation of the Tripartite Influence Model of body dissatisfaction and eating disturbance with adolescent girls.",
"abstract": "The Tripartite Influence Model of body image and eating disturbance proposes that three formative influences (peer, parents, and media) affect body image and eating problems through two mediational mechanisms: internalization of the thin-ideal and appearance comparison processes. The current study evaluated this model in a sample of 325 sixth through eighth grade girls. Simple path analyses indicated that internalization and comparison fully mediated the relationship between parental influence and body dissatisfaction and partially mediated the relationship between peer influence and body dissatisfaction. Additionally, internalization and comparison partially mediated the relationship between media influence and body dissatisfaction. Six a priori SEM models based on the full Tripartite Influence Model were also evaluated. A resulting model was found to be an adequate fit to the data, supporting the viability of the Tripartite Model as a useful framework for understanding processes that may predispose young women to develop body image disturbances and eating dysfunction."
},
{
"pmid": "23844558",
"title": "Weighing women down: messages on weight loss and body shaping in editorial content in popular women's health and fitness magazines.",
"abstract": "Exposure to idealized body images has been shown to lower women's body satisfaction. Yet some studies found the opposite, possibly because real-life media (as opposed to image-only stimuli) often embed such imagery in messages that suggest thinness is attainable. Drawing on social cognitive theory, the current content analysis investigated editorial body-shaping and weight-loss messages in popular women's health and fitness magazines. About five thousand magazine pages published in top-selling U.S. women's health and fitness magazines in 2010 were examined. The findings suggest that body shaping and weight loss are a major topic in these magazines, contributing to roughly one-fifth of all editorial content. Assessing standards of motivation and conduct, as well as behaviors promoted by the messages, the findings reflect overemphasis on appearance over health and on exercise-related behaviors over caloric reduction behaviors and the combination of both behaviors. These accentuations are at odds with public health recommendations."
},
{
"pmid": "10897085",
"title": "The emergence of dieting among female adolescents: age, body mass index, and seasonal effects.",
"abstract": "OBJECTIVE\nThe purpose of this brief report is to document the emergence of dieting in adolescent girls across a 2-year period, and to establish whether the changes in dieting status were related to the girls' age, body mass index, or to seasonal effects.\n\n\nMETHOD\nAs part of a large-scale longitudinal study concerned with adolescent health and well-being, 478 girls, initially aged 12 to 16 years old, completed Strong and Huon's (Eating Disorders 5:97-104, 1997) dieting status measure on four separate occasions across a 2-year period.\n\n\nRESULTS\nA total of 273 girls (57.1%) identified themselves as nondieters when we first visited their school. Of those, approximately 20% indicated that they had begun to diet on one of the subsequent testing occasions. The emergence of dieting was observed to occur more in the 13- and 14-year-olds than in any other age group. Higher body mass index was not associated with the initiation of dieting as some underweight, and even very underweight girls, began to diet.\n\n\nDISCUSSION\nThe emergence of dieting occurs in early adolescence and might be triggered by concerns about changes in body shape."
},
{
"pmid": "16567152",
"title": "Obesity, disordered eating, and eating disorders in a longitudinal study of adolescents: how do dieters fare 5 years later?",
"abstract": "OBJECTIVE\nTo determine if adolescents who report dieting and different weight-control behaviors are at increased or decreased risk for gains in body mass index, overweight status, binge eating, extreme weight-control behaviors, and eating disorders 5 years later.\n\n\nDESIGN\nPopulation-based 5-year longitudinal study.\n\n\nPARTICIPANTS\nAdolescents (N=2,516) from diverse ethnic and socioeconomic backgrounds who completed Project EAT (Eating Among Teens) surveys in 1999 (Time 1) and 2004 (Time 2).\n\n\nMAIN OUTCOME MEASURES\nWeight status, binge eating, extreme weight control, and self-reported eating disorder.\n\n\nSTATISTICAL ANALYSIS\nMultiple linear and logistic regressions.\n\n\nRESULTS\nAdolescents using unhealthful weight-control behaviors at Time 1 increased their body mass index by about 1 unit more than adolescents not using any weight-control behaviors and were at approximately three times greater risk for being overweight at Time 2 (odds ratio [OR]=2.7 for girls; OR=3.2 for boys). Adolescents using unhealthful weight-control behaviors were also at increased risk for binge eating with loss of control (OR=6.4 for girls; OR=5.9 for boys) and for extreme weight-control behaviors such as self-induced vomiting and use of diet pills, laxatives, and diuretics (OR=2.5 for girls; OR=4.8 for boys) 5 years later, compared with adolescents not using any weight-control behaviors.\n\n\nCONCLUSIONS\nDieting and unhealthful weight-control behaviors predict outcomes related to obesity and eating disorders 5 years later. A shift away from dieting and drastic weight-control measures toward the long-term implementation of healthful eating and physical activity behaviors is needed to prevent obesity and eating disorders in adolescents."
},
{
"pmid": "26311205",
"title": "Photoshopping the selfie: Self photo editing and photo investment are associated with body dissatisfaction in adolescent girls.",
"abstract": "OBJECTIVE\nSocial media engagement by adolescent girls is high. Despite its appeal, there are potential negative consequences for body dissatisfaction and disordered eating from social media use. This study aimed to examine, in a cross-sectional design, the relationship between social media use in general, and social media activities related to taking \"selfies\" and sharing specifically, with overvaluation of shape and weight, body dissatisfaction, and dietary restraint.\n\n\nMETHOD\nParticipants were 101 grade seven girls (M(age) = 13.1, SD = 0.3), who completed self-report questionnaires of social media use and body-related and eating concerns measures.\n\n\nRESULTS\nResults showed that girls who regularly shared self-images on social media, relative to those who did not, reported significantly higher overvaluation of shape and weight, body dissatisfaction, dietary restraint, and internalization of the thin ideal. In addition, among girls who shared photos of themselves on social media, higher engagement in manipulation of and investment in these photos, but not higher media exposure, were associated with greater body-related and eating concerns, including after accounting for media use and internalization of the thin ideal.\n\n\nDISCUSSION\nAlthough cross-sectional, these findings suggest the importance of social media activities for body-related and eating concerns as well as potential avenues for targeted social-media-based intervention."
},
{
"pmid": "29567619",
"title": "Tweeting weight loss: A comparison of #thinspiration and #fitspiration communities on Twitter.",
"abstract": "Thinspiration and fitspiration represent contemporary online trends designed to inspire viewers towards the thin ideal or towards health and fitness respectively. The aim of the present study was to compare thinspiration and fitspiration communities on Twitter. A total of 3289 English-language tweets with hashtags related to thinspiration (n = 1181) and fitspiration (n = 2578) were collected over a two-week period. Network analysis showed minimal overlap between the communities on Twitter, with the thinspiration community more closely-connected and having greater information flow than the fitspiration community. Frequency counts and sentiment analysis showed that although the tweets from both types of accounts focused on appearance and weight loss, fitspiration tweets were significantly more positive in sentiment. It was concluded that the thinspiration tweeters, unlike the fitspiration tweeters, represent a genuine on-line community on Twitter. Such a community of support may have negative consequences for collective body image and disordered eating identity."
},
{
"pmid": "28123997",
"title": "Behavior Change with Fitness Technology in Sedentary Adults: A Review of the Evidence for Increasing Physical Activity.",
"abstract": "Physical activity is closely linked with health and well-being; however, many Americans do not engage in regular exercise. Older adults and those with low socioeconomic status are especially at risk for poor health, largely due to their sedentary lifestyles. Fitness technology, including trackers and smartphone applications (apps), has become increasingly popular for measuring and encouraging physical activity in recent years. However, many questions remain regarding the effectiveness of this technology for promoting behavior change. Behavior change techniques such as goal setting, feedback, rewards, and social factors are often included in fitness technology. However, it is not clear which components are most effective and which are actually being used by consumers. We discuss additional strategies not typically included in fitness technology devices or apps that are promising for engaging inactive, vulnerable populations. These include action planning, restructuring negative attitudes, enhancing environmental conditions, and identifying other barriers to regular physical activity. We consider which strategies are most conducive to motivating behavior change among sedentary adults. Overall, fitness technology has the potential to significantly impact public health, research, and policies. We suggest ways in which app developers and behavior change experts can collaborate to develop successful apps. Advances are still needed to help inactive individuals determine how, when, where, and with whom they can increase their physical activity."
},
{
"pmid": "29025694",
"title": "Desire to Be Underweight: Exploratory Study on a Weight Loss App Community and User Perceptions of the Impact on Disordered Eating Behaviors.",
"abstract": "BACKGROUND\nMobile health (mHealth) apps for weight loss (weight loss apps) can be useful diet and exercise tools for individuals in need of losing weight. Most studies view weight loss app users as these types of individuals, but not all users have the same needs. In fact, users with disordered eating behaviors who desire to be underweight are also utilizing weight loss apps; however, few studies give a sense of the prevalence of these users in weight loss app communities and their perceptions of weight loss apps in relation to disordered eating behaviors.\n\n\nOBJECTIVE\nThe aim of this study was to provide an analysis of users' body mass indices (BMIs) in a weight loss app community and examples of how users with underweight BMI goals perceive the impact of the app on disordered eating behaviors.\n\n\nMETHODS\nWe focused on two aspects of a weight loss app (DropPounds): profile data and forum posts, and we moved from a broader picture of the community to a narrower focus on users' perceptions. We analyzed profile data to better understand the goal BMIs of all users, highlighting the prevalence of users with underweight BMI goals. Then we explored how users with a desire to be underweight discussed the weight loss app's impact on disordered eating behaviors.\n\n\nRESULTS\nWe found three main results: (1) no user (regardless of start BMI) starts with a weight gain goal, and most users want to lose weight; (2) 6.78% (1261/18,601) of the community want to be underweight, and most identify as female; (3) users with underweight BMI goals tend to view the app as positive, especially for reducing bingeing; however, some acknowledge its role in exacerbating disordered eating behaviors.\n\n\nCONCLUSIONS\nThese findings are important for our understanding of the different types of users who utilize weight loss apps, the perceptions of weight loss apps related to disordered eating, and how weight loss apps may impact users with a desire to be underweight. Whereas these users had underweight goals, they often view the app as helpful in reducing disordered eating behaviors, which led to additional questions. Therefore, future research is needed."
},
{
"pmid": "15942543",
"title": "Size acceptance and intuitive eating improve health for obese, female chronic dieters.",
"abstract": "OBJECTIVE\nExamine a model that encourages health at every size as opposed to weight loss. The health at every size concept supports homeostatic regulation and eating intuitively (ie, in response to internal cues of hunger, satiety, and appetite).\n\n\nDESIGN\nSix-month, randomized clinical trial; 2-year follow-up.\n\n\nSUBJECTS\nWhite, obese, female chronic dieters, aged 30 to 45 years (N=78).\n\n\nSETTING\nFree-living, general community.\n\n\nINTERVENTIONS\nSix months of weekly group intervention (health at every size program or diet program), followed by 6 months of monthly aftercare group support.\n\n\nMAIN OUTCOME MEASURES\nAnthropometry (weight, body mass index), metabolic fitness (blood pressure, blood lipids), energy expenditure, eating behavior (restraint, eating disorder pathology), and psychology (self-esteem, depression, body image). Attrition, attendance, and participant evaluations of treatment helpfulness were also monitored.\n\n\nSTATISTICAL ANALYSIS PERFORMED\nAnalysis of variance.\n\n\nRESULTS\nCognitive restraint decreased in the health at every size group and increased in the diet group, indicating that both groups implemented their programs. Attrition (6 months) was high in the diet group (41%), compared with 8% in the health at every size group. Fifty percent of both groups returned for 2-year evaluation. Health at every size group members maintained weight, improved in all outcome variables, and sustained improvements. Diet group participants lost weight and showed initial improvement in many variables at 1 year; weight was regained and little improvement was sustained.\n\n\nCONCLUSIONS\nThe health at every size approach enabled participants to maintain long-term behavior change; the diet approach did not. Encouraging size acceptance, reduction in dieting behavior, and heightened awareness and response to body signals resulted in improved health risk indicators for obese women."
},
{
"pmid": "21244144",
"title": "The acceptance model of intuitive eating: a comparison of women in emerging adulthood, early adulthood, and middle adulthood.",
"abstract": "The acceptance model of intuitive eating (Avalos & Tylka, 2006) posits that body acceptance by others helps women appreciate their body and resist adopting an observer's perspective of their body, which contribute to their eating intuitively/adaptively. We extended this model by integrating body mass index (BMI) into its structure and investigating it with emerging (ages 18-25 years old, n = 318), early (ages 26-39 years old, n = 238), and middle (ages 40-65 years old, n = 245) adult women. Multiple-group analysis revealed that this model fit the data for all age groups. Body appreciation and resistance to adopt an observer's perspective mediated the body acceptance by others-intuitive eating link. Body acceptance by others mediated the social support-body appreciation and BMI-body appreciation links. Early and middle adult women had stronger negative BMI-body acceptance by others and BMI-intuitive eating relationships and a stronger positive body acceptance by others-body appreciation relationship than emerging adult women. Early adult women had a stronger positive resistance to adopt observer's perspective-body appreciation relationship than emerging and middle adult women."
},
{
"pmid": "22584372",
"title": "There's an app for that: content analysis of paid health and fitness apps.",
"abstract": "BACKGROUND\nThe introduction of Apple's iPhone provided a platform for developers to design third-party apps, which greatly expanded the functionality and utility of mobile devices for public health.\n\n\nOBJECTIVE\nThis study provides an overview of the developers' written descriptions of health and fitness apps and appraises each app's potential for influencing behavior change.\n\n\nMETHODS\nData for this study came from a content analysis of health and fitness app descriptions available on iTunes during February 2011. The Health Education Curriculum Analysis Tool (HECAT) and the Precede-Proceed Model (PPM) were used as frameworks to guide the coding of 3336 paid apps.\n\n\nRESULTS\nCompared to apps with a cost less than US $0.99, apps exceeding US $0.99 were more likely to be scored as intending to promote health or prevent disease (92.55%, 1925/3336 vs 83.59%, 1411/3336; P<.001), to be credible or trustworthy (91.11%, 1895/3336 vs 86.14%, 1454/3349; P<.001), and more likely to be used personally or recommended to a health care client (72.93%, 1517/2644 vs 66.77%, 1127/2644; P<.001). Apps related to healthy eating, physical activity, and personal health and wellness were more common than apps for substance abuse, mental and emotional health, violence prevention and safety, and sexual and reproductive health. Reinforcing apps were less common than predisposing and enabling apps. Only 1.86% (62/3336) of apps included all 3 factors (ie, predisposing, enabling, and reinforcing).\n\n\nCONCLUSIONS\nDevelopment efforts could target public health behaviors for which few apps currently exist. Furthermore, practitioners should be cautious when promoting the use of apps as it appears most provide health-related information (predisposing) or make attempts at enabling behavior, with almost none including all theoretical factors recommended for behavior change."
},
{
"pmid": "28646889",
"title": "Apps to improve diet, physical activity and sedentary behaviour in children and adolescents: a review of quality, features and behaviour change techniques.",
"abstract": "BACKGROUND\nThe number of commercial apps to improve health behaviours in children is growing rapidly. While this provides opportunities for promoting health, the content and quality of apps targeting children and adolescents is largely unexplored. This review systematically evaluated the content and quality of apps to improve diet, physical activity and sedentary behaviour in children and adolescents, and examined relationships of app quality ratings with number of app features and behaviour change techniques (BCTs) used.\n\n\nMETHODS\nSystematic literature searches were conducted in iTunes and Google Play stores between May-November 2016. Apps were included if they targeted children or adolescents, focused on improving diet, physical activity and/or sedentary behaviour, had a user rating of at least 4+ based on at least 20 ratings, and were available in English. App inclusion, downloading and user-testing for quality assessment and content analysis were conducted independently by two reviewers. Spearman correlations were used to examine relationships between app quality, and number of technical app features and BCTs included.\n\n\nRESULTS\nTwenty-five apps were included targeting diet (n = 12), physical activity (n = 18) and sedentary behaviour (n = 7). On a 5-point Mobile App Rating Scale (MARS), overall app quality was moderate (total MARS score: 3.6). Functionality was the highest scoring domain (mean: 4.1, SD: 0.6), followed by aesthetics (mean: 3.8, SD: 0.8), and lower scoring for engagement (mean: 3.6, SD: 0.7) and information quality (mean: 2.8, SD: 0.8). On average, 6 BCTs were identified per app (range: 1-14); the most frequently used BCTs were providing 'instructions' (n = 19), 'general encouragement' (n = 18), 'contingent rewards' (n = 17), and 'feedback on performance' (n = 13). App quality ratings correlated positively with numbers of technical app features (rho = 0.42, p < 0.05) and BCTs included (rho = 0.54, p < 0.01).\n\n\nCONCLUSIONS\nPopular commercial apps to improve diet, physical activity and sedentary behaviour in children and adolescents had moderate quality overall, scored higher in terms of functionality. Most apps incorporated some BCTs and higher quality apps included more app features and BCTs. Future app development should identify factors that promote users' app engagement, be tailored to specific population groups, and be informed by health behaviour theories."
},
{
"pmid": "26678569",
"title": "The Most Popular Smartphone Apps for Weight Loss: A Quality Assessment.",
"abstract": "BACKGROUND\nAdvancements in mobile phone technology have led to the development of smartphones with the capability to run apps. The availability of a plethora of health- and fitness-related smartphone apps has the potential, both on a clinical and public health level, to facilitate healthy behavior change and weight management. However, current top-rated apps in this area have not been extensively evaluated in terms of scientific quality and behavioral theory evidence base.\n\n\nOBJECTIVE\nThe purpose of this study was to evaluate the quality of the most popular dietary weight-loss smartphone apps on the commercial market using comprehensive quality assessment criteria, and to quantify the behavior change techniques (BCTs) incorporated.\n\n\nMETHODS\nThe top 200-rated Health & Fitness category apps from the free and paid sections of Google Play and iTunes App Store in Australia (n=800) were screened in August 2014. To be included in further analysis, an app had to focus on weight management, include a facility to record diet intake (self-monitoring), and be in English. One researcher downloaded and used the eligible apps thoroughly for 5 days and assessed the apps against quality assessment criteria which included the following domains: accountability, scientific coverage and content accuracy of information relevant to weight management, technology-enhanced features, usability, and incorporation of BCTs. For inter-rater reliability purposes, a second assessor provided ratings on 30% of the apps. The accuracy of app energy intake calculations was further investigated by comparison with results from a 3-day weighed food record (WFR).\n\n\nRESULTS\nAcross the eligible apps reviewed (n=28), only 1 app (4%) received full marks for accountability. Overall, apps included an average of 5.1 (SD 2.3) out of 14 technology-enhanced features, and received a mean score of 13.5 (SD 3.7) out of 20 for usability. The majority of apps provided estimated energy requirements (24/28, 86%) and used a food database to calculate energy intake (21/28, 75%). When compared against the WFR, the mean absolute energy difference of apps which featured energy intake calculations (23/28, 82%) was 127 kJ (95% CI -45 to 299). An average of 6.3 (SD 3.7) of 26 BCTs were included.\n\n\nCONCLUSIONS\nOverall, the most popular commercial apps for weight management are suboptimal in quality, given the inadequate scientific coverage and accuracy of weight-related information, and the relative absence of BCTs across the apps reviewed. With the limited regulatory oversight around the quality of these types of apps, this evaluation provides clinicians and consumers an informed view of the highest-quality apps in the current popular app pool appropriate for recommendation and uptake. Further research is necessary to assess the effectiveness of apps for weight management."
},
{
"pmid": "27480144",
"title": "Popular Nutrition-Related Mobile Apps: A Feature Assessment.",
"abstract": "BACKGROUND\nA key challenge in human nutrition is the assessment of usual food intake. This is of particular interest given recent proposals of eHealth personalized interventions. The adoption of mobile phones has created an opportunity for assessing and improving nutrient intake as they can be used for digitalizing dietary assessments and providing feedback. In the last few years, hundreds of nutrition-related mobile apps have been launched and installed by millions of users.\n\n\nOBJECTIVE\nThis study aims to analyze the main features of the most popular nutrition apps and to compare their strategies and technologies for dietary assessment and user feedback.\n\n\nMETHODS\nApps were selected from the two largest online stores of the most popular mobile operating systems-the Google Play Store for Android and the iTunes App Store for iOS-based on popularity as measured by the number of installs and reviews. The keywords used in the search were as follows: calorie(s), diet, diet tracker, dietician, dietitian, eating, fit, fitness, food, food diary, food tracker, health, lose weight, nutrition, nutritionist, weight, weight loss, weight management, weight watcher, and ww calculator. The inclusion criteria were as follows: English language, minimum number of installs (1 million for Google Play Store) or reviews (7500 for iTunes App Store), relation to nutrition (ie, diet monitoring or recommendation), and independence from any device (eg, wearable) or subscription.\n\n\nRESULTS\nA total of 13 apps were classified as popular for inclusion in the analysis. Nine apps offered prospective recording of food intake using a food diary feature. Food selection was available via text search or barcode scanner technologies. Portion size selection was only textual (ie, without images or icons). All nine of these apps were also capable of collecting physical activity (PA) information using self-report, the global positioning system (GPS), or wearable integrations. Their outputs focused predominantly on energy balance between dietary intake and PA. None of these nine apps offered features directly related to diet plans and motivational coaching. In contrast, the remaining four of the 13 apps focused on these opportunities, but without food diaries. One app-FatSecret-also had an innovative feature for connecting users with health professionals, and another-S Health-provided a nutrient balance score.\n\n\nCONCLUSIONS\nThe high number of installs indicates that there is a clear interest and opportunity for diet monitoring and recommendation using mobile apps. All the apps collecting dietary intake used the same nutrition assessment method (ie, food diary record) and technologies for data input (ie, text search and barcode scanner). Emerging technologies, such as image recognition, natural language processing, and artificial intelligence, were not identified. None of the apps had a decision engine capable of providing personalized diet advice."
},
{
"pmid": "29563080",
"title": "Quality of Publicly Available Physical Activity Apps: Review and Content Analysis.",
"abstract": "BACKGROUND\nWithin the new digital health landscape, the rise of health apps creates novel prospects for health promotion. The market is saturated with apps that aim to increase physical activity (PA). Despite the wide distribution and popularity of PA apps, there are limited data on their effectiveness, user experience, and safety of personal data.\n\n\nOBJECTIVE\nThe purpose of this review and content analysis was to evaluate the quality of the most popular PA apps on the market using health care quality indicators.\n\n\nMETHODS\nThe top-ranked 400 free and paid apps from iTunes and Google Play stores were screened. Apps were included if the primary behavior targeted was PA, targeted users were adults, and the apps had stand-alone functionality. The apps were downloaded on mobile phones and assessed by 2 reviewers against the following quality assessment criteria: (1) users' data privacy and security, (2) presence of behavior change techniques (BCTs) and quality of the development and evaluation processes, and (3) user ratings and usability.\n\n\nRESULTS\nOut of 400 apps, 156 met the inclusion criteria, of which 65 apps were randomly selected to be downloaded and assessed. Almost 30% apps (19/65) did not have privacy policy. Every app contained at least one BCT, with an average number of 7 and a maximum of 13 BCTs. All but one app had commercial affiliation, 12 consulted an expert, and none reported involving users in the app development. Only 12 of 65 apps had a peer-reviewed study connected to the app. User ratings were high, with only a quarter of the ratings falling below 4 stars. The median usability score was excellent-86.3 out of 100.\n\n\nCONCLUSIONS\nDespite the popularity of PA apps available on the commercial market, there were substantial shortcomings in the areas of data safety and likelihood of effectiveness of the apps assessed. The limited quality of the apps may represent a missed opportunity for PA promotion."
},
{
"pmid": "29273575",
"title": "Insights From Google Play Store User Reviews for the Development of Weight Loss Apps: Mixed-Method Analysis.",
"abstract": "BACKGROUND\nSignificant weight loss takes several months to achieve, and behavioral support can enhance weight loss success. Weight loss apps could provide ongoing support and deliver innovative interventions, but to do so, developers must ensure user satisfaction.\n\n\nOBJECTIVE\nThe aim of this study was to conduct a review of Google Play Store apps to explore what users like and dislike about weight loss and weight-tracking apps and to examine qualitative feedback through analysis of user reviews.\n\n\nMETHODS\nThe Google Play Store was searched and screened for weight loss apps using the search terms weight loss and weight track*, resulting in 179 mobile apps. A content analysis was conducted based on the Oxford Food and Activity Behaviors taxonomy. Correlational analyses were used to assess the association between complexity of mobile health (mHealth) apps and popularity indicators. The sample was then screened for popular apps that primarily focus on weight-tracking. For the resulting subset of 15 weight-tracking apps, 569 user reviews were sampled from the Google Play Store. Framework and thematic analysis of user reviews was conducted to assess which features users valued and how design influenced users' responses.\n\n\nRESULTS\nThe complexity (number of components) of weight loss apps was significantly positively correlated with the rating (r=.25; P=.001), number of reviews (r=.28; P<.001), and number of downloads (r=.48; P<.001) of the app. In contrast, in the qualitative analysis of weight-tracking apps, users expressed preference for simplicity and ease of use. In addition, we found that positive reinforcement through detailed feedback fostered users' motivation for further weight loss. Smooth functioning and reliable data storage emerged as critical prerequisites for long-term app usage.\n\n\nCONCLUSIONS\nUsers of weight-tracking apps valued simplicity, whereas users of comprehensive weight loss apps appreciated availability of more features, indicating that complexity demands are specific to different target populations. The provision of feedback on progress can motivate users to continue their weight loss attempts. Users value seamless functioning and reliable data storage."
},
{
"pmid": "25760773",
"title": "Mobile app rating scale: a new tool for assessing the quality of health mobile apps.",
"abstract": "BACKGROUND\nThe use of mobile apps for health and well being promotion has grown exponentially in recent years. Yet, there is currently no app-quality assessment tool beyond \"star\"-ratings.\n\n\nOBJECTIVE\nThe objective of this study was to develop a reliable, multidimensional measure for trialling, classifying, and rating the quality of mobile health apps.\n\n\nMETHODS\nA literature search was conducted to identify articles containing explicit Web or app quality rating criteria published between January 2000 and January 2013. Existing criteria for the assessment of app quality were categorized by an expert panel to develop the new Mobile App Rating Scale (MARS) subscales, items, descriptors, and anchors. There were sixty well being apps that were randomly selected using an iTunes search for MARS rating. There were ten that were used to pilot the rating procedure, and the remaining 50 provided data on interrater reliability.\n\n\nRESULTS\nThere were 372 explicit criteria for assessing Web or app quality that were extracted from 25 published papers, conference proceedings, and Internet resources. There were five broad categories of criteria that were identified including four objective quality scales: engagement, functionality, aesthetics, and information quality; and one subjective quality scale; which were refined into the 23-item MARS. The MARS demonstrated excellent internal consistency (alpha = .90) and interrater reliability intraclass correlation coefficient (ICC = .79).\n\n\nCONCLUSIONS\nThe MARS is a simple, objective, and reliable tool for classifying and assessing the quality of mobile health apps. It can also be used to provide a checklist for the design and development of new high quality health apps."
},
{
"pmid": "30021706",
"title": "Improving Linkage to HIV Care Through Mobile Phone Apps: Randomized Controlled Trial.",
"abstract": "BACKGROUND\nIn HIV treatment program, gaps in the \"cascade of care\" where patients are lost between diagnosis, laboratory evaluation, treatment initiation, and retention in HIV care, is a well-described challenge. Growing access to internet-enabled mobile phones has led to an interest in using the technology to improve patient engagement with health care.\n\n\nOBJECTIVE\nThe objectives of this trial were: (1) to assess whether a mobile phone-enabled app could provide HIV patients with laboratory test results, (2) to better understand the implementation of such an intervention, and (3) to determine app effectiveness in improving linkage to HIV care after diagnosis.\n\n\nMETHODS\nWe developed and tested an app through a randomized controlled trial carried out in several primary health care facilities in Johannesburg. Newly diagnosed HIV-positive patients were screened, recruited, and randomized into the trial as they were giving a blood sample for initial CD4 staging. Trial eligibility included ownership of a phone compatible with the app and access to the internet. Trial participants were followed for a minimum of eight months to determine linkage to HIV care indicated by an HIV-related laboratory test result.\n\n\nRESULTS\nThe trial outcome results are being prepared for publication, but here we describe the significant operational and technological lessons provided by the implementation. Android was identified as the most suitable operating system for the app, due to Android functionality and communication characteristics. Android also had the most significant market share of all smartphone operating systems in South Africa. The app was successfully developed with laboratory results sent to personal smartphones. However, given the trial requirements and the app itself, only 10% of screened HIV patients successfully enrolled. We report on issues such as patient eligibility, app testing in a dynamic phone market, software installation and compatibility, safe identification of patients, linkage of laboratory results to patients lacking unique identifiers, and present lessons and potential solutions.\n\n\nCONCLUSIONS\nThe implementation challenges and lessons of this trial may assist future similar mHealth interventions to avoid some of the pitfalls. Ensuring sufficient expertise and understanding of the programmatic needs by the software developer, as well as in the implementation team, with adequate and rapid piloting within the target groups, could have led to better trial recruitment. However, the majority of screened patients were interested in the study, and the app was installed successfully in patients with suitable smartphones, suggesting that this may be a way to engage patients with their health care data in future.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02756949; https://clinicaltrials.gov/ct2/show/NCT02756949 (Archived by WebCite at http://www.webcitation.org/6z1GTJCNW)."
},
{
"pmid": "16204405",
"title": "Three approaches to qualitative content analysis.",
"abstract": "Content analysis is a widely used qualitative research technique. Rather than being a single method, current applications of content analysis show three distinct approaches: conventional, directed, or summative. All three approaches are used to interpret meaning from the content of text data and, hence, adhere to the naturalistic paradigm. The major differences among the approaches are coding schemes, origins of codes, and threats to trustworthiness. In conventional content analysis, coding categories are derived directly from the text data. With a directed approach, analysis starts with a theory or relevant research findings as guidance for initial codes. A summative content analysis involves counting and comparisons, usually of keywords or content, followed by the interpretation of the underlying context. The authors delineate analytic procedures specific to each approach and techniques addressing trustworthiness with hypothetical examples drawn from the area of end-of-life care."
},
{
"pmid": "25921657",
"title": "What is and what is not positive body image? Conceptual foundations and construct definition.",
"abstract": "A decade ago, research on positive body image as a unique construct was relatively nonexistent, and now this area is flourishing. How and why did positive body image scholarship emerge? What is known about this contemporary construct? This article situates and contextualizes positive body image within Cash's scholarship, eating disorder prevention efforts, feminist influences, strength-based disciplines within psychology, and Buddhism. Extracting insights from quantitative and qualitative research, this article demonstrates that positive body image is (a) distinct from negative body image; (b) multifaceted (including body appreciation, body acceptance/love, conceptualizing beauty broadly, adaptive investment in appearance, inner positivity, interpreting information in a body-protective manner); (c) holistic; (d) stable and malleable; (e) protective; (f) linked to self-perceived body acceptance by others; and (g) shaped by social identities. Complementing what positive body image is, this article further details what positive body image is not to provide a more nuanced understanding of this construct."
},
{
"pmid": "28193554",
"title": "\"I just feel so guilty\": The role of introjected regulation in linking appearance goals for exercise with women's body image.",
"abstract": "Appearance goals for exercise are consistently associated with negative body image, but research has yet to consider the processes that link these two variables. Self-determination theory offers one such process: introjected (guilt-based) regulation of exercise behavior. Study 1 investigated these relationships within a cross-sectional sample of female UK students (n=215, 17-30 years). Appearance goals were indirectly, negatively associated with body image due to links with introjected regulation. Study 2 experimentally tested this pathway, manipulating guilt relating to exercise and appearance goals independently and assessing post-test guilt and body anxiety (n=165, 18-27 years). The guilt manipulation significantly increased post-test feelings of guilt, and these increases were associated with increased post-test body anxiety, but only for participants in the guilt condition. The implications of these findings for self-determination theory and the importance of guilt for the body image literature are discussed."
},
{
"pmid": "26280376",
"title": "Expand Your Horizon: A programme that improves body image and reduces self-objectification by training women to focus on body functionality.",
"abstract": "This study tested Expand Your Horizon, a programme designed to improve body image by training women to focus on the functionality of their body using structured writing assignments. Eighty-one women (Mage=22.77) with a negative body image were randomised to the Expand Your Horizon programme or to an active control programme. Appearance satisfaction, functionality satisfaction, body appreciation, and self-objectification were measured at pretest, posttest, and one-week follow-up. Following the intervention, participants in the Expand Your Horizon programme experienced greater appearance satisfaction, functionality satisfaction, and body appreciation, and lower levels of self-objectification, compared to participants in the control programme. Partial eta-squared effect sizes were of small to medium magnitude. This study is the first to show that focusing on body functionality can improve body image and reduce self-objectification in women with a negative body image. These findings provide support for addressing body functionality in programmes designed to improve body image."
},
{
"pmid": "26878220",
"title": "A pilot study investigating whether focusing on body functionality can protect women from the potential negative effects of viewing thin-ideal media images.",
"abstract": "This pilot study explored whether focusing on body functionality (i.e., everything the body can do) can protect women from potential harmful effects of exposure to thin-ideal images. Seventy women (Mage=20.61) completed an assignment wherein they either described the functionality of their body or the routes that they often travel (control). Afterward, participants were exposed to a series of thin-ideal images. Appearance and functionality satisfaction were measured before the assignment; appearance and functionality satisfaction, self-objectification, and body appreciation were measured after exposure. Results showed that participants who focused on body functionality experienced greater functionality satisfaction and body appreciation compared to control participants. Therefore, focusing on body functionality could be a beneficial individual-level technique that women can use to protect and promote a positive body image in the face of thin-ideal images. Research including a condition wherein participants are exposed to (product-only) control images is necessary to draw firmer conclusions."
},
{
"pmid": "18444705",
"title": "The role of the media in body image concerns among women: a meta-analysis of experimental and correlational studies.",
"abstract": "Research suggests that exposure to mass media depicting the thin-ideal body may be linked to body image disturbance in women. This meta-analysis examined experimental and correlational studies testing the links between media exposure to women's body dissatisfaction, internalization of the thin ideal, and eating behaviors and beliefs with a sample of 77 studies that yielded 141 effect sizes. The mean effect sizes were small to moderate (ds = -.28, -.39, and -.30, respectively). Effects for some outcome variables were moderated by publication year and study design. The findings support the notion that exposure to media images depicting the thin-ideal body is related to body image concerns for women."
},
{
"pmid": "11950103",
"title": "Risk factors for binge eating onset in adolescent girls: a 2-year prospective investigation.",
"abstract": "Because little is known about the predictors of binge eating (a risk factor for obesity), a set of putative risk factors for binge eating was investigated in a longitudinal study of adolescent girls. Results verified that binge eating predicted obesity onset. Elevated dieting, pressure to be thin, modeling of eating disturbances, appearance overvaluation, body dissatisfaction, depressive symptoms, emotional eating, body mass, and low self-esteem and social support predicted binge eating onset with 92% accuracy. Classification tree analysis revealed an interaction between appearance overvaluation, body mass, dieting, and depressive symptoms, suggesting qualitatively different pathways to binge eating and identifying subgroups at extreme risk for this outcome. Results support the assertion that these psychosocial and biological factors increase risk for binge eating."
},
{
"pmid": "25355131",
"title": "Understanding usage of a hybrid website and smartphone app for weight management: a mixed-methods study.",
"abstract": "BACKGROUND\nAdvancements in mobile phone technology offer huge potential for enhancing the timely delivery of health behavior change interventions. The development of smartphone-based health interventions (apps) is a rapidly growing field of research, yet there have been few longitudinal examinations of how people experience and use these apps within their day-to-day routines, particularly within the context of a hybrid Web- and app-based intervention.\n\n\nOBJECTIVE\nThis study used an in-depth mixed-methods design to examine individual variation in (1) impact on self-reported goal engagement (ie, motivation, self-efficacy, awareness, effort, achievement) of access to a weight management app (POWeR Tracker) when provided alongside a Web-based weight management intervention (POWeR) and (2) usage and views of POWeR Tracker.\n\n\nMETHODS\nThirteen adults were provided access to POWeR and were monitored over a 4-week period. Access to POWeR Tracker was provided in 2 alternate weeks (ie, weeks 1 and 3 or weeks 2 and 4). Participants' goal engagement was measured daily via self-report. Mixed effects models were used to examine change in goal engagement between the weeks when POWeR Tracker was and was not available and whether the extent of change in goal engagement varied between individual participants. Usage of POWeR and POWeR Tracker was automatically recorded for each participant. Telephone interviews were conducted and analyzed using inductive thematic analysis to further explore participants' experiences using POWeR and POWeR Tracker.\n\n\nRESULTS\nAccess to POWeR Tracker was associated with a significant increase in participants' awareness of their eating (β1=0.31, P=.04) and physical activity goals (β1=0.28, P=.03). The level of increase varied between individual participants. Usage data showed that participants used the POWeR website for similar amounts of time during the weeks when POWeR Tracker was (mean 29 minutes, SD 31 minutes) and was not available (mean 27 minutes, SD 33 minutes). POWeR Tracker was mostly accessed in short bursts (mean 3 minutes, SD 2 minutes) during convenient moments or moments when participants deemed the intervention content most relevant. The qualitative data indicated that nearly all participants agreed that it was more convenient to access information on-the-go via their mobiles compared to a computer. However, participants varied in their views and usage of the Web- versus app-based components and the informational versus tracking tools provided by POWeR Tracker.\n\n\nCONCLUSIONS\nThis study provides evidence that smartphones have the potential to improve individuals' engagement with their health-related goals when used as a supplement to an existing online intervention. The perceived convenience of mobile access to information does not appear to deter use of Web-based interventions or strengthen the impact of app access on goal engagement. A mixed-methods design enabled exploration of individual variation in daily usage of the app-based tools."
},
{
"pmid": "25130682",
"title": "How can weight-loss app designers' best engage and support users? A qualitative investigation.",
"abstract": "OBJECTIVES\nThis study explored young adults' experiences of using e-health internet-based computer or mobile phone applications (apps) and what they valued about those apps.\n\n\nDESIGN AND METHODS\nA qualitative design was used. Semi-structured interviews were conducted with a community sample of 19 young adults who had used a publicly available phone or internet-based application. Transcripts were analysed using thematic analysis.\n\n\nRESULTS\nParticipants valued an attractive user interface. Structure, ease of use, personalised features and accessibility (including dual phone-computer access) were all important to participants and users indicated that continued use depended on these design features. Many believed that a focus on calorie counting was too limiting. Some users mentioned behaviour change strategies and known behaviour change techniques utilised by apps including; self-monitoring, goal setting and behavioural feedback. Only a few users reported positive changes in physical activity levels.\n\n\nCONCLUSIONS\nUse of particular design features and application of evidence-based behaviour change techniques could optimise continued use and the effectiveness of internet/smart phone interventions. Statement of contribution What is already known on this subject? E-health is increasingly used to deliver weight loss/control programs. Most e-health programs have not been founded on evidence-based designs and it is unclear what features and functions users find useful or not so useful. What does this study add? Weight loss app users valued structure, ease of use, personalised features and accessibility. Goal setting and feedback on calorie intake/energy balance were the most widely used behaviour change techniques. Designers should consider an extensive food database, a food scanner, and provision of diaries."
},
{
"pmid": "22385782",
"title": "Motivational dynamics of eating regulation: a self-determination theory perspective.",
"abstract": "Within Western society, many people have difficulties adequately regulating their eating behaviors and weight. Although the literature on eating regulation is vast, little attention has been given to motivational dynamics involved in eating regulation. Grounded in Self-Determination Theory (SDT), the present contribution aims to provide a motivational perspective on eating regulation. The role of satisfaction and thwarting of the basic psychological needs for autonomy, competence, and relatedness is introduced as a mechanism to (a) explain the etiology of body image concerns and disordered eating and (b) understand the optimal regulation of ongoing eating behavior for healthy weight maintenance. An overview of empirical studies on these two research lines is provided. In a final section, the potential relevance and value of SDT in relation to prevailing theoretical models in the domain of eating regulation is discussed. Although research on SDT in the domain of eating regulation is still in its early stages and more research is clearly needed, this review suggests that the SDT represents a promising framework to more thoroughly study and understand the motivational processes involved in eating regulation and associated problems."
},
{
"pmid": "21513547",
"title": "The behaviour change wheel: a new method for characterising and designing behaviour change interventions.",
"abstract": "BACKGROUND\nImproving the design and implementation of evidence-based practice depends on successful behaviour change interventions. This requires an appropriate method for characterising interventions and linking them to an analysis of the targeted behaviour. There exists a plethora of frameworks of behaviour change interventions, but it is not clear how well they serve this purpose. This paper evaluates these frameworks, and develops and evaluates a new framework aimed at overcoming their limitations.\n\n\nMETHODS\nA systematic search of electronic databases and consultation with behaviour change experts were used to identify frameworks of behaviour change interventions. These were evaluated according to three criteria: comprehensiveness, coherence, and a clear link to an overarching model of behaviour. A new framework was developed to meet these criteria. The reliability with which it could be applied was examined in two domains of behaviour change: tobacco control and obesity.\n\n\nRESULTS\nNineteen frameworks were identified covering nine intervention functions and seven policy categories that could enable those interventions. None of the frameworks reviewed covered the full range of intervention functions or policies, and only a minority met the criteria of coherence or linkage to a model of behaviour. At the centre of a proposed new framework is a 'behaviour system' involving three essential conditions: capability, opportunity, and motivation (what we term the 'COM-B system'). This forms the hub of a 'behaviour change wheel' (BCW) around which are positioned the nine intervention functions aimed at addressing deficits in one or more of these conditions; around this are placed seven categories of policy that could enable those interventions to occur. The BCW was used reliably to characterise interventions within the English Department of Health's 2010 tobacco control strategy and the National Institute of Health and Clinical Excellence's guidance on reducing obesity.\n\n\nCONCLUSIONS\nInterventions and policies to change behaviour can be usefully characterised by means of a BCW comprising: a 'behaviour system' at the hub, encircled by intervention functions and then by policy categories. Research is needed to establish how far the BCW can lead to more efficient design of effective interventions."
}
] |
Heliyon | 31309162 | PMC6606991 | 10.1016/j.heliyon.2019.e01998 | A real-time virtual machine for task placement in loosely-coupled computer systems | Virtualization and real-time systems are increasingly relevant. Existing real-time virtual machines are designed for closely-coupled computer systems: they execute only tasks written in their associated language and re-target tasks to the new platform at runtime. Complex systems in space, avionics, and military applications usually operate as loosely-coupled computer systems in a real-time environment for years. In this paper, a new approach is introduced to support task transfer between loosely-coupled computers in a real-time environment, so that new features can be added without upgrading the software. The approach is based on automatic transformation of source code into a platform-independent "Structured Byte-Code" (SBC) and on a real-time virtual machine (SBC-RVM). Unlike ordinary virtual machines, which virtualize a specific processor for a specific code only, SBC-RVM transforms source code from any language with a known grammar into SBC without re-targeting it to the new platform. SBC-RVM executes local or placed tasks while preserving real-time constraints and is well suited to loosely-coupled computer systems. | 2 Related work
The proposed approach is based on task transfer between nodes of loosely-coupled computers, especially in centralized control systems, using an execution environment in the form of a virtual machine. In this section, the state of the art for these topics is discussed.
2.1 Task transfer techniques
Task transfer techniques were introduced to provide more processing power and to share resources among processors on a network. The two types of task transfer are task placement and task migration. Task placement is the transfer of a task that has not yet started, whereas task migration is the preemptive transfer of a task that has already started but is in a waiting state. The benefits of task transfer include, but are not limited to: dynamic load balancing, by migrating tasks from an overloaded node to a lightly loaded one [2]; availability, by moving a task off a failed node to a healthy one; system administration, the ability to migrate a task away from a node for maintenance purposes; and fault recovery, the procedure of stopping a task on an isolated faulty node, migrating it to a healthy node, and resuming execution [3, 4]. To migrate a task from one node to another, both nodes must either share memory (i.e., a shared address space) or share a common execution language. In a homogeneous computer system, a common execution language such as machine code or assembly can be sent to another node for remote execution. However, this technique is limited to that architecture and is not convenient in a loosely-coupled computer system, where different computer systems are connected to a data bus as a network. In this case, an interpreted language like Java bytecode, or a system emulator that can execute the machine code, is used instead [5]. Many studies have introduced task transfer techniques for different system architectures, categorized as shared-memory multiprocessors, where main memory is shared among all processors, and distributed multiprocessors, where processors reside on separate nodes [6].
Although task transfer is carried out between processors over a network, most implemented techniques were introduced for computer systems with shared memory only, such as grid computing [2], cloud computing [7], and heterogeneous/homogeneous multiprocessor systems-on-chip (MP-SoC) [6, 8]. Unfortunately, no implemented techniques have been introduced to support task transfer in a loosely-coupled computer architecture. The decision to migrate or place a task onto a new host carries two costs: a delay cost and a migration cost. This optimization problem is proved to be NP-hard and can be converted into a weighted bipartite matching problem [9]; a small illustrative sketch of this assignment view is given after this paragraph. In a real-time application, the delay is acceptable as long as all tasks still meet their deadlines.
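To make the cost formulation above concrete, the following is a minimal sketch that treats placement as a minimum-cost assignment over per-(task, node) costs and solves it with SciPy's Hungarian-algorithm solver. The cost matrices and the simple "delay + migration" sum are illustrative assumptions, not the formulation of reference [9], and a real-time scheduler would additionally have to verify that each candidate placement still meets the task's deadline.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical per-(task, node) costs: expected delay of running task i on node j,
# plus the one-off cost of transferring the task there.
delay_cost = np.array([[4.0, 2.0, 7.0],
                       [3.0, 6.0, 1.0],
                       [5.0, 4.0, 2.0]])
migration_cost = np.array([[1.0, 2.0, 3.0],
                           [2.0, 1.0, 2.0],
                           [3.0, 2.0, 1.0]])

total_cost = delay_cost + migration_cost
tasks, nodes = linear_sum_assignment(total_cost)  # minimum-weight bipartite matching
for t, n in zip(tasks, nodes):
    print(f"place task {t} on node {n} (cost {total_cost[t, n]:.1f})")
```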
2.2 Centralized control system
Centralized control systems such as satellite control systems, avionics, cruise missiles, and similar systems usually have a loosely-coupled computer architecture [10]. The central control unit controls all application tasks and manages data transfer over the network. These capabilities place high demands on the on-board computer (OBC) and on the complexity of the OBC software (OBCSW). A spacecraft may travel in deep space on a critical mission for years. A satellite control computer system, as shown in Fig. 1, consists of loosely-coupled computers connected via a common data bus such as SpaceWire, MIL-STD-1553, ARINC422, or CAN. (Fig. 1: Typical OBC network - ESA OBCDH architecture.) Each computer comes from a different vendor, with a different architecture, processor, memory, and RTOS. A common language is therefore needed so that the computers can communicate with each other rather than merely exchange data. The longer a mission lasts in space, the more off-nominal situations occur and the more new features are required. It is necessary to exercise the desired concurrent control over the OBC and its subsystems by accepting new remote tasks for execution. Furthermore, if a piece of code could be sent to a subsystem over the network, many remarkable features would become possible, the most interesting being overcoming off-nominal situations, solving off-design contingencies remotely, and adding new features; overall system reliability is thereby enhanced. This is the main motivation for introducing a new task placement technique in such systems, where ordinary system maintenance or significant remote upgrading is difficult.
2.3 Process real-time virtual machine
Originally, software was written for a specific instruction set architecture (ISA) and a specific operating system (OS). The application layer communicates via the application binary interface (ABI) and the application programming interface (API), so applications are bound to the OS-ISA pair, as shown in Fig. 2a. (Fig. 2: Different virtual machine models.) A process virtual machine (PVM) manages the run-time environment and overcomes the OS-ISA pair limitation, as shown in Fig. 2b, by providing a higher abstraction level to execute code from different programming languages [11] on a different host machine. A PVM provides a platform-independent environment for programming languages whose code is interpreted implicitly, such as the JVM [12]. The last model is the system virtual machine, shown in Fig. 2c, a lower level of virtualization in which the system platform or hardware is represented at a specific abstraction level; a system VM may host an operating system and applications together. Most compilers that target embedded systems are platform specific, which imposes limitations when porting applications to a new platform: once code is written for a specific machine, it becomes challenging to port it to another processor architecture and/or OS [13]. Some approaches try to solve this problem, such as cross-compilers, which can create code that runs on another platform; the idea is to reconfigure source code developed for one platform into code suitable for the new host [14]. Compiled programs are bound by the Application Binary Interface (ABI) to a specific OS and instruction set architecture pair, whereas a PVM overcomes this limitation [15]. Virtualization in embedded systems must satisfy real-time requirements such as timing constraints, performance, and cost. Real-time virtual machines (RVMs) are a research field with many open challenges, such as worst-case execution time (WCET) analysis, porting to multiprocessor environments, and time-predictable dynamic compilation [12, 16]. Another important challenge is the VM in networked systems: monolithic virtual machines are suitable for closely-coupled systems only and are far from applicable to modern networked systems.
2.4 Java virtual machine
Virtual machines differ in their virtualization methodology and in what they virtualize. The Java Virtual Machine (JVM) abstracts the hardware and the machine away from the developer [17], so developers need not be concerned with the platform architecture. Code written in Java should run safely on any platform with a JVM. The process starts by translating Java code into Java bytecode, an intermediate machine-independent language, as shown in Fig. 3. Java bytecode can be transferred over the network; on the target, the JVM translates the bytecode into local machine native code for execution, hence Java's slogan, "Write once, run anywhere". The just-in-time (JIT) compiler turns Java bytecode into platform-specific executable code that is then executed [18]. (Fig. 3: Java virtual machine task exchange on a loosely-coupled network.) The overhead of translating bytecode into the target machine's native code limits the real-time capability of placing tasks over the network on the fly; a minimal ship-and-execute sketch, using Python's own bytecode as a stand-in, is given after this entry. For this reason, and because the JVM translates only Java source, we were motivated to present a non-monolithic virtual machine for real-time systems that runs a unified code on any machine without re-translation and without compromising the real-time requirements of migrated tasks. This RTVM is intended for long-lived centralized control systems such as satellites, nuclear plants, and similar systems, where subsystems are heterogeneous and run various RTOSs. The proposed RTVM accepts tasks written in different languages such as C, Java, and Python, and converts the source code into a unified code that can run on a different machine without recompilation, while preserving the required real-time constraints. | [] | []
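To illustrate the ship-and-execute pattern described in Section 2.4 above in a self-contained way, the sketch below uses Python's own bytecode (via compile and marshal) as a stand-in for Java bytecode or the SBC proposed in the paper. It is an analogy only: it provides no real-time guarantees, the task source is a made-up example, and, as the comment notes, CPython's marshal format is itself version-specific, which is exactly the kind of coupling a platform-independent task representation aims to avoid.

```python
import marshal

# "Sender" node: compile a task once into platform-independent bytecode.
task_source = "result = sum(range(10))"      # hypothetical placed task
code_obj = compile(task_source, "<placed_task>", "exec")
wire_bytes = marshal.dumps(code_obj)         # bytes that could travel over a data bus or socket

# "Receiver" node: rebuild and execute without needing the sender's tool-chain.
# Note: marshal output is CPython-version-specific, so this only works between
# identical interpreter versions; a true SBC-style format would avoid that coupling.
received = marshal.loads(wire_bytes)
namespace = {}
exec(received, namespace)
print(namespace["result"])                   # -> 45
```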
Frontiers in Neurology | 31297079 | PMC6607281 | 10.3389/fneur.2019.00647 | STIR-Net: Deep Spatial-Temporal Image Restoration Net for Radiation Reduction in CT Perfusion | Computed Tomography Perfusion (CTP) imaging is a cost-effective and fast approach to providing diagnostic images for acute stroke treatment. Its cine scanning mode allows the visualization of anatomic brain structures and blood flow; however, it requires contrast agent injection and continuous CT scanning over an extended time. The accumulated radiation dose increases patients' health risks, such as skin irritation, hair loss, cataract formation, and even cancer. Solutions for reducing radiation exposure include reducing the tube current and/or shortening the X-ray exposure time. However, images scanned at lower tube currents are usually accompanied by higher levels of noise and artifacts, and shorter exposure times with longer scanning intervals yield image information that is insufficient to capture the blood-flow dynamics between frames. It is therefore critical to find a solution that preserves image quality when the tube current and the temporal sampling frequency are both low. We propose STIR-Net, an end-to-end spatial-temporal convolutional neural network that exploits multi-directional automatic feature extraction and an image reconstruction scheme to recover high-quality CT slices effectively. Taking low-dose, low-resolution patches from different cross-sections of the spatio-temporal data as input, STIR-Net blends features from both the spatial and temporal domains to reconstruct high-quality CT volumes. We conduct extensive experiments to appraise the image restoration performance at different levels of tube current and at different spatial and temporal resolution scales. The results demonstrate the capability of STIR-Net to restore high-quality scans at as low as 11% of the absorbed radiation dose of the current imaging protocol, yielding an average of 10% improvement in perfusion maps compared to the patch-based log-likelihood method. | 2. Related Work
It is necessary to develop low-dose CTP protocols to reduce the risks associated with excessive X-ray radiation exposure. Acquisition parameters such as the tube current, the temporal sampling frequency, and the spatial resolution are intricately related to the quality of the reconstructed CTP images, especially for generating the perfusion maps that doctors use directly to make treatment decisions. Related work includes radiation dose reduction approaches based on image processing strategies, deep learning approaches, image SR methods, and denoising methods. Our previous work on a spatio-temporal architecture is introduced at the end of this section.
2.1. Radiation Dose Reduction Approaches
Radiation dose reduction approaches include reducing the tube current, the temporal sampling frequency, and the number of beams. Radiation dose is linearly proportional to the tube current; for example, lowering the tube current by 50% leads to a 50% reduction in radiation dose. However, image noise is inversely proportional to the square root of the tube current (a short worked example follows this paragraph), so simply reducing the tube current deteriorates CTP image quality with increased noise and artifacts. Current simulation studies demonstrate the possibility and the effectiveness of maintaining image quality at reduced tube current (13, 14).
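To make the dose-noise trade-off above concrete, here is a small worked example of the commonly used approximation that image noise scales with the inverse square root of the tube current-time product; the 50% reduction is an illustrative value, not a measurement from this study.

```latex
\sigma \propto \frac{1}{\sqrt{I}}, \qquad
\frac{\sigma_{\text{low}}}{\sigma_{\text{full}}} = \sqrt{\frac{I_{\text{full}}}{I_{\text{low}}}}
% Example: cutting the tube current to half its original value (I_low = 0.5 I_full)
% halves the dose but raises the noise standard deviation by sqrt(2) ~ 1.41,
% i.e., roughly a 41% increase.
```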
Reducing the temporal sampling frequency is equivalent to increasing the time interval between the acquisition of two CTP slices in the same CT study. As with lowering the tube current, reducing the temporal sampling frequency reduces radiation correspondingly, because the total scanning period is fixed while the time interval is increased. However, current research (15–17) shows that reducing the sampling frequency yields little advantage once the time interval exceeds 1 s.
2.2. Image-Based Radiation Dose Reduction Approaches
Acquiring CT scans at low dose and with long scanning intervals results in noisy, low-resolution (LR) images with insufficient hemodynamic information. It is important to obtain higher-quality CT images from such limited data; we therefore address CT radiation reduction as an image-based dose reduction problem. Recent work shows that image-based dose reduction is a promising route to CT radiation reduction. For example, Yu et al. (18) studied pediatric abdomen, pelvis, and chest CT examinations and demonstrated that a 50% dose reduction can still maintain diagnostic quality. Image-based approaches include iterative reconstruction algorithms, sparse representation and dictionary learning, and example-based restoration methods; we review the relevant work as follows. The iterative reconstruction (IR) algorithm is a promising approach for dose reduction. It produces a set of synthesized projections by meticulously modeling the data acquisition process in CT imaging. For example, the adaptive statistical iterative reconstruction (ASIR) algorithm (19) was the first IR algorithm used in the clinic. By modeling the noise distribution of the acquired data, ASIR can provide clinically acceptable image quality at reduced doses, and many CT systems apply it as a radiation dose reduction approach because it reduces image noise and provides dose-reduced clinical images with preserved diagnostic value (20). Another IR algorithm, model-based iterative reconstruction, is more complex and more accurate than ASIR because it models photons and system optics jointly. Sparse representation and dictionary learning describe data as linear combinations of a few fundamental elements drawn from a predefined collection called a dictionary. In the computer vision and medical image analysis domains, they have shown promising results in various image restoration applications, including sparsity-based simultaneous denoising and interpolation for optical coherence tomography image reconstruction (21), dictionary learning with group sparsity and graph regularization for medical image denoising and fusion (22), and magnetic resonance image reconstruction (23); a brief dictionary-learning sketch is given at the end of this subsection. The example-based restoration approach is another popular method for image restoration. It extracts patch pairs from low-quality and high-quality images and stores them in a database as prior knowledge; in the restoration phase, it learns a model that synthesizes high-quality images by searching for the best-matched paired patches. Applications in image restoration (24–26) show the promising performance of such prior knowledge.
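As a concrete illustration of the dictionary-learning idea in Section 2.2 above, the following is a minimal, hypothetical sketch of patch-based denoising with a learned dictionary, using scikit-learn. It is not the method of references (21–23); the patch size, the number of atoms, and the sparsity penalty are illustrative values only, and learning on every overlapping patch of a large image would be slow in practice.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dictionary_denoise(noisy, patch_size=(8, 8), n_atoms=64, alpha=1.0):
    """Denoise a 2-D image by sparse-coding its patches over a dictionary learned from them."""
    patches = extract_patches_2d(noisy, patch_size)            # all overlapping patches
    X = patches.reshape(len(patches), -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    X -= mean                                                   # remove per-patch DC component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha)
    codes = dico.fit(X).transform(X)                            # sparse codes for every patch
    denoised = (codes @ dico.components_ + mean).reshape(patches.shape)
    return reconstruct_from_patches_2d(denoised, noisy.shape)   # average overlapping patches
```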
2.3. Deep Learning
In recent years, deep learning methods have emerged in various computer vision tasks, including image classification (27) and object detection (28), and have dramatically improved the performance of these systems. These approaches have also achieved significant improvements in image restoration (29, 30), super-resolution (31), and optical flow (32). The strong performance stems from the advanced modeling capability of deep structures and their non-linearity, combined with discriminative learning on large datasets. The Convolutional Neural Network (CNN), one of the most renowned deep learning architectures, shows promising results for image-based problems. CNN structures are usually composed of several convolutional layers with activation layers, followed by one or more fully connected layers. The CNN design exploits image structure through local connections, weight sharing, and non-linearity. A further benefit is that CNNs are easier to train and have fewer parameters than fully connected networks with the same number of hidden units. CNN structures allow automatic feature extraction and learning from limited information to reconstruct high-quality images; a minimal illustrative restoration CNN is sketched further below, after the overview of denoising methods.
2.4. Image Super-Resolution
Image super-resolution aims at restoring HR images from observed LR images. SR methods use different portions of LR images, or separate images, to approximate the HR image. There are two types of SR algorithms: frequency-domain-based and spatial-domain-based. Initially, SR methods mostly addressed problems in the frequency domain (33, 34); these algorithms rest on a simple theoretical basis for relating HR and LR images. Although they are computationally efficient, they are limited by their sensitivity to model errors and by the difficulty of handling complex motion models. Spatial-domain algorithms then became the main trend by overcoming these drawbacks (35). Predominant spatial-domain methods include non-uniform interpolation (36), iterative back-projection (37), projection onto convex sets (38), regularized methods (39), and a number of hybrid algorithms (40). Deep learning is a popular approach for image SR and has achieved significant performance (31, 41–43). However, most SR frameworks focus on 2D images, as involving the temporal dimension is more challenging, especially in CTP imaging. In this work, we address these difficulties and demonstrate the feasibility of our framework for cerebral CTP image restoration.
2.5. Image Denoising
Image denoising aims at recovering a clean image from an observed image corrupted by additive Gaussian noise. One of the main challenges is to accurately identify the noise and remove it from the observed image. Based on the image properties being used, existing methods can be classified as prior-based (44), sparse-coding-based (25), low-rank-based (45), filter-based (46), and deep-learning-based (47, 48). The filter-based methods (46) are classical and fundamental, and many subsequent studies have been developed from them (49).
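As referenced in Section 2.3 above, here is a minimal residual CNN for image restoration in PyTorch. It is a toy sketch assuming single-channel slices; the layer count, channel width, and the random stand-in tensors are arbitrary, and it is not the STIR-Net architecture itself.

```python
import torch
import torch.nn as nn

class TinyRestorationCNN(nn.Module):
    """Toy residual CNN: predicts the residual between a degraded slice and a clean one."""
    def __init__(self, channels=1, features=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, features, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)  # residual learning, as popularized by VDSR-style SR networks

# One illustrative training step on random stand-in data; real use would pair
# low-dose/low-resolution CTP patches with their high-dose counterparts.
model = TinyRestorationCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
low_quality = torch.randn(8, 1, 64, 64)
high_quality = torch.randn(8, 1, 64, 64)
loss = nn.functional.mse_loss(model(low_quality), high_quality)
loss.backward()
optimizer.step()
```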
Numerous works have reconstructed clean CT images that preserve the image quality of the perfusion maps; these works include methods such as bilateral filtering, non-local means (50), nonlinear diffusion filters (51), and wavelet-based methods (52). The oscillatory nature of the truncated singular value decomposition (TSVD)-based method has initiated research that incorporates different regularization methods to stabilize the deconvolution. This research has shown varying degrees of success in stabilizing the residue functions by enforcing both temporal and spatial regularization on the residue function (53, 54). However, prior studies have focused exclusively on regularizing the noisy low-dose CTP, without considering the corpus of high-dose CTP data and the multi-dimensional properties of CT images. Recently, deep-learning-based methods (47, 48) have shown many advantages in learning the mapping from observed low-quality images to high-quality ones. These methods use CNN models trained on tens of thousands of samples; however, paired training data are usually scarce in the medical field. Hence, an effective learning-based model is desired. In this work, we utilize data extracted from different cross-sections of the CTP volume to achieve better performance in image SR and denoising. The experimental results show that the proposed network can handle various noise and image degradation levels.
2.6. Spatial-Temporal Architecture
In our previous work, we proposed the Spatio-Temporal Architecture for Super-Resolution (STAR) (55) for low-dose CTP image super-resolution. It is an end-to-end spatio-temporal architecture that preserves image quality at reduced scanning time and at radiation reduced to one-third of its original level. It is an image-based dose reduction approach that focuses on super-resolution only. STAR is inspired by the work of Kim et al. (31) and is extended to three-dimensional volumes by conjoining multiple cross-sections. Through this work, we found that features extracted from both the spatial and temporal directions help improve SR performance, and that integrating multiple single-directional networks (SDNs) can boost SR performance for spatio-temporal CTP data. The experimental results show that the proposed basic SDN model improves both spatial and temporal resolution, while the multi-directional conjoint network further enhances the SR results, comparing favorably with temporal-only or spatial-only SR. However, that work addresses only low spatial and temporal resolution; it misses the important noise issue in low-dose CTP. In this paper, we propose STIR-Net, an end-to-end spatial-temporal image restoration net for CTP radiation reduction. We compose and integrate several SRDNs instead of SDNs at different cross-sections to perform image super-resolution and denoising simultaneously. The STIR-Net structure is explained in section 3. In section 4, we describe the experiment platform setup, the data acquisition method, and the preprocessing procedures. In section 5, we detail the experiments and results. Finally, section 6 concludes the paper. | [
"28880858",
"28122885",
"19789227",
"26494635",
"23568701",
"19892810",
"18664497",
"20008689",
"27824812",
"23557960",
"25252738",
"23345379",
"26452610",
"21816919",
"26496550",
"20643609",
"22592622",
"23846467",
"22968202",
"21742542",
"17153947",
"24808354",
"18249647",
"2585170",
"18285235",
"28166495",
"21654042",
"17679327",
"11296876",
"25706579",
"23542422",
"15107323",
"26571527"
] | [
{
"pmid": "28880858",
"title": "Vital Signs: Recent Trends in Stroke Death Rates - United States, 2000-2015.",
"abstract": "INTRODUCTION\nThe prominent decline in U.S. stroke death rates observed for more than 4 decades has slowed in recent years. CDC examined trends and patterns in recent stroke death rates among U.S. adults aged ≥35 years by age, sex, race/ethnicity, state, and census region.\n\n\nMETHODS\nTrends in the rates of stroke as the underlying cause of death during 2000-2015 were analyzed using data from the National Vital Statistics System. Joinpoint software was used to identify trends in stroke death rates, and the excess number of stroke deaths resulting from unfavorable changes in trends was estimated.\n\n\nRESULTS\nAmong adults aged ≥35 years, age-standardized stroke death rates declined 38%, from 118.4 per 100,000 persons in 2000 to 73.3 per 100,000 persons in 2015. The annual percent change (APC) in stroke death rates changed from 2000 to 2015, from a 3.4% decrease per year during 2000-2003, to a 6.6% decrease per year during 2003-2006, a 3.1% decrease per year during 2006-2013, and a 2.5% (nonsignificant) increase per year during 2013-2015. The last trend segment indicated a reversal from a decrease to a statistically significant increase among Hispanics (APC = 5.8%) and among persons in the South Census Region (APC = 4.2%). Declines in stroke death rates failed to continue in 38 states, and during 2013-2015, an estimated 32,593 excess stroke deaths might not have occurred if the previous rate of decline could have been sustained.\n\n\nCONCLUSIONS AND IMPLICATIONS FOR PUBLIC HEALTH PRACTICE\nPrior declines in stroke death rates have not continued in recent years, and substantial variations exist in timing and magnitude of change by demographic and geographic characteristics. These findings suggest the importance of strategically identifying opportunities for prevention and intervening in vulnerable populations, especially because effective and underused interventions to prevent stroke incidence and death are known to exist."
},
{
"pmid": "19789227",
"title": "Radiologic and nuclear medicine studies in the United States and worldwide: frequency, radiation dose, and comparison with other radiation sources--1950-2007.",
"abstract": "The U.S. National Council on Radiation Protection and Measurements and United Nations Scientific Committee on Effects of Atomic Radiation each conducted respective assessments of all radiation sources in the United States and worldwide. The goal of this article is to summarize and combine the results of these two publicly available surveys and to compare the results with historical information. In the United States in 2006, about 377 million diagnostic and interventional radiologic examinations and 18 million nuclear medicine examinations were performed. The United States accounts for about 12% of radiologic procedures and about one-half of nuclear medicine procedures performed worldwide. In the United States, the frequency of diagnostic radiologic examinations has increased almost 10-fold (1950-2006). The U.S. per-capita annual effective dose from medical procedures has increased about sixfold (0.5 mSv [1980] to 3.0 mSv [2006]). Worldwide estimates for 2000-2007 indicate that 3.6 billion medical procedures with ionizing radiation (3.1 billion diagnostic radiologic, 0.5 billion dental, and 37 million nuclear medicine examinations) are performed annually. Worldwide, the average annual per-capita effective dose from medicine (about 0.6 mSv of the total 3.0 mSv received from all sources) has approximately doubled in the past 10-15 years."
},
{
"pmid": "26494635",
"title": "Nationwide survey of radiation exposure during pediatric computed tomography examinations and proposal of age-based diagnostic reference levels for Japan.",
"abstract": "BACKGROUND\nDiagnostic reference levels (DRLs) have not been established in Japan.\n\n\nOBJECTIVE\nTo propose DRLs for CT of the head, chest and abdomen for three pediatric age groups.\n\n\nMATERIALS AND METHODS\nWe sent a nationwide questionnaire by post to 339 facilities. Questions focused on pediatric CT technology, exposure parameters, CT protocols, and radiation doses for age groups <1 year, 1-5 years, and 6-10 years.\n\n\nRESULTS\nFor the three age groups in the 196 facilities that responded, the 75th percentile values of volume CT dose index based on a 16-cm phantom (CTDIvol 16 [mGy]) for head, chest and abdominal CT were for infants 39.1, 11.1 and 12.0, respectively; for 1-to 5-year-olds 46.9, 14.3 and 16.7, respectively; and for 6-to 10-year-olds 67.7, 15.0 and 17.0, respectively. The corresponding dose–length products (DLP 16 [mGy・cm]) for head, chest and abdominal CT were for infants 526.1, 209.1 and 261.5, respectively; for 1-to 5-year-olds 665.5, 296.0 and 430.8, respectively; and for 6-to 10-year-olds 847.9, 413.0 and 532.2, respectively.\n\n\nCONCLUSION\nThe majority of CTDIvol 16 and DLP 16 values for the head were higher than DRLs reported from other countries. For risk reduction, it is necessary to establish DRLs for pediatric CT in Japan."
},
{
"pmid": "23568701",
"title": "Whole-brain CT perfusion: reliability and reproducibility of volumetric perfusion deficit assessment in patients with acute ischemic stroke.",
"abstract": "INTRODUCTION\nThe aim of this study was to examine reliability and reproducibility of volumetric perfusion deficit assessment in patients with acute ischemic stroke who underwent recently introduced whole-brain CT perfusion (WB-CTP).\n\n\nMETHODS\nTwenty-five consecutive patients underwent 128-row WB-CTP with extended scan coverage of 100 mm in the z-axis using adaptive spiral scanning technique. Volumetric analysis of cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), time to peak (TTP), and time to drain (TTD) was performed twice by two blinded and experienced readers using OsiriX V.4.0 imaging software. Interreader agreement and intrareader agreement were assessed by intraclass correlation coefficients (ICCs) and Bland-Altman Analysis.\n\n\nRESULTS\nInterreader agreement was highest for TTD (ICC 0.982), followed by MTT (0.976), CBF (0.955), CBV (0.933), and TTP (0.865). Intrareader agreement was also highest for TTD (ICC 0.993), followed by MTT (0.988), CBF (0.981), CBV (9.953), and TTP (0.927). The perfusion deficits showed the highest absolute volumes in the time-related parametric maps TTD (mean volume 121.4 ml), TTP (120.0 ml), and MTT (112.6 ml) and did not differ significantly within this group (each with p > 0.05). In comparison to time-related maps, the mean CBF perfusion deficit volume was significantly smaller (92.1 ml, each with p < 0.05). The mean CBV lesion size was 23.4 ml.\n\n\nCONCLUSIONS\nVolumetric assessment in WB-CTP is reliable and reproducible. It might serve for a more accurate assessment of stroke outcome prognosis and definition of flow-volume mismatch. Time to drain showed the highest agreement and therefore might be an interesting parameter to define tissue at risk."
},
{
"pmid": "18664497",
"title": "Risk of cataract after exposure to low doses of ionizing radiation: a 20-year prospective cohort study among US radiologic technologists.",
"abstract": "The study aim was to determine the risk of cataract among radiologic technologists with respect to occupational and nonoccupational exposures to ionizing radiation and to personal characteristics. A prospective cohort of 35,705 cataract-free US radiologic technologists aged 24-44 years was followed for nearly 20 years (1983-2004) by using two follow-up questionnaires. During the study period, 2,382 cataracts and 647 cataract extractions were reported. Cigarette smoking for >or=5 pack-years; body mass index of >or=25 kg/m(2); and history of diabetes, hypertension, hypercholesterolemia, or arthritis at baseline were significantly (p <or= 0.05) associated with increased risk of cataract. In multivariate models, self-report of >or=3 x-rays to the face/neck was associated with a hazard ratio of cataract of 1.25 (95% confidence interval: 1.06, 1.47). For workers in the highest category (mean, 60 mGy) versus lowest category (mean, 5 mGy) of occupational dose to the lens of the eye, the adjusted hazard ratio of cataract was 1.18 (95% confidence interval: 0.99, 1.40). Findings challenge the National Council on Radiation Protection and International Commission on Radiological Protection assumptions that the lowest cumulative ionizing radiation dose to the lens of the eye that can produce a progressive cataract is approximately 2 Gy, and they support the hypothesis that the lowest cataractogenic dose in humans is substantially less than previously thought."
},
{
"pmid": "20008689",
"title": "Projected cancer risks from computed tomographic scans performed in the United States in 2007.",
"abstract": "BACKGROUND\nThe use of computed tomographic (CT) scans in the United States (US) has increased more than 3-fold since 1993 to approximately 70 million scans annually. Despite the great medical benefits, there is concern about the potential radiation-related cancer risk. We conducted detailed estimates of the future cancer risks from current CT scan use in the US according to age, sex, and scan type.\n\n\nMETHODS\nRisk models based on the National Research Council's \"Biological Effects of Ionizing Radiation\" report and organ-specific radiation doses derived from a national survey were used to estimate age-specific cancer risks for each scan type. These models were combined with age- and sex-specific scan frequencies for the US in 2007 obtained from survey and insurance claims data. We estimated the mean number of radiation-related incident cancers with 95% uncertainty limits (UL) using Monte Carlo simulations.\n\n\nRESULTS\nOverall, we estimated that approximately 29 000 (95% UL, 15 000-45 000) future cancers could be related to CT scans performed in the US in 2007. The largest contributions were from scans of the abdomen and pelvis (n = 14 000) (95% UL, 6900-25 000), chest (n = 4100) (95% UL, 1900-8100), and head (n = 4000) (95% UL, 1100-8700), as well as from chest CT angiography (n = 2700) (95% UL, 1300-5000). One-third of the projected cancers were due to scans performed at the ages of 35 to 54 years compared with 15% due to scans performed at ages younger than 18 years, and 66% were in females.\n\n\nCONCLUSIONS\nThese detailed estimates highlight several areas of CT scan use that make large contributions to the total cancer risk, including several scan types and age groups with a high frequency of use or scans involving relatively high doses, in which risk-reduction efforts may be warranted."
},
{
"pmid": "27824812",
"title": "Projected cancer risks potentially related to past, current, and future practices in paediatric CT in the United Kingdom, 1990-2020.",
"abstract": "BACKGROUND\nTo project risks of developing cancer and the number of cases potentially induced by past, current, and future computed tomography (CT) scans performed in the United Kingdom in individuals aged <20 years.\n\n\nMETHODS\nOrgan doses were estimated from surveys of individual scan parameters and CT protocols used in the United Kingdom. Frequencies of scans were estimated from the NHS Diagnostic Imaging Dataset. Excess lifetime risks (ELRs) of radiation-related cancer were calculated as cumulative lifetime risks, accounting for survival probabilities, using the RadRAT risk assessment tool.\n\n\nRESULTS\nIn 2000-2008, ELRs ranged from 0.3 to 1 per 1000 head scans and 1 to 5 per 1000 non-head scans. ELRs per scan were reduced by 50-70% in 2000-2008 compared with 1990-1995, subsequent to dose reduction over time. The 130 750 scans performed in 2015 in the United Kingdom were projected to induce 64 (90% uncertainty interval (UI): 38-113) future cancers. Current practices would lead to about 300 (90% UI: 230-680) future cancers induced by scans performed in 2016-2020.\n\n\nCONCLUSIONS\nAbsolute excess risks from single exposures would be low compared with background risks, but even small increases in annual CT rates over the next years would substantially increase the number of potential subsequent cancers."
},
{
"pmid": "23557960",
"title": "Effects of increased image noise on image quality and quantitative interpretation in brain CT perfusion.",
"abstract": "BACKGROUND AND PURPOSE\nThere is a desire within many institutions to reduce the radiation dose in CTP examinations. The purpose of this study was to simulate dose reduction through the addition of noise in brain CT perfusion examinations and to determine the subsequent effects on quality and quantitative interpretation.\n\n\nMATERIALS AND METHODS\nA total of 22 consecutive reference CTP scans were identified from an institutional review board-approved prospective clinical trial, all performed at 80 keV and 190 mAs. Lower-dose scans at 188, 177, 167, 127, and 44 mAs were generated through the addition of spatially correlated noise to the reference scans. A standard software package was used to generate CBF, CBV, and MTT maps. Six blinded radiologists determined quality scores of simulated scans on a Likert scale. Quantitative differences were calculated.\n\n\nRESULTS\nFor qualitative analysis, the correlation coefficients for CBF (-0.34; P < .0001), CBV (-0.35; P < .0001), and MTT (-0.44; P < .0001) were statistically significant. Interobserver agreements in quality for the simulated 188-, 177-, 167-, 127-, and 44-mAs scans for CBF were 0.95, 0.98, 0.98, 0.95, and 0.52, respectively. Interobserver agreements in quality for the simulated CBV were 1, 1, 1, 1, and 0.83, respectively. For MTT, the interobserver agreements were 0.83, 0.86, 0.88, 0.74, and 0.05, respectively. For quantitative analysis, only the lowest simulated dose of 44 mAs showed statistically significant differences from the reference scan values for CBF (-1.8; P = .04), CBV (0.07; P < .0001), and MTT (0.46; P < .0001).\n\n\nCONCLUSIONS\nFrom a reference CTP study performed at 80 keV and 190 mAs, this simulation study demonstrates the potential of a 33% reduction in tube current and dose while maintaining image quality and quantitative interpretations. This work can be used to inform future studies by using true, nonsimulated scans."
},
{
"pmid": "25252738",
"title": "Low dose CT perfusion in acute ischemic stroke.",
"abstract": "INTRODUCTION\nThe purpose of this investigation is to determine if CT perfusion (CTP) measurements at low doses (LD = 20 or 50 mAs) are similar to those obtained at regular doses (RD = 100 mAs), with and without the addition of adaptive statistical iterative reconstruction (ASIR).\n\n\nMETHODS\nA single-center, prospective study was performed in patients with acute ischemic stroke (n = 37; 54% male; age = 74 ± 15 years). Two CTP scans were performed on each subject: one at 100 mAs (RD) and one at either 50 or 20 mAs (LD). CTP parameters were compared between the RD and LD scans in regions of ischemia, infarction, and normal tissue. Differences were determined using a within-subjects ANOVA (p < 0.05) followed by a paired t test post hoc analysis (p < 0.01).\n\n\nRESULTS\nAt 50 mAs, there was no significant difference between cerebral blood flow (CBF), cerebral blood volume (CBV), or time to maximum enhancement (Tmax) values for the RD and LD scans in the ischemic, infarcted, or normal contralateral regions (p < 0.05). At 20 mAs, there were significant differences between the RD and LD scans for all parameters in the ischemic and normal tissue regions (p > 0.05).\n\n\nCONCLUSION\nCTP-derived CBF and CBV are not different at 50 mAs compared to 100 mAs, even without the addition of ASIR. Current CTP protocols can be modified to reduce the effective dose by 50 % without altering CTP measurements."
},
{
"pmid": "23345379",
"title": "Effect of sampling frequency on perfusion values in perfusion CT of lung tumors.",
"abstract": "OBJECTIVE\nThe purpose of this study was to assess as a potential means of limiting radiation exposure the effect on perfusion CT values of increasing sampling intervals in lung perfusion CT acquisition.\n\n\nSUBJECTS AND METHODS\nLung perfusion CT datasets in patients with lung tumors (> 2.5 cm diameter) were analyzed by distributed parameter modeling to yield tumor blood flow, blood volume, mean transit time, and permeability values. Scans were obtained 2-7 days apart with a 16-MDCT scanner without intervening therapy. Linear mixed-model analyses were used to compare perfusion CT values for the reference standard sampling interval of 0.5 second with those of datasets obtained at sampling intervals of 1, 2, and 3 seconds, which included relative shifts to account for uncertainty in preenhancement set points. Scan-rescan reproducibility was assessed by between-visit coefficient of variation.\n\n\nRESULTS\nTwenty-four lung perfusion CT datasets in 12 patients were analyzed. With increasing sampling interval, mean and 95% CI blood flow and blood volume values were increasingly overestimated by up to 14% (95% CI, 11-18%) and 8% (95% CI, 5-11%) at the 3-second sampling interval, and mean transit time and permeability values were underestimated by up to 11% (95% CI, 9-13%) and 3% (95% CI, 1-6%) compared with the results in the standard sampling interval of 0.5 second. The differences were significant for blood flow, blood volume, and mean transit time for sampling intervals of 2 and 3 seconds (p ≤ 0.0002) but not for the 1-second sampling interval. The between-visit coefficient of variation increased with subsampling for blood flow (32.9-34.2%), blood volume (27.1-33.5%), and permeability (39.0-42.4%) compared with the values in the 0.5-second sampling interval (21.3%, 23.6%, and 32.2%).\n\n\nCONCLUSION\nIncreasing sampling intervals beyond 1 second yields significantly different perfusion CT parameter values compared with the reference standard (up to 18% for 3 seconds of sampling). Scan-rescan reproducibility is also adversely affected."
},
{
"pmid": "26452610",
"title": "Radiation dose reduction in perfusion CT imaging of the brain: A review of the literature.",
"abstract": "Perfusion CT (PCT) of the brain is widely used in the settings of acute ischemic stroke and vasospasm monitoring. The high radiation dose associated with PCT is a central topic and has been a focus of interest for many researchers. Many studies have examined the effect of radiation dose reduction in PCT using different approaches. Reduction of tube current and tube voltage can be efficient and lead to a remarkable reduction of effective radiation dose while preserving acceptable image quality. The use of novel noise reduction techniques such as iterative reconstruction or spatiotemporal smoothing can produce sufficient image quality from low-dose perfusion protocols. Reduction of sampling frequency of perfusion images has only little potential to reduce radiation dose. In the present article we aimed to summarize the available data on radiation dose reduction in PCT imaging of the brain."
},
{
"pmid": "21816919",
"title": "CT perfusion in acute ischemic stroke: a comparison of 2-second and 1-second temporal resolution.",
"abstract": "BACKGROUND AND PURPOSE\nCT perfusion data sets are commonly acquired using a temporal resolution of 1 image per second. To limit radiation dose and allow for increased spatial coverage, the reduction of temporal resolution is a possible strategy. The aim of this study was to evaluate the effect of reduced temporal resolution in CT perfusion scans with regard to color map quality, quantitative perfusion parameters, ischemic lesion extent, and clinical decision-making when using DC and MS algorithms.\n\n\nMATERIALS AND METHODS\nCTP datasets from 50 patients with acute stroke were acquired with a TR of 1 second. Two-second TR datasets were created by removing every second image. Various perfusion parameters (CBF, CBV, MTT, TTP, TTD) and color maps were calculated by using identical data-processing settings for 2-second and 1-second TR. Color map quality, quantitative region-of-interest-based perfusion measurements, and TAR/NVT lesions (indicated by CBF/CBV mismatch) derived from the 2-second and 1-second processed data were statistically compared.\n\n\nRESULTS\nColor map quality was similar for 2-second versus 1-second TR when using DC and was reduced when using MS. Regarding quantitative values, differences between 2-second and 1-second TR datasets were statistically significant by using both algorithms. Using DC, corresponding tissue-at-risk lesions were slightly smaller at 2-second versus 1-second TR (P < .05), whereas corresponding NVT lesions showed excellent agreement. With MS, corresponding tissue-at-risk lesions showed excellent agreement but more artifacts, whereas NVT lesions were larger (P < .001) compared with 1-second TR. Therapeutic decisions would have remained the same in all patients.\n\n\nCONCLUSIONS\nCTP studies obtained with 2-second TR are typically still diagnostic, and the same therapy would have been provided. However, with regard to perfusion quantitation and image-quality-based confidence, our study indicates that 1-second TR is preferable to 2-second TR."
},
{
"pmid": "26496550",
"title": "Radiation Dose Reduction in Pediatric Body CT Using Iterative Reconstruction and a Novel Image-Based Denoising Method.",
"abstract": "OBJECTIVE\nThe objective of this study was to evaluate the radiation dose reduction potential of a novel image-based denoising technique in pediatric abdominopelvic and chest CT examinations and compare it with a commercial iterative reconstruction method.\n\n\nMATERIALS AND METHODS\nData were retrospectively collected from 50 (25 abdominopelvic and 25 chest) clinically indicated pediatric CT examinations. For each examination, a validated noise-insertion tool was used to simulate half-dose data, which were reconstructed using filtered back-projection (FBP) and sinogram-affirmed iterative reconstruction (SAFIRE) methods. A newly developed denoising technique, adaptive nonlocal means (aNLM), was also applied. For each of the 50 patients, three pediatric radiologists evaluated four datasets: full dose plus FBP, half dose plus FBP, half dose plus SAFIRE, and half dose plus aNLM. For each examination, the order of preference for the four datasets was ranked. The organ-specific diagnosis and diagnostic confidence for five primary organs were recorded.\n\n\nRESULTS\nThe mean (± SD) volume CT dose index for the full-dose scan was 5.3 ± 2.1 mGy for abdominopelvic examinations and 2.4 ± 1.1 mGy for chest examinations. For abdominopelvic examinations, there was no statistically significant difference between the half dose plus aNLM dataset and the full dose plus FBP dataset (3.6 ± 1.0 vs 3.6 ± 0.9, respectively; p = 0.52), and aNLM performed better than SAFIRE. For chest examinations, there was no statistically significant difference between the half dose plus SAFIRE and the full dose plus FBP (4.1 ± 0.6 vs 4.2 ± 0.6, respectively; p = 0.67), and SAFIRE performed better than aNLM. For all organs, there was more than 85% agreement in organ-specific diagnosis among the three half-dose configurations and the full dose plus FBP configuration.\n\n\nCONCLUSION\nAlthough a novel image-based denoising technique performed better than a commercial iterative reconstruction method in pediatric abdominopelvic CT examinations, it performed worse in pediatric chest CT examinations. A 50% dose reduction can be achieved while maintaining diagnostic quality."
},
{
"pmid": "20643609",
"title": "Fast model-based X-ray CT reconstruction using spatially nonhomogeneous ICD optimization.",
"abstract": "Recent applications of model-based iterative reconstruction (MBIR) algorithms to multislice helical CT reconstructions have shown that MBIR can greatly improve image quality by increasing resolution as well as reducing noise and some artifacts. However, high computational cost and long reconstruction times remain as a barrier to the use of MBIR in practical applications. Among the various iterative methods that have been studied for MBIR, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a fast model-based iterative reconstruction algorithm using spatially nonhomogeneous ICD (NH-ICD) optimization. The NH-ICD algorithm speeds up convergence by focusing computation where it is most needed. The NH-ICD algorithm has a mechanism that adaptively selects voxels for update. First, a voxel selection criterion VSC determines the voxels in greatest need of update. Then a voxel selection algorithm VSA selects the order of successive voxel updates based upon the need for repeated updates of some locations, while retaining characteristics for global convergence. In order to speed up each voxel update, we also propose a fast 1-D optimization algorithm that uses a quadratic substitute function to upper bound the local 1-D objective function, so that a closed form solution can be obtained rather than using a computationally expensive line search algorithm. We examine the performance of the proposed algorithm using several clinical data sets of various anatomy. The experimental results show that the proposed method accelerates the reconstructions by roughly a factor of three on average for typical 3-D multislice geometries."
},
{
"pmid": "22592622",
"title": "Comparison of hybrid and pure iterative reconstruction techniques with conventional filtered back projection: dose reduction potential in the abdomen.",
"abstract": "PURPOSE\nAssess the effect of filtered back projection (FBP) and hybrid (adaptive statistical iterative reconstruction [ASIR]) and pure (model-based iterative reconstruction [MBIR]) iterative reconstructions on abdominal computed tomography (CT) acquired with 75% radiation dose reduction.\n\n\nMATERIALS AND METHODS\nIn an institutional review board-approved prospective study, 10 patients (mean [standard deviation] age, 60 (8) years; 4 men and 6 women) gave informed consent for acquisition of additional abdominal images on 64-slice multidetector-row CT (GE 750HD, GE Healthcare). Scanning was repeated over a 10-cm scan length at 200 and 50 milliampere second (mA s), with remaining parameters held constant at 120 kilovolt (peak), 0.984:1 pitch, and standard reconstruction kernel. Projection data were deidentified, exported, and reconstructed to obtain 4 data sets (200-mA s FBP, 50-mA s FBP, 50-mA s ASIR, 50-mA s MBIR), which were evaluated by 2 abdominal radiologists for lesions and subjective image quality. Objective noise and noise spectral density were measured for each image series.\n\n\nRESULTS\nAmong the 10 patients, the maximum weight recorded was 123 kg, with maximum transverse diameter measured as 43.7 cm. Lesion conspicuity at 50-mA s MBIR was better than on 50-mA s FBP and ASIR images (P < 0.01). Image noise was rated as suboptimal on low-dose FBP and ASIR but deemed acceptable in MBIR images. Objective noise with 50-mA s MBIR was 2 to 3 folds lower compared to 50-mA s ASIR, 50-mA s FBP, and 200-mA s FBP (P < 0.0001). Noise spectral density analyses demonstrated that ASIR retains the noise spectrum signature of FBP, whereas MBIR has much lower noise with a more regularized noise spectrum pattern.\n\n\nCONCLUSION\nModel-based iterative reconstruction renders acceptable image quality and diagnostic confidence in 50- mA s abdominal CT images, whereas FBP and ASIR images are associated with suboptimal image quality at this radiation dose level."
},
{
"pmid": "23846467",
"title": "Fast acquisition and reconstruction of optical coherence tomography images via sparse representation.",
"abstract": "In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods."
},
{
"pmid": "22968202",
"title": "Group-sparse representation with dictionary learning for medical image denoising and fusion.",
"abstract": "Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of learned dictionary can be enforced small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to perverse the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches."
},
{
"pmid": "21742542",
"title": "Efficient MR image reconstruction for compressed MR imaging.",
"abstract": "In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to a least square data fitting, total variation (TV) and L1 norm regularization. This has been shown to be very powerful for the MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of solutions from two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in term of the reconstruction accuracy and computation complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction."
},
{
"pmid": "17153947",
"title": "Image denoising via sparse and redundant representations over learned dictionaries.",
"abstract": "We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality image database. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to a state-of-the-art denoising performance, equivalent and sometimes surpassing recently published leading alternative denoising methods."
},
{
"pmid": "24808354",
"title": "Novel example-based method for super-resolution and denoising of medical images.",
"abstract": "In this paper, we propose a novel example-based method for denoising and super-resolution of medical images. The objective is to estimate a high-resolution image from a single noisy low-resolution image, with the help of a given database of high and low-resolution image patch pairs. Denoising and super-resolution in this paper is performed on each image patch. For each given input low-resolution patch, its high-resolution version is estimated based on finding a nonnegative sparse linear representation of the input patch over the low-resolution patches from the database, where the coefficients of the representation strongly depend on the similarity between the input patch and the sample patches in the database. The problem of finding the nonnegative sparse linear representation is modeled as a nonnegative quadratic programming problem. The proposed method is especially useful for the case of noise-corrupted and low-resolution image. Experimental results show that the proposed method outperforms other state-of-the-art super-resolution methods while effectively removing noise."
},
{
"pmid": "18249647",
"title": "A computationally efficient superresolution image reconstruction algorithm.",
"abstract": "Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution had not adequately addressed the computational and numerical issues for this ill-conditioned and typically underdetermined large scale problem. We propose efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method. We also extend to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters. The effectiveness of our preconditioners and regularization techniques is demonstrated with superresolution results for a simulated sequence and a forward looking infrared (FLIR) camera image sequence."
},
{
"pmid": "2585170",
"title": "High-resolution image recovery from image-plane arrays, using convex projections.",
"abstract": "We consider the problem of reconstructing remotely obtained images from image-plane detector arrays. Although the individual detectors may be larger than the blur spot of the imaging optics, high-resolution reconstructions can be obtained by scanning or rotating the image with respect to the detector. As an alternative to matrix inversion or least-squares estimation [Appl. Opt. 26, 3615 (1987)], the method of convex projections is proposed. We show that readily obtained prior knowledge can be used to obtain good-quality imagery with reduced data. The effect of noise on the reconstruction process is considered."
},
{
"pmid": "18285235",
"title": "Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.",
"abstract": "The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology."
},
{
"pmid": "28166495",
"title": "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.",
"abstract": "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing."
},
{
"pmid": "21654042",
"title": "TIPS bilateral noise reduction in 4D CT perfusion scans produces high-quality cerebral blood flow maps.",
"abstract": "Cerebral computed tomography perfusion (CTP) scans are acquired to detect areas of abnormal perfusion in patients with cerebrovascular diseases. These 4D CTP scans consist of multiple sequential 3D CT scans over time. Therefore, to reduce radiation exposure to the patient, the amount of x-ray radiation that can be used per sequential scan is limited, which results in a high level of noise. To detect areas of abnormal perfusion, perfusion parameters are derived from the CTP data, such as the cerebral blood flow (CBF). Algorithms to determine perfusion parameters, especially singular value decomposition, are very sensitive to noise. Therefore, noise reduction is an important preprocessing step for CTP analysis. In this paper, we propose a time-intensity profile similarity (TIPS) bilateral filter to reduce noise in 4D CTP scans, while preserving the time-intensity profiles (fourth dimension) that are essential for determining the perfusion parameters. The proposed TIPS bilateral filter is compared to standard Gaussian filtering, and 4D and 3D (applied separately to each sequential scan) bilateral filtering on both phantom and patient data. Results on the phantom data show that the TIPS bilateral filter is best able to approach the ground truth (noise-free phantom), compared to the other filtering methods (lowest root mean square error). An observer study is performed using CBF maps derived from fifteen CTP scans of acute stroke patients filtered with standard Gaussian, 3D, 4D and TIPS bilateral filtering. These CBF maps were blindly presented to two observers that indicated which map they preferred for (1) gray/white matter differentiation, (2) detectability of infarcted area and (3) overall image quality. Based on these results, the TIPS bilateral filter ranked best and its CBF maps were scored to have the best overall image quality in 100% of the cases by both observers. Furthermore, quantitative CBF and cerebral blood volume values in both the phantom and the patient data showed that the TIPS bilateral filter resulted in realistic mean values with a smaller standard deviation than the other evaluated filters and higher contrast-to-noise ratios. Therefore, applying the proposed TIPS bilateral filtering method to 4D CTP data produces higher quality CBF maps than applying the standard Gaussian, 3D bilateral or 4D bilateral filter. Furthermore, the TIPS bilateral filter is computationally faster than both the 3D and 4D bilateral filters."
},
{
"pmid": "17679327",
"title": "Comparison of PDE-based nonlinear diffusion approaches for image enhancement and denoising in optical coherence tomography.",
"abstract": "A comparison between two nonlinear diffusion methods for denoising OCT images is performed. Specifically, we compare and contrast the performance of the traditional nonlinear Perona-Malik filter with a complex diffusion filter that has been recently introduced by Gilboa et al.. The complex diffusion approach based on the generalization of the nonlinear scale space to the complex domain by combining the diffusion and the free Schridinger equation is evaluated on synthetic images and also on representative OCT images at various noise levels. The performance improvement over the traditional nonlinear Perona-Malik filter is quantified in terms of noise suppression, image structural preservation and visual quality. An average signal-to-noise ratio (SNR) improvement of about 2.5 times and an average contrast to noise ratio (CNR) improvement of 49% was obtained while mean structure similarity (MSSIM) was practically not degraded after denoising. The nonlinear complex diffusion filtering can be applied with success to many OCT imaging applications. In summary, the numerical values of the image quality metrics along with the qualitative analysis results indicated the good feature preservation performance of the complex diffusion process, as desired for better diagnosis in medical imaging processing."
},
{
"pmid": "11296876",
"title": "Improving PET-based physiological quantification through methods of wavelet denoising.",
"abstract": "The goal of this study was to evaluate methods of multidimensional wavelet denoising on restoring the fidelity of biological signals hidden within dynamic positron emission tomography (PET) images. A reduction of noise within pixels, between adjacent regions, and time-serial frames was achieved via redundant multiscale representations. In analyzing dynamic PET data of healthy volunteers, a multiscale method improved the estimate-to-error ratio of flows fivefold without loss of detail. This technique also maintained accuracy of flow estimates in comparison with the \"gold standard,\" using dynamic PET with O15-water. In addition, in studies of coronary disease patients, flow patterns were preserved and infarcted regions were well differentiated from normal regions. The results show that a wavelet-based noise-suppression method produced reliable approximations of salient underlying signals and led to an accurate quantification of myocardial perfusion. The described protocol can be generalized to other temporal biomedical imaging modalities including functional magnetic resonance imaging and ultrasound."
},
{
"pmid": "25706579",
"title": "Robust Low-Dose CT Perfusion Deconvolution via Tensor Total-Variation Regularization.",
"abstract": "Acute brain diseases such as acute strokes and transit ischemic attacks are the leading causes of mortality and morbidity worldwide, responsible for 9% of total death every year. \"Time is brain\" is a widely accepted concept in acute cerebrovascular disease treatment. Efficient and accurate computational framework for hemodynamic parameters estimation can save critical time for thrombolytic therapy. Meanwhile the high level of accumulated radiation dosage due to continuous image acquisition in CT perfusion (CTP) raised concerns on patient safety and public health. However, low-radiation leads to increased noise and artifacts which require more sophisticated and time-consuming algorithms for robust estimation. In this paper, we focus on developing a robust and efficient framework to accurately estimate the perfusion parameters at low radiation dosage. Specifically, we present a tensor total-variation (TTV) technique which fuses the spatial correlation of the vascular structure and the temporal continuation of the blood signal flow. An efficient algorithm is proposed to find the solution with fast convergence and reduced computational complexity. Extensive evaluations are carried out in terms of sensitivity to noise levels, estimation accuracy, contrast preservation, and performed on digital perfusion phantom estimation, as well as in vivo clinical subjects. Our framework reduces the necessary radiation dose to only 8% of the original level and outperforms the state-of-art algorithms with peak signal-to-noise ratio improved by 32%. It reduces the oscillation in the residue functions, corrects over-estimation of cerebral blood flow (CBF) and under-estimation of mean transit time (MTT), and maintains the distinction between the deficit and normal regions."
},
{
"pmid": "23542422",
"title": "Towards robust deconvolution of low-dose perfusion CT: sparse perfusion deconvolution using online dictionary learning.",
"abstract": "Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameters estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance than existing methods, and potentially improve the differentiation between normal and ischemic tissue in the brain."
},
{
"pmid": "15107323",
"title": "The addition of computer simulated noise to investigate radiation dose and image quality in images with spatial correlation of statistical noise: an example application to X-ray CT of the brain.",
"abstract": "This study validates a method to add spatially correlated statistical noise to an image, applied to transaxial X-ray CT images of the head to simulate exposure reduction by up to 50%. 23 patients undergoing routine head CT had three additional slices acquired for validation purposes, two at the same clinical 420 mAs exposure and one at 300 mAs. Images at the level of the cerebrospinal fluid filled ventricles gave readings of noise from a single image, with subtraction of image pairs to obtain noise readings from non-uniform tissue regions. The spatial correlation of the noise was determined and added to the acquired 420 mAs image to simulate images at 340 mAs, 300 mAs, 260 mAs and 210 mAs. Two radiologists assessed the images, finding little difference between the 300 mAs simulated and acquired images. The presence of periventricular low density lesions (PVLD) was used as an example of the effect of simulated dose reduction on diagnostic accuracy, and visualization of the internal capsule was used as a measure of image quality. Diagnostic accuracy for the diagnosis of PVLD did not fall significantly even down to 210 mAs, though visualization of the internal capsule was poorer at lower exposure. Further work is needed to investigate means of measuring statistical noise without the need for uniform tissue areas, or image pairs. This technique has been shown to allow sufficiently accurate simulation of dose reduction and image quality degradation, even when the statistical noise is spatially correlated."
},
{
"pmid": "26571527",
"title": "Multi-Scale Patch-Based Image Restoration.",
"abstract": "Many image restoration algorithms in recent years are based on patch processing. The core idea is to decompose the target image into fully overlapping patches, restore each of them separately, and then merge the results by a plain averaging. This concept has been demonstrated to be highly effective, leading often times to the state-of-the-art results in denoising, inpainting, deblurring, segmentation, and other applications. While the above is indeed effective, this approach has one major flaw: the prior is imposed on intermediate (patch) results, rather than on the final outcome, and this is typically manifested by visual artifacts. The expected patch log likelihood (EPLL) method by Zoran and Weiss was conceived for addressing this very problem. Their algorithm imposes the prior on the patches of the final image, which in turn leads to an iterative restoration of diminishing effect. In this paper, we propose to further extend and improve the EPLL by considering a multi-scale prior. Our algorithm imposes the very same prior on different scale patches extracted from the target image. While all the treated patches are of the same size, their footprint in the destination image varies due to subsampling. Our scheme comes to alleviate another shortcoming existing in patch-based restoration algorithms--the fact that a local (patch-based) prior is serving as a model for a global stochastic phenomenon. We motivate the use of the multi-scale EPLL by restricting ourselves to the simple Gaussian case, comparing the aforementioned algorithms and showing a clear advantage to the proposed method. We then demonstrate our algorithm in the context of image denoising, deblurring, and super-resolution, showing an improvement in performance both visually and quantitatively."
}
] |
Frontiers in Neuroinformatics | 31316365 | PMC6609999 | 10.3389/fninf.2019.00045 | Detection of EEG K-Complexes Using Fractal Dimension of Time Frequency Images Technique Coupled With Undirected Graph Features | K-complexes identification is a challenging task in sleep research. The detection of k-complexes in electroencephalogram (EEG) signals based on visual inspection is time consuming, prone to errors, and requires well-trained knowledge. Many existing methods for k-complexes detection rely mainly on analyzing EEG signals in time and frequency domains. In this study, an efficient method is proposed to detect k-complexes from EEG signals based on fractal dimension (FD) of time frequency (T-F) images coupled with undirected graph features. Firstly, an EEG signal is partitioned into smaller segments using a sliding window technique. Each EEG segment is passed through a spectrogram of short time Fourier transform (STFT) to obtain the T-F images. Secondly, the box counting method is applied to each T-F image to discover the FDs in EEG signals. A vector of FD features is extracted from each T-F image and then mapped into an undirected graph. The structural properties of the graphs are used as the representative features of the original EEG signals for the input of a least square support vector machine (LS-SVM) classifier. Key graphic features are extracted from the undirected graphs. The extracted graph features are forwarded to the LS-SVM for classification. To investigate the classification ability of the proposed feature extraction combined with the LS-SVM classifier, the extracted features are also forwarded to a k-means classifier for comparison. The proposed method is compared with several existing k-complexes detection methods in which the same datasets were used. The findings of this study show that the proposed method yields better classification results than other existing methods in the literature. An average accuracy of 97% for the detection of the k-complexes is obtained using the proposed method. The proposed method could lead to an efficient tool for the automatic scoring of sleep stages, which could be useful for doctors and neurologists in the diagnosis and treatment of sleep disorders and for sleep research. | Related Work: Several automatic methods have been developed to detect and analyze the k-complexes. Those approaches used different transformation techniques, such as Fourier transform, wavelet transform, spectral analysis, matching pursuit and autoregressive modeling (Camilleri et al., 2014). So far, no studies have been presented to identify k-complex transient events based on their waveform characteristics, such as a textural descriptor, non-linear features or their graph connections. Bankman et al. (1992) used a method based on a set of different features to detect k-complexes in sleep EEG signals. Fourteen features were extracted from the EEG signals and then used as input to a neural network. The researchers reported an average sensitivity and false positive rate (FPR) of 90% and 8.1%, respectively. Another study was presented by Hernández-Pereira et al. (2016), in which k-complexes were also detected based on 14 features extracted from each sleep EEG signal. The features were then forwarded to different classifiers to identify k-complexes. An average accuracy of 91.40% was reported using the feature selection method. Tang and Ishii (1995) proposed a method to identify k-complexes based on the discrete wavelet transform (DWT) parameters.
The DWT parameters were used to determine the time duration and amplitude of the k-complexes; in that study, a sensitivity of 87% and an FPR of 10% were obtained. More recently, Lajnef et al. (2015) used a tunable Q-factor wavelet transform for the detection of k-complexes, reporting an average sensitivity and FPR of 81.57% and 29.54%, respectively. Another study was presented by Richard and Lengelle (1998), in which the k-complexes were recognized with a joint linear filter in the time and time-frequency domains; the k-complexes and delta waves were identified with an average sensitivity and FPR of 90% and 9.2%, respectively. Yücelbaş et al. (2018b) used a method to detect k-complexes automatically based on time and frequency analyses, in which the EEG signal was decomposed using a DWT, and an average accuracy rate of 92.29% was achieved. Noori et al. (2014) used a feature selection method with a generalized radial basis function extreme learning machine (MELM-GRBF) algorithm to detect k-complexes. In that study, fractal and entropy features were employed, and the EEG signals were divided into segments using a sliding window technique with the window size set to 1.0 s; an average sensitivity and accuracy of 61% and 96.1% were reported. Zacharaki et al. (2013) utilized two steps to detect k-complexes: in the first step, k-complex candidates are selected, and in the second step the number of candidates is reduced using a machine learning algorithm. In that study, four features, including peak-to-peak amplitude, standard deviation, and a ratio of power and duration of the negative sharp wave, were extracted from each segment, and an average sensitivity of 83% was reported. Parekh et al. (2015) detected k-complexes based on a fast non-linear optimization algorithm; only the F-score was reported in that study, with average F-scores of 0.70 and 0.57 achieved for the detection of the sleep spindles and the k-complexes, respectively. Another study was presented by Henry et al. (1994), in which the k-complexes were classified based on matched filtering; each segment was decomposed into a set of orthonormal functions and analyzed with wavelets. Devuyst et al. (2010) used likelihood threshold parameters and a feature extraction method to detect k-complexes, and the performance of the detection was assessed against the scorings of two human experts; average sensitivity rates of 61.72% and 60.94% were obtained for scorer 1 and scorer 2, respectively. Migotina et al. (2010) presented a method based on Hjorth parameters and employed fuzzy decision to identify k-complexes; in that study, the performance of the proposed method was compared with visual human scoring. All of those methods for classifying k-complexes in sleep EEG signals were based on linear features. So far, waveform-characteristic-based features, such as a textural descriptor and graph network connections, have not been used for the detection of k-complexes. According to the literature, we found that the FD, as a non-linear feature, has proven to be an efficient approach to explore the hidden patterns in digital images and signals (Prieto et al., 2011; Finotello et al., 2015). It has been used to analyze and classify EEG signals to trace the changes in EEG signals during different sleep stages, and has also been employed to recognize different digital image patterns.
Yang et al. (2007) and Sourina and Liu (2011) employed an FD approach to analyze sleep stages in EEG signals. The fractal dimension technique was also used by Ali et al. (2016) for voice recognition. Time frequency (TF) images were also used by Bajaj and Pachori (2013) to classify sleep stages, and Bajaj et al. (2017) also identified alcoholic EEGs based on T-F images. Based on our previous study (Al-Salman et al., 2018), we found that time frequency images coupled with FD yielded promising results in analyzing and detecting sleep spindles in sleep EEG signals. Furthermore, undirected graph properties have been used to analyze and study brain diseases (Vural and Yildiz, 2010; Wang et al., 2014). Some studies reported that undirected graphs can be considered one of the robust approaches to characterize the functional topological properties in brain networks for both normal and abnormal brain functioning (Sourina and Liu, 2011; Li et al., 2013). The relevant techniques were also employed in image processing as a powerful tool to analyze and classify digital images (Sarsoh et al., 2012). Recently, a graph approach was used by Diykh et al. (2016) to classify sleep stages. However, in this work, we have combined the fractal features with properties of undirected graphs to detect k-complexes in sleep EEG signals. To the best of our knowledge, a fractal graph features approach has not been used for k-complexes detection before. | [
"27747606",
"26531753",
"12531149",
"24008250",
"1487294",
"26159729",
"11976053",
"28891322",
"28491032",
"19005749",
"20654696",
"27101613",
"22178068",
"17390982",
"29793077",
"7705906",
"20477951",
"21230152",
"26283943",
"24369454",
"1180967",
"17990300",
"25956566",
"9628751",
"23874288",
"21168234",
"22287252",
"25704869",
"15319512",
"17266107",
"11776208",
"17282798",
"20192058",
"24686109",
"16803415",
"24768081"
] | [
{
"pmid": "27747606",
"title": "Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.",
"abstract": "Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least square support vector machine (LS_SVM) classifier to classify the EEG signals. The LS_SVM classifier classified the features which are extracted and selected from the SRS and the SFS. The experimental results show that the method achieves 99.90, 99.80 and 100 % for classification accuracy, sensitivity and specificity, respectively."
},
{
"pmid": "26531753",
"title": "Detection of Voice Pathology using Fractal Dimension in a Multiresolution Analysis of Normal and Disordered Speech Signals.",
"abstract": "Voice disorders are associated with irregular vibrations of vocal folds. Based on the source filter theory of speech production, these irregular vibrations can be detected in a non-invasive way by analyzing the speech signal. In this paper we present a multiband approach for the detection of voice disorders given that the voice source generally interacts with the vocal tract in a non-linear way. In normal phonation, and assuming sustained phonation of a vowel, the lower frequencies of speech are heavily source dependent due to the low frequency glottal formant, while the higher frequencies are less dependent on the source signal. During abnormal phonation, this is still a valid, but turbulent noise of source, because of the irregular vibration, affects also higher frequencies. Motivated by such a model, we suggest a multiband approach based on a three-level discrete wavelet transformation (DWT) and in each band the fractal dimension (FD) of the estimated power spectrum is estimated. The experiments suggest that frequency band 1-1562 Hz, lower frequencies after level 3, exhibits a significant difference in the spectrum of a normal and pathological subject. With this band, a detection rate of 91.28 % is obtained with one feature, and the obtained result is higher than all other frequency bands. Moreover, an accuracy of 92.45 % and an area under receiver operating characteristic curve (AUC) of 95.06 % is acquired when the FD of all levels is fused. Likewise, when the FD of all levels is combined with 22 Multi-Dimensional Voice Program (MDVP) parameters, an improvement of 2.26 % in accuracy and 1.45 % in AUC is observed."
},
{
"pmid": "12531149",
"title": "The functional significance of K-complexes.",
"abstract": "This paper summarizes the present knowledge about the cellular bases of the sleep K-complex (KC). The KC has two phases: the initial surface-positive wave is due to the synchronous excitation of cortical neurones, while the subsequent surface-negative wave represents neuronal hyperpolarization. These variations of membrane potential occur within a slow (<1 Hz) oscillation that characterizes all sleep stages. Therefore, KCs are periodic, and their shape and frequency are modulated by the increasing degree of deafferentation attained by the corticothalamic network with the deepening of the sleep. Within this network, the rhythmic KCs recurring at the frequency of the slow oscillation play a leading role by triggering and grouping other sleep oscillations, such as spindles (7-14 Hz) and delta (1-4 Hz). The KC is mainly a spontaneous event generated in cortical networks. During nocturnal epileptic seizures, the KCs are precursors of paroxysmal spike-wave complexes."
},
{
"pmid": "24008250",
"title": "Automatic classification of sleep stages based on the time-frequency image of EEG signals.",
"abstract": "In this paper, a new method for automatic sleep stage classification based on time-frequency image (TFI) of electroencephalogram (EEG) signals is proposed. Automatic classification of sleep stages is an important part for diagnosis and treatment of sleep disorders. The smoothed pseudo Wigner-Ville distribution (SPWVD) based time-frequency representation (TFR) of EEG signal has been used to obtain the time-frequency image (TFI). The segmentation of TFI has been performed based on the frequency-bands of the rhythms of EEG signals. The features derived from the histogram of segmented TFI have been used as an input feature set to multiclass least squares support vector machines (MC-LS-SVM) together with the radial basis function (RBF), Mexican hat wavelet, and Morlet wavelet kernel functions for automatic classification of sleep stages from EEG signals. The experimental results are presented to show the effectiveness of the proposed method for classification of sleep stages from EEG signals."
},
{
"pmid": "1487294",
"title": "Feature-based detection of the K-complex wave in the human electroencephalogram using neural networks.",
"abstract": "The main difficulties in reliable automated detection of the K-complex wave in EEG are its close similarity to other waves and the lack of specific characterization criteria. We present a feature-based detection approach using neural networks that provides good agreement with visual K-complex recognition: a sensitivity of 90% is obtained with about 8% false positives. The respective contribution of the features and that of the neural network is demonstrated by comparing the results to those obtained with i) raw EEG data presented to neural networks, and ii) features presented to Fisher's linear discriminant."
},
{
"pmid": "26159729",
"title": "Network analysis for a network disorder: The emerging role of graph theory in the study of epilepsy.",
"abstract": "Recent years have witnessed a paradigm shift in the study and conceptualization of epilepsy, which is increasingly understood as a network-level disorder. An emblematic case is temporal lobe epilepsy (TLE), the most common drug-resistant epilepsy that is electroclinically defined as a focal epilepsy and pathologically associated with hippocampal sclerosis. In this review, we will summarize histopathological, electrophysiological, and neuroimaging evidence supporting the concept that the substrate of TLE is not limited to the hippocampus alone, but rather is broadly distributed across multiple brain regions and interconnecting white matter pathways. We will introduce basic concepts of graph theory, a formalism to quantify topological properties of complex systems that has recently been widely applied to study networks derived from brain imaging and electrophysiology. We will discuss converging graph theoretical evidence indicating that networks in TLE show marked shifts in their overall topology, providing insight into the neurobiology of TLE as a network-level disorder. Our review will conclude by discussing methodological challenges and future clinical applications of this powerful analytical approach."
},
{
"pmid": "11976053",
"title": "Detection and description of non-linear interdependence in normal multichannel human EEG data.",
"abstract": "OBJECTIVES\nThis study examines human scalp electroencephalographic (EEG) data for evidence of non-linear interdependence between posterior channels. The spectral and phase properties of those epochs of EEG exhibiting non-linear interdependence are studied.\n\n\nMETHODS\nScalp EEG data was collected from 40 healthy subjects. A technique for the detection of non-linear interdependence was applied to 2.048 s segments of posterior bipolar electrode data. Amplitude-adjusted phase-randomized surrogate data was used to statistically determine which EEG epochs exhibited non-linear interdependence.\n\n\nRESULTS\nStatistically significant evidence of non-linear interactions were evident in 2.9% (eyes open) to 4.8% (eyes closed) of the epochs. In the eyes-open recordings, these epochs exhibited a peak in the spectral and cross-spectral density functions at about 10 Hz. Two types of EEG epochs are evident in the eyes-closed recordings; one type exhibits a peak in the spectral density and cross-spectrum at 8 Hz. The other type has increased spectral and cross-spectral power across faster frequencies. Epochs identified as exhibiting non-linear interdependence display a tendency towards phase interdependencies across and between a broad range of frequencies.\n\n\nCONCLUSIONS\nNon-linear interdependence is detectable in a small number of multichannel EEG epochs, and makes a contribution to the alpha rhythm. Non-linear interdependence produces spatially distributed activity that exhibits phase synchronization between oscillations present at different frequencies. The possible physiological significance of these findings are discussed with reference to the dynamical properties of neural systems and the role of synchronous activity in the neocortex."
},
{
"pmid": "28891322",
"title": "Data-Driven Topological Filtering Based on Orthogonal Minimal Spanning Trees: Application to Multigroup Magnetoencephalography Resting-State Connectivity.",
"abstract": "In the present study, a novel data-driven topological filtering technique is introduced to derive the backbone of functional brain networks relying on orthogonal minimal spanning trees (OMSTs). The method aims to identify the essential functional connections to ensure optimal information flow via the objective criterion of global efficiency minus the cost of surviving connections. The OMST technique was applied to multichannel, resting-state neuromagnetic recordings from four groups of participants: healthy adults (n = 50), adults who have suffered mild traumatic brain injury (n = 30), typically developing children (n = 27), and reading-disabled children (n = 25). Weighted interactions between network nodes (sensors) were computed using an integrated approach of dominant intrinsic coupling modes based on two alternative metrics (symbolic mutual information and phase lag index), resulting in excellent discrimination of individual cases according to their group membership. Classification results using OMST-derived functional networks were clearly superior to results using either relative power spectrum features or functional networks derived through the conventional minimal spanning tree algorithm."
},
{
"pmid": "28491032",
"title": "Topological Filtering of Dynamic Functional Brain Networks Unfolds Informative Chronnectomics: A Novel Data-Driven Thresholding Scheme Based on Orthogonal Minimal Spanning Trees (OMSTs).",
"abstract": "The human brain is a large-scale system of functionally connected brain regions. This system can be modeled as a network, or graph, by dividing the brain into a set of regions, or \"nodes,\" and quantifying the strength of the connections between nodes, or \"edges,\" as the temporal correlation in their patterns of activity. Network analysis, a part of graph theory, provides a set of summary statistics that can be used to describe complex brain networks in a meaningful way. The large-scale organization of the brain has features of complex networks that can be quantified using network measures from graph theory. The adaptation of both bivariate (mutual information) and multivariate (Granger causality) connectivity estimators to quantify the synchronization between multichannel recordings yields a fully connected, weighted, (a)symmetric functional connectivity graph (FCG), representing the associations among all brain areas. The aforementioned procedure leads to an extremely dense network of tens up to a few hundreds of weights. Therefore, this FCG must be filtered out so that the \"true\" connectivity pattern can emerge. Here, we compared a large number of well-known topological thresholding techniques with the novel proposed data-driven scheme based on orthogonal minimal spanning trees (OMSTs). OMSTs filter brain connectivity networks based on the optimization between the global efficiency of the network and the cost preserving its wiring. We demonstrated the proposed method in a large EEG database (N = 101 subjects) with eyes-open (EO) and eyes-closed (EC) tasks by adopting a time-varying approach with the main goal to extract features that can totally distinguish each subject from the rest of the set. Additionally, the reliability of the proposed scheme was estimated in a second case study of fMRI resting-state activity with multiple scans. Our results demonstrated clearly that the proposed thresholding scheme outperformed a large list of thresholding schemes based on the recognition accuracy of each subject compared to the rest of the cohort (EEG). Additionally, the reliability of the network metrics based on the fMRI static networks was improved based on the proposed topological filtering scheme. Overall, the proposed algorithm could be used across neuroimaging and multimodal studies as a common computationally efficient standardized tool for a great number of neuroscientists and physicists working on numerous of projects."
},
{
"pmid": "19005749",
"title": "Characterizing dynamic functional connectivity across sleep stages from EEG.",
"abstract": "Following a nonlinear dynamics approach, we investigated the emergence of functional clusters which are related with spontaneous brain activity during sleep. Based on multichannel EEG traces from 10 healthy subjects, we compared the functional connectivity across different sleep stages. Our exploration commences with the conjecture of a small-world patterning, present in the scalp topography of the measured electrical activity. The existence of such a communication pattern is first confirmed for our data and then precisely determined by means of two distinct measures of non-linear interdependence between time-series. A graph encapsulating the small-world network structure along with the relative interdependence strength is formed for each sleep stage and subsequently fed to a suitable clustering procedure. Finally the delineated graph components are comparatively presented for all stages revealing novel attributes of sleep architecture. Our results suggest a pivotal role for the functional coupling during the different stages and indicate interesting dynamic characteristics like its variable hemispheric asymmetry and the isolation between anterior and posterior cortical areas during REM."
},
{
"pmid": "20654696",
"title": "What does delta band tell us about cognitive processes: a mental calculation study.",
"abstract": "Multichannel EEG recordings from 18 healthy subjects were used to investigate brain activity in four delta subbands during two mental arithmetic tasks (number comparison and two-digit multiplication) and a control condition. The spatial redistribution of signal-power (SP) was explored based on four consecutives subbands of the delta rhythm. Additionally, network analysis was performed, independently for each subband, and the related graphs reflecting functional connectivity were characterized in terms of local structure (i.e. the clustering coefficient), overall integration (i.e. the path length) and the optimality of network organization (i.e. the \"small-worldness\"). EEG delta activity showed a widespread increase in all subbands during the performance of both arithmetic tasks. The inter-task comparison of the two arithmetic tasks revealed significant differences, in terms of signal-power, for the two subbands of higher frequency over left hemisphere (frontal, temporal, parietal and occipital) regions. The estimated brain networks exhibited small-world characteristics in the case of all subbands. On the contrary, lower frequency subbands were found to operate differently than the higher frequency subbands, with the latter featuring nodal organization and poor remote interconnectivity. These findings possibly reflect the deactivation of default mode network and could be attributed to inhibitory mechanisms activated during mental tasks."
},
{
"pmid": "27101613",
"title": "EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.",
"abstract": "The electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stages classification mainly depend on the analysis of EEG signals in time or frequency domain to obtain a high classification accuracy. In this paper, the statistical features in time domain, the structural graph similarity and the K-means (SGSKM) are combined to identify six sleep stages using single channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments. The size of a sub-segment is determined empirically. Secondly, statistical features are extracted, sorted into different sets of features and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than other four existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved by using the proposed method."
},
{
"pmid": "22178068",
"title": "Automated sleep stage identification system based on time-frequency analysis of a single EEG channel and random forest classifier.",
"abstract": "In this work, an efficient automated new approach for sleep stage identification based on the new standard of the American academy of sleep medicine (AASM) is presented. The propose approach employs time-frequency analysis and entropy measures for feature extraction from a single electroencephalograph (EEG) channel. Three time-frequency techniques were deployed for the analysis of the EEG signal: Choi-Williams distribution (CWD), continuous wavelet transform (CWT), and Hilbert-Huang Transform (HHT). Polysomnographic recordings from sixteen subjects were used in this study and features were extracted from the time-frequency representation of the EEG signal using Renyi's entropy. The classification of the extracted features was done using random forest classifier. The performance of the new approach was tested by evaluating the accuracy and the kappa coefficient for the three time-frequency distributions: CWD, CWT, and HHT. The CWT time-frequency distribution outperformed the other two distributions and showed excellent performance with an accuracy of 0.83 and a kappa coefficient of 0.76."
},
{
"pmid": "17390982",
"title": "Multiclass support vector machines for EEG-signals classification.",
"abstract": "In this paper, we proposed the multiclass support vector machine (SVM) with the error-correcting output codes for the multiclass electroencephalogram (EEG) signals classification problem. The probabilistic neural network (PNN) and multilayer perceptron neural network were also tested and benchmarked for their performance on the classification of the EEG signals. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients and the Lyapunov exponents and classification using the classifiers trained on the extracted features. The purpose was to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. Our research demonstrated that the wavelet coefficients and the Lyapunov exponents are the features which well represent the EEG signals and the multiclass SVM and PNN trained on these features achieved high classification accuracies."
},
{
"pmid": "29793077",
"title": "fMRI classification method with multiple feature fusion based on minimum spanning tree analysis.",
"abstract": "Resting state functional brain networks have been widely studied in brain disease research. Conventional network analysis methods are hampered by differences in network size, density and normalization. Minimum spanning tree (MST) analysis has been recently suggested to ameliorate these limitations. Moreover, common MST analysis methods involve calculating quantifiable attributes and selecting these attributes as features in the classification. However, a disadvantage of these methods is that information about the topology of the network is not fully considered, limiting further improvement of classification performance. To address this issue, we propose a novel method combining brain region and subgraph features for classification, utilizing two feature types to quantify two properties of the network. We experimentally validated our proposed method using a major depressive disorder (MDD) patient dataset. The results indicated that MSTs of MDD patients were more similar to random networks and exhibited significant differences in certain regions involved in the limbic-cortical-striatal-pallidal-thalamic (LCSPT) circuit, which is considered to be a major pathological circuit of depression. Moreover, we demonstrated that this novel classification method could effectively improve classification accuracy and provide better interpretability. Overall, the current study demonstrated that different forms of feature representation provide complementary information."
},
{
"pmid": "7705906",
"title": "K-complex detection using multi-layer perceptrons and recurrent networks.",
"abstract": "The feasibility of using a multi-layer perceptron and Elman's recurrent network for the detection of specific waveforms (K-complexes) in electroencephalograms (EEGs), regardless of their location in the signal segment, is explored. Experiments with simulated and actual EEG data were performed. In case of the perceptron, the input consisted of the magnitude and/or phase values obtained from 10-s signal intervals, whereas the recurrent net operated on the digitized data samples directly. It was found that both nets performed well on the simulated data, but not on the actual EEG data. The reasons for the failure of both nets are discussed."
},
{
"pmid": "20477951",
"title": "Human non-rapid eye movement stage II sleep spindles are blocked upon spontaneous K-complex coincidence and resume as higher frequency spindles afterwards.",
"abstract": "The purpose of this study was to investigate a potential relation between the K-complex (KC) and sleep spindles of non-rapid eye movement (NREM) stage II of human sleep. Using 58 electroencephalogram electrodes, plus standard electrooculogram and electromyogram derivations for sleep staging, brain activity during undisturbed whole-night sleep was recorded in six young adults (one of them participated twice). NREM stage II spindles (1256 fast and 345 slow) and 1131 singular generalized KCs were selected from all sleep cycles. The negative peak of the KC, the positive peak of the KC (where applicable), and the prominent negative wave peak of slow and fast spindles were marked as events of reference. Fast Fourier transform-based time-frequency analysis was performed over the marked events, which showed that: (a) fast spindles that happen to coincide with KC are interrupted (100% of 403 cases) and in their place a slower rhythmic oscillation often (80%) appears; and (b) spindles that are usually (72% of 1131) following KCs always have a higher frequency (by ∼1 Hz) than both the interrupted spindles and the individual fast spindles that are not in any way associated with a KC. This enhancement of spindle frequency could not be correlated to any of the KC parameters studied. The results of this study reveal a consistent interaction between the KC and the sleep spindle during NREM stage II in human sleep."
},
{
"pmid": "21230152",
"title": "Description of stochastic and chaotic series using visibility graphs.",
"abstract": "Nonlinear time series analysis is an active field of research that studies the structure of complex signals in order to derive information of the process that generated those series, for understanding, modeling and forecasting purposes. In the last years, some methods mapping time series to network representations have been proposed. The purpose is to investigate on the properties of the series through graph theoretical tools recently developed in the core of the celebrated complex network theory. Among some other methods, the so-called visibility algorithm has received much attention, since it has been shown that series correlations are captured by the algorithm and translated in the associated graph, opening the possibility of building fruitful connections between time series analysis, nonlinear dynamics, and graph theory. Here we use the horizontal visibility algorithm to characterize and distinguish between correlated stochastic, uncorrelated and chaotic processes. We show that in every case the series maps into a graph with exponential degree distribution P(k)∼exp(-λk), where the value of λ characterizes the specific process. The frontier between chaotic and correlated stochastic processes, λ=ln(3/2) , can be calculated exactly, and some other analytical developments confirm the results provided by extensive numerical simulations and (short) experimental time series."
},
{
"pmid": "26283943",
"title": "Sleep spindle and K-complex detection using tunable Q-factor wavelet transform and morphological component analysis.",
"abstract": "A novel framework for joint detection of sleep spindles and K-complex events, two hallmarks of sleep stage S2, is proposed. Sleep electroencephalography (EEG) signals are split into oscillatory (spindles) and transient (K-complex) components. This decomposition is conveniently achieved by applying morphological component analysis (MCA) to a sparse representation of EEG segments obtained by the recently introduced discrete tunable Q-factor wavelet transform (TQWT). Tuning the Q-factor provides a convenient and elegant tool to naturally decompose the signal into an oscillatory and a transient component. The actual detection step relies on thresholding (i) the transient component to reveal K-complexes and (ii) the time-frequency representation of the oscillatory component to identify sleep spindles. Optimal thresholds are derived from ROC-like curves (sensitivity vs. FDR) on training sets and the performance of the method is assessed on test data sets. We assessed the performance of our method using full-night sleep EEG data we collected from 14 participants. In comparison to visual scoring (Expert 1), the proposed method detected spindles with a sensitivity of 83.18% and false discovery rate (FDR) of 39%, while K-complexes were detected with a sensitivity of 81.57% and an FDR of 29.54%. Similar performances were obtained when using a second expert as benchmark. In addition, when the TQWT and MCA steps were excluded from the pipeline the detection sensitivities dropped down to 70% for spindles and to 76.97% for K-complexes, while the FDR rose up to 43.62 and 49.09%, respectively. Finally, we also evaluated the performance of the proposed method on a set of publicly available sleep EEG recordings. Overall, the results we obtained suggest that the TQWT-MCA method may be a valuable alternative to existing spindle and K-complex detection methods. Paths for improvements and further validations with large-scale standard open-access benchmarking data sets are discussed."
},
{
"pmid": "24369454",
"title": "A comparative study of theoretical graph models for characterizing structural networks of human brain.",
"abstract": "Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs) are localized by recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL) to address the limitations in the identification of the brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using the state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties between two graph models, namely, stickiness-index-based model (STICKY) and scale-free gene duplication model (SF-GD), that have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, STICKY and SF-GD models have better performances in characterizing the structural human brain network."
},
{
"pmid": "1180967",
"title": "Comparison of the predicted and observed secondary structure of T4 phage lysozyme.",
"abstract": "Predictions of the secondary structure of T4 phage lysozyme, made by a number of investigators on the basis of the amino acid sequence, are compared with the structure of the protein determined experimentally by X-ray crystallography. Within the amino terminal half of the molecule the locations of helices predicted by a number of methods agree moderately well with the observed structure, however within the carboxyl half of the molecule the overall agreement is poor. For eleven different helix predictions, the coefficients giving the correlation between prediction and observation range from 0.14 to 0.42. The accuracy of the predictions for both beta-sheet regions and for turns are generally lower than for the helices, and in a number of instances the agreement between prediction and observation is no better than would be expected for a random selection of residues. The structural predictions for T4 phage lysozyme are much less successful than was the case for adenylate kinase (Schulz et al. (1974) Nature 250, 140-142). No one method of prediction is clearly superior to all others, and although empirical predictions based on larger numbers of known protein structure tend to be more accurate than those based on a limited sample, the improvement in accuracy is not dramatic, suggesting that the accuracy of current empirical predictive methods will not be substantially increased simply by the inclusion of more data from additional protein structure determinations."
},
{
"pmid": "17990300",
"title": "The influence of ageing on complex brain networks: a graph theoretical analysis.",
"abstract": "OBJECTIVE\nTo determine the functional connectivity of different EEG bands at the \"baseline\" situation (rest) and during mathematical thinking in children and young adults to study the maturation effect on brain networks at rest and during a cognitive task.\n\n\nMETHODS\nTwenty children (8-12 years) and twenty students (21-26 years) were studied. The synchronization likelihood was used to evaluate the interregional synchronization of different EEG frequency bands in children and adults, at rest and during math. Then, graphs were constructed and characterized in terms of local structure (clustering coefficient) and overall integration (path length) and the \"optimal\" organization of the connectivity i.e., the small world network (SWN).\n\n\nRESULTS\nThe main findings were: (i) Enhanced synchronization for theta band during math more prominent in adults. (ii) Decrease of the optimal SWN organization of the alpha2 band during math. (iii) The beta and especially gamma bands showed lower synchronization and signs of lower SWN organization in both situations in adults.\n\n\nCONCLUSION\nThere are interesting findings related to the two age groups and the two situations. The theta band showed higher synchronization during math in adults as a result of higher capacity of the working memory in this age group. The alpha2 band showed some SWN disorganization during math, a process analog to the known desynchronization. In adults, a dramatic reduction of the connections in gray matter occurs. Although this maturation process is probably related to higher efficiency, reduced connectivity is expressed by lower synchronization and lower mean values of the graph parameters in adults."
},
{
"pmid": "25956566",
"title": "Detection of K-complexes and sleep spindles (DETOKS) using sparse optimization.",
"abstract": "BACKGROUND\nThis paper addresses the problem of detecting sleep spindles and K-complexes in human sleep EEG. Sleep spindles and K-complexes aid in classifying stage 2 NREM human sleep.\n\n\nNEW METHOD\nWe propose a non-linear model for the EEG, consisting of a transient, low-frequency, and an oscillatory component. The transient component captures the non-oscillatory transients in the EEG. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, this paper presents a fast non-linear optimization algorithm to estimate the components in the proposed signal model. The low-frequency and oscillatory components are used to detect K-complexes and sleep spindles respectively.\n\n\nRESULTS AND COMPARISON WITH OTHER METHODS\nThe performance of the proposed method is evaluated using an online EEG database. The F1 scores for the spindle detection averaged 0.70 ± 0.03 and the F1 scores for the K-complex detection averaged 0.57 ± 0.02. The Matthews Correlation Coefficient and Cohen's Kappa values were in a range similar to the F1 scores for both the sleep spindle and K-complex detection. The F1 scores for the proposed method are higher than existing detection algorithms.\n\n\nCONCLUSIONS\nComparable run-times and better detection results than traditional detection algorithms suggests that the proposed method is promising for the practical detection of sleep spindles and K-complexes."
},
{
"pmid": "9628751",
"title": "Joint time and time-frequency optimal detection of K-complexes in sleep EEG.",
"abstract": "Automated detection of waveforms such as delta and K-complex in the EEG is an important component of sleep stage monitoring. The K-complex is a key feature that contributes to sleep stages assessment. However, its automated detection is still difficult due to the stochastic nature of the EEG. In this paper, we propose a detection structure which can be interpreted as joint linear filtering operations in time and time-frequency domains. We also introduce a method of obtaining the optimum detector from training data, and we show that the resulting receiver offers better performances than the one obtained via the Fisher criterion maximization. The efficiency of this approach for K-complexes detector design is explored. It results from this study that the obtained receiver is potentially the best one which can be found in the literature. Finally, it is emphasized that this methodology can be advantageously used to solve many other detection problems."
},
{
"pmid": "23874288",
"title": "Graph theoretical analysis of resting magnetoencephalographic functional connectivity networks.",
"abstract": "Complex networks have been observed to comprise small-world properties, believed to represent an optimal organization of local specialization and global integration of information processing at reduced wiring cost. Here, we applied magnitude squared coherence to resting magnetoencephalographic time series in reconstructed source space, acquired from controls and patients with schizophrenia, and generated frequency-dependent adjacency matrices modeling functional connectivity between virtual channels. After configuring undirected binary and weighted graphs, we found that all human networks demonstrated highly localized clustering and short characteristic path lengths. The most conservatively thresholded networks showed efficient wiring, with topographical distance between connected vertices amounting to one-third as observed in surrogate randomized topologies. Nodal degrees of the human networks conformed to a heavy-tailed exponentially truncated power-law, compatible with the existence of hubs, which included theta and alpha bilateral cerebellar tonsil, beta and gamma bilateral posterior cingulate, and bilateral thalamus across all frequencies. We conclude that all networks showed small-worldness, minimal physical connection distance, and skewed degree distributions characteristic of physically-embedded networks, and that these calculations derived from graph theoretical mathematics did not quantifiably distinguish between subject populations, independent of bandwidth. However, post-hoc measurements of edge computations at the scale of the individual vertex revealed trends of reduced gamma connectivity across the posterior medial parietal cortex in patients, an observation consistent with our prior resting activation study that found significant reduction of synthetic aperture magnetometry gamma power across similar regions. The basis of these small differences remains unclear."
},
{
"pmid": "21168234",
"title": "Clustering technique-based least square support vector machine for EEG signal classification.",
"abstract": "This paper presents a new approach called clustering technique-based least square support vector machine (CT-LS-SVM) for the classification of EEG signals. Decision making is performed in two stages. In the first stage, clustering technique (CT) has been used to extract representative features of EEG data. In the second stage, least square support vector machine (LS-SVM) is applied to the extracted features to classify two-class EEG signals. To demonstrate the effectiveness of the proposed method, several experiments have been conducted on three publicly available benchmark databases, one for epileptic EEG data, one for mental imagery tasks EEG data and another one for motor imagery EEG data. Our proposed approach achieves an average sensitivity, specificity and classification accuracy of 94.92%, 93.44% and 94.18%, respectively, for the epileptic EEG data; 83.98%, 84.37% and 84.17% respectively, for the motor imagery EEG data; and 64.61%, 58.77% and 61.69%, respectively, for the mental imagery tasks EEG data. The performance of the CT-LS-SVM algorithm is compared in terms of classification accuracy and execution (running) time with our previous study where simple random sampling with a least square support vector machine (SRS-LS-SVM) was employed for EEG signal classification. We also compare the proposed method with other existing methods in the literature for the three databases. The experimental results show that the proposed algorithm can produce a better classification rate than the previous reported methods and takes much less execution time compared to the SRS-LS-SVM technique. The research findings in this paper indicate that the proposed approach is very efficient for classification of two-class EEG signals."
},
{
"pmid": "22287252",
"title": "Improving the separability of motor imagery EEG signals using a cross correlation-based least square support vector machine for brain-computer interface.",
"abstract": "Although brain-computer interface (BCI) techniques have been developing quickly in recent decades, there still exist a number of unsolved problems, such as improvement of motor imagery (MI) signal classification. In this paper, we propose a hybrid algorithm to improve the classification success rate of MI-based electroencephalogram (EEG) signals in BCIs. The proposed scheme develops a novel cross-correlation based feature extractor, which is aided with a least square support vector machine (LS-SVM) for two-class MI signals recognition. To verify the effectiveness of the proposed classifier, we replace the LS-SVM classifier by a logistic regression classifier and a kernel logistic regression classifier, separately, with the same features extracted from the cross-correlation technique for the classification. The proposed approach is tested on datasets, IVa and IVb of BCI Competition III. The performances of those methods are evaluated with classification accuracy through a 10-fold cross-validation procedure. We also assess the performance of the proposed method by comparing it with eight recently reported algorithms. Experimental results on the two datasets show that the proposed LS-SVM classifier provides an improvement compared to the logistic regression and kernel logistic regression classifiers. The results also indicate that the proposed approach outperforms the most recently reported eight methods and achieves a 7.40% improvement over the best results of the other eight studies."
},
{
"pmid": "25704869",
"title": "Designing a robust feature extraction method based on optimum allocation and principal component analysis for epileptic EEG signal classification.",
"abstract": "The aim of this study is to design a robust feature extraction method for the classification of multiclass EEG signals to determine valuable features from original epileptic EEG data and to discover an efficient classifier for the features. An optimum allocation based principal component analysis method named as OA_PCA is developed for the feature extraction from epileptic EEG data. As EEG data from different channels are correlated and huge in number, the optimum allocation (OA) scheme is used to discover the most favorable representatives with minimal variability from a large number of EEG data. The principal component analysis (PCA) is applied to construct uncorrelated components and also to reduce the dimensionality of the OA samples for an enhanced recognition. In order to choose a suitable classifier for the OA_PCA feature set, four popular classifiers: least square support vector machine (LS-SVM), naive bayes classifier (NB), k-nearest neighbor algorithm (KNN), and linear discriminant analysis (LDA) are applied and tested. Furthermore, our approaches are also compared with some recent research work. The experimental results show that the LS-SVM_1v1 approach yields 100% of the overall classification accuracy (OCA), improving up to 7.10% over the existing algorithms for the epileptic EEG data. The major finding of this research is that the LS-SVM with the 1v1 system is the best technique for the OA_PCA features in the epileptic EEG signal classification that outperforms all the recent reported existing methods in the literature."
},
{
"pmid": "15319512",
"title": "The small world of the cerebral cortex.",
"abstract": "While much information is available on the structural connectivity of the cerebral cortex, especially in the primate, the main organizational principles of the connection patterns linking brain areas, columns and individual cells have remained elusive. We attempt to characterize a wide variety of cortical connectivity data sets using a specific set of graph theory methods. We measure global aspects of cortical graphs including the abundance of small structural motifs such as cycles, the degree of local clustering of connections and the average path length. We examine large-scale cortical connection matrices obtained from neuroanatomical data bases, as well as probabilistic connection matrices at the level of small cortical neuronal populations linked by intra-areal and inter-areal connections. All cortical connection matrices examined in this study exhibit \"small-world\" attributes, characterized by the presence of abundant clustering of connections combined with short average distances between neuronal elements. We discuss the significance of these universal organizational features of cortex in light of functional brain anatomy. Supplementary materials are at www.indiana.edu/~cortex/lab.htm."
},
{
"pmid": "17266107",
"title": "Phase lag index: assessment of functional connectivity from multi channel EEG and MEG with diminished bias from common sources.",
"abstract": "OBJECTIVE\nTo address the problem of volume conduction and active reference electrodes in the assessment of functional connectivity, we propose a novel measure to quantify phase synchronization, the phase lag index (PLI), and compare its performance to the well-known phase coherence (PC), and to the imaginary component of coherency (IC).\n\n\nMETHODS\nThe PLI is a measure of the asymmetry of the distribution of phase differences between two signals. The performance of PLI, PC, and IC was examined in (i) a model of 64 globally coupled oscillators, (ii) an EEG with an absence seizure, (iii) an EEG data set of 15 Alzheimer patients and 13 control subjects, and (iv) two MEG data sets.\n\n\nRESULTS\nPLI and PC were more sensitive than IC to increasing levels of true synchronization in the model. PC and IC were influenced stronger than PLI by spurious correlations because of common sources. All measures detected changes in synchronization during the absence seizure. In contrast to PC, PLI and IC were barely changed by the choice of different montages. PLI and IC were superior to PC in detecting changes in beta band connectivity in AD patients. Finally, PLI and IC revealed a different spatial pattern of functional connectivity in MEG data than PC.\n\n\nCONCLUSION\nThe PLI performed at least as well as the PC in detecting true changes in synchronization in model and real data but, at the same token and like-wise the IC, it was much less affected by the influence of common sources and active reference electrodes."
},
{
"pmid": "11776208",
"title": "Neural network for sleep EEG K-complex detection.",
"abstract": "The paper presents the development and application of an automatic system used to detect and classify the K-complexes aperiodic, waveforms found in electroencephalograms of patients during stage two sleep. The slow-wave transient K-complex is evoked by auditory or somatosensory stimulation being an event related potential. The analysis of this transitory waveform contributes to the assessment of sleep stages used by controlled learning during sleep. In our work we used a TMS320C30 DSP to implement an automatic detection procedure based on features extraction and classification using a feed-forward neural network."
},
{
"pmid": "17282798",
"title": "A mixture of experts network structure for EEG signals classification.",
"abstract": "This paper illustrates the use of mixture of experts (ME) network structure to guide model selection for classification of electroencephalogram (EEG) signals. Expectation-maximization (EM) algorithm was used for training the ME so that the learning process is decoupled in a manner that fits well with the modular structure. The EEG signals were decomposed into time-frequency representations using discrete wavelet transform and statistical features were calculated to depict their distribution. The ME network structure was implemented for classification of the EEG signals using the statistical features as inputs. Three types of EEG signals (EEG signals recorded from healthy volunteers with eyes open, epilepsy patients in the epileptogenic zone during a seizure-free interval, and epilepsy patients during epileptic seizures) were classified with the accuracy of 93.17% by the ME network structure. The ME network structure achieved accuracy rates which were higher than that of the stand-alone neural network models."
},
{
"pmid": "20192058",
"title": "Determination of sleep stage separation ability of features extracted from EEG signals using principle component analysis.",
"abstract": "In this study, a method was proposed in order to determine how well features extracted from the EEG signals for the purpose of sleep stage classification separate the sleep stages. The proposed method is based on the principle component analysis known also as the Karhunen-Loéve transform. Features frequently used in the sleep stage classification studies were divided into three main groups: (i) time-domain features, (ii) frequency-domain features, and (iii) hybrid features. That how well features in each group separate the sleep stages was determined by performing extensive simulations and it was seen that the results obtained are in agreement with those available in the literature. Considering the fact that sleep stage classification algorithms consist of two steps, namely feature extraction and classification, it will be possible to tell a priori whether the classification step will provide successful results or not without carrying out its realization thanks to the proposed method."
},
{
"pmid": "24686109",
"title": "Graph theoretical analysis reveals disrupted topological properties of whole brain functional networks in temporal lobe epilepsy.",
"abstract": "OBJECTIVE\nTemporal lobe epilepsy (TLE) is one of the most common forms of drug-resistant epilepsy. Previous studies have indicated that the TLE-related impairments existed in extensive local functional networks. However, little is known about the alterations in the topological properties of whole brain functional networks.\n\n\nMETHOD\nIn this study, we acquired resting-state BOLD-fMRI (rsfMRI) data from 26 TLE patients and 25 healthy controls, constructed their whole brain functional networks, compared the differences in topological parameters between the TLE patients and the controls, and analyzed the correlation between the altered topological properties and the epilepsy duration.\n\n\nRESULTS\nThe TLE patients showed significant increases in clustering coefficient and characteristic path length, but significant decrease in global efficiency compared to the controls. We also found altered nodal parameters in several regions in the TLE patients, such as the bilateral angular gyri, left middle temporal gyrus, right hippocampus, triangular part of left inferior frontal gyrus, left inferior parietal but supramarginal and angular gyri, and left parahippocampus gyrus. Further correlation analysis showed that the local efficiency of the TLE patients correlated positively with the epilepsy duration.\n\n\nCONCLUSION\nOur results indicated the disrupted topological properties of whole brain functional networks in TLE patients.\n\n\nSIGNIFICANCE\nOur findings indicated the TLE-related impairments in the whole brain functional networks, which may help us to understand the clinical symptoms of TLE patients and offer a clue for the diagnosis and treatment of the TLE patients."
},
{
"pmid": "16803415",
"title": "Complex network from pseudoperiodic time series: topology versus dynamics.",
"abstract": "We construct complex networks from pseudoperiodic time series, with each cycle represented by a single node in the network. We investigate the statistical properties of these networks for various time series and find that time series with different dynamics exhibit distinct topological structures. Specifically, noisy periodic signals correspond to random networks, and chaotic time series generate networks that exhibit small world and scale free features. We show that this distinction in topological structure results from the hierarchy of unstable periodic orbits embedded in the chaotic attractor. Standard measures of structure in complex networks can therefore be applied to distinguish different dynamic regimes in time series. Application to human electrocardiograms shows that such statistical properties are able to differentiate between the sinus rhythm cardiograms of healthy volunteers and those of coronary care patients."
},
{
"pmid": "24768081",
"title": "Epileptic seizure detection in EEGs signals using a fast weighted horizontal visibility algorithm.",
"abstract": "This paper proposes a fast weighted horizontal visibility graph constructing algorithm (FWHVA) to identify seizure from EEG signals. The performance of the FWHVA is evaluated by comparing with Fast Fourier Transform (FFT) and sample entropy (SampEn) method. Two noise-robustness graph features based on the FWHVA, mean degree and mean strength, are investigated using two chaos signals and five groups of EEG signals. Experimental results show that feature extraction using the FWHVA is faster than that of SampEn and FFT. And mean strength feature associated with ictal EEG is significant higher than that of healthy and inter-ictal EEGs. In addition, an 100% classification accuracy for identifying seizure from healthy shows that the features based on the FWHVA are more promising than the frequency features based on FFT and entropy indices based on SampEn for time series classification."
}
] |
Scientific Reports | 31270387 | PMC6610122 | 10.1038/s41598-019-46074-2 | Intelligent Diagnostic Prediction and Classification System for Chronic Kidney Disease | At present times, healthcare systems are updated with advanced capabilities like machine learning (ML), data mining and artificial intelligence to offer humans more intelligent and expert healthcare services. This paper introduces an intelligent prediction and classification system for healthcare, namely a Density based Feature Selection (DFS) with Ant Colony based Optimization (D-ACO) algorithm for chronic kidney disease (CKD). The proposed intelligent system eliminates irrelevant or redundant features by DFS prior to the ACO based classifier construction. The proposed D-ACO framework comprises three phases, namely preprocessing, Feature Selection (FS) and classification. Furthermore, the D-ACO algorithm is tested using a benchmark CKD dataset and its performance is investigated based on different evaluation factors. Comparing the D-ACO algorithm with existing methods, the presented intelligent system outperformed the other methodologies with a significant improvement in classification accuracy using fewer features. | Related Works Different techniques have been proposed for effective prediction of CKD by the exploitation of patients' medical data. A Cuckoo Search trained neural network (NN-CS) method is presented for the identification of CKD [17]. Initially, the presented model is designed to resolve the issues that exist in local search based learning methods. The CS algorithm helps to optimally select the input weight vector of the NN to train the data properly. The classifier results of the proposed algorithm showed that it attains better performance. A modified version of the NN-CS algorithm (NN-MCS) [18] is developed to overcome the problem of local optima of the NN-CS algorithm. As the initial weights of the neuron connections control the NN's performance, the proposed method employs the MCS algorithm to decrease the root mean square error (RMSE) value used in the training process of the NN. The simulation results reported that the NN-MCS algorithm attained better performance than the NN-CS method. In [19], two fuzzy classifiers known as the fuzzy rule-building expert system (FuRES) and the fuzzy optimal Associative Memory (FOAM) are presented for the identification of CKD. FuRES generates a classification tree which comprises a minimal NN. It creates the classification rules to determine the weight vector with the least fuzzy entropy. The two fuzzy classifiers are employed for the identification of 386 CKD patients. Also, FuRES is better than FOAM, especially in situations where the training as well as the prediction process contain a similar intensity of noise. FuRES and FOAM attained better performance in the identification of CKD; at the same time, FuRES is more proficient than FOAM. In [20], another fuzzy-based method is presented to identify CKD. The author designed an Improved Hybrid Fuzzy C-Means (IHFCM), an improved version of FCM with Euclidean distance, for the detection of CKD. This study revealed that probability based methods are unsuitable for CKD prediction because of the necessity of proper output. Statistical methods, Bayesian classification or association rule based prediction methods are infeasible to use as they lead to inaccurate results. So, IHFCM is developed for the identification of CKD. At the initial stage, IHFCM removes the frequent records as a preprocessing step.
Then, it computes the diffuse score for each value in the particular table of contents of the query. A higher fuzzy score represents clusters of higher risk and a lower fuzzy score indicates lower or no risk at all. In the year 2017, Dilli Arasu et al. [21] devised a novel method, namely Weighted Average Ensemble Learning Imputation (WAELI). The missing values in the dataset reduce the precision level of CKD prediction. As the existing methods make use of data preprocessing techniques, a data cleaning process is needed to fill up the missing values and to remove the inaccurate values. A recalculation procedure is present in different CKD stages where the missing values are computed and placed in their respective positions. Although the existing methods are effective, they need an expert in the healthcare dataset to ensure the values for CKD. The FS process acts as a significant part in the area of data classification, employed to find out a smaller set of rules from the training dataset with fixed goals. Different methodologies like AI techniques and bio-inspired algorithms are used for FS. In [22], a wrapper method is presented by the hybridization of GA with a support vector machine (SVM), called the GA-SVM method, to properly select the feature subset. The reduction in redundant features by the proposed method improves the classification performance, which is validated using five different disease datasets. Naganna Chetty et al. [23] also presented a wrapper method for CKD identification by following three steps: (1) a framework is generated from data mining, (2) a wrapper subset attribute evaluator and a best first search approach are employed to select attributes and (3) classification algorithms are employed. The experimental observations revealed that the accuracy is improved for the reduced dataset compared to the original dataset. The authors of [24] developed a framework for enhancing the quality of CKD. This framework involves three processes, namely FS, ensemble learning and classification. The integration of Correlation-based FS (CFS) and the k-nearest neighbor (kNN) classifier results in high classification accuracy. In [25], another CKD identification method is developed by the use of filter as well as wrapper approaches. The simulation outcome depicted that a decrease in the number of features does not ensure effective classification performance (a minimal wrapper-style feature selection sketch is given after this record's reference list). | [
"29990143",
"25338421",
"14581387",
"28243816"
] | [
{
"pmid": "29990143",
"title": "Identifying Important Attributes for Early Detection of Chronic Kidney Disease.",
"abstract": "Individuals with chronic kidney disease (CKD) are often not aware that the medical tests they take for other purposes may contain useful information about CKD, and that this information is sometimes not used effectively to tackle the identification of the disease. Therefore, attributes of different medical tests are investigated to identify which attributes may contain useful information about CKD. A database with several attributes of healthy subjects and subjects with CKD are analyzed using different techniques. Common spatial pattern (CSP) filter and linear discriminant analysis are first used to identify the dominant attributes that could contribute in detecting CKD. Here, the CSP filter is applied to optimize a separation between CKD and nonCKD subjects. Then, classification methods are also used to identify the dominant attributes. These analyses suggest that hemoglobin, albumin, specific gravity, hypertension, and diabetes mellitus, together with serum creatinine, are the most important attributes in the early detection of CKD. Further, it suggests that in the absence of information on hypertension and diabetes mellitus, random blood glucose and blood pressure attributes may be used."
},
{
"pmid": "25338421",
"title": "Cardiovascular disease in chronic kidney disease: risk factors, pathogenesis, and prevention.",
"abstract": "Patients with chronic kidney disease (CKD) experience serious adverse cardiovascular (CV) consequences. Cardiovascular disease is the leading cause of morbidity and mortality in patients with CKD, being secondary not only to an increased prevalence of traditional CV risk factors, but also to the presence of a wide array of nontraditional risk factors unique to patients with CKD. Pathogenesis includes both functional and structural alterations in the CV system. Those alterations give rise to a wide range of clinical CV syndromes, including ischemic heart disease, heart failure, and sudden cardiac arrest. As an increasingly prevalent disease, CKD, together with consequent CV disease, imparts major health and economic burdens to the community. In this review, we discuss traditional and nontraditional risk factors for CV disease, the pathogenesis of CV clinical syndromes, and prevention of CV syndromes in patients with CKD."
},
{
"pmid": "28243816",
"title": "Diagnosis of Chronic Kidney Disease Based on Support Vector Machine by Feature Selection Methods.",
"abstract": "As Chronic Kidney Disease progresses slowly, early detection and effective treatment are the only cure to reduce the mortality rate. Machine learning techniques are gaining significance in medical diagnosis because of their classification ability with high accuracy rates. The accuracy of classification algorithms depend on the use of correct feature selection algorithms to reduce the dimension of datasets. In this study, Support Vector Machine classification algorithm was used to diagnose Chronic Kidney Disease. To diagnose the Chronic Kidney Disease, two essential types of feature selection methods namely, wrapper and filter approaches were chosen to reduce the dimension of Chronic Kidney Disease dataset. In wrapper approach, classifier subset evaluator with greedy stepwise search engine and wrapper subset evaluator with the Best First search engine were used. In filter approach, correlation feature selection subset evaluator with greedy stepwise search engine and filtered subset evaluator with the Best First search engine were used. The results showed that the Support Vector Machine classifier by using filtered subset evaluator with the Best First search engine feature selection method has higher accuracy rate (98.5%) in the diagnosis of Chronic Kidney Disease compared to other selected methods."
}
] |
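To make the wrapper-style feature selection surveyed in the CKD record above concrete, the following is a minimal, self-contained sketch in Python. It is not the D-ACO algorithm of that paper; it is a generic greedy forward wrapper with a kNN evaluator of the kind described around references [22]-[25], and the synthetic data, the n_neighbors=5 setting, and the stopping rule are assumptions made purely for illustration.

```python
# Illustrative sketch only: greedy forward wrapper feature selection with a kNN evaluator.
# Not the D-ACO method; data and hyperparameters are assumed for a self-contained demo.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 200, 10
X = rng.normal(size=(n_samples, n_features))
# Only features 0 and 3 carry class information in this toy dataset.
y = (X[:, 0] + 0.8 * X[:, 3] + 0.3 * rng.normal(size=n_samples) > 0).astype(int)

def wrapper_score(feature_idx):
    """Score a candidate subset with the classifier itself (the 'wrapper' criterion)."""
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, feature_idx], y, cv=5).mean()

selected, remaining, best_score = [], list(range(n_features)), 0.0
while remaining:
    # Try adding each remaining feature and keep the one that helps the most.
    scores = {f: wrapper_score(selected + [f]) for f in remaining}
    f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:  # stop when no feature improves cross-validated accuracy
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = s_best

print("selected features:", selected, "cv accuracy: %.3f" % best_score)
```

Replacing the kNN evaluator with an SVM, or the greedy loop with a genetic or ant-colony search, would move this sketch toward the GA-SVM and ACO-based wrappers that the record discusses.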
Frontiers in Computational Neuroscience | 31316363 | PMC6611394 | 10.3389/fncom.2019.00041 | A Temporal Signal-Processing Circuit Based on Spiking Neuron and Synaptic Learning | Time is a continuous, homogeneous, one-way, and independent signal that cannot be modified by human will. The mechanism of how the brain processes temporal information remains elusive. According to previous work, time-keeping in the medial premotor cortex (MPC) is governed by four kinds of ramp cell populations (Merchant et al., 2011). We believe that these cell populations participate in temporal information processing in MPC. Hence, in the present study, we present a model that uses spiking neurons, including these cell populations, to construct a complete circuit for temporal processing. By combining the time-adaptive drift-diffusion model (TDDM) with the transmission of impulse information between neurons, this new model is able to successfully reproduce the result of the synchronization-continuation tapping task (SCT). We also discovered that the neurons that we used exhibited some of the firing properties of time-related neurons detected by electrophysiological experiments in other studies. Therefore, we believe that our model reflects many of the physiological properties of neural circuits in the biological brain and can explain some of the phenomena in the temporal-perception process. | Related Work There are many computational models for time-dependent signal processing, including pacemaker accumulator models (Treisman, 1963), state dependent network models (Buonomano and Maass, 2009), long short-term memory (LSTM) models (Rivest et al., 2010), time-adaptive drift-diffusion models (TDDM) (Rivest and Bengio, 2011), and recurrent synaptic networks (Mendoza et al., 2018). The pacemaker accumulator model is a traditional time model proposed many years ago (Treisman, 1963), the concept of which was derived from mechanical clocks. This model assumes that there is a pacemaker or an oscillator in our brain that sends pulses consistently at a certain frequency, and these are received and recorded by an accumulator. Within this framework, the pulse count provides a linear metric of time, and temporal judgments rely on comparing the current pulse count with that of a reference time. This process becomes the foundation for characterizing time in this model. The pacemaker accumulator model has proven to be effective in providing a framework for many psychophysical data related to time processing (Church, 1984; Meck, 2005). The downside of this model, however, is that it lacks biological feasibility. Mounting evidence indicates that clock models are not entirely consistent with the experimental data (for reviews see Mauk and Buonomano, 2004; Buhusi et al., 2005). The state-dependent network model recently proposed by Buonomano et al. differs from the above models. This model is able to tell and encode time as a result of dynamic changes in the state of spiking neural networks. It is based on the assumption that there is an interaction between each sensory event and the current state of the network, forming a network state pattern that naturally encodes each event in the context of recent stimuli, similar to the interaction between the ripples generated by each raindrop falling into a pond now or a moment earlier. State-dependent models have the powerful ability to characterize time since they are inherently high dimensional.
However, the deficiency of this model is that it encodes time via the firing rate of each neuron in the model, which is contrary to the result of Buonomano's motor-control experiment, in which spike timing conveyed more information than spiking rate (millisecond-scale motor encoding in a cortical vocal area). In addition, LSTM and temporal difference learning (TD) algorithms have been used to propose a small neural network based on artificial neurons that can encode a specific time into a ramp-like activity (Rivest et al., 2010). Although they introduced many biological concepts into their model, the basis of the model is the artificial neuron, which is much further from the biological neuron than the spiking neuron is. TDDM was independently proposed by Rivest and Bengio (2011) and Simen et al. (2011); it utilizes a simple and more abstract neural model based on a drift-diffusion process of climbing neural activity. The drift-diffusion model is often used in decision-making under noisy stimuli. These works extend it by developing a learning rule so that the model can be used to learn time intervals rapidly. Additionally, Weber's law for time can be explained within this framework. There is another excellent model: recently, a kind of model called a recurrent synaptic network has been proposed (Mendoza et al., 2018). It simulates a cortical ensemble and makes use of paired-pulse facilitation and slow inhibitory synaptic currents not only to produce interval-selective responses but also to follow the biases and the scalar property (Pérez and Merchant, 2018). In addition to the millisecond-range time-processing models mentioned above, there are several time-processing models in the seconds-to-minutes range, such as the striatal beat frequency (SBF) model proposed by Matell and Meck (2004). SBF suggests that, in the thalamo-cortico-striatal loops, the coincidence detection of neuronal oscillations in the cortex is the neural basis for the characterization of time information. Cortical neurons act as oscillators, and the striatum, located in the basal ganglia, can detect the oscillation pattern of cortical neurons. At the beginning of time interval processing, the release of dopamine in the brain prompts timing, synchronizes cortical oscillations, and resets the state level of striatal spiny neurons. The cortical oscillators oscillate at a fixed frequency throughout the criterion interval. At the end of time interval processing, dopamine is released again, which changes the synaptic connections of the spiny neurons and forms the neural representation of the time interval. | [
"24623769",
"14576211",
"15656724",
"19145235",
"6588815",
"8871242",
"25186744",
"30958818",
"19346478",
"9142762",
"11387394",
"18244602",
"10196532",
"25411486",
"7127141",
"11484055",
"12718864",
"15464348",
"15217335",
"15878722",
"29545587",
"28336572",
"28364174",
"14754870",
"22106292",
"14622234",
"29615484",
"10377358",
"9438963",
"19847635",
"10365959",
"11257908",
"21697374",
"25490022",
"5877542",
"8757133",
"29067130"
] | [
{
"pmid": "24623769",
"title": "Information processing in the primate basal ganglia during sensory-guided and internally driven rhythmic tapping.",
"abstract": "Gamma (γ) and beta (β) oscillations seem to play complementary functions in the cortico-basal ganglia-thalamo-cortical circuit (CBGT) during motor behavior. We investigated the time-varying changes of the putaminal spiking activity and the spectral power of local field potentials (LFPs) during a task where the rhythmic tapping of monkeys was guided by isochronous stimuli separated by a fixed duration (synchronization phase), followed by a period of internally timed movements (continuation phase). We found that the power of both bands and the discharge rate of cells showed an orderly change in magnitude as a function of the duration and/or the serial order of the intervals executed rhythmically. More LFPs were tuned to duration and/or serial order in the β- than the γ-band, although different values of preferred features were represented by single cells and by both bands. Importantly, in the LFPs tuned to serial order, there was a strong bias toward the continuation phase for the β-band when aligned to movements, and a bias toward the synchronization phase for the γ-band when aligned to the stimuli. Our results suggest that γ-oscillations reflect local computations associated with stimulus processing, whereas β-activity involves the entrainment of large putaminal circuits, probably in conjunction with other elements of CBGT, during internally driven rhythmic tapping."
},
{
"pmid": "14576211",
"title": "Timing and neural encoding of somatosensory parametric working memory in macaque prefrontal cortex.",
"abstract": "We trained monkeys to compare the frequencies of two mechanical vibrations applied sequentially to the tip of a finger and to report which of the two stimuli had the higher frequency. This task requires remembering the first frequency during the delay period between the two stimuli. Recordings were made from neurons in the inferior convexity of the prefrontal cortex (PFC) while the monkeys performed the task. We report neurons that fire persistently during the delay period, with a firing rate that is a monotonic function of the frequency of the first stimulus. Separately from, and in addition to, their correlation with the first stimulus, the delay period firing rates of these neurons were correlated with the behavior of the monkey, in a manner consistent with their interpretation as the neural substrate of working memory during the task. Most neurons had firing rates that varied systematically with time during the delay period. We suggest that this time-dependent activity may encode time itself and may be an intrinsic part of active memory maintenance mechanisms."
},
{
"pmid": "15656724",
"title": "Memory for timing visual and auditory signals in albino and pigmented rats.",
"abstract": "The authors hypothesized that during a gap in a timed signal, the time accumulated during the pregap interval decays at a rate proportional to the perceived salience of the gap, influenced by sensory acuity and signal intensity. When timing visual signals, albino (Sprague-Dawley) rats, which have poor visual acuity, stopped timing irrespective of gap duration, whereas pigmented (Long-Evans) rats, which have good visual acuity, stopped timing for short gaps but reset timing for long gaps. Pigmented rats stopped timing during a gap in a low-intensity visual signal and reset after a gap in a high-intensity visual signal, suggesting that memory for time in the gap procedure varies with the perceived salience of the gap, possibly through an attentional mechanism."
},
{
"pmid": "19145235",
"title": "State-dependent computations: spatiotemporal processing in cortical networks.",
"abstract": "A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity."
},
{
"pmid": "6588815",
"title": "Properties of the internal clock.",
"abstract": "Evidence has been cited for the following properties of the parts of the psychological process used for timing intervals: The pacemaker has a mean rate that can be varied by drugs, diet, and stress. The switch has a latency to operate and it can be operated in various modes, such as run, stop, and reset. The accumulator times up, in absolute, arithmetic units. Working memory can be reset on command or, after lesions have been created in the fimbria fornix, when there is a gap in a signal. The transformation from the accumulator to reference memory is done with a multiplicative constant that is affected by drugs, lesions, and individual differences. The comparator uses a ratio between the value in the accumulator (or working memory) and reference memory. Finally, there must be multiple switch-accumulator modules to handle simultaneous temporal processing; and the psychological timing process may be used on some occasions and not on others."
},
{
"pmid": "8871242",
"title": "Neuronal activity in posterior parietal area 7a during the delay periods of a spatial memory task.",
"abstract": "1. Neuronal activity was recorded from area 7a of monkeys performing a delayed match-to-sample task requiring release of a behavioral key when a visual stimulus appeared at a remembered spatial location. 2. Activity in the delay periods was significantly elevated in 28% of 405 neurons studied and could be classified as either sustained or anticipatory in nature. 3. Sustained activity was characterized by maintained or slowly decreasing discharge rates that were typically greater when the preceding stimulus was at a location that evoked a strong response. Sustained activity was terminated when a subsequent stimulus appeared at another location. 4. Anticipatory activity was characterized by accelerating discharge rates that were ordinarily greater after a stimulus at a location that evoked a weak response. Anticipatory activity was often associated with facilitated responses to the next stimulus. 5. These data demonstrate that a population of neurons in area 7a is active during the delay periods of a spatial memory task that does not require a behavioral response directed toward the location of the stimulus. This activity could represent a short-term memory trace for the spatial location of the preceding stimulus."
},
{
"pmid": "25186744",
"title": "Dynamic representation of the temporal and sequential structure of rhythmic movements in the primate medial premotor cortex.",
"abstract": "We determined the encoding properties of single cells and the decoding accuracy of cell populations in the medial premotor cortex (MPC) of Rhesus monkeys to represent in a time-varying fashion the duration and serial order of six intervals produced rhythmically during a synchronization-continuation tapping task. We found that MPC represented the temporal and sequential structure of rhythmic movements by activating small ensembles of neurons that encoded the duration or the serial order in rapid succession, so that the pattern of active neurons changed dramatically within each interval. Interestingly, the width of the encoding or decoding function for serial order increased as a function of duration. Finally, we found that the strength of correlation in spontaneous activity of the individual cells varied as a function of the timing of their recruitment. These results demonstrate the existence of dynamic representations in MPC for the duration and serial order of intervals produced rhythmically and suggest that this dynamic code depends on ensembles of interconnected neurons that provide a strong synaptic drive to the next ensemble in a consecutive chain of neural events."
},
{
"pmid": "30958818",
"title": "The amplitude in periodic neural state trajectories underlies the tempo of rhythmic tapping.",
"abstract": "Our motor commands can be exquisitely timed according to the demands of the environment, and the ability to generate rhythms of different tempos is a hallmark of musical cognition. Yet, the neuronal underpinnings behind rhythmic tapping remain elusive. Here, we found that the activity of hundreds of primate medial premotor cortices (MPCs; pre-supplementary motor area [preSMA] and supplementary motor area [SMA]) neurons show a strong periodic pattern that becomes evident when their responses are projected into a state space using dimensionality reduction analysis. We show that different tapping tempos are encoded by circular trajectories that travelled at a constant speed but with different radii, and that this neuronal code is highly resilient to the number of participating neurons. Crucially, the changes in the amplitude of the oscillatory dynamics in neuronal state space are a signature of duration encoding during rhythmic timing, regardless of whether it is guided by an external metronome or is internally controlled and is not the result of repetitive motor commands. This dynamic state signal predicted the duration of the rhythmically produced intervals on a trial-by-trial basis. Furthermore, the increase in variability of the neural trajectories accounted for the scalar property, a hallmark feature of temporal processing across tasks and species. Finally, we found that the interval-dependent increments in the radius of periodic neural trajectories are the result of a larger number of neurons engaged in the production of longer intervals. Our results support the notion that rhythmic timing during tapping behaviors is encoded in the radial curvature of periodic MPC neural population trajectories."
},
{
"pmid": "19346478",
"title": "Learning reward timing in cortex through reward dependent expression of synaptic plasticity.",
"abstract": "The ability to represent time is an essential component of cognition but its neural basis is unknown. Although extensively studied both behaviorally and electrophysiologically, a general theoretical framework describing the elementary neural mechanisms used by the brain to learn temporal representations is lacking. It is commonly believed that the underlying cellular mechanisms reside in high order cortical regions but recent studies show sustained neural activity in primary sensory cortices that can represent the timing of expected reward. Here, we show that local cortical networks can learn temporal representations through a simple framework predicated on reward dependent expression of synaptic plasticity. We assert that temporal representations are stored in the lateral synaptic connections between neurons and demonstrate that reward-modulated plasticity is sufficient to learn these representations. We implement our model numerically to explain reward-time learning in the primary visual cortex (V1), demonstrate experimental support, and suggest additional experimentally verifiable predictions."
},
{
"pmid": "9142762",
"title": "Toward a neurobiology of temporal cognition: advances and challenges.",
"abstract": "A rich tradition of normative psychophysics has identified two ubiquitous properties of interval timing: the scalar property, a strong form of Weber's law, and ratio comparison mechanisms. Finding the neural substrate of these properties is a major challenge for neurobiology. Recently, advances have been made in our understanding of the brain structures important for timing, especially the basal ganglia and the cerebellum. Surgical intervention or diseases of the cerebellum generally result in increased variability in temporal processing, whereas both clock and memory effects are seen for neurotransmitter interventions, lesions and diseases of the basal ganglia. We propose that cerebellar dysfunction may induce deregulation of tonic thalamic tuning, which disrupts gating of the mnemonic temporal information generated in the basal ganglia through striato-thalamo-cortical loops."
},
{
"pmid": "11387394",
"title": "Influence of expectation of different rewards on behavior-related neuronal activity in the striatum.",
"abstract": "This study investigated how different expected rewards influence behavior-related neuronal activity in the anterior striatum. In a spatial delayed-response task, monkeys reached for a left or right target and obtained a small quantity of one of two juices (apple, grenadine, orange, lemon, black currant, or raspberry). In each trial, an initial instruction picture indicated the behavioral target and predicted the reward. Nonmovement trials served as controls for movement relationships. Consistent preferences in special reward choice trials and differences in anticipatory licks, performance errors, and reaction times indicated that animals differentially expected the rewards predicted by the instructions. About 600 of >2,500 neurons in anterior parts of caudate nucleus, putamen, and ventral striatum showed five forms of task-related activations, comprising responses to instructions, spatial or nonspatial activations during the preparation or execution of the movement, and activations preceding or following the rewards. About one-third of the neurons showed different levels of task-related activity depending on which liquid reward was predicted at trial end. Activations were either higher or lower for rewards that were preferred by the animals as compared with nonpreferred rewards. These data suggest that the expectation of an upcoming liquid reward may influence a fraction of task-related neurons in the anterior striatum. Apparently the information about the expected reward is incorporated into the neuronal activity related to the behavioral reaction leading to the reward. The results of this study are in general agreement with an account of goal-directed behavior according to which the outcome should be represented already at the time at which the behavior toward the outcome is performed."
},
{
"pmid": "18244602",
"title": "Simple model of spiking neurons.",
"abstract": "A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biologically plausibility of Hodgkin-Huxley-type dynamics and the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC."
},
{
"pmid": "10196532",
"title": "Expectation of reward modulates cognitive signals in the basal ganglia.",
"abstract": "Action is controlled by both motivation and cognition. The basal ganglia may be the site where these kinds of information meet. Using a memory-guided saccade task with an asymmetric reward schedule, we show that visual and memory responses of caudate neurons are modulated by expectation of reward so profoundly that a neuron's preferred direction often changed with the change in the rewarded direction. The subsequent saccade to the target was earlier and faster for the rewarded direction. Our results indicate that the caudate contributes to the determination of oculomotor outputs by connecting motivational values (for example, expectation of reward) to visual information."
},
{
"pmid": "25411486",
"title": "Dissociating movement from movement timing in the rat primary motor cortex.",
"abstract": "Neural encoding of the passage of time to produce temporally precise movements remains an open question. Neurons in several brain regions across different experimental contexts encode estimates of temporal intervals by scaling their activity in proportion to the interval duration. In motor cortex the degree to which this scaled activity relies upon afferent feedback and is guided by motor output remains unclear. Using a neural reward paradigm to dissociate neural activity from motor output before and after complete spinal transection, we show that temporally scaled activity occurs in the rat hindlimb motor cortex in the absence of motor output and after transection. Context-dependent changes in the encoding are plastic, reversible, and re-established following injury. Therefore, in the absence of motor output and despite a loss of afferent feedback, thought necessary for timed movements, the rat motor cortex displays scaled activity during a broad range of temporally demanding tasks similar to that identified in other brain regions."
},
{
"pmid": "7127141",
"title": "Delay-related activity of prefrontal neurons in rhesus monkeys performing delayed response.",
"abstract": "Activity of dorsolateral prefrontal cortical neurons was examined in rhesus monkeys while they performed a spatial delayed-response task with delays of 2, 4, 8 or 12 s interposed between cue and response. Of the 600 neurons recorded for at least 10 trials under each delay condition, 95 displayed a pattern of discharge during the delay period which was significantly different from neuronal firing before or after this period. Changes in the duration of the delay elicit two distinct patterns of activity in these neurons: some (59/95, 62%) exhibit a fixed pattern of discharge regardless of the duration of the ensuing delay; others (31/95, 33%) alter their pattern of activity in relation to the temporal changes. Although both types of delay-related neurons display a variety of discharge profiles, more than half of each class exhibit their highest activity in the early part of the delay period. A related finding concerns a small subclass of spatially selective neurons which fire significantly more when the cue is presented on the left than on the right or vice versa. A striking 80% of these spatially discriminative neurons exhibit peak activity in the first few seconds of the delay period. These findings provide cellular evidence that (1) prefrontal neurons are responsive to temporal as well as spatial features of the delayed-response task; and (2) the involvement of a subset of these is particularly critical in the first few seconds of the delay. The latter finding emphasizes that prefrontal neurons may play an important role in the registration process of spatial memory."
},
{
"pmid": "11484055",
"title": "Retrospective and prospective coding for predicted reward in the sensory thalamus.",
"abstract": "Reward is important for shaping goal-directed behaviour. After stimulus-reward associative learning, an organism can assess the motivational value of the incoming stimuli on the basis of past experience (retrospective processing), and predict forthcoming rewarding events (prospective processing). The traditional role of the sensory thalamus is to relay current sensory information to cortex. Here we find that non-primary thalamic neurons respond to reward-related events in two ways. The early, phasic responses occurred shortly after the onset of the stimuli and depended on the sensory modality. Their magnitudes resisted extinction and correlated with the learning experience. The late responses gradually increased during the cue and delay periods, and peaked just before delivery of the reward. These responses were independent of sensory modality and were modulated by the value and timing of the reward. These observations provide new evidence that single thalamic neurons can code for the acquired significance of sensory stimuli in the early responses (retrospective coding) and predict upcoming reward value in the late responses (prospective coding)."
},
{
"pmid": "12718864",
"title": "Representation of time by neurons in the posterior parietal cortex of the macaque.",
"abstract": "The neural basis of time perception is unknown. Here we show that neurons in the posterior parietal cortex (area LIP) represent elapsed time relative to a remembered duration. We trained rhesus monkeys to report whether the duration of a test light was longer or shorter than a remembered \"standard\" (316 or 800 ms) by making an eye movement to one of two choice targets. While timing the test light, the responses of LIP neurons signaled changes in the monkey's perception of elapsed time. The variability of the neural responses explained the monkey's uncertainty about its temporal judgments. Thus, in addition to their role in spatial processing and sensorimotor integration, posterior parietal neurons encode signals related to the perception of time."
},
{
"pmid": "15464348",
"title": "Cortico-striatal circuits and interval timing: coincidence detection of oscillatory processes.",
"abstract": "Humans and other animals demonstrate the ability to perceive and respond to temporally relevant information with characteristic behavioral properties. For example, the response time distributions in peak-interval timing tasks are well described by Gaussian functions, and superimpose when scaled by the criterion duration. This superimposition has been referred to as the scalar property and results from the fact that the standard deviation of a temporal estimate is proportional to the duration being timed. Various psychological models have been proposed to account for such responding. These models vary in their success in predicting the temporal control of behavior as well as in the neurobiological feasibility of the mechanisms they postulate. A review of the major interval timing models reveals that no current model is successful on both counts. The neurobiological properties of the basal ganglia, an area known to be necessary for interval timing and motor control, suggests that this set of structures act as a coincidence detector of cortical and thalamic input. The hypothesized functioning of the basal ganglia is similar to the mechanisms proposed in the beat frequency timing model [R.C. Miall, Neural Computation 1 (1989) 359-371], leading to a reevaluation of its capabilities in terms of behavioral prediction. By implementing a probabilistic firing rule, a dynamic response threshold, and adding variance to a number of its components, simulations of the striatal beat frequency model were able to produce output that is functionally equivalent to the expected behavioral response form of peak-interval timing procedures."
},
{
"pmid": "15217335",
"title": "The neural basis of temporal processing.",
"abstract": "A complete understanding of sensory and motor processing requires characterization of how the nervous system processes time in the range of tens to hundreds of milliseconds (ms). Temporal processing on this scale is required for simple sensory problems, such as interval, duration, and motion discrimination, as well as complex forms of sensory processing, such as speech recognition. Timing is also required for a wide range of motor tasks from eyelid conditioning to playing the piano. Here we review the behavioral, electrophysiological, and theoretical literature on the neural basis of temporal processing. These data suggest that temporal processing is likely to be distributed among different structures, rather than relying on a centralized timing area, as has been suggested in internal clock models. We also discuss whether temporal processing relies on specialized neural mechanisms, which perform temporal computations independent of spatial ones. We suggest that, given the intricate link between temporal and spatial information in most sensory and motor tasks, timing and spatial processing are intrinsic properties of neural function, and specialized timing mechanisms such as delay lines, oscillators, or a spectrum of different time constants are not required. Rather temporal processing may rely on state-dependent changes in network dynamics."
},
{
"pmid": "15878722",
"title": "Neuropsychology of timing and time perception.",
"abstract": "Interval timing in the range of milliseconds to minutes is affected in a variety of neurological and psychiatric populations involving disruption of the frontal cortex, hippocampus, basal ganglia, and cerebellum. Our understanding of these distortions in timing and time perception are aided by the analysis of the sources of variance attributable to clock, memory, decision, and motor-control processes. The conclusion is that the representation of time depends on the integration of multiple neural systems that can be fruitfully studied in selected patient populations."
},
{
"pmid": "29545587",
"title": "Neural basis for categorical boundaries in the primate pre-SMA during relative categorization of time intervals.",
"abstract": "Perceptual categorization depends on the assignment of different stimuli to specific groups based, in principle, on the notion of flexible categorical boundaries. To determine the neural basis of categorical boundaries, we record the activity of pre-SMA neurons of monkeys executing an interval categorization task in which the limit between short and long categories changes between blocks of trials within a session. A large population of cells encodes this boundary by reaching a constant peak of activity close to the corresponding subjective limit. Notably, the time at which this peak is reached changes according to the categorical boundary of the current block, predicting the monkeys' categorical decision on a trial-by-trial basis. In addition, pre-SMA cells also represent the category selected by the monkeys and the outcome of the decision. These results suggest that the pre-SMA adaptively encodes subjective duration boundaries between short and long durations and contains crucial neural information to categorize intervals and evaluate the outcome of such perceptual decisions."
},
{
"pmid": "28336572",
"title": "The Computational and Neural Basis of Rhythmic Timing in Medial Premotor Cortex.",
"abstract": "The neural underpinnings of rhythmic behavior, including music and dance, have been studied using the synchronization-continuation task (SCT), where subjects initially tap in synchrony with an isochronous metronome and then keep tapping at a similar rate via an internal beat mechanism. Here, we provide behavioral and neural evidence that supports a resetting drift-diffusion model (DDM) during SCT. Behaviorally, we show the model replicates the linear relation between the mean and standard-deviation of the intervals produced by monkeys in SCT. We then show that neural populations in the medial premotor cortex (MPC) contain an accurate trial-by-trial representation of elapsed-time between taps. Interestingly, the autocorrelation structure of the elapsed-time representation is consistent with a DDM. These results indicate that MPC has an orderly representation of time with features characteristic of concatenated DDMs and that this population signal can be used to orchestrate the rhythmic structure of the internally timed elements of SCT.SIGNIFICANCE STATEMENT The present study used behavioral data, ensemble recordings from medial premotor cortex (MPC) in macaque monkeys, and computational modeling, to establish evidence in favor of a class of drift-diffusion models of rhythmic timing during a synchronization-continuation tapping task (SCT). The linear relation between the mean and standard-deviation of the intervals produced by monkeys in SCT is replicated by the model. Populations of MPC cells faithfully represent the elapsed time between taps, and there is significant trial-by-trial relation between decoded times and the timing behavior of the monkeys. Notably, the neural decoding properties, including its autocorrelation structure are consistent with a set of drift-diffusion models that are arranged sequentially and that are resetting in each SCT tap."
},
{
"pmid": "28364174",
"title": "Primate beta oscillations and rhythmic behaviors.",
"abstract": "The study of non-human primates in complex behaviors such as rhythm perception and entrainment is critical to understand the neurophysiological basis of human cognition. Next to reviewing the role of beta oscillations in human beat perception, here we discuss the role of primate putaminal oscillatory activity in the control of rhythmic movements that are guided by a sensory metronome or internally gated. The analysis of the local field potentials of the behaving macaques showed that gamma-oscillations reflect local computations associated with stimulus processing of the metronome, whereas beta-activity involves the entrainment of large putaminal circuits, probably in conjunction with other elements of cortico-basal ganglia-thalamo-cortical circuit, during internally driven rhythmic tapping. Thus, this review emphasizes the need of parametric neurophysiological observations in non-human primates that display a well-controlled behavior during high-level cognitive processes."
},
{
"pmid": "14754870",
"title": "Neural responses during interception of real and apparent circularly moving stimuli in motor cortex and area 7a.",
"abstract": "We recorded the neuronal activity in the arm area of the motor cortex and parietal area 7a of two monkeys during interception of stimuli moving in real and apparent motion. The stimulus moved along a circular path with one of five speeds (180-540 degrees/s), and was intercepted at 6 o'clock by exerting a force pulse on a semi-isometric joystick which controlled a cursor on the screen. The real stimuli were shown in adjacent positions every 16 ms, whereas in the apparent motion situation five stimuli were flashed successively at the vertices of a regular pentagon. The results showed, first, that a group of neurons in both areas above responded not only during the interception but also during a NOGO task in which the same stimuli were presented in the absence of a motor response. This finding suggests these areas are involved in both the processing of the stimulus as well as in the preparation and production of the interception movement. In addition, a group of motor cortical cells responded during the interception task but not during a center --> out task, in which the monkeys produced similar force pulses towards eight stationary targets. This group of cells may be engaged in sensorimotor transformations more specific to the interception of real and apparent moving stimuli. Finally, a multiple regression analysis revealed that the time-varying neuronal activity in area 7a and motor cortex was related to various aspects of stimulus motion and hand force in both the real and apparent motion conditions, with stimulus-related activity prevailing in area 7a and hand-related activity prevailing in motor cortex. In addition, the neural activity was selectively associated with the stimulus angle during real motion, whereas it was tightly correlated to the time-to-contact in the apparent motion condition, particularly in the motor cortex. Overall, these observations indicate that neurons in motor cortex and area 7a are processing different parameters of the stimulus depending on the kind of stimulus motion, and that this information is used in a predictive fashion in motor cortex to trigger the interception movement."
},
{
"pmid": "22106292",
"title": "Measuring time with different neural chronometers during a synchronization-continuation task.",
"abstract": "Temporal information processing is critical for many complex behaviors including speech and music cognition, yet its neural substrate remains elusive. We examined the neurophysiological properties of medial premotor cortex (MPC) of two Rhesus monkeys during the execution of a synchronization-continuation tapping task that includes the basic sensorimotor components of a variety of rhythmic behaviors. We show that time-keeping in the MPC is governed by separate cell populations. One group encoded the time remaining for an action, showing activity whose duration changed as a function of interval duration, reaching a peak at similar magnitudes and times with respect to the movement. The other cell group showed a response that increased in duration or magnitude as a function of the elapsed time from the last movement. Hence, the sensorimotor loops engaged during the task may depend on the cyclic interplay between different neuronal chronometers that quantify the time passed and the remaining time for an action."
},
{
"pmid": "14622234",
"title": "Retrospective and prospective persistent activity induced by Hebbian learning in a recurrent cortical network.",
"abstract": "Recordings from cells in the associative cortex of monkeys performing visual working memory tasks link persistent neuronal activity, long-term memory and associative memory. In particular, delayed pair-associate tasks have revealed neuronal correlates of long-term memory of associations between stimuli. Here, a recurrent cortical network model with Hebbian plastic synapses is subjected to the pair-associate protocol. In a first stage, learning leads to the appearance of delay activity, representing individual images ('retrospective' activity). As learning proceeds, the same learning mechanism uses retrospective delay activity together with choice stimulus activity to potentiate synapses connecting neural populations representing associated images. As a result, the neural population corresponding to the pair-associate of the image presented is activated prior to its visual stimulation ('prospective' activity). The probability of appearance of prospective activity is governed by the strength of the inter-population connections, which in turn depends on the frequency of pairings during training. The time course of the transitions from retrospective to prospective activity during the delay period is found to depend on the fraction of slow, N-methyl-d-aspartate-like receptors at excitatory synapses. For fast recurrent excitation, transitions are abrupt; slow recurrent excitation renders transitions gradual. Both scenarios lead to a gradual rise of delay activity when averaged over many trials, because of the stochastic nature of the transitions. The model reproduces most of the neuro-physiological data obtained during such tasks, makes experimentally testable predictions and demonstrates how persistent activity (working memory) brings about the learning of long-term associations."
},
{
"pmid": "29615484",
"title": "The Synaptic Properties of Cells Define the Hallmarks of Interval Timing in a Recurrent Neural Network.",
"abstract": "Extensive research has described two key features of interval timing. The bias property is associated with accuracy and implies that time is overestimated for short intervals and underestimated for long intervals. The scalar property is linked to precision and states that the variability of interval estimates increases as a function of interval duration. The neural mechanisms behind these properties are not well understood. Here we implemented a recurrent neural network that mimics a cortical ensemble and includes cells that show paired-pulse facilitation and slow inhibitory synaptic currents. The network produces interval selective responses and reproduces both bias and scalar properties when a Bayesian decoder reads its activity. Notably, the interval-selectivity, timing accuracy, and precision of the network showed complex changes as a function of the decay time constants of the modeled synaptic properties and the level of background activity of the cells. These findings suggest that physiological values of the time constants for paired-pulse facilitation and GABAb, as well as the internal state of the network, determine the bias and scalar properties of interval timing.SIGNIFICANCE STATEMENT Timing is a fundamental element of complex behavior, including music and language. Temporal processing in a wide variety of contexts shows two primary features: time estimates exhibit a shift toward the mean (the bias property) and are more variable for longer intervals (the scalar property). We implemented a recurrent neural network that includes long-lasting synaptic currents, which cannot only produce interval-selective responses but also follow the bias and scalar properties. Interestingly, only physiological values of the time constants for paired-pulse facilitation and GABAb, as well as intermediate background activity within the network can reproduce the two key features of interval timing."
},
{
"pmid": "10377358",
"title": "Prospective coding for objects in primate prefrontal cortex.",
"abstract": "We examined neural activity in prefrontal (PF) cortex of monkeys performing a delayed paired associate task. Monkeys were cued with a sample object. Then, after a delay, a test object was presented. If the test object was the object associated with the sample during training (i.e., its target), they had to release a lever. Monkeys could bridge the delay by remembering the sample (a sensory-related code) and/or thinking ahead to the expected target (a prospective code). Examination of the monkeys' behavior suggested that they were relying on a prospective code. During and shortly after sample presentation, neural activity in the lateral PF cortex primarily reflected the sample. Toward the end of the delay, however, PF activity began to reflect the anticipated target, which indicated a prospective code. These results provide further confirmation that PF cortex does not simply buffer incoming visual inputs, but instead selectively processes information relevant to current behavioral demands, even when this information must be recalled from long-term memory."
},
{
"pmid": "9438963",
"title": "Scalar expectancy theory and peak-interval timing in humans.",
"abstract": "The properties of the internal clock, temporal memory, and decision processes used to time short durations were investigated. The peak-interval procedure was used to evaluate the timing of 8-, 12-, and 21-s intervals, and analyses were conducted on the mean response functions and on individual trials. A distractor task prevented counting, and visual feedback on accuracy and precision was provided after each trial. Mean response distributions were (a) centered at the appropriate real-time criteria, (b) highly symmetrical, and (c) scalar in their variability. Analysis of individual trials indicated more memory variability relative to response threshold variability. Taken together, these results demonstrate that humans show the same qualitative timing properties that other animals do, but with some quantitative differences."
},
{
"pmid": "19847635",
"title": "Alternative time representation in dopamine models.",
"abstract": "Dopaminergic neuron activity has been modeled during learning and appetitive behavior, most commonly using the temporal-difference (TD) algorithm. However, a proper representation of elapsed time and of the exact task is usually required for the model to work. Most models use timing elements such as delay-line representations of time that are not biologically realistic for intervals in the range of seconds. The interval-timing literature provides several alternatives. One of them is that timing could emerge from general network dynamics, instead of coming from a dedicated circuit. Here, we present a general rate-based learning model based on long short-term memory (LSTM) networks that learns a time representation when needed. Using a naïve network learning its environment in conjunction with TD, we reproduce dopamine activity in appetitive trace conditioning with a constant CS-US interval, including probe trials with unexpected delays. The proposed model learns a representation of the environment dynamics in an adaptive biologically plausible framework, without recourse to delay lines or other special-purpose circuits. Instead, the model predicts that the task-dependent representation of time is learned by experience, is encoded in ramp-like changes in single-neuron activity distributed across small neural networks, and reflects a temporal integration mechanism resulting from the inherent dynamics of recurrent loops within the network. The model also reproduces the known finding that trace conditioning is more difficult than delay conditioning and that the learned representation of the task can be highly dependent on the types of trials experienced during training. Finally, it suggests that the phasic dopaminergic signal could facilitate learning in the cortex."
},
{
"pmid": "10365959",
"title": "Neuronal correlates of parametric working memory in the prefrontal cortex.",
"abstract": "Humans and monkeys have similar abilities to discriminate the difference in frequency between two mechanical vibrations applied sequentially to the fingertips. A key component of this sensory task is that the second stimulus is compared with the trace left by the first (base) stimulus, which must involve working memory. Where and how is this trace held in the brain? This question was investigated by recording from single neurons in the prefrontal cortex of monkeys while they performed the somatosensory discrimination task. Here we describe neurons in the inferior convexity of the prefrontal cortex whose discharge rates varied, during the delay period between the two stimuli, as a monotonic function of the base stimulus frequency. We describe this as 'monotonic stimulus encoding', and we suggest that the result may generalize: monotonic stimulus encoding may be the basic representation of one-dimensional sensory stimulus quantities in working memory. Thus we predict that other behavioural tasks that require ordinal comparisons between scalar analogue stimuli would give rise to monotonic responses similar to those reported here."
},
{
"pmid": "11257908",
"title": "Multiple reward signals in the brain.",
"abstract": "The fundamental biological importance of rewards has created an increasing interest in the neuronal processing of reward information. The suggestion that the mechanisms underlying drug addiction might involve natural reward systems has also stimulated interest. This article focuses on recent neurophysiological studies in primates that have revealed that neurons in a limited number of brain structures carry specific signals about past and future rewards. This research provides the first step towards an understanding of how rewards influence behaviour before they are received and how the brain might use reward information to control learning and goal-directed behaviour."
},
{
"pmid": "21697374",
"title": "A model of interval timing by neural integration.",
"abstract": "We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior."
},
{
"pmid": "25490022",
"title": "Millisecond-scale motor encoding in a cortical vocal area.",
"abstract": "Studies of motor control have almost universally examined firing rates to investigate how the brain shapes behavior. In principle, however, neurons could encode information through the precise temporal patterning of their spike trains as well as (or instead of) through their firing rates. Although the importance of spike timing has been demonstrated in sensory systems, it is largely unknown whether timing differences in motor areas could affect behavior. We tested the hypothesis that significant information about trial-by-trial variations in behavior is represented by spike timing in the songbird vocal motor system. We found that neurons in motor cortex convey information via spike timing far more often than via spike rate and that the amount of information conveyed at the millisecond timescale greatly exceeds the information available from spike counts. These results demonstrate that information can be represented by spike timing in motor circuits and suggest that timing variations evoke differences in behavior."
},
{
"pmid": "8757133",
"title": "Reward expectancy in primate prefrontal neurons.",
"abstract": "The prefrontal cortex is important in the organization of goal-directed behaviour. When animals are trained to work for a particular goal or reward, reward 'expectancy' is processed by prefrontal neurons. Recent studies of the prefrontal cortex have concentrated on the role of working memory in the control of behaviour. In spatial delayed-response tasks, neurons in the prefrontal cortex show activity changes during the delay period between presentation of the cue and the reward, with some of the neurons being spatially specific (that is, responses vary with the cue position). Here I report that the delay activity in prefrontal neurons is dependent also on the particular reward received for the behavioural response, and to the way the reward is given. It seems that the prefrontal cortex may monitor the outcome of goal-directed behaviour."
},
{
"pmid": "29067130",
"title": "A decision-making model based on a spiking neural circuit and synaptic plasticity.",
"abstract": "To adapt to the environment and survive, most animals can control their behaviors by making decisions. The process of decision-making and responding according to cues in the environment is stable, sustainable, and learnable. Understanding how behaviors are regulated by neural circuits and the encoding and decoding mechanisms from stimuli to responses are important goals in neuroscience. From results observed in Drosophila experiments, the underlying decision-making process is discussed, and a neural circuit that implements a two-choice decision-making model is proposed to explain and reproduce the observations. Compared with previous two-choice decision making models, our model uses synaptic plasticity to explain changes in decision output given the same environment. Moreover, biological meanings of parameters of our decision-making model are discussed. In this paper, we explain at the micro-level (i.e., neurons and synapses) how observable decision-making behavior at the macro-level is acquired and achieved."
}
] |
Frontiers in Neuroinformatics | 31312131 | PMC6614282 | 10.3389/fninf.2019.00048 | Parkinson's Disease Detection Using Isosurfaces-Based Features and Convolutional Neural Networks | Computer aided diagnosis systems based on brain imaging are an important tool to assist in the diagnosis of Parkinson's disease, whose ultimate goal is detection through the automatic recognition of patterns that characterize the disease. In recent times, Convolutional Neural Networks (CNN) have proved to be remarkably useful for that task. The drawback, however, is that 3D brain images contain a huge amount of information that leads to complex CNN architectures. When these architectures become too complex, classification performance often degrades because of the limitations of the training algorithm and overfitting. Thus, this paper proposes the use of isosurfaces as a way to reduce this amount of data while keeping the most relevant information. These isosurfaces are then used to implement a classification system based on two of the most well-known CNN architectures, LeNet and AlexNet, which classifies DaTScan images with an average accuracy of 95.1% and AUC = 97%, values comparable to (and slightly better than) those obtained by most of the recently proposed systems. It can therefore be concluded that the computation of isosurfaces reduces the complexity of the inputs significantly, resulting in high classification accuracies with reduced computational burden. | 2. Related Work
The high spatial and color resolution provided by current neuroimaging systems has prompted them to become the main diagnostic tool for neurodegenerative disorders. Thus, DaTSCAN SPECT imaging is used routinely for the diagnosis of PD through the evaluation of deficits of dopamine transporters in the nigrostriatal pathway. However, visually assessing these images to reach a final diagnosis is, even for an expert clinician, a time-consuming and complicated task that requires taking many variables into account. Machine learning algorithms, which allow different types of inputs to be combined to produce a result, can potentially overcome this problem. Additionally, the vast amount of information contained in DaTSCAN images requires computer aided tools to be fully exploited, allowing complex, disease-related patterns to be found and the diagnostic accuracy to be increased. We review next the main computer-based techniques proposed in this framework. Two of the first works to analyze the possibilities of machine learning algorithms with DaTSCAN were Palumbo et al. (2010) and Towey et al. (2011). The former compared a probabilistic neural network (PNN) with a classification tree (CIT) to differentiate between PD and essential tremor. Striatal binding ratios for the caudate and putamina on three slices were used as image features. The latter used a Naïve-Bayes classifier with a PCA decomposition of the voxels in the striatal region. These were followed by a series of works where SVMs were used as the main classification tool, with linear or RBF kernels and different image features. Illán et al. (2012) and later Oliveira and Castelo-Branco (2015) used voxel-as-features; i.e., image voxel intensities were used directly as features. Segovia et al. (2012) used a Partial Least Squares (PLS) scheme to decompose DaT images into scores and loadings. Then, the scores with the highest Fisher Discriminant Ratios were used as features for the SVM. Khedher et al. (2015) also used PLS. Rojas et al.
(2013) proposed the use of 2D empirical mode decomposition to split DaTSCAN images into different intrinsic mode functions, accounting for different frequency subbands. The components were used to select features related to PD that clearly differentiate patients from normal controls (NC), allowing an easy visual inspection. Martínez-Murcia et al. (2014a) decomposed the DaTSCAN images into statistically independent components which revealed patterns associated with PD. Moreover, in this approach, image voxels were ranked by means of their statistical significance in class discrimination. A more recent approach also based on multivariate decomposition techniques is that of Ortiz et al. (2018), where functional principal component analysis is applied to 3D images. This is addressed by sampling the 3D images using fractal curves in order to transform the 3D DaTSCAN images into 1D signals, preserving the neighborhood relationship among voxels. Striatal binding ratios for both caudates and putamina were used in Prashanth et al. (2014), Palumbo et al. (2014), and Bhalchandra et al. (2015). Martínez-Murcia et al. (2014b) proposed the extraction of 3D texture-based features (Haralick texture features) to characterize the dopamine transporter concentration in the image. Finally, closing this group of classical approaches, Badoud et al. (2016) used univariate (voxel-wise) statistical parametric mapping and multivariate pattern recognition with linear discriminant classifiers to differentiate among different Parkinsonian syndromes. More recently, methods based on neural networks, especially deep learning-based methods, have paved the way to discovering complex patterns and, consequently, to outperforming the diagnostic accuracy obtained with classical statistical methodologies (Ortiz et al., 2016; Martinez-Murcia et al., 2017). The use of models containing stacks of layers composed of a large number of units that individually perform simple operations makes it possible to fit models with a very large number of parameters. Moreover, these massively parallelized architectures are able to discover very complex patterns in the data through a learning process formulated as an optimization problem. Zhang and Kagen (2017) propose a classifier based on a single-layer neural network and voxel-as-features from different slices. Martinez-Murcia et al. (2017) and Martinez-Murcia et al. (2018) propose the use of Convolutional Neural Networks (CNN) to discover patterns associated with PD. Increasing the accuracy requires deeper networks, but this increment also makes the network prone to overfitting and pushes the training algorithms to their performance limits. Thus, architectures combining more elaborate blocks, such as those in He et al. (2016), have also been proposed to effectively increase the number of layers. In this work, we describe a classifier based on the well-known CNNs LeNet-5 and AlexNet, where the image features used to train them are isosurfaces computed from the regions of interest. The computation of isosurfaces reduces the complexity of the inputs significantly, which results in high classification accuracies with reduced computational burden. | [
"26213851",
"27489771",
"26086379",
"29366762",
"17884682",
"23039635",
"27601096",
"26017442",
"26971941",
"24387526",
"30215285",
"11545704",
"25710187",
"30322338",
"25501084",
"20567820",
"28599112",
"25749984",
"28254511",
"29188397",
"21659911",
"24639916",
"27730415"
] | [
{
"pmid": "26213851",
"title": "Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning.",
"abstract": "Knowing the sequence specificities of DNA- and RNA-binding proteins is essential for developing models of the regulatory processes in biological systems and for identifying causal disease variants. Here we show that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery. Using a diverse array of experimental data and evaluation metrics, we find that deep learning outperforms other state-of-the-art methods, even when training on in vitro data and testing on in vivo data. We call this approach DeepBind and have built a stand-alone software tool that is fully automatic and handles millions of sequences per experiment. Specificities determined by DeepBind are readily visualized as a weighted ensemble of position weight matrices or as a 'mutation map' that indicates how variations affect binding within a specific sequence."
},
{
"pmid": "27489771",
"title": "Discriminating among degenerative parkinsonisms using advanced (123)I-ioflupane SPECT analyses.",
"abstract": "(123)I-ioflupane single photon emission computed tomography (SPECT) is a sensitive and well established imaging tool in Parkinson's disease (PD) and atypical parkinsonian syndromes (APS), yet a discrimination between PD and APS has been considered inconsistent at least based on visual inspection or simple region of interest analyses. We here reappraise this issue by applying advanced image analysis techniques to separate PD from the various APS. This study included 392 consecutive patients with degenerative parkinsonism undergoing (123)I-ioflupane SPECT at our institution over the last decade: 306 PD, 24 multiple system atrophy (MSA), 32 progressive supranuclear palsy (PSP) and 30 corticobasal degeneration (CBD) patients. Data analysis included voxel-wise univariate statistical parametric mapping and multivariate pattern recognition using linear discriminant classifiers. MSA and PSP showed less ioflupane uptake in the head of caudate nucleus relative to PD and CBD, yet there was no difference between MSA and PSP. CBD had higher uptake in both putamen relative to PD, MSA and PSP. Classification was significant for PD versus APS (AUC 0.69, p < 0.05) and between APS subtypes (MSA vs CBD AUC 0.80, p < 0.05; MSA vs PSP AUC 0.69 p < 0.05; CBD vs PSP AUC 0.69 p < 0.05). Both striatal and extra-striatal regions contain classification information, yet the combination of both regions does not significantly improve classification accuracy. PD, MSA, PSP and CBD have distinct patterns of dopaminergic depletion on (123)I-ioflupane SPECT. The high specificity of 84-90% for PD versus APS indicates that the classifier is particularly useful for confirming APS cases."
},
{
"pmid": "26086379",
"title": "Comparison between Different Intensity Normalization Methods in 123I-Ioflupane Imaging for the Automatic Detection of Parkinsonism.",
"abstract": "Intensity normalization is an important pre-processing step in the study and analysis of DaTSCAN SPECT imaging. As most automatic supervised image segmentation and classification methods base their assumptions regarding the intensity distributions on a standardized intensity range, intensity normalization takes on a very significant role. In this work, a comparison between different novel intensity normalization methods is presented. These proposed methodologies are based on Gaussian Mixture Model (GMM) image filtering and mean-squared error (MSE) optimization. The GMM-based image filtering method is achieved according to a probability threshold that removes the clusters whose likelihood are negligible in the non-specific regions. The MSE optimization method consists of a linear transformation that is obtained by minimizing the MSE in the non-specific region between the intensity normalized image and the template. The proposed intensity normalization methods are compared to: i) a standard approach based on the specific-to-non-specific binding ratio that is widely used, and ii) a linear approach based on the α-stable distribution. This comparison is performed on a DaTSCAN image database comprising analysis and classification stages for the development of a computer aided diagnosis (CAD) system for Parkinsonian syndrome (PS) detection. In addition, these proposed methods correct spatially varying artifacts that modulate the intensity of the images. Finally, using the leave-one-out cross-validation technique over these two approaches, the system achieves results up to a 92.91% of accuracy, 94.64% of sensitivity and 92.65 % of specificity, outperforming previous approaches based on a standard and a linear approach, which are used as a reference. The use of advanced intensity normalization techniques, such as the GMM-based image filtering and the MSE optimization improves the diagnosis of PS."
},
{
"pmid": "29366762",
"title": "The rise of deep learning in drug discovery.",
"abstract": "Over the past decade, deep learning has achieved remarkable success in various artificial intelligence research areas. Evolved from the previous research on artificial neural networks, this technology has shown superior performance to other machine learning algorithms in areas such as image and voice recognition, natural language processing, among others. The first wave of applications of deep learning in pharmaceutical research has emerged in recent years, and its utility has gone beyond bioactivity predictions and has shown promise in addressing diverse problems in drug discovery. Examples will be discussed covering bioactivity prediction, de novo molecular design, synthesis prediction and biological image analysis."
},
{
"pmid": "17884682",
"title": "Assessment of the progression of Parkinson's disease: a metabolic network approach.",
"abstract": "BACKGROUND\nClinical research into Parkinson's disease has focused increasingly on the development of interventions that slow the neurodegeneration underlying this disorder. These investigations have stimulated interest in finding objective biomarkers that show changes in the rate of disease progression with treatment. Through radiotracer-based imaging of nigrostriatal dopaminergic function, a specific class of biomarkers to monitor the progression of Parkinson's disease has been identified, and these biomarkers were used in the clinical trials of drugs with the potential to modify the course of the disease. However, in some of these studies there was discordance between the imaging outcome measures and blinded clinical ratings of disease severity. Research is underway to identify and validate alternative ways to image brain metabolism, through which the efficacy of new therapies for Parkinson's disease and related disorders can be assessed.\n\n\nRECENT DEVELOPMENTS\nDuring recent years, spatial covariance analysis has been used with (18)F-fluorodeoxyglucose PET to detect abnormal patterns of brain metabolism in patients with neurodegenerative disorders. Rapid, automated, voxel-based algorithms have been used with metabolic imaging to quantify the activity of disease-specific networks. This approach has helped to characterise the unique metabolic patterns associated with the motor and cognitive features of Parkinson's disease. The results of several studies have shown correction of abnormal motor, but not cognitive, network activity by treatment with dopaminergic therapy and deep brain stimulation. The authors of a longitudinal imaging study of early-stage Parkinson's disease reported substantial differences in the development of these metabolic networks over a follow-up of 4 years. WHERE NEXT?: Developments in network imaging have provided the basis for several new applications of metabolic imaging in the study of Parkinson's disease. A washout study is currently underway to determine the long-duration effects of dopaminergic therapy on the network activity related to Parkinson's disease, which will be useful to plan future trials of disease-modifying drugs. Network approaches are also being applied to the study of atypical parkinsonian syndromes. The characterisation of specific patterns associated with atypical parkinsonian syndromes and classic Parkinson's disease will be the basis for a fully automated imaging-based procedure for early differential diagnosis. Efforts are underway to quantify the networks related to Parkinson's disease with less invasive imaging methods. Assessments of network activity with perfusion-weighted MRI show excellent concordance with measurements done with established radiotracer techniques. This approach will ultimately enable the assessment of abnormal network activity in people who are genetically at risk of Parkinson's disease."
},
{
"pmid": "23039635",
"title": "Automatic assistance to Parkinson's disease diagnosis in DaTSCAN SPECT imaging.",
"abstract": "PURPOSE\nIn this work, an approach to computer aided diagnosis (CAD) system is proposed as a decision-making aid in Parkinsonian syndrome (PS) detection. This tool, intended for physicians, entails fully automatic preprocessing, normalization, and classification procedures for brain single-photon emission computed tomography images.\n\n\nMETHODS\nIoflupane[(123)I]FP-CIT images are used to provide in vivo information of the dopamine transporter density. These images are preprocessed using an automated template-based registration followed by two proposed approaches for intensity normalization. A support vector machine (SVM) is used and compared to other statistical classifiers in order to achieve an effective diagnosis using whole brain images in combination with voxel selection masks.\n\n\nRESULTS\nThe CAD system is evaluated using a database consisting of 208 DaTSCAN images (100 controls, 108 PS). SVM-based classification is the most efficient choice when masked brain images are used. The generalization performance is estimated to be 89.02 (90.41-87.62)% sensitivity and 93.21 (92.24-94.18)% specificity. The area under the curve can take values of 0.9681 (0.9641-0.9722) when the image intensity is normalized to a maximum value, as derived from the receiver operating characteristics curves.\n\n\nCONCLUSIONS\nThe present analysis allows to evaluate the impact of the design elements for the development of a CAD-system when all the information encoded in the scans is considered. In this way, the proposed CAD-system shows interesting properties for clinical use, such as being fast, automatic, and robust."
},
{
"pmid": "27601096",
"title": "Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition.",
"abstract": "Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have shown to be able to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike in all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "26971941",
"title": "A Spherical Brain Mapping of MR Images for the Detection of Alzheimer's Disease.",
"abstract": "Magnetic Resonance Imaging (MRI) is of fundamental importance in neuroscience, providing good contrast and resolution, as well as not being considered invasive. Despite the development of newer techniques involving radiopharmaceuticals, it is still a recommended tool in Alzheimer's Disease (AD) neurological practice to assess neurodegeneration, and recent research suggests that it could reveal changes in the brain even before the symptomatology appears. In this paper we propose a method that performs a Spherical Brain Mapping, using different measures to project the three-dimensional MR brain images onto two-dimensional maps revealing statistical characteristics of the tissue. The resulting maps could be assessed visually, but also perform a significant feature reduction that will allow further supervised or unsupervised processing, reducing the computational load while maintaining a large amount of the original information. We have tested our methodology against a MRI database comprising 180 AD affected patients and 180 normal controls, where some of the mappings have revealed as an optimum strategy for the automatic processing and characterization of AD patterns, achieving up to a 90.9% of accuracy, as well as significantly reducing the computational load. Additionally, our maps allow the visual analysis and interpretation of the images, which can be of great help in the diagnosis of this and other types of dementia."
},
{
"pmid": "24387526",
"title": "Parametrization of textural patterns in 123I-ioflupane imaging for the automatic detection of Parkinsonism.",
"abstract": "PURPOSE\nA novel approach to a computer aided diagnosis system for the Parkinson's disease is proposed. This tool is intended as a supporting tool for physicians, based on fully automated methods that lead to the classification of (123)I-ioflupane SPECT images.\n\n\nMETHODS\n(123)I-ioflupane images from three different databases are used to train the system. The images are intensity and spatially normalized, then subimages are extracted and a 3D gray-level co-occurrence matrix is computed over these subimages, allowing the characterization of the texture using Haralick texture features. Finally, different discrimination estimation methods are used to select a feature vector that can be used to train and test the classifier.\n\n\nRESULTS\nUsing the leave-one-out cross-validation technique over these three databases, the system achieves results up to a 97.4% of accuracy, and 99.1% of sensitivity, with positive likelihood ratios over 27.\n\n\nCONCLUSIONS\nThe system presents a robust feature extraction method that helps physicians in the diagnosis task by providing objective, operator-independent textural information about (123)I-ioflupane images, commonly used in the diagnosis of the Parkinson's disease. Textural features computation has been optimized by using a subimage selection algorithm, and the discrimination estimation methods used here makes the system feature-independent, allowing us to extend it to other databases and diseases."
},
{
"pmid": "30215285",
"title": "Convolutional Neural Networks for Neuroimaging in Parkinson's Disease: Is Preprocessing Needed?",
"abstract": "Spatial and intensity normalizations are nowadays a prerequisite for neuroimaging analysis. Influenced by voxel-wise and other univariate comparisons, where these corrections are key, they are commonly applied to any type of analysis and imaging modalities. Nuclear imaging modalities such as PET-FDG or FP-CIT SPECT, a common modality used in Parkinson's disease diagnosis, are especially dependent on intensity normalization. However, these steps are computationally expensive and furthermore, they may introduce deformations in the images, altering the information contained in them. Convolutional neural networks (CNNs), for their part, introduce position invariance to pattern recognition, and have been proven to classify objects regardless of their orientation, size, angle, etc. Therefore, a question arises: how well can CNNs account for spatial and intensity differences when analyzing nuclear brain imaging? Are spatial and intensity normalizations still needed? To answer this question, we have trained four different CNN models based on well-established architectures, using or not different spatial and intensity normalization preprocessings. The results show that a sufficiently complex model such as our three-dimensional version of the ALEXNET can effectively account for spatial differences, achieving a diagnosis accuracy of 94.1% with an area under the ROC curve of 0.984. The visualization of the differences via saliency maps shows that these models are correctly finding patterns that match those found in the literature, without the need of applying any complex spatial normalization procedure. However, the intensity normalization - and its type - is revealed as very influential in the results and accuracy of the trained model, and therefore must be well accounted."
},
{
"pmid": "11545704",
"title": "A probabilistic atlas and reference system for the human brain: International Consortium for Brain Mapping (ICBM).",
"abstract": "Motivated by the vast amount of information that is rapidly accumulating about the human brain in digital form, we embarked upon a program in 1992 to develop a four-dimensional probabilistic atlas and reference system for the human brain. Through an International Consortium for Brain Mapping (ICBM) a dataset is being collected that includes 7000 subjects between the ages of eighteen and ninety years and including 342 mono- and dizygotic twins. Data on each subject includes detailed demographic, clinical, behavioural and imaging information. DNA has been collected for genotyping from 5800 subjects. A component of the programme uses post-mortem tissue to determine the probabilistic distribution of microscopic cyto- and chemoarchitectural regions in the human brain. This, combined with macroscopic information about structure and function derived from subjects in vivo, provides the first large scale opportunity to gain meaningful insights into the concordance or discordance in micro- and macroscopic structure and function. The philosophy, strategy, algorithm development, data acquisition techniques and validation methods are described in this report along with database structures. Examples of results are described for the normal adult human brain as well as examples in patients with Alzheimer's disease and multiple sclerosis. The ability to quantify the variance of the human brain as a function of age in a large population of subjects for whom data is also available about their genetic composition and behaviour will allow for the first assessment of cerebral genotype-phenotype-behavioural correlations in humans to take place in a population this large. This approach and its application should provide new insights and opportunities for investigators interested in basic neuroscience, clinical diagnostics and the evaluation of neuropsychiatric disorders in patients."
},
{
"pmid": "25710187",
"title": "Computer-aided diagnosis of Parkinson's disease based on [(123)I]FP-CIT SPECT binding potential images, using the voxels-as-features approach and support vector machines.",
"abstract": "OBJECTIVE\nThe aim of the present study was to develop a fully-automated computational solution for computer-aided diagnosis in Parkinson syndrome based on [(123)I]FP-CIT single photon emission computed tomography (SPECT) images.\n\n\nAPPROACH\nA dataset of 654 [(123)I]FP-CIT SPECT brain images from the Parkinson's Progression Markers Initiative were used. Of these, 445 images were of patients with Parkinson's disease at an early stage and the remainder formed a control group. The images were pre-processed using automated template-based registration followed by the computation of the binding potential at a voxel level. Then, the binding potential images were used for classification, based on the voxel-as-feature approach and using the support vector machines paradigm.\n\n\nMAIN RESULTS\nThe obtained estimated classification accuracy was 97.86%, the sensitivity was 97.75% and the specificity 98.09%.\n\n\nSIGNIFICANCE\nThe achieved classification accuracy was very high and, in fact, higher than accuracies found in previous studies reported in the literature. In addition, results were obtained on a large dataset of early Parkinson's disease subjects. In summation, the information provided by the developed computational solution potentially supports clinical decision-making in nuclear medicine, using important additional information beyond the commonly used uptake ratios and respective statistical comparisons. (ClinicalTrials.gov Identifier: NCT01141023)."
},
{
"pmid": "30322338",
"title": "Empirical Functional PCA for 3D Image Feature Extraction Through Fractal Sampling.",
"abstract": "Medical image classification is currently a challenging task that can be used to aid the diagnosis of different brain diseases. Thus, exploratory and discriminative analysis techniques aiming to obtain representative features from the images play a decisive role in the design of effective Computer Aided Diagnosis (CAD) systems, which is especially important in the early diagnosis of dementia. In this work, we present a technique that allows using specific time series analysis techniques with 3D images. This is achieved by sampling the image using a fractal-based method which preserves the spatial relationship among voxels. In addition, a method called Empirical functional PCA (EfPCA) is presented, which combines Empirical Mode Decomposition (EMD) with functional PCA to express an image in the space spanned by a basis of empirical functions, instead of using components computed by a predefined basis as in Fourier or Wavelet analysis. The devised technique has been used to classify images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Parkinson Progression Markers Initiative (PPMI), achieving accuracies up to 93% and 92% differential diagnosis tasks (AD versus controls and PD versus Controls, respectively). The results obtained validate the method, proving that the information retrieved by our methodology is significantly linked to the diseases."
},
{
"pmid": "25501084",
"title": "Diagnostic accuracy of Parkinson disease by support vector machine (SVM) analysis of 123I-FP-CIT brain SPECT data: implications of putaminal findings and age.",
"abstract": "Brain single-photon-emission-computerized tomography (SPECT) with I-ioflupane (I-FP-CIT) is useful to diagnose Parkinson disease (PD). To investigate the diagnostic performance of I-FP-CIT brain SPECT with semiquantitative analysis by Basal Ganglia V2 software (BasGan), we evaluated semiquantitative data of patients with suspect of PD by a support vector machine classifier (SVM), a powerful supervised classification algorithm.I-FP-CIT SPECT with BasGan analysis was performed in 90 patients with suspect of PD showing mild symptoms (bradykinesia-rigidity and mild tremor). PD was confirmed in 56 patients, 34 resulted non-PD (essential tremor and drug-induced Parkinsonism). A clinical follow-up of at least 6 months confirmed diagnosis. To investigate BasGan diagnostic performance we trained SVM classification models featuring different descriptors using both a \"leave-one-out\" and a \"five-fold\" method. In the first study we used as class descriptors the semiquantitative radiopharmaceutical uptake values in the left (L) and right (R) putamen (P) and in the L and R caudate nucleus (C) for a total of 4 descriptors (CL, CR, PL, PR). In the second study each patient was described only by CL and CR, while in the third by PL and PR descriptors. Age was added as a further descriptor to evaluate its influence in the classification performance.I-FP-CIT SPECT with BasGan analysis reached a classification performance higher than 73.9% in all the models. Considering the \"Leave-one-out\" method, PL and PR were better predictors (accuracy of 91% for all patients) than CL and CR descriptors; using PL, PR, CL, and CR diagnostic accuracy was similar to that of PL and PR descriptors in the different groups. Adding age as a further descriptor accuracy improved in all the models. The best results were obtained by using all the 5 descriptors both in PD and non-PD subjects (CR and CL + PR and PL + age = 96.4% and 94.1%, respectively). Similar results were observed for the \"five-fold\" method. I-FP-CIT SPECT with BasGan analysis using SVM classifier was able to diagnose PD. Putamen was the most discriminative descriptor for PD and the patient age influenced the classification accuracy."
},
{
"pmid": "20567820",
"title": "Comparison of two neural network classifiers in the differential diagnosis of essential tremor and Parkinson's disease by (123)I-FP-CIT brain SPECT.",
"abstract": "PURPOSE\nTo contribute to the differentiation of Parkinson's disease (PD) and essential tremor (ET), we compared two different artificial neural network classifiers using (123)I-FP-CIT SPECT data, a probabilistic neural network (PNN) and a classification tree (ClT).\n\n\nMETHODS\n(123)I-FP-CIT brain SPECT with semiquantitative analysis was performed in 216 patients: 89 with ET, 64 with PD with a Hoehn and Yahr (H&Y) score of ≤2 (early PD), and 63 with PD with a H&Y score of ≥2.5 (advanced PD). For each of the 1,000 experiments carried out, 108 patients were randomly selected as the PNN training set, while the remaining 108 validated the trained PNN, and the percentage of the validation data correctly classified in the three groups of patients was computed. The expected performance of an \"average performance PNN\" was evaluated. In analogy, for ClT 1,000 classification trees with similar structures were generated.\n\n\nRESULTS\nFor PNN, the probability of correct classification in patients with early PD was 81.9±8.1% (mean±SD), in patients with advanced PD 78.9±8.1%, and in ET patients 96.6±2.6%. For ClT, the first decision rule gave a mean value for the putamen of 5.99, which resulted in a probability of correct classification of 93.5±3.4%. This means that patients with putamen values >5.99 were classified as having ET, while patients with putamen values <5.99 were classified as having PD. Furthermore, if the caudate nucleus value was higher than 6.97 patients were classified as having early PD (probability 69.8±5.3%), and if the value was <6.97 patients were classified as having advanced PD (probability 88.1%±8.8%).\n\n\nCONCLUSION\nThese results confirm that PNN achieved valid classification results. Furthermore, ClT provided reliable cut-off values able to differentiate ET and PD of different severities."
},
{
"pmid": "28599112",
"title": "Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review.",
"abstract": "Convolutional neural networks (CNNs) have been applied to visual tasks since the late 1980s. However, despite a few scattered applications, they were dormant until the mid-2000s when developments in computing power and the advent of large amounts of labeled data, supplemented by improved algorithms, contributed to their advancement and brought them to the forefront of a neural network renaissance that has seen rapid progression since 2012. In this review, which focuses on the application of CNNs to image classification tasks, we cover their development, from their predecessors up to recent state-of-the-art deep learning systems. Along the way, we analyze (1) their early successes, (2) their role in the deep learning renaissance, (3) selected symbolic works that have contributed to their recent popularity, and (4) several improvement attempts by reviewing contributions and challenges of over 300 publications. We also introduce some of their current trends and remaining challenges."
},
{
"pmid": "25749984",
"title": "Building a FP-CIT SPECT Brain Template Using a Posterization Approach.",
"abstract": "Spatial affine registration of brain images to a common template is usually performed as a preprocessing step in intersubject and intrasubject comparison studies, computer-aided diagnosis, region of interest selection and brain segmentation in tomography. Nevertheless, it is not straightforward to build a template of [123I]FP-CIT SPECT brain images because they exhibit very low intensity values outside the striatum. In this work, we present a procedure to automatically build a [123I]FP-CIT SPECT template in the standard Montreal Neurological Institute (MNI) space. The proposed methodology consists of a head voxel selection using the Otsu's method, followed by a posterization of the source images to three different levels: background, head, and striatum. Analogously, we also design a posterized version of a brain image in the MNI space; subsequently, we perform a spatial affine registration of the posterized source images to this image. The intensity of the transformed images is normalized linearly, assuming that the histogram of the intensity values follows an alpha-stable distribution. Lastly, we build the [123I]FP-CIT SPECT template by means of the transformed and normalized images. The proposed methodology is a fully automatic procedure that has been shown to work accurately even when a high-resolution magnetic resonance image for each subject is not available."
},
{
"pmid": "28254511",
"title": "Voxel-based logistic analysis of PPMI control and Parkinson's disease DaTscans.",
"abstract": "A comprehensive analysis of the Parkinson's Progression Markers Initiative (PPMI) Dopamine Transporter Single Photon Emission Computed Tomography (DaTscan) images is carried out using a voxel-based logistic lasso model. The model reveals that sub-regional voxels in the caudate, the putamen, as well as in the globus pallidus are informative for classifying images into control and PD classes. Further, a new technique called logistic component analysis is developed. This technique reveals that intra-population differences in dopamine transporter concentration and imperfect normalization are significant factors influencing logistic analysis. The interactions with handedness, sex, and age are also evaluated."
},
{
"pmid": "29188397",
"title": "Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?",
"abstract": "BACKGROUND\nSemi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with that of a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features: Voxel intensities Principal components of image voxel intensities Striatal binding radios from the putamen and caudate. Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: Minimum of age-matched controls Mean minus 1/1.5/2 standard deviations from age-matched controls Linear regression of normal patient data against age (minus 1/1.5/2 standard errors) Selection of the optimum operating point on the receiver operator characteristic curve from normal and abnormal training data Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times.\n\n\nRESULTS\nThe mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 to 0.92 and 0.95 to 0.97 for local and PPMI data respectively.\n\n\nCONCLUSIONS\nClassification performance was lower for the local database than the research database for both semi-quantitative and machine learning algorithms. However, for both databases, the machine learning methods generated equal or higher mean accuracies (with lower variance) than any of the semi-quantification approaches. The gain in performance from using machine learning algorithms as compared to semi-quantification was relatively small and may be insufficient, when considered in isolation, to offer significant advantages in the clinical context."
},
{
"pmid": "21659911",
"title": "Automatic classification of 123I-FP-CIT (DaTSCAN) SPECT images.",
"abstract": "INTRODUCTION\nWe present a method of automatic classification of I-fluoropropyl-carbomethoxy-3β-4-iodophenyltropane (FP-CIT) images. This technique uses singular value decomposition (SVD) to reduce a training set of patient image data into vectors in feature space (D space). The automatic classification techniques use the distribution of the training data in D space to define classification boundaries. Subsequent patients can be mapped into D space, and their classification can be automatically given.\n\n\nMETHODS\nThe technique has been tested using 116 patients for whom the diagnosis of either Parkinsonian syndrome or non-Parkinsonian syndrome has been confirmed from post I-FP-CIT imaging follow-up. The first three components were used to define D space. Two automatic classification tools were used, naïve Bayes (NB) and group prototype. A leave-one-out cross-validation was performed to repeatedly train and test the automatic classification system. Four commercially available systems for the classification were tested using the same clinical database.\n\n\nRESULTS\nThe proposed technique combining SVD and NB correctly classified 110 of 116 patients (94.8%), with a sensitivity of 93.7% and specificity of 97.3%. The combination of SVD and an automatic classifier performed as well or better than the commercially available systems.\n\n\nCONCLUSION\nThe combination of data reduction by SVD with automatic classifiers such as NB can provide good diagnostic accuracy and may be a useful adjunct to clinical reporting."
},
{
"pmid": "24639916",
"title": "Magnetic Resonance Imaging (MRI) in Parkinson's Disease.",
"abstract": "Recent developments in brain imaging methods are on the verge of changing the evaluation of people with Parkinson's disease (PD). This includes an assortment of techniques ranging from diffusion tensor imaging (DTI) to iron-sensitive methods such as T2*, as well as adiabatic methods R1ρ and R2ρ, resting-state functional MRI, and magnetic resonance spectroscopy (MRS). Using a multi-modality approach that ascertains different aspects of the pathophysiology or pathology of PD, it may be possible to better characterize disease phenotypes as well as provide a surrogate of disease and a potential means to track disease progression."
},
{
"pmid": "27730415",
"title": "Machine Learning Interface for Medical Image Analysis.",
"abstract": "TensorFlow is a second-generation open-source machine learning software library with a built-in framework for implementing neural networks in wide variety of perceptual tasks. Although TensorFlow usage is well established with computer vision datasets, the TensorFlow interface with DICOM formats for medical imaging remains to be established. Our goal is to extend the TensorFlow API to accept raw DICOM images as input; 1513 DaTscan DICOM images were obtained from the Parkinson's Progression Markers Initiative (PPMI) database. DICOM pixel intensities were extracted and shaped into tensors, or n-dimensional arrays, to populate the training, validation, and test input datasets for machine learning. A simple neural network was constructed in TensorFlow to classify images into normal or Parkinson's disease groups. Training was executed over 1000 iterations for each cross-validation set. The gradient descent optimization and Adagrad optimization algorithms were used to minimize cross-entropy between the predicted and ground-truth labels. Cross-validation was performed ten times to produce a mean accuracy of 0.938 ± 0.047 (95 % CI 0.908-0.967). The mean sensitivity was 0.974 ± 0.043 (95 % CI 0.947-1.00) and mean specificity was 0.822 ± 0.207 (95 % CI 0.694-0.950). We extended the TensorFlow API to enable DICOM compatibility in the context of DaTscan image analysis. We implemented a neural network classifier that produces diagnostic accuracies on par with excellent results from previous machine learning models. These results indicate the potential role of TensorFlow as a useful adjunct diagnostic tool in the clinical setting."
}
] |
Frontiers in Neuroscience | 31333397 | PMC6615473 | 10.3389/fnins.2019.00650 | Constructing an Associative Memory System Using Spiking Neural Network | Development of computer science has led to the blooming of artificial intelligence (AI), and neural networks are the core of AI research. Although mainstream neural networks have done well in the fields of image processing and speech recognition, they do not perform well in models aimed at understanding contextual information. In our opinion, the reason for this is that the essence of building a neural network through parameter training is to fit the data to the statistical law through parameter training. Since the neural network built using this approach does not possess memory ability, it cannot reflect the relationship between data with respect to the causality. Biological memory is fundamentally different from the current mainstream digital memory in terms of the storage method. The information stored in digital memory is converted to binary code and written in separate storage units. This physical isolation destroys the correlation of information. Therefore, the information stored in digital memory does not have the recall or association functions of biological memory which can present causality. In this paper, we present the results of our preliminary effort at constructing an associative memory system based on a spiking neural network. We broke the neural network building process into two phases: the Structure Formation Phase and the Parameter Training Phase. The Structure Formation Phase applies a learning method based on Hebb's rule to provoke neurons in the memory layer growing new synapses to connect to neighbor neurons as a response to the specific input spiking sequences fed to the neural network. The aim of this phase is to train the neural network to memorize the specific input spiking sequences. During the Parameter Training Phase, STDP and reinforcement learning are employed to optimize the weight of synapses and thus to find a way to let the neural network recall the memorized specific input spiking sequences. The results show that our memory neural network could memorize different targets and could recall the images it had memorized. | 2. Related WorksNeural network construction has a long history, and many algorithms have been proposed (Śmieja, 1993; Fiesler, 1994; Quinlan, 1998; Perez-Uribe, 1999).As the second generation of ANNs, DNNs have many advantages. However, they rely heavily on data for training. With the construction of DNN becoming increasingly complex and powerful, the training process requires an increasing number of computations, which has become a great challenge. Each session of training becomes increasingly time and resource consuming, which may become a bottleneck for DNNs in the near future. Now, an increasing number of researchers are turning their attention to SNNs.In 2002, Bohte et al. (2000) derived the first supervised training algorithm for SNNs, called SpikeProp, which is an adaptation of the gradient-descent-based error-back-propagation method. SpikeProp overcame the problems inherent to SNNs using a gradient-descent approach by allowing each neuron to fire only once (Wade et al., 2010). In 2010, Wade et al. presented a synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs), which merges the Bienenstock-Cooper-Munro (BCM) learning rule with spike timing dependent plasticity (STDP) (Wade et al., 2010).In 2013, Kasabov et al. 
(2013) introduced a new model called deSNN, which utilizes rank-order learning and Spike Driven Synaptic Plasticity (SDSP) spike-time learning in unsupervised, supervised, or semi-supervised modes. In 2017, they presented a methodology for dynamic learning, visualization, and classification of functional magnetic resonance imaging (fMRI) as spatiotemporal brain data (Kasabov et al., 2016). The method they presented is based on an evolving spatiotemporal data machine of evolving spiking neural networks (SNNs) exemplified by the NeuCube architecture (Kasabov, 2014), which adopted both unsupervised and supervised learning in different phases. In 2019, He et al. (2019) proposed a bionic way to implement artificial neural networks through construction rather than training and learning. The hierarchy of the neural network is designed according to an analysis of the required functionality, and then module design is carried out to form each hierarchy. The results show that the bionic artificial neural network built through their method could work as a bionic compound eye, which can achieve the detection of an object and its movement, and the results are better on some properties compared with the Drosophila's biological compound eyes. Some studies have already attempted to design neural networks that behave similarly to a memory system. Lecun et al. (2015) proposed RNNs for time-domain sequence data; RNNs use a special network structure to address the aforementioned issue, but the complexity of their structure also leads to many limitations. Hochreiter and Schmidhuber (1997) presented the long short-term memory neural network, which is a variant of RNNs. This neural network inherits the excellent memory ability of RNNs with regard to time series and overcomes the limitation of RNNs, that is, the difficulty of learning and preserving long-term information. Moreover, it has displayed remarkable performance in the fields of natural language processing and speech recognition. However, the efficiency and scalability of long short-term memory are poor. Hopfield (1988) established the Hopfield network, which is a recursive network computing model for simulating a biological neural system. The Hopfield network can simulate the memory and learning behavior of the brain. The successful application of this network to the traveling salesman problem shows the potential computing ability of this neural computing model for NP-class problems. However, the network capacity of the Hopfield network model is determined by the number of neurons and connections within a given network; thus, the number of patterns that the network can remember is limited. Also, since the patterns that the network uses for training (called retrieval states) become attractors of the system, repeated updates eventually lead to convergence to one of the retrieval states. Thus, the network sometimes converges to spurious patterns (different from the training patterns). And when the input patterns are similar, the network cannot always recall the correct memorized pattern, which means that the fault tolerance is affected by the relationship between the input patterns. | [
"23787338",
"30794587",
"9377276",
"23340243",
"24508754",
"26017442",
"12662798",
"25462637",
"20876015"
] | [
{
"pmid": "23787338",
"title": "Representation learning: a review and new perspectives.",
"abstract": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning."
},
{
"pmid": "30794587",
"title": "Implementing artificial neural networks through bionic construction.",
"abstract": "It is evident through biology research that, biological neural network could be implemented through two means: by congenital heredity, or by posteriority learning. However, traditionally, artificial neural network, especially the Deep learning Neural Networks (DNNs) are implemented only through exhaustive training and learning. Fixed structure is built, and then parameters are trained through huge amount of data. In this way, there are a lot of redundancies in the implemented artificial neural network. This redundancy not only requires more effort to train the network, but also costs more computing resources when used. In this paper, we proposed a bionic way to implement artificial neural network through construction rather than training and learning. The hierarchy of the neural network is designed according to analysis of the required functionality, and then module design is carried out to form each hierarchy. We choose the Drosophila's visual neural network as a test case to verify our method's validation. The results show that the bionic artificial neural network built through our method could work as a bionic compound eye, which can achieve the detection of the object and their movement, and the results are better on some properties, compared with the Drosophila's biological compound eyes."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "23340243",
"title": "Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition.",
"abstract": "On-line learning and recognition of spatio- and spectro-temporal data (SSTD) is a very challenging task and an important one for the future development of autonomous machine learning systems with broad applications. Models based on spiking neural networks (SNN) have already proved their potential in capturing spatial and temporal data. One class of them, the evolving SNN (eSNN), uses a one-pass rank-order learning mechanism and a strategy to evolve a new spiking neuron and new connections to learn new patterns from incoming data. So far these networks have been mainly used for fast image and speech frame-based recognition. Alternative spike-time learning methods, such as Spike-Timing Dependent Plasticity (STDP) and its variant Spike Driven Synaptic Plasticity (SDSP), can also be used to learn spatio-temporal representations, but they usually require many iterations in an unsupervised or semi-supervised mode of learning. This paper introduces a new class of eSNN, dynamic eSNN, that utilise both rank-order learning and dynamic synapses to learn SSTD in a fast, on-line mode. The paper also introduces a new model called deSNN, that utilises rank-order learning and SDSP spike-time learning in unsupervised, supervised, or semi-supervised modes. The SDSP learning is used to evolve dynamically the network changing connection weights that capture spatio-temporal spike data clusters both during training and during recall. The new deSNN model is first illustrated on simple examples and then applied on two case study applications: (1) moving object recognition using address-event representation (AER) with data collected using a silicon retina device; (2) EEG SSTD recognition for brain-computer interfaces. The deSNN models resulted in a superior performance in terms of accuracy and speed when compared with other SNN models that use either rank-order or STDP learning. The reason is that the deSNN makes use of both the information contained in the order of the first input spikes (which information is explicitly present in input data streams and would be crucial to consider in some tasks) and of the information contained in the timing of the following spikes that is learned by the dynamic synapses as a whole spatio-temporal pattern."
},
{
"pmid": "24508754",
"title": "NeuCube: a spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data.",
"abstract": "The brain functions as a spatio-temporal information processing machine. Spatio- and spectro-temporal brain data (STBD) are the most commonly collected data for measuring brain response to external stimuli. An enormous amount of such data has been already collected, including brain structural and functional data under different conditions, molecular and genetic data, in an attempt to make a progress in medicine, health, cognitive science, engineering, education, neuro-economics, Brain-Computer Interfaces (BCI), and games. Yet, there is no unifying computational framework to deal with all these types of data in order to better understand this data and the processes that generated it. Standard machine learning techniques only partially succeeded and they were not designed in the first instance to deal with such complex data. Therefore, there is a need for a new paradigm to deal with STBD. This paper reviews some methods of spiking neural networks (SNN) and argues that SNN are suitable for the creation of a unifying computational framework for learning and understanding of various STBD, such as EEG, fMRI, genetic, DTI, MEG, and NIRS, in their integration and interaction. One of the reasons is that SNN use the same computational principle that generates STBD, namely spiking information processing. This paper introduces a new SNN architecture, called NeuCube, for the creation of concrete models to map, learn and understand STBD. A NeuCube model is based on a 3D evolving SNN that is an approximate map of structural and functional areas of interest of the brain related to the modeling STBD. Gene information is included optionally in the form of gene regulatory networks (GRN) if this is relevant to the problem and the data. A NeuCube model learns from STBD and creates connections between clusters of neurons that manifest chains (trajectories) of neuronal activity. Once learning is applied, a NeuCube model can reproduce these trajectories, even if only part of the input STBD or the stimuli data is presented, thus acting as an associative memory. The NeuCube framework can be used not only to discover functional pathways from data, but also as a predictive system of brain activities, to predict and possibly, prevent certain events. Analysis of the internal structure of a model after training can reveal important spatio-temporal relationships 'hidden' in the data. NeuCube will allow the integration in one model of various brain data, information and knowledge, related to a single subject (personalized modeling) or to a population of subjects. The use of NeuCube for classification of STBD is illustrated in a case study problem of EEG data. NeuCube models result in a better accuracy of STBD classification than standard machine learning techniques. They are robust to noise (so typical in brain data) and facilitate a better interpretation of the results and understanding of the STBD and the brain conditions under which data was collected. Future directions for the use of SNN for STBD are discussed."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "12662798",
"title": "Structural change and development in real and artificial neural networks.",
"abstract": "Two related but different fields are reviewed. Initially some basic facts about developing real brains are set out and then work on dynamic neural networks is described. A dynamic neural network is defined as any artificial neural network that automatically changes its structure through exposure to input stimuli. Various models are described and evaluated and the functional correlates of both regressive and progressive structural changes are discussed. The paper concludes that, if future modelling work is to be set within a more neurally-plausible framework, then it would be fruitful to examine networks in which the connectivity between extant units is progressively embellished."
},
{
"pmid": "25462637",
"title": "Deep learning in neural networks: an overview.",
"abstract": "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks."
},
{
"pmid": "20876015",
"title": "SWAT: a spiking neural network training algorithm for classification problems.",
"abstract": "This paper presents a synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs). SWAT merges the Bienenstock-Cooper-Munro (BCM) learning rule with spike timing dependent plasticity (STDP). The STDP/BCM rule yields a unimodal weight distribution where the height of the plasticity window associated with STDP is modulated causing stability after a period of training. The SNN uses a single training neuron in the training phase where data associated with all classes is passed to this neuron. The rule then maps weights to the classifying output neurons to reflect similarities in the data across the classes. The SNN also includes both excitatory and inhibitory facilitating synapses which create a frequency routing capability allowing the information presented to the network to be routed to different hidden layer neurons. A variable neuron threshold level simulates the refractory period. SWAT is initially benchmarked against the nonlinearly separable Iris and Wisconsin Breast Cancer datasets. Results presented show that the proposed training algorithm exhibits a convergence accuracy of 95.5% and 96.2% for the Iris and Wisconsin training sets, respectively, and 95.3% and 96.7% for the testing sets, noise experiments show that SWAT has a good generalization capability. SWAT is also benchmarked using an isolated digit automatic speech recognition (ASR) system where a subset of the TI46 speech corpus is used. Results show that with SWAT as the classifier, the ASR system provides an accuracy of 98.875% for training and 95.25% for testing."
}
] |
Scientific Reports | 31292508 | PMC6620331 | 10.1038/s41598-019-46511-2 | Label propagation method based on bi-objective optimization for ambiguous community detection in large networks | Community detection is of great significance because it serves as a basis for network research and has been widely applied in real-world scenarios. It has been proven that label propagation is a successful strategy for community detection in large-scale networks and that the local clustering coefficient can measure the degree to which local nodes tend to cluster together. In this paper, we try to optimize two objectives related to the local clustering coefficient to detect community structure. To avoid the tendency to merge too many nodes into one large community, we add some constraints on the objectives. Through experiments and comparison, we select a suitable strength for one constraint. Finally, we merge the two objectives with linear weighting into a hybrid objective and use the hybrid objective to guide the label update in our proposed label propagation algorithm. We perform extensive experiments on both artificial and real-world networks. Experimental results demonstrate the superiority of our algorithm in both modularity and speed, especially when the community structure is ambiguous. | Related works
Local clustering coefficient
In the unweighted undirected graph, an open triplet consists of three nodes that are connected by two edges and a closed triplet (i.e., triangle) consists of three nodes connected to each other29. The number of triangles on the edge eij connecting node i and node j is given as:
$$\tau_{ij}=|\Phi(i)\cap \Phi(j)|, \tag{1}$$
where Φ(i) is the set of nodes immediately connected to node i. The number of triangles on node i is given as:
$$t_{i}=\frac{1}{2}\sum_{j\in \Phi(i)}\tau_{ij}. \tag{2}$$
The local clustering coefficient of one node is defined based on the triplet and measures the degree to which the node and its neighbors tend to cluster together29. The size of the set Φ(i) is given as ki, that is the degree of node i. The local clustering coefficient Ci of node i is defined as:
$$C_{i}=\frac{t_{i}}{k_{i}\cdot (k_{i}-1)/2}, \tag{3}$$
where ti is the number of triangles on node i and ki(ki − 1)/2 is the number of open triplets on node i.
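As a rough illustration of Eqs. (1)–(3) (not the authors' code), the short Python sketch below computes the per-edge triangle counts τij, the per-node triangle counts ti, and the local clustering coefficients Ci for a small graph stored as a dictionary of neighbor sets; the function name local_clustering and the toy graph are our own illustrative choices.

def local_clustering(adj):
    # adj maps each node to the set of its neighbours (undirected, no self-loops)
    # tau[(i, j)]: number of triangles on edge (i, j), Eq. (1)
    tau = {}
    for i in adj:
        for j in adj[i]:
            if i < j:  # count each undirected edge once (nodes assumed orderable)
                tau[(i, j)] = len(adj[i] & adj[j])
    # t[i]: number of triangles on node i, Eq. (2)
    t = {i: sum(len(adj[i] & adj[j]) for j in adj[i]) // 2 for i in adj}
    # C[i]: local clustering coefficient, Eq. (3); taken as 0 when the degree is below 2
    C = {}
    for i in adj:
        k = len(adj[i])
        C[i] = 2.0 * t[i] / (k * (k - 1)) if k > 1 else 0.0
    return tau, t, C

# toy example: one triangle {0, 1, 2} plus a pendant node 3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
tau, t, C = local_clustering(adj)
print(tau)  # {(0, 1): 1, (0, 2): 1, (1, 2): 1, (2, 3): 0}
print(t)    # {0: 1, 1: 1, 2: 1, 3: 0}
print(C)    # {0: 1.0, 1: 1.0, 2: 0.333..., 3: 0.0}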
Evaluation for community partitions
A graph can be represented by its adjacency matrix A, in which element Aij is one when node i is connected to node j, and zero when not connected. The modularity compares the number of edges between nodes in the same community to the expected value in a null model8 and is formulated as:
$$Q=\frac{1}{2m}\sum_{i=1}^{n}\sum_{j=1}^{n}\left(A_{ij}-\frac{k_{i}k_{j}}{2m}\right)\delta(l(i),l(j)), \tag{4}$$
where m is the total number of edges, n is the total number of nodes, l(*) is the community of node *, and δ is the Kronecker delta. A higher modularity indicates a better community partition, and the typical range of modularity is [0.3, 0.7]. Though modularity optimization methods suffer from the resolution limit30, modularity is still a good metric for evaluating the quality of community partitions.
Normalized Mutual Information (NMI) is one of the widely used metrics that evaluate the quality of community partitions31. NMI can be used to compare a given partition with the ground-truth community partition. The closer the NMI is to one, the more similar the two partitions are.
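To make the two evaluation metrics concrete, here is a small, non-authoritative Python sketch that evaluates Eq. (4) directly and, assuming scikit-learn is available, computes NMI against a ground-truth labelling; the function name modularity and the toy graph are our own choices.

from sklearn.metrics import normalized_mutual_info_score  # assumed available for NMI

def modularity(adj, labels):
    # Eq. (4): adj maps node -> set of neighbours, labels maps node -> community label
    m = sum(len(nbrs) for nbrs in adj.values()) / 2.0   # total number of edges
    deg = {v: len(adj[v]) for v in adj}
    q = 0.0
    for i in adj:
        for j in adj:
            if labels[i] == labels[j]:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - deg[i] * deg[j] / (2.0 * m)
    return q / (2.0 * m)

# two triangles joined by a single edge
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(round(modularity(adj, labels), 3))              # 0.357

truth = [0, 0, 0, 1, 1, 1]
found = [labels[v] for v in sorted(adj)]
print(normalized_mutual_info_score(truth, found))     # 1.0 for a perfect match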
Label propagation
In general, label propagation algorithms initialize every node with a unique label and let the labels propagate through the network; that is, every node repeatedly updates its own label based on specific rules. Finally, nodes having the same label compose one community.
In the LPA, a node selects the most frequent label among its neighbors as its new label25, and the rule can be expressed as:
$$l'(v)=\mathop{\arg\max}\limits_{l\in L}\sum_{u\in \Phi(v)}\delta(l(u),l), \tag{5}$$
where l(u) is the current label of node u, l'(v) is the new label of node v, and L is the set of labels of all nodes in the network. Barber and Clark reformulated Eq. (5) in terms of the adjacency matrix A of the network27, giving:
$$l'(v)=\mathop{\arg\max}\limits_{l\in L}\sum_{u=1}^{n}A_{uv}\,\delta(l(u),l). \tag{6}$$
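As a rough sketch of the rule in Eqs. (5) and (6) (not the original implementation), the following Python snippet runs asynchronous label propagation on a graph stored as a dictionary of neighbor sets; the function name lpa, the fixed iteration cap, and the toy graph are our own illustrative choices.

import random
from collections import Counter

def lpa(adj, max_iter=100, seed=0):
    # asynchronous LPA: each node adopts the most frequent label among its
    # neighbours (Eq. (5)); ties are broken uniformly at random
    rng = random.Random(seed)
    labels = {v: v for v in adj}           # every node starts with a unique label
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in adj[v])
            if not counts:
                continue                   # an isolated node keeps its label
            best = max(counts.values())
            new = rng.choice([l for l, c in counts.items() if c == best])
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:                    # simplified stopping rule
            break
    return labels

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(lpa(adj))                            # nodes sharing a label form one community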
Barber and Clark also proposed a label propagation algorithm based on modularity (LPAm). LPAm chooses the new label while constraining the sum of the degrees of nodes in the same community, and its update rule is:
$$l'(v)=\mathop{\arg\max}\limits_{l\in L}\left(\sum_{u=1}^{n}A_{uv}\,\delta(l(u),l)-\lambda k_{v}K_{l}+\lambda k_{v}^{2}\,\delta(l(v),l)\right), \tag{7}$$
where
$$K_{l}=\sum_{u=1}^{n}k_{u}\,\delta(l(u),l), \tag{8}$$
and the parameter λ is 1/(2m).
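A minimal sketch of the LPAm score in Eqs. (7)–(8), assuming the per-label degree sums K are maintained by the caller and restricting, for brevity, the arg max to the labels present among the neighbors of v; all names here are our own, not the original code.

def lpam_best_label(v, adj, labels, deg, K, m):
    # Eq. (7) with lambda = 1/(2m); K maps label -> sum of degrees of its nodes (Eq. (8))
    lam = 1.0 / (2.0 * m)
    counts = {}
    for u in adj[v]:
        counts[labels[u]] = counts.get(labels[u], 0.0) + 1.0
    best, best_score = labels[v], float("-inf")
    for l, c in counts.items():
        s = c - lam * deg[v] * K[l] + lam * deg[v] ** 2 * (1.0 if l == labels[v] else 0.0)
        if s > best_score:
            best, best_score = l, s
    return best

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
deg = {v: len(adj[v]) for v in adj}
labels = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
m = sum(deg.values()) / 2.0
K = {}
for v in adj:
    K[labels[v]] = K.get(labels[v], 0) + deg[v]
print(lpam_best_label(2, adj, labels, deg, K, m))   # node 2 keeps label 0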
Later, Xie and Szymanski proposed a label propagation algorithm that incorporates the neighborhood (LPAc)26. The update rule of LPAc is:
$$l'(v)=l\left(\mathop{\arg\max}\limits_{\Phi_{l}(v)}\left\{\sum_{u\in \Phi_{l}(v)}(1+c\cdot \tau_{uv})\right\}\right), \tag{9}$$
where Φl(v) is the set of nodes that have the same label l and are immediately connected to node v, and c is a weight that controls the impact of neighbors, with c in [0, 1]. Usually, c = 1 performs better than other settings, and Eq. (9) degrades into Eq. (5) when c = 0.
It is worth mentioning that the update process in label propagation can be either synchronous or asynchronous. In order to avoid possible oscillations of labels, we focus our attention on the asynchronous update process here. Besides, when the current label of the updated node meets the update rule, algorithms always select a label at random from the labels that meet the update rule instead of keeping the current label.
LFR benchmark networks
We test our algorithm and compare it with others on artificial networks based on the LFR benchmark32. In the LFR benchmark, the mixing coefficient (μ) controls the expected fraction of edges between communities; the distributions of node degrees and community sizes follow power laws with exponents γ and β; the number of nodes is n; the average node degree is kave; the maximum node degree is kmax; the minimum community size is cmin; and the maximum community size is cmax.
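For readers who want to reproduce this kind of test bed, recent networkx releases ship an LFR generator; the sketch below uses the parameter values from the networkx documentation example rather than the exact settings of this paper (the generator can fail for infeasible parameter combinations), so the numbers are purely illustrative.

# Assumes a recent networkx release that provides LFR_benchmark_graph.
import networkx as nx

G = nx.LFR_benchmark_graph(
    n=250, tau1=3, tau2=1.5, mu=0.1,     # degree exponent, community-size exponent, mixing
    average_degree=5, min_community=20, seed=10)

# the ground-truth community of each node is stored as a node attribute
communities = {frozenset(G.nodes[v]["community"]) for v in G}
print(G.number_of_nodes(), len(communities))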
Our approach
The local clustering coefficient measures the degree to which the local area tends to cluster together. The coefficient considers two factors: the number of edges connected to the node and the number of triangles on the node. Therefore, we try to optimize two objectives related to both factors to detect the community structure.
The first objective is making the number of edges within communities as large as possible. An edge within a community means that the two nodes it connects belong to the same community.
The second objective is making the number of triangles within communities as large as possible. A triangle within a community means that the three nodes that make it up belong to the same community.
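As a concrete reading of these two objectives (our own illustration, not the authors' code), the snippet below counts, for a given partition, the edges whose endpoints share a label and the triangles whose three corners share a label.

def objective_counts(adj, labels):
    intra_edges = 0
    same_label_triangle_corners = 0
    for i in adj:
        for j in adj[i]:
            if i < j and labels[i] == labels[j]:
                intra_edges += 1
                # third nodes of triangles on edge (i, j) that share the label
                same_label_triangle_corners += sum(
                    1 for w in adj[i] & adj[j] if labels[w] == labels[i])
    # each fully intra-community triangle is counted once per edge, i.e. three times
    return intra_edges, same_label_triangle_corners // 3

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(objective_counts(adj, {v: 0 for v in adj}))                   # (7, 2): everything intra
print(objective_counts(adj, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}))  # (6, 2)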
We introduce a function H to roughly represent the linear combination of the two objectives mentioned above as follows:
$$H=\sum_{v=1}^{n}\sum_{u=1}^{n}\left\{A_{uv}\,\delta(l(u),l(v))+\alpha_{1}\cdot \tau_{uv}A_{uv}\,\delta(l(u),l(v))\right\}, \tag{10}$$
where the parameter α1 is a weight. Next, we can extract the term related to node w and rewrite function H as:
$$H=\sum_{v\ne w}\sum_{u\ne w}(1+\alpha_{1}\cdot \tau_{uv})A_{uv}\,\delta(l(u),l(v))-(1+\alpha_{1}\cdot \tau_{ww})A_{ww}+2\cdot \sum_{u=1}^{n}(1+\alpha_{1}\cdot \tau_{uw})A_{uw}\,\delta(l(u),l(w)). \tag{11}$$
The third term of Eq. (11) can be regarded as a label update rule which can optimize the two objectives. The rule can be denoted as:
$$l'(v)=\mathop{\arg\max}\limits_{l\in L}\sum_{u=1}^{n}\left\{A_{uv}\,\delta(l(u),l)+\alpha_{1}\cdot \tau_{uv}A_{uv}\,\delta(l(u),l)\right\}. \tag{12}$$
In fact, Eq. (12) is a variant of Eq. (9). Obviously, when function H achieves its global maximum, all nodes have the same label, which is not a good community partition.
LPA assigns labels so as to make the number of edges within communities as large as possible. LPAm constrains the size of every community by Eq. (8) and, at the same time, increases the number of edges within communities.
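A minimal sketch of the per-label neighbor scores behind Eq. (12), assuming α1 is given; without any constraint, repeatedly applying this rule tends toward the degenerate one-label solution noted above, which is what the constraints introduced next are meant to avoid. The function name hybrid_score is our own.

def hybrid_score(v, adj, labels, alpha1):
    # each same-label neighbour u of v contributes 1 + alpha1 * tau_uv, as in Eq. (12)
    scores = {}
    for u in adj[v]:
        tau_uv = len(adj[u] & adj[v])          # triangles on edge (u, v)
        scores[labels[u]] = scores.get(labels[u], 0.0) + 1.0 + alpha1 * tau_uv
    return scores

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(hybrid_score(2, adj, labels, alpha1=1.0))   # {0: 4.0, 1: 1.0}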
Therefore, we first focus our attention on constraining the number of triangles within communities. The total number of triangles on nodes with the same label l is defined as:
$$T_{l}=\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\tau_{ij}A_{ij}\,\delta(l(i),l)=\sum_{i=1}^{n}t_{i}\,\delta(l(i),l). \tag{13}$$
The function for optimizing the number of triangles within communities is given as:
$$H_{t}=\sum_{v=1}^{n}\sum_{u=1}^{n}\tau_{uv}A_{uv}\,\delta(l(u),l(v))-\alpha_{2}\cdot \sum_{l}T_{l}^{2}=\sum_{v=1}^{n}\sum_{u=1}^{n}(\tau_{uv}A_{uv}-\alpha_{2}\cdot t_{u}t_{v})\,\delta(l(u),l(v)), \tag{14}$$
where α2 is the parameter that controls the strength of the constraint term. Similar to LPAm's constraint on the number of edges within communities, α2 is selected as:
$$\alpha_{2}=\varepsilon \frac{1}{\Delta}, \tag{15}$$
where Δ is the total number of triangles in the network and ε is a coefficient between 0 and 1. The suitable value for ε will be explained in combination with the experiments below. When the label of node v is updated, the label of v should be ignored to avoid its effect, that is,
$$T'_{l}=\begin{cases}T_{l}, & l\ne l(v)\\ T_{l}-t_{v}, & l=l(v).\end{cases} \tag{16}$$
From the relation between Eq. (10) and Eq. (12), the update rule corresponding to Ht is given as:
$$l'(v)=\mathop{\arg\max}\limits_{l\in L}\sum_{u=1}^{n}(\tau_{uv}A_{uv}-\alpha_{2}\cdot t_{u}t_{v})\,\delta(l(u),l)=\mathop{\arg\max}\limits_{l\in L}\left(\sum_{u=1}^{n}\tau_{uv}A_{uv}\,\delta(l(u),l)-\alpha_{2}t_{v}T'_{l}\right). \tag{17}$$
The label propagation algorithm based on Eq. (17) is denoted as LPAt.
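A hedged sketch of one LPAt update (Eq. (17)), assuming the caller maintains the per-node triangle counts t, the per-label totals T of Eq. (13), and α2 from Eq. (15); for brevity the candidate labels are restricted to those of triangle-sharing neighbors plus the current label. All names are our own, not the authors' code.

def lpat_best_label(v, adj, labels, t, T, alpha2):
    # Eq. (17): sum of tau_uv over same-label neighbours minus alpha2 * t_v * T'_l
    scores = {}
    for u in adj[v]:
        tau_uv = len(adj[u] & adj[v])
        if tau_uv:
            scores[labels[u]] = scores.get(labels[u], 0.0) + tau_uv
    best, best_score = labels[v], float("-inf")
    for l in set(scores) | {labels[v]}:
        T_prime = T.get(l, 0.0) - (t[v] if l == labels[v] else 0.0)   # Eq. (16)
        s = scores.get(l, 0.0) - alpha2 * t[v] * T_prime
        if s > best_score:
            best, best_score = l, s
    return best

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
t = {v: sum(len(adj[v] & adj[u]) for u in adj[v]) // 2 for v in adj}
labels = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
T = {}
for v in adj:
    T[labels[v]] = T.get(labels[v], 0) + t[v]
alpha2 = 0.7 / (sum(t.values()) // 3)          # epsilon = 0.7 over Delta triangles, Eq. (15)
print(lpat_best_label(2, adj, labels, t, T, alpha2))   # node 2 keeps label 0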
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
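To make the update rule concrete, the following sketch performs a single LPAt update of one node v according to Eqs. (16) and (17). It is our own illustration rather than the authors' implementation: tau and t are the triangle statistics computed as in the earlier sketch, labels maps nodes to labels, and, as is usual in label propagation, only labels present in the neighbourhood of v are scored:

```python
from collections import defaultdict

def lpat_update(G, v, labels, tau, t, alpha2):
    """Return the new label of node v according to Eq. (17)."""
    # T'_l of Eq. (16): triangle mass per label with v's own contribution removed
    T = defaultdict(int)
    for u, lab in labels.items():
        if u != v:
            T[lab] += t[u]
    # first term of Eq. (17): sum_u tau_uv A_uv delta(l(u), l), accumulated per candidate label
    gain = defaultdict(float)
    for u in G[v]:
        gain[labels[u]] += tau[(u, v)]
    # argmax over candidate labels of the gain minus the penalty alpha2 * t_v * T'_l
    best_label, best_score = labels[v], float("-inf")
    for lab, g in gain.items():
        score = g - alpha2 * t[v] * T[lab]
        if score > best_score:
            best_label, best_score = lab, score
    return best_label
```

Sweeping this update over all nodes in random order until no label changes, or until an iteration cap is reached, yields the full LPAt pass evaluated in the experiments below.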
\begin{document}$$l^{\prime} (v)=\mathop{\text{arg}\,\max }\limits_{l\in L}(\sum _{u=1}^{n}(1+{\alpha }_{1}{\tau }_{uv}){A}_{uv}\delta (l(u),l)-\lambda {k}_{v}{K^{\prime} }_{l}-{\alpha }_{1}{\alpha }_{2}{t}_{v}{T^{\prime} }_{l}),$$\end{document}l′(v)=argmaxl∈L(∑u=1n(1+α1τuv)Auvδ(l(u),l)−λkvK′l−α1α2tvT′l),where19\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$${K}_{l}^{\prime} =\{\begin{array}{ll}{K}_{l}, & l\ne l(v)\\ {K}_{l}-{k}_{v}, & l=l(v)\end{array}.$$\end{document}Kl′={Kl,l≠l(v)Kl−kv,l=l(v).We donate the algorithm that optimizes both objectives as LPAh. In fact, we can conclude that LPAh performs better than LPAt through experiments. The main of LPAh is given in Fig. 1.Figure 1The main label propagation algorithm based on the hybrid of two objectives.Experiments and discussionIn this section, we test the LPAt and LPAh on artificial networks and real-world networks and compare their performance with LPA, LPAm, LPAc, CNM5, Louvain33 and G-CN. Among them, G-CN is one of the state-of-the-art methods34 for community detection; CNM and Louvain are popular community detection algorithms, and their time complexity are O(nlog2n) and O(m) respectively.The selection for εThe value of ε has a direct effect on the strength of the constraint term. Therefore, we test LPAt with different values of ε on LFR benchmark networks. For the purposes of comparison, we also test LPAm with different values of parameter mλ. Each algorithm doesn’t stop running until it converges or 20 iterations. Figure 2 shows the average of different metrics for performing LPAt and LPAm respectively 50 times on LFR benchmark networks.Figure 2Tests of LPAt and LPAm with different strength of constraint on LFR benchmark networks: (a–c) and (d–f) show the results of LPAt and LPAm respectively. The parameters of LFR benchmark networks are: μ = 0 ~ 1, n = 5000, kave = 20, kmax = 0.1n, γ = −2, β = −1, cmin = 10, cmax = 0.1n.Figure 2(a) shows the NMI of partitions given by LPAt. When the community structure is ambiguous (i.e., μ ≥ 0.6), with the increment of ε, the NMI values also increase, which means the partitions are closer to the ground-truth partitions. In Fig. 2(b), with the increment of ε, the increment of average modularity also demonstrates the quality of partitions becomes better. Figure 2(c) shows that when the community structure is ambiguous, the number of communities in partitions given by LPAt increases with the increment of ε.The above observation also appears in Fig. 2(d~f). From the trend, we can conclude that when the community structure becomes ambiguous, if there is no or weak constraint, LPAt or LPAm tends to assign all nodes to a large community. However, when the constraint is strong, LPAt or LPAm tends to assign nodes into too many small communities. Therefore, a suitable value should be that the partitions given by LPAt or LPAm are as close as possible to the ground-truth partitions or the modularity is as large as possible.As Barber and Clark gave, the suitable value of mλ is 0.527. When mλ is larger than 0.5, the NMI and modularity have no obvious increment. It is worth pointing out that when mλ = 0.6 or 0.7, the NMI is slightly bigger than that when mλ = 0.5. This is because of the bias of NMI towards partitions with more communities35. Therefore, when mλ is larger than 0.5, the constraint tends to be excessive. Follow the above analysis, the suitable value for ε of LPAt approaches to 0.7.Finally, we try to explain this idea mathematically. The triplet is a locally dense structure that contains more information than adjacent relationships. We can assign this information as weights to edges in the original network. The adjacency matrix of the new weighted network can be represented as:20\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
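The kind of parameter study described above can be reproduced with standard Python tooling. The sketch below is only a hedged illustration of the evaluation loop: it uses networkx's built-in LFR generator with smaller, merely illustrative parameters (the generator can fail to converge for some settings), plain LPA as a stand-in detector where LPAt or LPAm would be plugged in, and scikit-learn's NMI; none of it is the authors' code:

```python
import networkx as nx
from networkx.algorithms.community import label_propagation_communities
from sklearn.metrics import normalized_mutual_info_score

# Illustrative LFR parameters, much smaller than in the paper, chosen so the generator converges
G = nx.LFR_benchmark_graph(n=1000, tau1=2.5, tau2=1.5, mu=0.4,
                           average_degree=20, max_degree=100,
                           min_community=20, max_community=100, seed=42)

# The generator stores each node's planted community in the "community" node attribute
truth = {v: min(G.nodes[v]["community"]) for v in G}

# Stand-in detector; an LPAt/LPAm implementation would replace this call
communities = list(label_propagation_communities(G))
pred = {v: i for i, c in enumerate(communities) for v in c}

nodes = sorted(G)
nmi = normalized_mutual_info_score([truth[v] for v in nodes], [pred[v] for v in nodes])
print(f"{len(communities)} communities, NMI = {nmi:.3f}")
```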
\begin{document}$$W={[{w}_{ij}]}_{n\times n},$$\end{document}W=[wij]n×n,where21\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$${w}_{ij}={A}_{ij}\cdot {\tau }_{ij}.$$\end{document}wij=Aij⋅τij.The suitable value for mλ is inspired by the definition of modularity, that is, the constant term of Eq. (20):22\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
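Building this triangle-weighted network is straightforward; the following sketch (our own illustration, assuming networkx) stores wij = Aij·τij as an edge attribute:

```python
import networkx as nx

def triangle_weighted_copy(G):
    """Return a copy of G whose edge weights follow Eq. (21): w_ij = tau_ij."""
    W = G.copy()
    for u, v in W.edges():
        # tau_ij = number of triangles through edge (i, j) = common neighbours of i and j
        W[u][v]["weight"] = len(set(G[u]) & set(G[v]))
    return W

W = triangle_weighted_copy(nx.karate_club_graph())
```

Edges that close no triangle receive weight zero, and the strength Σj wij of node i in W equals 2ti, which is exactly the identity the derivation below relies on.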
\begin{document}$$\frac{\sum _{j}{A}_{ij}\cdot \sum _{i}{A}_{ij}}{\sum _{ij}{A}_{ij}}=\frac{1}{2}\cdot \frac{{k}_{i}\cdot {k}_{j}}{m}.$$\end{document}∑jAij⋅∑iAij∑ijAij=12⋅ki⋅kjm.According to the definition of modularity in a weighted graph, the suitable value for ε should be 2/3 and determined by23\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
$$\frac{\sum_{j}w_{ij}\cdot\sum_{i}w_{ij}}{\sum_{ij}w_{ij}}=\frac{2t_{i}\cdot 2t_{j}}{2\sum_{i}t_{i}}=\frac{2}{3}\cdot\frac{t_{i}\cdot t_{j}}{\Delta}. \tag{23}$$

Besides, from Fig. 2 we can conclude that LPAt with ε = 2/3 does not perform better than LPAm with mλ = 0.5. Therefore, we focus our attention on LPAh with ε = 2/3.

The selection for α1

Here, under ε = 2/3, we test LPAh with different values of α1 on LFR benchmark networks. The number of iterations of the algorithm is again limited to at most 20. The results of these experiments are shown in Fig. 3.

Figure 3. Tests of LPAh with different α1 on LFR benchmark networks. The parameters of the LFR benchmark networks are: μ = 0 ~ 1, n = 5000, kave = 20, kmax = 0.1n, γ = −2, β = −1, cmin = 10, cmax = 0.1n.

As we can see from Fig. 3(a), increasing α1 improves the NMI of the detection results. However, when α1 is between 0.5 and 1, the difference in the improvement is not obvious. Figure 3(b) shows that different values of α1 have no obvious effect on the modularity of the detection results. In Fig. 3(c), when the community structure is ambiguous, the number of communities detected by LPAh decreases with the increment of α1. In fact, when α1 is 0, LPAh degrades into LPAm. From the discussion in Section 4.1, a partition that assigns nodes to too many small communities indicates that the constraint is strong. The execution time of LPAh under different values of α1 demonstrates faster convergence when α1 is larger than 0. Considering that LPAc often performs better when the weight c is 1, we also choose α1 = 1.

Comparison of artificial networks

In order to fully compare all algorithms, we not only consider networks with different strengths of community structure but also take the size of the networks into account.

Firstly, we test the 7 algorithms on LFR networks with different mixing coefficients (μ). Each algorithm runs until it converges or reaches 20 iterations. The average results achieved by performing each algorithm 50 times are shown in Figs 4, 5 and 6.

Figure 4. Tests of 7 algorithms on LFR networks with n = 1000. The parameters of the LFR networks are: μ = 0 ~ 1, n = 1000, kave = 20, kmax = 0.1n, γ = −2, β = −1, cmin = 10, cmax = 0.1n.

Figure 5. Tests of 7 algorithms on LFR networks with n = 5000. The parameters of the LFR networks are: μ = 0 ~ 1, n = 5000, kave = 20, kmax = 0.1n, γ = −2, β = −1, cmin = 10, cmax = 0.1n.

Figure 6. Tests of 7 algorithms on LFR networks with n = 10000. The parameters of the LFR networks are: μ = 0 ~ 1, n = 10000, kave = 20, kmax = 0.1n, γ = −2, β = −1, cmin = 10, cmax = 0.1n.
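A comparison of this style can also be assembled from off-the-shelf implementations. The sketch below is our own illustration rather than the authors' benchmark code; it assumes a recent networkx release that ships louvain_communities, uses the built-in greedy modularity routine in place of CNM, and reports the number of communities, modularity and runtime of three baseline detectors on a single, small LFR graph:

```python
import time
import networkx as nx
from networkx.algorithms import community as nxc

G = nx.LFR_benchmark_graph(n=1000, tau1=2.5, tau2=1.5, mu=0.3,
                           average_degree=20, max_degree=100,
                           min_community=20, max_community=100, seed=7)

detectors = {
    "LPA": lambda g: list(nxc.label_propagation_communities(g)),
    "CNM (greedy modularity)": lambda g: list(nxc.greedy_modularity_communities(g)),
    "Louvain": lambda g: list(nxc.louvain_communities(g, seed=7)),
}

for name, run in detectors.items():
    start = time.perf_counter()
    parts = run(G)
    elapsed = (time.perf_counter() - start) * 1000
    q = nxc.modularity(G, parts)
    print(f"{name:>24}: c = {len(parts):4d}  Q = {q:.3f}  t = {elapsed:.0f} ms")
```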
Before analyzing the results of the experiments, we divide the variation range of μ into 3 parts when examining each figure: when 0 ≤ μ < 0.5, most edges connect nodes belonging to the same community, which means the community structure is clear; when 0.5 ≤ μ ≤ 0.65, the community structure is ambiguous because the modularity is still larger than 0.3; when μ > 0.65, the community structure is very weak.

Figure 4 shows the NMI, modularity, number of communities and execution time of the 7 algorithms on LFR networks with 1000 nodes. As we can see from Fig. 4(c), when the community structure becomes ambiguous, LPA, LPAc and G-CN tend to assign all nodes to one large community, and this tendency appears earlier for LPA. Unlike them, LPAm and LPAh tend to assign nodes to many communities. Therefore, in Fig. 4(a,b), LPAh and LPAm both perform better than LPA, LPAc and G-CN. When the community structure is ambiguous (0.5 ≤ μ ≤ 0.65), LPAh performs better than LPAm in both NMI and modularity. Notice that when the community structure is very weak (μ > 0.65), the modularity of LPAm and Louvain is slightly larger than that of LPAh, which may be because LPAm and Louvain both aim at optimizing modularity. However, in this regime the modularity is lower than the typical value (0.3), and the slight superiority has no practical significance. Figure 4(d) shows the execution time of the algorithms on the different networks. Besides, among the non-label-propagation algorithms, CNM consistently performs poorly and Louvain aggregates excessively (the average number of communities is lower than the ground truth even when the community structure is clear).

From the experiments on the networks with 5000 and 10000 nodes in Figs 5 and 6, we can draw conclusions consistent with the above. In Figs 5(c) and 6(c), in order to exhibit the results of the other algorithms clearly, we only plot part of the results of LPAm, because the number of communities detected by LPAm increases dramatically. We can also compare the experimental results from a different perspective, namely under the same μ and different network sizes. Let us focus on the cases in which the community structure is ambiguous, especially μ = 0.6 and 0.65. It is obvious that the accuracy of LPA, LPAc and G-CN decreases significantly, and they are even unable to detect the community structure. In the above cases, the accuracy of LPAh, LPAm and Louvain decreases only slightly, and LPAh still performs better than LPAm. In terms of execution time, LPAh still performs quite well.

Next, we test the 7 algorithms on LFR networks with different sizes, that is, the number of nodes (n) is 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000, 12000, 14000, 16000, 18000, 20000, 25000, 30000, 35000, 40000 and 50000. Here, we consider the situations in which the community structure is clear or ambiguous (μ = 0.3 or 0.6). Each algorithm runs until it converges or reaches 20 iterations. The average results achieved by performing each algorithm 20 times are shown in Figs 7 and 8.

Figure 7. Tests of 7 algorithms on LFR networks with μ = 0.3. The parameters of the LFR networks are: μ = 0.3, n = 1000~50000, kave = 20, kmax = 0.1n, γ = −2, β = −1, cmin = 10, cmax = 0.1n.

Figure 8. Tests of 7 algorithms on LFR networks with μ = 0.6. The parameters of the LFR networks are: μ = 0.6, n = 1000~50000, kave = 20, kmax = 0.1n, γ = −2, β = −1, cmin = 10, cmax = 0.1n.

Figure 7 shows the performance of the 7 algorithms on different network sizes when the community structure is clear (μ = 0.3). The algorithms based on label propagation perform better than CNM in NMI and modularity, and better than Louvain in the number of communities. Judging from the execution time, the time complexities of the 7 algorithms are comparable and close to linear.

Compared to Fig. 7, the results in Fig. 8 are more interesting. Although LPA is the fastest, it cannot find the community structure. With the increment of network size, the accuracy of LPAc and G-CN decreases significantly. In fact, as shown in Table 1, the detection results of LPAc and G-CN (the high-NMI rows in Table 1) are sometimes still comparable to those of LPAh. In Table 1, when LPAc cannot detect the community structure, it converges fast, which causes the fluctuations in the execution time of LPAc in Fig. 8(d). When n is larger than 10000, the performance of LPAm in NMI and modularity also decreases slightly.
With the increment of network size, the two algorithms with constraints, namely LPAm and LPAh, behave differently from the other algorithms in terms of the number of communities in Fig. 8(c).

Table 1. Tests of LPAc on LFR networks with μ = 0.6. The parameters of the LFR networks are: μ = 0.6, n = 20000, kave = 20, kmax = 0.1n, γ = -2, β = -1, cmin = 10, cmax = 0.1n.

t (ms)  iterations  c   NMI     Q
1468    8           5   0.0123  0.0016
638     5           4   0.0087  0.0010
2553    20          53  0.7949  0.3340
2527    20          54  0.8113  0.3395
2530    20          61  0.8518  0.3461
2510    20          46  0.7375  0.3123
2511    20          58  0.8400  0.3453
2506    20          57  0.8218  0.3403
2511    20          55  0.8007  0.3366
1384    11          3   0.0044  0.0005
626     5           3   0.0044  0.0005
1396    11          4   0.0087  0.0010
1759    14          5   0.0123  0.0016
2501    20          54  0.7980  0.3369
2520    20          56  0.8049  0.3380
2496    20          50  0.7617  0.3269
766     6           5   0.0123  0.0016
2496    20          50  0.7639  0.3283
879     7           4   0.0083  0.0010
2516    20          51  0.7657  0.3234

Comparison of real-world networks

Finally, we run each algorithm on 7 real-world networks until it converges or reaches 20 iterations. Because some networks do not have ground-truth partitions, or the available partitions were inferred by researchers, we only consider the average modularity (Q), execution time (t) and number of communities (c). The detection results of all algorithms are shown in Table 2.

Table 2. Detection results on real-world networks.

network   Karate  Dolphins  Football  Facebook  ca-GrQc  ca-HepPh  cit-HepTh
n         34      62        115       4039      5242     12008     27770
m         78      159       613       88234     14484    118489    352285

c
LPA       2       3         11        56        724      656       580
LPAc      2       4         14        24        720      818       843
G-CN      3       4         14        25        682      814       834
LPAm      7       9         13        98        1243     1397      1488
LPAh      6       8         13        55        1066     1206      1444
CNM       3       4         7         14        419      424       289
Louvain   4       5         10        16        392      317       171

Q
LPA       0.307   0.474     0.586     0.813     0.793    0.455     0.488
LPAc      0.363   0.527     0.565     0.732     0.797    0.534     0.590
G-CN      0.315   0.527     0.562     0.738     0.800    0.550     0.584
LPAm      0.345   0.500     0.581     0.813     0.709    0.589     0.569
LPAh      0.363   0.515     0.585     0.821     0.752    0.602     0.589
CNM       0.381   0.494     0.571     0.778     0.814    0.589     0.519
Louvain   0.419   0.520     0.604     0.835     0.860    0.658     0.650

t (ms)
LPA<1<1<12061878674820
LPAc<1<1<12401969875222
G-CN<1<1<126322012476632
LPAm<1<1<124239121998930
LPAh<1<1<123921822879736
CNM<1<11.5966569438581
Louvain<1<11.681971448537752

In Table 2, the Karate36, Dolphins37, Football38 and Facebook39 networks are social networks between persons or animals in different scenarios; ca-GrQc40 and ca-HepPh40 are collaboration networks; cit-HepTh41 is a citation network. According to the best results for each network in Table 2, although LPAh is not the clear winner, it performs well enough. The number of communities detected by LPAm and LPAh is larger than that of the other algorithms, which is because of the constraint term in their objective function. The modularity of LPAh is comparable to that of the other algorithms and is even better on some networks. Because Louvain and CNM aim at optimizing modularity, the Q obtained by Louvain and CNM is sometimes larger than that obtained by LPAh. | [
"26955022",
"27905526",
"15244693",
"15601068",
"18216267",
"21405744",
"27564002",
"17525150"
] | [
{
"pmid": "26955022",
"title": "Bayesian Community Detection in the Space of Group-Level Functional Differences.",
"abstract": "We propose a unified Bayesian framework to detect both hyper- and hypo-active communities within whole-brain fMRI data. Specifically, our model identifies dense subgraphs that exhibit population-level differences in functional synchrony between a control and clinical group. We derive a variational EM algorithm to solve for the latent posterior distributions and parameter estimates, which subsequently inform us about the afflicted network topology. We demonstrate that our method provides valuable insights into the neural mechanisms underlying social dysfunction in autism, as verified by the Neurosynth meta-analytic database. In contrast, both univariate testing and community detection via recursive edge elimination fail to identify stable functional communities associated with the disorder."
},
{
"pmid": "27905526",
"title": "Predicting missing links in complex networks based on common neighbors and distance.",
"abstract": "The algorithms based on common neighbors metric to predict missing links in complex networks are very popular, but most of these algorithms do not account for missing links between nodes with no common neighbors. It is not accurate enough to reconstruct networks by using these methods in some cases especially when between nodes have less common neighbors. We proposed in this paper a new algorithm based on common neighbors and distance to improve accuracy of link prediction. Our proposed algorithm makes remarkable effect in predicting the missing links between nodes with no common neighbors and performs better than most existing currently used methods for a variety of real-world networks without increasing complexity."
},
{
"pmid": "15244693",
"title": "Fast algorithm for detecting community structure in networks.",
"abstract": "Many networks display community structure--groups of vertices within which connections are dense but between which they are sparser--and sensitive computer algorithms have in recent years been developed for detecting this structure. These algorithms, however, are computationally demanding, which limits their application to small networks. Here we describe an algorithm which gives excellent results when tested on both computer-generated and real-world networks and is much faster, typically thousands of times faster, than previous algorithms. We give several example applications, including one to a collaboration network of more than 50,000 physicists."
},
{
"pmid": "15601068",
"title": "Detecting fuzzy community structures in complex networks with a Potts model.",
"abstract": "A fast community detection algorithm based on a q-state Potts model is presented. Communities (groups of densely interconnected nodes that are only loosely connected to the rest of the network) are found to coincide with the domains of equal spin value in the minima of a modified Potts spin glass Hamiltonian. Comparing global and local minima of the Hamiltonian allows for the detection of overlapping (\"fuzzy\") communities and quantifying the association of nodes with multiple communities as well as the robustness of a community. No prior knowledge of the number of communities has to be assumed."
},
{
"pmid": "18216267",
"title": "Maps of random walks on complex networks reveal community structure.",
"abstract": "To comprehend the multipartite organization of large-scale biological and social systems, we introduce an information theoretic approach that reveals community structure in weighted and directed networks. We use the probability flow of random walks on a network as a proxy for information flows in the real system and decompose the network into modules by compressing a description of the probability flow. The result is a map that both simplifies and highlights the regularities in the structure and their relationships. We illustrate the method by making a map of scientific communication as captured in the citation patterns of >6,000 journals. We discover a multicentric organization with fields that vary dramatically in size and degree of integration into the network of science. Along the backbone of the network-including physics, chemistry, molecular biology, and medicine-information flows bidirectionally, but the map reveals a directional pattern of citation from the applied fields to the basic sciences."
},
{
"pmid": "21405744",
"title": "Stochastic blockmodels and community structure in networks.",
"abstract": "Stochastic blockmodels have been proposed as a tool for detecting community structure in networks as well as for generating synthetic networks for use as benchmarks. Most blockmodels, however, ignore variation in vertex degree, making them unsuitable for applications to real-world networks, which typically display broad degree distributions that can significantly affect the results. Here we demonstrate how the generalization of blockmodels to incorporate this missing element leads to an improved objective function for community detection in complex networks. We also propose a heuristic algorithm for community detection using this objective function or its non-degree-corrected counterpart and show that the degree-corrected version dramatically outperforms the uncorrected one in both real-world and synthetic networks."
},
{
"pmid": "27564002",
"title": "Estimating the Number of Communities in a Network.",
"abstract": "Community detection, the division of a network into dense subnetworks with only sparse connections between them, has been a topic of vigorous study in recent years. However, while there exist a range of effective methods for dividing a network into a specified number of communities, it is an open question how to determine exactly how many communities one should use. Here we describe a mathematically principled approach for finding the number of communities in a network by maximizing the integrated likelihood of the observed network structure under an appropriate generative model. We demonstrate the approach on a range of benchmark networks, both real and computer generated."
},
{
"pmid": "17525150",
"title": "Mixture models and exploratory analysis in networks.",
"abstract": "Networks are widely used in the biological, physical, and social sciences as a concise mathematical representation of the topology of systems of interacting components. Understanding the structure of these networks is one of the outstanding challenges in the study of complex systems. Here we describe a general technique for detecting structural features in large-scale network data that works by dividing the nodes of a network into classes such that the members of each class have similar patterns of connection to other nodes. Using the machinery of probabilistic mixture models and the expectation-maximization algorithm, we show that it is possible to detect, without prior knowledge of what we are looking for, a very broad range of types of structure in networks. We give a number of examples demonstrating how the method can be used to shed light on the properties of real-world networks, including social and information networks."
}
] |
Scientific Reports | 31292482 | PMC6620345 | 10.1038/s41598-019-46380-9 | Community Detection on Networks with Ricci Flow | Many complex networks in the real world have community structures – groups of well-connected nodes with important functional roles. It has been well recognized that the identification of communities bears numerous practical applications. While existing approaches mainly apply statistical or graph theoretical/combinatorial methods for community detection, in this paper, we present a novel geometric approach which enables us to borrow powerful classical geometric methods and properties. By considering networks as geometric objects and communities in a network as a geometric decomposition, we apply curvature and discrete Ricci flow, which have been used to decompose smooth manifolds with astonishing successes in mathematics, to break down communities in networks. We tested our method on networks with ground-truth community structures, and experimentally confirmed the effectiveness of this geometric approach. | Related workRicci curvature on general spaces without Riemannian structures has been recently studied, in the work of Ollivier19,20 on Markov chains, and Bakry and Emery37, Lott, Villani21, Bonciocat and Sturm38,39 on general metric spaces. Ricci curvature based on optimal transportation theory, proposed by Ollivier (Ollivier-Ricci curvature)19,20, has become a popular topic and has been applied in various fields – for distinguishing cancer-related genes from normal genes28, for studying financial market fragility29, for understanding phylogenetic trees26, and for detecting network backbone and congestion22,25,40. In41, Pal et al. proposed to use Jaccard coefficients for a proxy for Ollivier-Ricci Curvature. Besides, discrete Ricci curvature has also been defined on cell complexes, proposed by Forman42 (Forman curvature or Forman-Ricci curvature). Forman curvature is based on graph Laplacian. It is easier and faster to compute than Ollivier-Ricci curvature, but is less geometrical. It is more suitable for large scale network analysis23,24,43,44 and image processing45. We have also experimented with Forman curvature for community detection. The results were less satisfying. So here we focus on Ollivier Ricci curvature.Unlike discrete Ricci curvature, discrete Ricci flow has not been studied as much. Chow and Luo introduced the first discrete Ricci flow on surfaces46. In43, Weber et al. suggested applying Forman-Ricci flow for anomaly detection in the complex network. In30, Ni et al. used the Ollivier-Ricci curvature flow to compute the Ricci flow metric as edge weights for the problem of network alignment (noisy graph matching).Community detection, on the other hand, is a well-studied topic in social network analysis2,3,6,47–51, and protein-protein interaction networks1,52. There are a few main ideas. One family of algorithms iteratively remove edges of high ‘centrality’, for example, the edge betweenness centrality as suggested in14 by Girvan and Newman. The other idea is to use modularity (introduced by Newman and Clauset et al.), which measures the strength of division of a graph into clusters4,7, as the objective of optimization. But methods using modularity suffer from a resolution limit and cannot detect small communities. A geometric extension, named Laplacian modularity, is also suggested with the help of Gauss’s law in5. Another family of algorithms borrows intuitions from other fields. 
In53, a spin glass approach uses the Potts model from statistical physics: every node (particle) is assigned one of c spin states (communities); edges between nodes model the interaction of the particles. The community structure of the network is understood as the spin configuration that minimizes the energy of the spin glass. In12, Raghavan et al. proposed a non-deterministic label propagation algorithm for large networks. In the initial stage, the algorithm randomly assigns each node in the graph one of c labels. Each node then changes its label to the most popular label among its neighbors. Infomap13 uses an information theoretic approach. A group of nodes for which information flows quickly shall be in the same community. The information flow is approximated by random walks and succinctly summarized by network coding.Taking a geometric view of complex networks is an emerging trend, as shown in a number of recent work. For example, the community structures were used as a coarse version of its embedding in a hidden space with hyperbolic geometry54. Topological data analysis, a typical geometric approach for data analysis, has been applied for analyzing complex systems55. | [
"27476470",
"16723398",
"30093660",
"25489096",
"28508065",
"28355181",
"17930305",
"18216267",
"12060727",
"29872167",
"26169480",
"27386522",
"25375548",
"27967199",
"30230906"
] | [
{
"pmid": "27476470",
"title": "A Comparative Analysis of Community Detection Algorithms on Artificial Networks.",
"abstract": "Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However how good an algorithm is, in terms of accuracy and computing time, remains still open. Testing algorithms on real-world network has certain restrictions which made their insights potentially biased: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify the accuracy using complementary measures and algorithms' computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide actual techniques to determine which is the most suited algorithm in most circumstances based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator of finding the ranges of reliability of the different algorithms. Finally, we study the dependency with network size focusing on both the algorithm's predicting power and the effective computing time."
},
{
"pmid": "16723398",
"title": "Modularity and community structure in networks.",
"abstract": "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as \"modularity\" over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets."
},
{
"pmid": "30093660",
"title": "Gauss's law for networks directly reveals community boundaries.",
"abstract": "The study of network topology provides insight into the function and behavior of physical, social, and biological systems. A natural step towards discovering the organizing principles of these complex topologies is to identify a reduced network representation using cohesive subgroups or communities. This procedure often uncovers the underlying mechanisms governing the functional assembly of complex networks. A community is usually defined as a subgraph or a set of nodes that has more edges than would be expected from a simple, null distribution of edges over the graph. This view drives objective such as modularity. Another perspective, corresponding to objectives like conductance or density, is that communities are groups of nodes that have extremal properties with respect to the number of internal edges and cut edges. Here we show that identifying community boundaries rather than communities results in a more accurate decomposition of the network into informative components. We derive a network analog of Gauss's law that relates a measure of flux through a subgraph's boundary to the connectivity among the subgraph's nodes. Our Gauss's law for networks naturally characterizes a community as a subgraph with high flux through its boundary. Aggregating flux over these boundaries gives rise to a Laplacian and forms the basis of our \"Laplacian modularity\" quality function for community detection that is applicable to general network types. This technique allows us to determine communities that are both overlapping and hierarchically organized."
},
{
"pmid": "25489096",
"title": "Scalable detection of statistically significant communities and hierarchies, using message passing for modularity.",
"abstract": "Modularity is a popular measure of community structure. However, maximizing the modularity can lead to many competing partitions, with almost the same modularity, that are poorly correlated with each other. It can also produce illusory ''communities'' in random graphs where none exist. We address this problem by using the modularity as a Hamiltonian at finite temperature and using an efficient belief propagation algorithm to obtain the consensus of many partitions with high modularity, rather than looking for a single partition that maximizes it. We show analytically and numerically that the proposed algorithm works all of the way down to the detectability transition in networks generated by the stochastic block model. It also performs well on real-world networks, revealing large communities in some networks where previous work has claimed no communities exist. Finally we show that by applying our algorithm recursively, subdividing communities until no statistically significant subcommunities can be found, we can detect hierarchical structure in real-world networks more efficiently than previous methods."
},
{
"pmid": "28508065",
"title": "The ground truth about metadata and community detection in networks.",
"abstract": "Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks' links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures."
},
{
"pmid": "28355181",
"title": "Evolutionary dynamics on any population structure.",
"abstract": "Evolution occurs in populations of reproducing individuals. The structure of a population can affect which traits evolve. Understanding evolutionary game dynamics in structured populations remains difficult. Mathematical results are known for special structures in which all individuals have the same number of neighbours. The general case, in which the number of neighbours can vary, has remained open. For arbitrary selection intensity, the problem is in a computational complexity class that suggests there is no efficient algorithm. Whether a simple solution for weak selection exists has remained unanswered. Here we provide a solution for weak selection that applies to any graph or network. Our method relies on calculating the coalescence times of random walks. We evaluate large numbers of diverse population structures for their propensity to favour cooperation. We study how small changes in population structure-graph surgery-affect evolutionary outcomes. We find that cooperation flourishes most in societies that are based on strong pairwise ties."
},
{
"pmid": "17930305",
"title": "Near linear time algorithm to detect community structures in large-scale networks.",
"abstract": "Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far."
},
{
"pmid": "18216267",
"title": "Maps of random walks on complex networks reveal community structure.",
"abstract": "To comprehend the multipartite organization of large-scale biological and social systems, we introduce an information theoretic approach that reveals community structure in weighted and directed networks. We use the probability flow of random walks on a network as a proxy for information flows in the real system and decompose the network into modules by compressing a description of the probability flow. The result is a map that both simplifies and highlights the regularities in the structure and their relationships. We illustrate the method by making a map of scientific communication as captured in the citation patterns of >6,000 journals. We discover a multicentric organization with fields that vary dramatically in size and degree of integration into the network of science. Along the backbone of the network-including physics, chemistry, molecular biology, and medicine-information flows bidirectionally, but the map reveals a directional pattern of citation from the applied fields to the basic sciences."
},
{
"pmid": "12060727",
"title": "Community structure in social and biological networks.",
"abstract": "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known--a collaboration network and a food web--and find that it detects significant and informative community divisions in both cases."
},
{
"pmid": "29872167",
"title": "Comparative analysis of two discretizations of Ricci curvature for complex networks.",
"abstract": "We have performed an empirical comparison of two distinct notions of discrete Ricci curvature for graphs or networks, namely, the Forman-Ricci curvature and Ollivier-Ricci curvature. Importantly, these two discretizations of the Ricci curvature were developed based on different properties of the classical smooth notion, and thus, the two notions shed light on different aspects of network structure and behavior. Nevertheless, our extensive computational analysis in a wide range of both model and real-world networks shows that the two discretizations of Ricci curvature are highly correlated in many networks. Moreover, we show that if one considers the augmented Forman-Ricci curvature which also accounts for the two-dimensional simplicial complexes arising in graphs, the observed correlation between the two discretizations is even higher, especially, in real networks. Besides the potential theoretical implications of these observations, the close relationship between the two discretizations has practical implications whereby Forman-Ricci curvature can be employed in place of Ollivier-Ricci curvature for faster computation in larger real-world networks whenever coarse analysis suffices."
},
{
"pmid": "26169480",
"title": "Graph Curvature for Differentiating Cancer Networks.",
"abstract": "Cellular interactions can be modeled as complex dynamical systems represented by weighted graphs. The functionality of such networks, including measures of robustness, reliability, performance, and efficiency, are intrinsically tied to the topology and geometry of the underlying graph. Utilizing recently proposed geometric notions of curvature on weighted graphs, we investigate the features of gene co-expression networks derived from large-scale genomic studies of cancer. We find that the curvature of these networks reliably distinguishes between cancer and normal samples, with cancer networks exhibiting higher curvature than their normal counterparts. We establish a quantitative relationship between our findings and prior investigations of network entropy. Furthermore, we demonstrate how our approach yields additional, non-trivial pair-wise (i.e. gene-gene) interactions which may be disrupted in cancer samples. The mathematical formulation of our approach yields an exact solution to calculating pair-wise changes in curvature which was computationally infeasible using prior methods. As such, our findings lay the foundation for an analytical approach to studying complex biological networks."
},
{
"pmid": "27386522",
"title": "Ricci curvature: An economic indicator for market fragility and systemic risk.",
"abstract": "Quantifying the systemic risk and fragility of financial systems is of vital importance in analyzing market efficiency, deciding on portfolio allocation, and containing financial contagions. At a high level, financial systems may be represented as weighted graphs that characterize the complex web of interacting agents and information flow (for example, debt, stock returns, and shareholder ownership). Such a representation often turns out to provide keen insights. We show that fragility is a system-level characteristic of \"business-as-usual\" market behavior and that financial crashes are invariably preceded by system-level changes in robustness. This was done by leveraging previous work, which suggests that Ricci curvature, a key geometric feature of a given network, is negatively correlated to increases in network fragility. To illustrate this insight, we examine daily returns from a set of stocks comprising the Standard and Poor's 500 (S&P 500) over a 15-year span to highlight the fact that corresponding changes in Ricci curvature constitute a financial \"crash hallmark.\" This work lays the foundation of understanding how to design (banking) systems and policy regulations in a manner that can combat financial instabilities exposed during the 2007-2008 crisis."
},
{
"pmid": "25375548",
"title": "Triadic closure as a basic generating mechanism of communities in complex networks.",
"abstract": "Most of the complex social, technological, and biological networks have a significant community structure. Therefore the community structure of complex networks has to be considered as a universal property, together with the much explored small-world and scale-free properties of these networks. Despite the large interest in characterizing the community structures of real networks, not enough attention has been devoted to the detection of universal mechanisms able to spontaneously generate networks with communities. Triadic closure is a natural mechanism to make new connections, especially in social networks. Here we show that models of network growth based on simple triadic closure naturally lead to the emergence of community structure, together with fat-tailed distributions of node degree and high clustering coefficients. Communities emerge from the initial stochastic heterogeneity in the concentration of links, followed by a cycle of growth and fragmentation. Communities are the more pronounced, the sparser the graph, and disappear for high values of link density and randomness in the attachment procedure. By introducing a fitness-based link attractivity for the nodes, we find a phase transition where communities disappear for high heterogeneity of the fitness distribution, but a different mesoscopic organization of the nodes emerges, with groups of nodes being shared between just a few superhubs, which attract most of the links of the system."
},
{
"pmid": "27967199",
"title": "Equivalence between modularity optimization and maximum likelihood methods for community detection.",
"abstract": "We demonstrate an equivalence between two widely used methods of community detection in networks, the method of modularity maximization and the method of maximum likelihood applied to the degree-corrected stochastic block model. Specifically, we show an exact equivalence between maximization of the generalized modularity that includes a resolution parameter and the special case of the block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter."
},
{
"pmid": "30230906",
"title": "Characterizing the Analogy Between Hyperbolic Embedding and Community Structure of Complex Networks.",
"abstract": "We show that the community structure of a network can be used as a coarse version of its embedding in a hidden space with hyperbolic geometry. The finding emerges from a systematic analysis of several real-world and synthetic networks. We take advantage of the analogy for reinterpreting results originally obtained through network hyperbolic embedding in terms of community structure only. First, we show that the robustness of a multiplex network can be controlled by tuning the correlation between the community structures across different layers. Second, we deploy an efficient greedy protocol for network navigability that makes use of routing tables based on community structure."
}
] |
Frontiers in Neuroscience | 31333404 | PMC6621912 | 10.3389/fnins.2019.00686 | Deep Liquid State Machines With Neural Plasticity for Video Activity Recognition | Real-world applications such as first-person video activity recognition require intelligent edge devices. However, size, weight, and power constraints of the embedded platforms cannot support resource intensive state-of-the-art algorithms. Machine learning lite algorithms, such as reservoir computing, with shallow 3-layer networks are computationally frugal as only the output layer is trained. By reducing network depth and plasticity, reservoir computing minimizes computational power and complexity, making the algorithms optimal for edge devices. However, as a trade-off for their frugal nature, reservoir computing sacrifices computational power compared to state-of-the-art methods. A good compromise between reservoir computing and fully supervised networks are the proposed deep-LSM networks. The deep-LSM is a deep spiking neural network which captures dynamic information over multiple time-scales with a combination of randomly connected layers and unsupervised layers. The deep-LSM processes the captured dynamic information through an attention modulated readout layer to perform classification. We demonstrate that the deep-LSM achieves an average of 84.78% accuracy on the DogCentric video activity recognition task, beating state-of-the-art. The deep-LSM also shows up to 91.13% memory savings and up to 91.55% reduction in synaptic operations when compared to similar recurrent neural network models. Based on these results we claim that the deep-LSM is capable of overcoming limitations of traditional reservoir computing, while maintaining the low computational cost associated with reservoir computing. | 2. Related Work2.1. Video Activity RecognitionEgocentric video activity recognition is quickly becoming a pertinent application area due to first person wearable devices such as body cameras or in robotics. In these application domains, real-time learning is critical for deployment beyond controlled environments (such as deep space exploration), or to learn continuously in novel scenarios. Many research groups have focused on solving video activity recognition problems with 2D and 3D convolutions (Tran et al., 2015), optical flow (Simonyan and Zisserman, 2014; Zhan et al., 2014; Ma et al., 2016; Song et al., 2016a), hand-crafted features (Ryoo et al., 2015), combining motion sensors with visual information (Song et al., 2016a,b), or using long-short term memory (LSTM) networks to capture dynamics about spatial information extracted by a convolutional neural network (CNN) (Baccouche et al., 2011; Yue-Hei Ng et al., 2015). These approaches, while befitting for high-end compute platforms, are often not suitable for wearable devices due to the resource intensive networks or the long training times.Efficient video activity recognition designed for mobile devices has been studied by several research groups. An energy aware training algorithm was proposed in Possas et al. (2018), to demonstrate energy efficient video activity recognition on complex problems. In this work, the authors use reinforcement learning to train a network on both video and motion information captured by sensors while penalizing actions that have high energy costs. Another approach to minimizing energy consumption in mobile devices when using an accelerometer for activity recognition is to minimize the sampling rate (Zheng et al., 2017). In Yan et al. 
(2012) and Lee and Kim (2016), the authors investigate a network with adaptive features, sampling frequency, and window size for minimizing energy consumption during activity recognition.Recently Graham et al. (2017) proposed convolutional drift networks (CDNs) for enabling real-time learning on mobile devices. CDNs are an architecture for video activity recognition which use a pre-trained CNN to extract features from video frames and an ESN to capture temporal information. The motivation behind the CDNs is to minimize the training time and compute resources for spatiotemporal tasks when compared to networks akin to LSTMs (Yue-Hei Ng et al., 2015; Graham et al., 2017). A similar sized RC network requires one fourth of the weights, has faster training, and lower energy consumption as that of an LSTM.2.2. Hierarchical Reservoir ComputingAs conventional reservoir networks are shallow and capture information in short time-scales, recently several research groups have investigated hierarchical reservoir models. A hierarchical ESN is introduced in Jaeger (2007) with the goal of developing a hierarchical information processing system which feeds on high-dimension time series data and learns its own features and concepts with minimal supervision. The hierarchical layers help the system to process information on multiple timescales where faster information is processed in the earlier layers and information on slower timescales is processed in the final layers. The outputs of each reservoir feed sequentially into the next reservoir in the network. The networks prediction is made from a combination of all the reservoir outputs. More recently, a hierarchical ESN was proposed in Ma et al. (2017). In this work the authors explore the use of trained auto-encoders, principal component analysis, and random connections as encoding layers between each reservoir layer. The downside to this approach is that the output layer is trained on the activity of every encoding layer, the last reservoir, and the current input. This means as the number of layers increases, the output layer size will increase. Another hierarchical model was developed in Triefenbach et al. (2010). This model is implemented by stacking trained ESNs on top of each other to create a hierarchical chain of reservoirs. The hierarchical ESN is applied to speech recognition where the intermediary layers have a readout layer trained to perform the tasks and the inputs to the hierarchical layers are the predictions of the previous layers. With this approach each layer corrects the error from the previous layer. The authors later designed a hierarchical ESN where each layer was trained on a broad representation of the output, which became more specific at later layers (Triefenbach et al., 2013). Another hierarchical ESN proposed in Gallicchio and Micheli (2016) connects an ensemble of ESNs together. In Carmichael et al. (2018), our group has proposed a mod-deepESN architecture, a modular architecture that allows for varying topologies of deep ESNs. Intrinsic plasticity mechanism is embedded in the ESN that contributes more equally toward predictions and achieves better performance with increased breadth and depth. In Wang and Li (2016), a deep LSM model is proposed for image processing which uses multiple LSMs as filters with a single response. The authors use convolution and pooling similar to the process of CNNs and train the LSMs with an unsupervised learning rule. In Bellec et al. 
(2018), the authors introduce an approximation of backpropagation-through-time for LSMs to optimize the temporal memory of the LSM. The network shows a large improvement in performance on sequential MNIST and speech recognition with the TIMIT speech corpus. Another approach to optimizing the LSM is Roy and Basu (2016), which proposes a computationally efficient on-line learning rule for unsupervised optimization of reservoir connections.This work aims to develop an algorithm that overcomes few of the gaps in the vanilla RC network while focusing on maintaining the inherent efficiency of LSMs. | [
"26941637",
"28215558",
"12433288",
"9560274",
"28680387",
"12741993",
"27626967",
"21423491",
"14595400",
"28885560"
] | [
{
"pmid": "26941637",
"title": "Unsupervised learning of digit recognition using spike-timing-dependent plasticity.",
"abstract": "In order to understand how the mammalian neocortex is performing computations, two things are necessary; we need to have a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there is an increasing interest in how spiking neural networks (SNN) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to a SNN. We present a SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks."
},
{
"pmid": "28215558",
"title": "Optimal Degrees of Synaptic Connectivity.",
"abstract": "Synaptic connectivity varies widely across neuronal types. Cerebellar granule cells receive five orders of magnitude fewer inputs than the Purkinje cells they innervate, and cerebellum-like circuits, including the insect mushroom body, also exhibit large divergences in connectivity. In contrast, the number of inputs per neuron in cerebral cortex is more uniform and large. We investigate how the dimension of a representation formed by a population of neurons depends on how many inputs each neuron receives and what this implies for learning associations. Our theory predicts that the dimensions of the cerebellar granule-cell and Drosophila Kenyon-cell representations are maximized at degrees of synaptic connectivity that match those observed anatomically, showing that sparse connectivity is sometimes superior to dense connectivity. When input synapses are subject to supervised plasticity, however, dense wiring becomes advantageous, suggesting that the type of plasticity exhibited by a set of synapses is a major determinant of connection density."
},
{
"pmid": "12433288",
"title": "Real-time computing without stable states: a new framework for neural computation based on perturbations.",
"abstract": "A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology."
},
{
"pmid": "9560274",
"title": "Differential signaling via the same axon of neocortical pyramidal neurons.",
"abstract": "The nature of information stemming from a single neuron and conveyed simultaneously to several hundred target neurons is not known. Triple and quadruple neuron recordings revealed that each synaptic connection established by neocortical pyramidal neurons is potentially unique. Specifically, synaptic connections onto the same morphological class differed in the numbers and dendritic locations of synaptic contacts, their absolute synaptic strengths, as well as their rates of synaptic depression and recovery from depression. The same axon of a pyramidal neuron innervating another pyramidal neuron and an interneuron mediated frequency-dependent depression and facilitation, respectively, during high frequency discharges of presynaptic action potentials, suggesting that the different natures of the target neurons underlie qualitative differences in synaptic properties. Facilitating-type synaptic connections established by three pyramidal neurons of the same class onto a single interneuron, were all qualitatively similar with a combination of facilitation and depression mechanisms. The time courses of facilitation and depression, however, differed for these convergent connections, suggesting that different pre-postsynaptic interactions underlie quantitative differences in synaptic properties. Mathematical analysis of the transfer functions of frequency-dependent synapses revealed supra-linear, linear, and sub-linear signaling regimes in which mixtures of presynaptic rates, integrals of rates, and derivatives of rates are transferred to targets depending on the precise values of the synaptic parameters and the history of presynaptic action potential activity. Heterogeneity of synaptic transfer functions therefore allows multiple synaptic representations of the same presynaptic action potential train and suggests that these synaptic representations are regulated in a complex manner. It is therefore proposed that differential signaling is a key mechanism in neocortical information processing, which can be regulated by selective synaptic modifications."
},
{
"pmid": "28680387",
"title": "Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.",
"abstract": "An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning."
},
{
"pmid": "12741993",
"title": "Robust spatial working memory through homeostatic synaptic scaling in heterogeneous cortical networks.",
"abstract": "The concept of bell-shaped persistent neural activity represents a cornerstone of the theory for the internal representation of analog quantities, such as spatial location or head direction. Previous models, however, relied on the unrealistic assumption of network homogeneity. We investigate this issue in a network model where fine tuning of parameters is destroyed by heterogeneities in cellular and synaptic properties. Heterogeneities result in the loss of stored spatial information in a few seconds. Accurate encoding is recovered when a homeostatic mechanism scales the excitatory synapses to each cell to compensate for the heterogeneity in cellular excitability and synaptic inputs. Moreover, the more realistic model produces a wide diversity of tuning curves, as commonly observed in recordings from prefrontal neurons. We conclude that recurrent attractor networks in conjunction with appropriate homeostatic mechanisms provide a robust, biologically plausible theoretical framework for understanding the neural circuit basis of spatial working memory."
},
{
"pmid": "27626967",
"title": "An Online Structural Plasticity Rule for Generating Better Reservoirs.",
"abstract": "In this letter, we propose a novel neuro-inspired low-resolution online unsupervised learning rule to train the reservoir or liquid of liquid state machines. The liquid is a sparsely interconnected huge recurrent network of spiking neurons. The proposed learning rule is inspired from structural plasticity and trains the liquid through formating and eliminating synaptic connections. Hence, the learning involves rewiring of the reservoir connections similar to structural plasticity observed in biological neural networks. The network connections can be stored as a connection matrix and updated in memory by using address event representation (AER) protocols, which are generally employed in neuromorphic systems. On investigating the pairwise separation property, we find that trained liquids provide 1.36 0.18 times more interclass separation while retaining similar intraclass separation as compared to random liquids. Moreover, analysis of the linear separation property reveals that trained liquids are 2.05 0.27 times better than random liquids. Furthermore, we show that our liquids are able to retain the generalization ability and generality of random liquids. A memory analysis shows that trained liquids have 83.67 5.79 ms longer fading memory than random liquids, which have shown 92.8 5.03 ms fading memory for a particular type of spike train inputs. We also throw some light on the dynamics of the evolution of recurrent connections within the liquid. Moreover, compared to separation-driven synaptic modification', a recently proposed algorithm for iteratively refining reservoirs, our learning rule provides 9.30%, 15.21%, and 12.52% more liquid separations and 2.8%, 9.1%, and 7.9% better classification accuracies for 4, 8, and 12 class pattern recognition tasks, respectively."
},
{
"pmid": "21423491",
"title": "Homeostatic Plasticity and STDP: Keeping a Neuron's Cool in a Fluctuating World.",
"abstract": "Spike-timing-dependent plasticity (STDP) offers a powerful means of forming and modifying neural circuits. Experimental and theoretical studies have demonstrated its potential usefulness for functions as varied as cortical map development, sharpening of sensory receptive fields, working memory, and associative learning. Even so, it is unlikely that STDP works alone. Unless changes in synaptic strength are coordinated across multiple synapses and with other neuronal properties, it is difficult to maintain the stability and functionality of neural circuits. Moreover, there are certain features of early postnatal development (e.g., rapid changes in sensory input) that threaten neural circuit stability in ways that STDP may not be well placed to counter. These considerations have led researchers to investigate additional types of plasticity, complementary to STDP, that may serve to constrain synaptic weights and/or neuronal firing. These are collectively known as \"homeostatic plasticity\" and include schemes that control the total synaptic strength of a neuron, that modulate its intrinsic excitability as a function of average activity, or that make the ability of synapses to undergo Hebbian modification depend upon their history of use. In this article, we will review the experimental evidence for homeostatic forms of plasticity and consider how they might interact with STDP during development, and learning and memory."
},
{
"pmid": "28885560",
"title": "A Novel Energy-Efficient Approach for Human Activity Recognition.",
"abstract": "In this paper, we propose a novel energy-efficient approach for mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve a high accuracy of activity recognition when the sampling rate is less than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with the data collected from 20 volunteers (14 males and six females) and the average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1Hz can save 17.3% and 59.6% of energy compared with the sampling rates of 5 Hz and 50 Hz. The proposed low sampling rate approach can greatly reduce the power consumption while maintaining high activity recognition accuracy. The composition of power consumption in online ARS is also investigated in this paper."
}
] |
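As referenced in the record above, the following is a minimal sketch of the "vanilla" reservoir-computing (RC) setup that the cited LSM work builds on: a fixed random recurrent reservoir whose states feed a trained linear (ridge-regression) readout. This is a generic illustration written for this document, not the cited backpropagation-through-time or structural-plasticity methods; the reservoir size, spectral radius, toy delay task, and ridge penalty are all illustrative assumptions.

```python
# Minimal "vanilla" reservoir-computing sketch (echo-state style): the recurrent
# weights stay fixed and random; only the linear readout is trained.
# All sizes, the toy delay task, and the ridge penalty are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200                                # input and reservoir dimensions (assumed)

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # keep spectral radius below 1

def run_reservoir(u):
    """Drive the fixed reservoir with inputs u of shape (T, n_in); return states (T, n_res)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)             # untrained recurrent dynamics
        states.append(x)
    return np.array(states)

# Toy temporal-memory task: reproduce the input delayed by a few steps.
T, delay = 2000, 5
u = rng.uniform(-1.0, 1.0, (T, n_in))
y = np.roll(u, delay, axis=0)

X = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)   # train readout only
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

The methods summarized in the record above differ precisely in what they add on top of this baseline: they optimize the reservoir's temporal memory or connectivity instead of leaving it random, while keeping the cheap linear readout.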
Frontiers in Bioengineering and Biotechnology | 31334225 | PMC6624635 | 10.3389/fbioe.2019.00125 | Glandular Segmentation of Prostate Cancer: An Illustration of How the Choice of Histopathological Stain Is One Key to Success for Computational Pathology | Digital pathology offers the potential for computer-aided diagnosis, significantly reducing the pathologists' workload and paving the way for accurate prognostication with reduced inter- and intra-observer variations. But successful computer-based analysis requires careful tissue preparation and image acquisition to keep color and intensity variations to a minimum. While the human eye may recognize prostate glands with significant color and intensity variations, a computer algorithm may fail under such conditions. Since malignancy grading of prostate tissue according to the Gleason or International Society of Urological Pathology (ISUP) grading system is based on the architectural growth patterns of prostatic carcinoma, automatic methods must rely on accurate identification of the prostate glands. But due to the poor color differentiation between stroma and epithelium given by the common stain hematoxylin-eosin, no method is yet able to segment all types of glands, making automatic prognostication hard to attain. We address the effect of tissue preparation on glandular segmentation with an alternative stain, Picrosirius red-hematoxylin, which clearly delineates the stromal boundaries, and couple this stain with a color decomposition that removes intensity variation. In this paper, we propose a segmentation algorithm that uses image analysis techniques based on mathematical morphology and that can successfully determine the glandular boundaries. Accurate determination of the stromal and glandular morphology enables the identification of the architectural patterns that determine the malignancy grade and the classification of each gland into its appropriate Gleason grade or ISUP Grade Group. Segmentation of prostate tissue with the new stain and decomposition method has been successfully tested on more than 11,000 objects, including well-formed glands (Gleason grade 3), cribriform and fine-caliber glands (grade 4), and single cells (grade 5). | Related Work There are many examples in the literature of prostate gland segmentation as part of automatic malignancy grading systems. Naik et al. (2007) find the lumen using color information and use the lumen boundary to initialize level set curves, which evolve until they reach the epithelial nuclei. The final glandular structure only includes the lumen and the epithelium, without the nuclei. Nguyen et al. (2012) also start with the lumen and grow that structure to include the epithelial nuclei. Singh et al. (2017) manually annotate gland, lumen, periacinar retraction, and stroma in H&E-stained tissue images, and train a segmentation algorithm on these manual annotations using standard machine learning techniques. The segmentation process continues by region-growing from a seed inside the glands toward the epithelial nuclei. By the authors' own admission, the algorithm fails for cribriform glands, since these glands are not lined with epithelial nuclei. Paul and Mukherjee (2016) propose an automatic prostate gland segmentation method for H&E-stained tissue images using morphological scale space. The authors assume that glands are surrounded by an epithelial layer where the nuclei appear dark and can be used to delineate the glands.
The methods above work on the assumption that a gland is surrounded by a layer of epithelial nuclei, and can thus successfully find only benign glands, glands of Gleason grade (GG) 3, and some poorly formed grade 4 glands, but cannot identify other types, such as cribriform structures and grade 5. Tabesh et al. (2007) use a different approach, identifying small objects with similar characteristics in the prostate tissue that are used directly for classification of cancerous and non-cancerous tissue, without identification of the underlying glandular structure. But without the glandular structures it is impossible to identify all the Gleason grades shown in Figure 1. Figure 1: Gleason grades: (A) benign; (B) well-formed glands (Gleason grade 3); (C) poorly formed glands (Gleason grade 4); (D) cribriform (Gleason grade 4); (E) small fused glands (Gleason grade 4); (F) large fused glands (Gleason grade 4); (G) intraductal carcinoma (Gleason grade 4); (H) poorly formed glands and single cells (Gleason grades 4 and 5). To automatically identify all glandular patterns illustrated in Figure 1, an algorithm must work inward from the stromal border, not outward from the center of the gland. However, prostatic tissue is traditionally stained with hematoxylin and eosin (H&E), which gives poor differentiation between epithelium and stroma, as both are stained in shades of red/pink by eosin. A different stain that gives good contrast between glandular epithelium and stroma is required for accurate prostate gland segmentation that works for all types of prostate glands. While the methods above rely on classical machine learning and image analysis, deep learning has recently generated a great deal of interest for the problem of segmentation and classification of prostate tissue. The first such publication (Litjens et al., 2016) applies convolutional neural networks (CNNs) to prostate tissue analysis. The authors manually delineate cancer regions in H&E-stained prostate tissue and then train a network on patches extracted from these regions. A cancer likelihood map from the CNN shows good agreement with the manually identified cancer regions. The authors also demonstrate that it is potentially possible to automatically exclude a significant portion of benign tissue from the diagnostic process. In Ing et al. (2018), the authors combine segmentation and classification of glandular regions. They annotate regions in H&E-stained tissue images as stroma, benign glands, Gleason grade 3, and Gleason grades 4&5, and train several public networks with these annotations. The results are compared with manually annotated regions and show good accuracy for benign tissue and for high-grade and low-grade cancer. Gummeson et al. (2017) describe the classification of prostate tissue into the classes benign and Gleason grades 3, 4, and 5 with a proprietary network. The training dataset is created by cropping tissue images so that each training image contains only one grade. The authors report a high classification accuracy, but the small training dataset may not cover all types of glands. Jiménez del Toro et al. (2017) describe a completely automatic segmentation and classification method for H&E-stained tissue images. The ground truth is extracted from the pathologists' reports from the original diagnoses. The authors propose a method to remove the areas that are not of interest, that is, areas of the tissue with few epithelial nuclei, and train public networks on patches in the remaining tissue.
The method performs well in separating low-grade cancer (Gleason scores 6-7) from high-grade cancer (Gleason scores 8-10). In summary, while deep learning shows great promise for the segmentation and classification of prostate glands, none of the approaches above can segment or classify all types of malignant glands into appropriate categories (a generic stain-separation and morphology sketch is given after this record's references). | [
"22641492",
"11172298",
"23766933",
"18392850",
"26166626",
"27918777",
"23322760",
"1555838",
"91593",
"27649382",
"27212078",
"4129194",
"28653016",
"17948727",
"25649671"
] | [
{
"pmid": "22641492",
"title": "Inter/intra-observer reproducibility of Gleason scoring in prostate adenocarcinoma in Iranian pathologists.",
"abstract": "PURPOSE\nTo measure the level of inter/intra-observer reproducibility among pathologists as far as Gleason scoring of adenocarcinoma of the prostate is concerned.\n\n\nMATERIALS AND METHODS\nA total of 101 prostate biopsy slides, diagnosed with adenocarcinoma of the prostate by five pathologists from different education centers, were exposed to Gleason scoring. Two months later, the slides were re-examined by three of the same pathologists. Thereafter, the kappa was calculated for the data provided in the first and second reports of each pathologist and compared between pathologists.\n\n\nRESULTS\nInter-observer reproducibility was inappropriate, but intra-observer diagnostic reproducibility was almost perfect with a corresponding percentage of agreement of 85.2%.\n\n\nCONCLUSION\nThe inter-observer reproducibility was poor."
},
{
"pmid": "11172298",
"title": "Interobserver reproducibility of Gleason grading of prostatic carcinoma: urologic pathologists.",
"abstract": "Gleason grading is now the most widely used grading system for prostatic carcinoma in the United States. However, there are only a few studies of the interobserver reproducibility of this system, and no extensive study of interobserver reproducibility among a large number of experienced urologic pathologists exists. Forty-six needle biopsies containing prostatic carcinoma were assigned Gleason scores by 10 urologic pathologists. The overall weighted kappa coefficient kappa(w) for Gleason score for each of the urologic pathologists compared with each of the remaining urologic pathologists ranged from 0.56 to 0.70, all but one being at least 0.60 (substantial agreement). The overall kappa coefficient kappa for each pathologist compared with the others for Gleason score groups 2-4, 5-6, 7, and 8-10 ranged from 0.47 to 0.64 (moderate-substantial agreement), only one less than 0.50. At least 70% of the urologic pathologists agreed on the Gleason grade group (2-4, 5-6, 7, 8-10) in 38 (\"consensus\" cases) of the 46 cases. The 8 \"nonconsensus\" cases included low-grade tumors, tumors with small cribriform proliferations, and tumors whose histology was on the border between Gleason patterns. Interobserver reproducibility of Gleason grading among urologic pathologists is in an acceptable range."
},
{
"pmid": "23766933",
"title": "Histological stain evaluation for machine learning applications.",
"abstract": "AIMS\nA methodology for quantitative comparison of histological stains based on their classification and clustering performance, which may facilitate the choice of histological stains for automatic pattern and image analysis.\n\n\nBACKGROUND\nMachine learning and image analysis are becoming increasingly important in pathology applications for automatic analysis of histological tissue samples. Pathologists rely on multiple, contrasting stains to analyze tissue samples, but histological stains are developed for visual analysis and are not always ideal for automatic analysis.\n\n\nMATERIALS AND METHODS\nThirteen different histological stains were used to stain adjacent prostate tissue sections from radical prostatectomies. We evaluate the stains for both supervised and unsupervised classification of stain/tissue combinations. For supervised classification we measure the error rate of nonlinear support vector machines, and for unsupervised classification we use the Rand index and the F-measure to assess the clustering results of a Gaussian mixture model based on expectation-maximization. Finally, we investigate class separability measures based on scatter criteria.\n\n\nRESULTS\nA methodology for quantitative evaluation of histological stains in terms of their classification and clustering efficacy that aims at improving segmentation and color decomposition. We demonstrate that for a specific tissue type, certain stains perform consistently better than others according to objective error criteria.\n\n\nCONCLUSIONS\nThe choice of histological stain for automatic analysis must be based on its classification and clustering performance, which are indicators of the performance of automatic segmentation of tissue into morphological components, which in turn may be the basis for diagnosis."
},
{
"pmid": "18392850",
"title": "Interobserver reproducibility of Gleason grading: evaluation using prostate cancer tissue microarrays.",
"abstract": "OBJECTIVES\nDue to PSA screening and increased awareness, prostate cancer (PCa) is identified earlier resulting in smaller diagnostic samples on prostate needle biopsy. Because Gleason grading plays a critical role in treatment planning, we undertook a controlled study to evaluate interobserver variability among German pathologists to grade small PCas using a series of tissue microarray (TMA) images.\n\n\nMETHODS\nWe have previously demonstrated excellent agreement in Gleason grading using TMAs among expert genitourinary pathologists. In the current study, we identified 331 TMA images (95% PCa and 5% benign) to be evaluated by an expert PCa pathologist and subsequently by practicing pathologists throughout Germany. The images were presented using the Bacus Webslide Browser on a CD-ROM. Evaluations were kept anonymous and participant's scoring was compared to the expert's results.\n\n\nRESULTS\nA total of 29 German pathologists analysed an average of 278 images. Mean percentage of TMA images which had been assigned the same Gleason score (GS) as done by the expert was 45.7%. GSs differed by no more than one point (+/-1) in 83.5% of the TMA samples evaluated. The respondents were able to correctly assign a GS into clinically relevant categories (i.e. <7, 7, >7) in 68.3% of cases. A total of 75.9% respondents under-graded the TMA images. Gleason grading agreement with the expert reviewer correlated with the number of biopsies evaluated by the pathologist per week. Years of diagnostic experience, self-description as a urologic pathologist or affiliation with a university hospital did not correlate with the pathologist's performance.\n\n\nCONCLUSION\nThe vast majority of participants under-graded the small tumors. Clinically relevant GS categories were correctly assigned in 68% of cases. This raises a potentially significant problem for pathologists, who have not had as much experience evaluating small PCas."
},
{
"pmid": "26166626",
"title": "A Contemporary Prostate Cancer Grading System: A Validated Alternative to the Gleason Score.",
"abstract": "BACKGROUND\nDespite revisions in 2005 and 2014, the Gleason prostate cancer (PCa) grading system still has major deficiencies. Combining of Gleason scores into a three-tiered grouping (6, 7, 8-10) is used most frequently for prognostic and therapeutic purposes. The lowest score, assigned 6, may be misunderstood as a cancer in the middle of the grading scale, and 3+4=7 and 4+3=7 are often considered the same prognostic group.\n\n\nOBJECTIVE\nTo verify that a new grading system accurately produces a smaller number of grades with the most significant prognostic differences, using multi-institutional and multimodal therapy data.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBetween 2005 and 2014, 20,845 consecutive men were treated by radical prostatectomy at five academic institutions; 5501 men were treated with radiotherapy at two academic institutions.\n\n\nOUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS\nOutcome was based on biochemical recurrence (BCR). The log-rank test assessed univariable differences in BCR by Gleason score. Separate univariable and multivariable Cox proportional hazards used four possible categorizations of Gleason scores.\n\n\nRESULTS AND LIMITATIONS\nIn the surgery cohort, we found large differences in recurrence rates between both Gleason 3+4 versus 4+3 and Gleason 8 versus 9. The hazard ratios relative to Gleason score 6 were 1.9, 5.1, 8.0, and 11.7 for Gleason scores 3+4, 4+3, 8, and 9-10, respectively. These differences were attenuated in the radiotherapy cohort as a whole due to increased adjuvant or neoadjuvant hormones for patients with high-grade disease but were clearly seen in patients undergoing radiotherapy only. A five-grade group system had the highest prognostic discrimination for all cohorts on both univariable and multivariable analysis. The major limitation was the unavoidable use of prostate-specific antigen BCR as an end point as opposed to cancer-related death.\n\n\nCONCLUSIONS\nThe new PCa grading system has these benefits: more accurate grade stratification than current systems, simplified grading system of five grades, and lowest grade is 1, as opposed to 6, with the potential to reduce overtreatment of PCa.\n\n\nPATIENT SUMMARY\nWe looked at outcomes for prostate cancer (PCa) treated with radical prostatectomy or radiation therapy and validated a new grading system with more accurate grade stratification than current systems, including a simplified grading system of five grades and a lowest grade is 1, as opposed to 6, with the potential to reduce overtreatment of PCa."
},
{
"pmid": "27918777",
"title": "Global, Regional, and National Cancer Incidence, Mortality, Years of Life Lost, Years Lived With Disability, and Disability-Adjusted Life-years for 32 Cancer Groups, 1990 to 2015: A Systematic Analysis for the Global Burden of Disease Study.",
"abstract": "IMPORTANCE\nCancer is the second leading cause of death worldwide. Current estimates on the burden of cancer are needed for cancer control planning.\n\n\nOBJECTIVE\nTo estimate mortality, incidence, years lived with disability (YLDs), years of life lost (YLLs), and disability-adjusted life-years (DALYs) for 32 cancers in 195 countries and territories from 1990 to 2015.\n\n\nEVIDENCE REVIEW\nCancer mortality was estimated using vital registration system data, cancer registry incidence data (transformed to mortality estimates using separately estimated mortality to incidence [MI] ratios), and verbal autopsy data. Cancer incidence was calculated by dividing mortality estimates through the modeled MI ratios. To calculate cancer prevalence, MI ratios were used to model survival. To calculate YLDs, prevalence estimates were multiplied by disability weights. The YLLs were estimated by multiplying age-specific cancer deaths by the reference life expectancy. DALYs were estimated as the sum of YLDs and YLLs. A sociodemographic index (SDI) was created for each location based on income per capita, educational attainment, and fertility. Countries were categorized by SDI quintiles to summarize results.\n\n\nFINDINGS\nIn 2015, there were 17.5 million cancer cases worldwide and 8.7 million deaths. Between 2005 and 2015, cancer cases increased by 33%, with population aging contributing 16%, population growth 13%, and changes in age-specific rates contributing 4%. For men, the most common cancer globally was prostate cancer (1.6 million cases). Tracheal, bronchus, and lung cancer was the leading cause of cancer deaths and DALYs in men (1.2 million deaths and 25.9 million DALYs). For women, the most common cancer was breast cancer (2.4 million cases). Breast cancer was also the leading cause of cancer deaths and DALYs for women (523 000 deaths and 15.1 million DALYs). Overall, cancer caused 208.3 million DALYs worldwide in 2015 for both sexes combined. Between 2005 and 2015, age-standardized incidence rates for all cancers combined increased in 174 of 195 countries or territories. Age-standardized death rates (ASDRs) for all cancers combined decreased within that timeframe in 140 of 195 countries or territories. Countries with an increase in the ASDR due to all cancers were largely located on the African continent. Of all cancers, deaths between 2005 and 2015 decreased significantly for Hodgkin lymphoma (-6.1% [95% uncertainty interval (UI), -10.6% to -1.3%]). The number of deaths also decreased for esophageal cancer, stomach cancer, and chronic myeloid leukemia, although these results were not statistically significant.\n\n\nCONCLUSION AND RELEVANCE\nAs part of the epidemiological transition, cancer incidence is expected to increase in the future, further straining limited health care resources. Appropriate allocation of resources for cancer prevention, early diagnosis, and curative and palliative care requires detailed knowledge of the local burden of cancer. The GBD 2015 study results demonstrate that progress is possible in the war against cancer. However, the major findings also highlight an unmet need for cancer prevention efforts, including tobacco control, vaccination, and the promotion of physical activity and a healthy diet."
},
{
"pmid": "23322760",
"title": "Blind color decomposition of histological images.",
"abstract": "Cancer diagnosis is based on visual examination under a microscope of tissue sections from biopsies. But whereas pathologists rely on tissue stains to identify morphological features, automated tissue recognition using color is fraught with problems that stem from image intensity variations due to variations in tissue preparation, variations in spectral signatures of the stained tissue, spectral overlap and spatial aliasing in acquisition, and noise at image acquisition. We present a blind method for color decomposition of histological images. The method decouples intensity from color information and bases the decomposition only on the tissue absorption characteristics of each stain. By modeling the charge-coupled device sensor noise, we improve the method accuracy. We extend current linear decomposition methods to include stained tissues where one spectral signature cannot be separated from all combinations of the other tissues' spectral signatures. We demonstrate both qualitatively and quantitatively that our method results in more accurate decompositions than methods based on non-negative matrix factorization and independent component analysis. The result is one density map for each stained tissue type that classifies portions of pixels into the correct stained tissue allowing accurate identification of morphological features that may be linked to cancer."
},
{
"pmid": "1555838",
"title": "Histologic grading of prostate cancer: a perspective.",
"abstract": "The wide-ranging biologic malignancy of prostate cancer is strongly correlated with its extensive and diverse morphologic appearances. Histologic grading is a valuable research tool that could and should be used more extensively and systematically in patient care. It can improve clinical staging, as outlined by Oesterling et al (J Urol 138: 92-98, 1987), during selection of patients for possible prostatectomy by helping to identify the optimal treatment. Some of the recurrent practical problems with grading (reproducibility, \"undergrading\" of biopsies, and \"lumping\" of grades) are discussed and recommendations are made. The newer technologically sophisticated but single-parameter tumor measurements are compared with one important advantage of histologic grading: the ability to encompass the entire low to high range of malignancy. The predictive success of grading suggests that prostate cancers have more or less fixed degrees of malignancy and growth rates (a hypothesis of \"biologic determinism\") rather than a steady increase in malignancy with time. Most of the observed facts can be interpreted on that basis, including the interrelations of tumor size, grade, and malignancy. The increasing age-adjusted incidence of diagnosed prostate cancer is attributed to new diagnostic tools and increased diagnostic zeal."
},
{
"pmid": "91593",
"title": "Picrosirius staining plus polarization microscopy, a specific method for collagen detection in tissue sections.",
"abstract": "Sirius Red, a strong anionic dye, stains collagen by reacting, via its sulphonic acid groups, with basic groups present in the collagen molecule. The elongated dye molecules are attached to the collagen fibre in such a way that their long axes are parallel. This parallel relationship between dye and collagen results in an enhanced birefringency. Examination of tissue sections from 15 species of vertebrates suggests that staining with Sirius Red, when combined with enhancement of birefringency, may be considered specific for collagen. An improved and modified method of staining with Sirius Red is presented."
},
{
"pmid": "27649382",
"title": "Automatic thresholding from the gradients of region boundaries.",
"abstract": "We present an approach for automatic threshold segmentation of greyscale images. The procedure is inspired by a reinterpretation of the strategy observed in human operators when adjusting thresholds manually and interactively by means of 'slider' controls. The approach translates into two methods. The first one is suitable for single or multiple global thresholds to be applied globally to images and consists of searching for a threshold value that generates a phase whose boundary coincides with the largest gradients in the original image. The second method is a variation, implemented to operate on the discrete connected components of the thresholded phase (i.e. the binary regions) independently. Consequently, this becomes an adaptive local threshold procedure, which operates relative to regions, rather than to local image subsets as is the case in most local thresholding methods previously published. Adding constraints for specifying certain classes of expected objects in the images can improve the output of the method over the traditional 'segmenting first, then classify' approach."
},
{
"pmid": "27212078",
"title": "Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis.",
"abstract": "Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce 'deep learning' as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that 'deep learning' holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging."
},
{
"pmid": "28653016",
"title": "Gland segmentation in prostate histopathological images.",
"abstract": "Glandular structural features are important for the tumor pathologist in the assessment of cancer malignancy of prostate tissue slides. The varying shapes and sizes of glands combined with the tedious manual observation task can result in inaccurate assessment. There are also discrepancies and low-level agreement among pathologists, especially in cases of Gleason pattern 3 and pattern 4 prostate adenocarcinoma. An automated gland segmentation system can highlight various glandular shapes and structures for further analysis by the pathologist. These objective highlighted patterns can help reduce the assessment variability. We propose an automated gland segmentation system. Forty-three hematoxylin and eosin-stained images were acquired from prostate cancer tissue slides and were manually annotated for gland, lumen, periacinar retraction clefting, and stroma regions. Our automated gland segmentation system was trained using these manual annotations. It identifies these regions using a combination of pixel and object-level classifiers by incorporating local and spatial information for consolidating pixel-level classification results into object-level segmentation. Experimental results show that our method outperforms various texture and gland structure-based gland segmentation algorithms in the literature. Our method has good performance and can be a promising tool to help decrease interobserver variability among pathologists."
},
{
"pmid": "17948727",
"title": "Multifeature prostate cancer diagnosis and Gleason grading of histological images.",
"abstract": "We present a study of image features for cancer diagnosis and Gleason grading of the histological images of prostate. In diagnosis, the tissue image is classified into the tumor and nontumor classes. In Gleason grading, which characterizes tumor aggressiveness, the image is classified as containing a low- or high-grade tumor. The image sets used in this paper consisted of 367 and 268 color images for the diagnosis and Gleason grading problems, respectively, and were captured from representative areas of hematoxylin and eosin-stained tissue retrieved from tissue microarray cores or whole sections. The primary contribution of this paper is to aggregate color, texture, and morphometric cues at the global and histological object levels for classification. Features representing different visual cues were combined in a supervised learning framework. We compared the performance of Gaussian, k-nearest neighbor, and support vector machine classifiers together with the sequential forward feature selection algorithm. On diagnosis, using a five-fold cross-validation estimate, an accuracy of 96.7% was obtained. On Gleason grading, the achieved accuracy of classification into low- and high-grade classes was 81.0%."
},
{
"pmid": "25649671",
"title": "The past, present, and future of cancer incidence in the United States: 1975 through 2020.",
"abstract": "BACKGROUND\nThe overall age-standardized cancer incidence rate continues to decline whereas the number of cases diagnosed each year increases. Predicting cancer incidence can help to anticipate future resource needs, evaluate primary prevention strategies, and inform research.\n\n\nMETHODS\nSurveillance, Epidemiology, and End Results data were used to estimate the number of cancers (all sites) resulting from changes in population risk, age, and size. The authors projected to 2020 nationwide age-standardized incidence rates and cases (including the top 23 cancers).\n\n\nRESULTS\nSince 1975, incident cases increased among white individuals, primarily caused by an aging white population, and among black individuals, primarily caused by an increasing black population. Between 2010 and 2020, it is expected that overall incidence rates (proxy for risk) will decrease slightly among black men and stabilize in other groups. By 2020, the authors predict annual cancer cases (all races, all sites) to increase among men by 24.1% (-3.2% risk and 27.3% age/growth) to >1 million cases, and by 20.6% among women (1.2% risk and 19.4% age/growth) to >900,000 cases. The largest increases are expected for melanoma (white individuals); cancers of the prostate, kidney, liver, and urinary bladder in males; and the lung, breast, uterus, and thyroid in females.\n\n\nCONCLUSIONS\nOverall, the authors predict cancer incidence rates/risk to stabilize for the majority of the population; however, they expect the number of cancer cases to increase by >20%. A greater emphasis on primary prevention and early detection is needed to counter the effect of an aging and growing population on the burden of cancer."
}
] |
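As referenced in the record above, the following is a minimal, generic sketch of stain separation followed by morphological clean-up and per-gland measurement, in the spirit of the glandular segmentation pipelines surveyed there. It is not the authors' Picrosirius red-hematoxylin method: scikit-image's built-in Ruifrok-Johnston H&E(-DAB) deconvolution stands in for the paper's blind color decomposition, the file name tissue_tile.png is hypothetical, and the thresholds and structuring-element sizes are illustrative assumptions.

```python
# Generic stain-separation + mathematical-morphology sketch for gland candidates.
# NOT the Picrosirius red-hematoxylin algorithm of the record above; the built-in
# H&E(-DAB) deconvolution, the file name, and all thresholds are assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage import color, filters, io, measure, morphology, util

rgb = util.img_as_float(io.imread("tissue_tile.png"))[..., :3]   # hypothetical RGB tile
hed = color.rgb2hed(rgb)                      # Ruifrok-Johnston stain deconvolution
eosin = hed[..., 1]                           # eosin-like channel, roughly tracking stroma

stroma = eosin > filters.threshold_otsu(eosin)            # crude stroma mask
candidates = ~stroma                                       # gland candidates = non-stromal pixels
candidates = morphology.binary_closing(candidates, morphology.disk(3))
candidates = ndi.binary_fill_holes(candidates)             # include lumina inside glands
candidates = morphology.remove_small_objects(candidates, min_size=500)

labels = measure.label(candidates)
for region in measure.regionprops(labels):
    # Downstream grading would inspect each candidate gland's morphology.
    print(region.label, region.area, round(region.eccentricity, 2))
```

A stain that cleanly separates stroma is what makes this border-inward strategy viable; with H&E alone the stroma mask above would be unreliable, which is the point the record makes.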
Nature Communications | 31308376 | PMC6629670 | 10.1038/s41467-019-11012-3 | Weakly supervised classification of aortic valve malformations using unlabeled cardiac MRI sequences | Biomedical repositories such as the UK Biobank provide increasing access to prospectively collected cardiac imaging; however, these data are unlabeled, which creates barriers to their use in supervised machine learning. We develop a weakly supervised deep learning model for classification of aortic valve malformations using up to 4,000 unlabeled cardiac MRI sequences. Instead of requiring highly curated training data, weak supervision relies on noisy heuristics defined by domain experts to programmatically generate large-scale, imperfect training labels. For aortic valve classification, models trained with imperfect labels substantially outperform a supervised model trained on hand-labeled MRIs. In an orthogonal validation experiment using health outcomes data, our model identifies individuals with a 1.8-fold increase in risk of a major adverse cardiac event. This work formalizes a deep learning baseline for aortic valve classification and outlines a general strategy for using weak supervision to train machine learning models using unlabeled medical images at scale. | Related work In medical imaging, weak supervision refers to a broad range of techniques using limited, indirect, or noisy labels. Multiple instance learning (MIL) is one common weak supervision approach in medical images [55]. MIL approaches assume a label is defined over a bag of unlabeled instances, such as an image-level label being used to supervise a segmentation task. Xu et al. [56] simultaneously performed binary classification and segmentation for histopathology images using a variant of MIL, where image-level labels are used to supervise both image classification and a segmentation subtask. ChestX-ray8 [30] was used in Li et al. [57] to jointly perform classification and localization using a small number of weakly labeled examples. Patient radiology reports and other medical record data are frequently used to generate noisy labels for imaging tasks [30,58-60]. Weak supervision shares similarities with semi-supervised learning [61], which enables training models using a small labeled dataset combined with large amounts of unlabeled data. The primary difference is how the structure of the unlabeled data is specified in the model. In semi-supervised learning, we make smoothness assumptions and extract insights on structure directly from unlabeled data using task-agnostic properties such as distance metrics and entropy constraints [62]. Weak supervision, in contrast, relies on directly injecting domain knowledge into the model to incorporate the underlying structure of the unlabeled data. In many cases, these sources of domain knowledge are readily available in existing knowledge bases, in indirectly labeled data such as patient notes, or in weak classification models and heuristics. | [
"15710758",
"20579534",
"28490615",
"27898976",
"28117445",
"24553384",
"30828647",
"29391769",
"18506017",
"26864668",
"27643430",
"27282895",
"28720123",
"28641372",
"10403851",
"23714095",
"19234262",
"24451178",
"28299607",
"22705287",
"22274839",
"12180402",
"25024921",
"9377276",
"20858131",
"28092576",
"24637156"
] | [
{
"pmid": "15710758",
"title": "Frequency by decades of unicuspid, bicuspid, and tricuspid aortic valves in adults having isolated aortic valve replacement for aortic stenosis, with or without associated aortic regurgitation.",
"abstract": "BACKGROUND\nAortic valve stenosis (with or without aortic regurgitation and without associated mitral stenosis) in adults in the Western world has been considered in recent years to most commonly be the result of degenerative or atherosclerotic disease.\n\n\nMETHODS AND RESULTS\nWe examined operatively excised, stenotic aortic valves from 932 patients aged 26 to 91 years (mean+/-SD, 70+/-12), and none had associated mitral valve replacement or evidence of mitral stenosis: A total of 504 (54%) had congenitally malformed valves (unicuspid in 46 [unicommissural in 42; acommissural in 4] and bicuspid in 458); 417 (45%) had tricuspid valves (either absent or minimal commissural fusion); and 11 (1%) had valves of undetermined type. It is likely that the latter 11 valves also had been congenitally malformed. Of the 584 men, 343 (59%) had either a unicuspid or a bicuspid valve; of the 348 women, 161 (46%) had either a unicuspid or a bicuspid aortic valve.\n\n\nCONCLUSIONS\nThe data from this large study of adults having isolated aortic valve replacement for aortic stenosis (with or without associated aortic regurgitation) and without associated mitral stenosis or mitral valve replacement strongly suggest that an underlying congenitally malformed valve, at least in men, is more common than a tricuspid aortic valve."
},
{
"pmid": "20579534",
"title": "Bicuspid aortic valve disease.",
"abstract": "Bicuspid aortic valve (BAV) disease is the most common congenital cardiac defect. While the BAV can be found in isolation, it is often associated with other congenital cardiac lesions. The most frequent associated finding is dilation of the proximal ascending aorta secondary to abnormalities of the aortic media. Changes in the aortic media are present independent of whether the valve is functionally normal, stenotic, or incompetent. Although symptoms often manifest in adulthood, there is a wide spectrum of presentations ranging from severe disease detected in utero to asymptomatic disease in old age. Complications can include aortic valve stenosis or incompetence, endocarditis, aortic aneurysm formation, and aortic dissection. Despite the potential complications, 2 large contemporary series have demonstrated that life expectancy in adults with BAV disease is not shortened when compared with the general population. Because BAV is a disease of both the valve and the aorta, surgical decision making is more complicated, and many undergoing aortic valve replacement will also need aortic root surgery. With or without surgery, patients with BAV require continued surveillance. Recent studies have improved our understanding of the genetics, the pathobiology, and the clinical course of the disease, but questions are still unanswered. In the future, medical treatment strategies and timing of interventions will likely be refined. This review summarizes our current understanding of the pathology, genetics, and clinical aspects of BAV disease with a focus on BAV disease in adulthood."
},
{
"pmid": "28490615",
"title": "Contemporary natural history of bicuspid aortic valve disease: a systematic review.",
"abstract": "We performed a systematic review of the current state of the literature regarding the natural history and outcomes of bicuspid aortic valve (BAV). PubMed and the reference lists of the included articles were searched for relevant studies reporting on longitudinal follow-up of BAV cohorts (mean follow-up ≥2 years). Studies limited to patients undergoing surgical interventions were excluded. 13 studies (11 502 patients with 2-16 years of follow-up) met the inclusion criteria. There was a bimodal age distribution (30-40 vs ≥50 years), with a 3:1 male to female ratio. Complications included moderate to severe aortic regurgitation (prevalence 13%-30%), moderate to severe aortic stenosis (12%-37%), infective endocarditis (2%-5%) and aortic dilatation (20%-40%). Aortic dissection or rupture was rare, occurring in 38 patients (0.4%, 27/6446 in native BAV and 11/2232 in post). With current aggressive surveillance and prophylactic surgical interventions, survival in three out of four studies was similar to that of a matched general population. In this systematic review, valvular dysfunction warranting surgical intervention in patients with BAV were common, aortic dissection was rare and, with the current management approach, survival was similar to that of the general population."
},
{
"pmid": "27898976",
"title": "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.",
"abstract": "Importance\nDeep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation.\n\n\nObjective\nTo apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs.\n\n\nDesign and Setting\nA specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency.\n\n\nExposure\nDeep learning-trained algorithm.\n\n\nMain Outcomes and Measures\nThe sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity.\n\n\nResults\nThe EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%.\n\n\nConclusions and Relevance\nIn this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment."
},
{
"pmid": "28117445",
"title": "Dermatologist-level classification of skin cancer with deep neural networks.",
"abstract": "Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images-two orders of magnitude larger than previous datasets-consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13) and can therefore potentially provide low-cost universal access to vital diagnostic care."
},
{
"pmid": "30828647",
"title": "Fast and accurate view classification of echocardiograms using deep learning.",
"abstract": "Echocardiography is essential to cardiology. However, the need for human interpretation has limited echocardiography's full potential for precision medicine. Deep learning is an emerging tool for analyzing images but has not yet been widely applied to echocardiograms, partly due to their complex multi-view format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize these views. We trained a convolutional neural network to simultaneously classify 15 standard views (12 video, 3 still), based on labeled still images and videos from 267 transthoracic echocardiograms that captured a range of real-world clinical variation. Our model classified among 12 video views with 97.8% overall test accuracy without overfitting. Even on single low-resolution images, accuracy among 15 views was 91.7% vs. 70.2-84.0% for board-certified echocardiographers. Data visualization experiments showed that the model recognizes similarities among related views and classifies using clinically relevant image features. Our results provide a foundation for artificial intelligence-assisted echocardiographic interpretation."
},
{
"pmid": "29391769",
"title": "Inferring Generative Model Structure with Static Analysis.",
"abstract": "Obtaining enough labeled data to robustly train complex discriminative models is a major bottleneck in the machine learning pipeline. A popular solution is combining multiple sources of weak supervision using generative models. The structure of these models affects training label quality, but is difficult to learn without any ground truth labels. We instead rely on these weak supervision sources having some structure by virtue of being encoded programmatically. We present Coral, a paradigm that infers generative model structure by statically analyzing the code for these heuristics, thus reducing the data required to learn structure significantly. We prove that Coral's sample complexity scales quasilinearly with the number of heuristics and number of relations found, improving over the standard sample complexity, which is exponential in n for identifying nth degree relations. Experimentally, Coral matches or outperforms traditional structure learning approaches by up to 3.81 F1 points. Using Coral to model dependencies instead of assuming independence results in better performance than a fully supervised model by 3.07 accuracy points when heuristics are used to label radiology data without ground truth labels."
},
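For orientation only, here is the naive baseline that work like Coral improves upon: combining several programmatic labeling heuristics by an unweighted majority vote, which implicitly assumes the heuristics are independent. The three toy heuristics and the feature layout are invented for this example; none of this reflects Coral's static-analysis machinery.

```python
# Simplified stand-in for combining programmatic labeling heuristics
# (NOT the Coral algorithm): unweighted majority vote assuming independence.
import numpy as np

def heuristic_votes(x):
    """Three toy heuristics emitting +1, -1, or 0 (abstain) for a feature vector x."""
    h1 = 1 if x[0] > 0.5 else -1
    h2 = 1 if x[1] + x[2] > 1.0 else -1
    h3 = 0 if abs(x[0] - x[1]) < 0.1 else (1 if x[0] > x[1] else -1)
    return np.array([h1, h2, h3])

def majority_label(x):
    s = heuristic_votes(x).sum()
    return 1 if s > 0 else (-1 if s < 0 else 0)   # 0 = unresolved / abstain

rng = np.random.default_rng(1)
X = rng.random((5, 3))
print([majority_label(x) for x in X])
```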
{
"pmid": "18506017",
"title": "Natural history of asymptomatic patients with normally functioning or minimally dysfunctional bicuspid aortic valve in the community.",
"abstract": "BACKGROUND\nBicuspid aortic valve is frequent and is reported to cause numerous complications, but the clinical outcome of patients diagnosed with normal or mildly dysfunctional valve is undefined.\n\n\nMETHODS AND RESULTS\nIn 212 asymptomatic community residents from Olmsted County, Minn (age, 32+/-20 years; 65% male), bicuspid aortic valve was diagnosed between 1980 and 1999 with ejection fraction > or =50% and aortic regurgitation or stenosis, absent or mild. Aortic valve degeneration at diagnosis was scored echocardiographically for calcification, thickening, and mobility reduction (0 to 3 each), with scores ranging from 0 to 9. At diagnosis, ejection fraction was 63+/-5% and left ventricular diameter was 48+/-9 mm. Survival 20 years after diagnosis was 90+/-3%, identical to the general population (P=0.72). Twenty years after diagnosis, heart failure, new cardiac symptoms, and cardiovascular medical events occurred in 7+/-2%, 26+/-4%, and 33+/-5%, respectively. Twenty years after diagnosis, aortic valve surgery, ascending aortic surgery, or any cardiovascular surgery was required in 24+/-4%, 5+/-2%, and 27+/-4% at a younger age than the general population (P<0.0001). No aortic dissection occurred. Thus, cardiovascular medical or surgical events occurred in 42+/-5% 20 years after diagnosis. Independent predictors of cardiovascular events were age > or =50 years (risk ratio, 3.0; 95% confidence interval, 1.5 to 5.7; P<0.01) and valve degeneration at diagnosis (risk ratio, 2.4; 95% confidence interval, 1.2 to 4.5; P=0.016; >70% events at 20 years). Baseline ascending aorta > or =40 mm independently predicted surgery for aorta dilatation (risk ratio, 10.8; 95% confidence interval, 1.8 to 77.3; P<0.01).\n\n\nCONCLUSIONS\nIn the community, asymptomatic patients with bicuspid aortic valve and no or minimal hemodynamic abnormality enjoy excellent long-term survival but incur frequent cardiovascular events, particularly with progressive valve dysfunction. Echocardiographic valve degeneration at diagnosis separates higher-risk patients who require regular assessment from lower-risk patients who require only episodic follow-up."
},
{
"pmid": "26864668",
"title": "Coronary anatomy as related to bicuspid aortic valve morphology.",
"abstract": "OBJECTIVE\nVariable coronary anatomy has been described in patients with bicuspid aortic valves (BAVs). This was never specified to BAV morphology, and prognostic relevance of coronary vessel dominance in this patient group is unclear. The purpose of this study was to evaluate valve morphology in relation to coronary artery anatomy and outcome in patients with isolated BAV and with associated aortic coarctation (CoA).\n\n\nMETHODS\nCoronary anatomy was evaluated in 186 patients with BAV (141 men (79%), 51±14 years) by CT and invasive coronary angiography. Correlation of coronary anatomy was made with BAV morphology and coronary events.\n\n\nRESULTS\nStrictly bicuspid valves (without raphe) with left-right cusp fusion (type 1B) had more left dominant coronary systems compared with BAVs with left-right cusp fusion with a raphe (type 1A) (48% vs. 26%, p=0.047) and showed more separate ostia (28% vs. 9%, p=0.016). Type 1B BAVs had more coronary artery disease than patients with type 1A BAV (36% vs. 19%, p=0.047). More left dominance was seen in BAV patients with CoA than in patients without (65% vs. 24%, p<0.05).\n\n\nCONCLUSIONS\nThe incidence of a left dominant coronary artery system and separate ostia was significantly related to BAVs with left-right fusion without a raphe (type 1B). These patients more often had significant coronary artery disease. In patients with BAV and CoA, left dominancy is more common."
},
{
"pmid": "27643430",
"title": "Multimodal population brain imaging in the UK Biobank prospective epidemiological study.",
"abstract": "Medical imaging has enormous potential for early disease prediction, but is impeded by the difficulty and expense of acquiring data sets before symptom onset. UK Biobank aims to address this problem directly by acquiring high-quality, consistently acquired imaging data from 100,000 predominantly healthy participants, with health outcomes being tracked over the coming decades. The brain imaging includes structural, diffusion and functional modalities. Along with body and cardiac imaging, genetics, lifestyle measures, biological phenotyping and health records, this imaging is expected to enable discovery of imaging markers of a broad range of diseases at their earliest stages, as well as provide unique insight into disease mechanisms. We describe UK Biobank brain imaging and present results derived from the first 5,000 participants' data release. Although this covers just 5% of the ultimate cohort, it has already yielded a rich range of associations between brain imaging and other measures collected by UK Biobank."
},
{
"pmid": "27282895",
"title": "Aortic Dissection in Patients With Genetically Mediated Aneurysms: Incidence and Predictors in the GenTAC Registry.",
"abstract": "BACKGROUND\nAortic dissection (AoD) is a serious complication of thoracic aortic aneurysm (TAA). Relative risk for AoD in relation to TAA etiology, incidence, and pattern after prophylactic TAA surgery are poorly understood.\n\n\nOBJECTIVES\nThis study sought to determine the incidence, pattern, and relative risk for AoD among patients with genetically associated TAA.\n\n\nMETHODS\nThe population included adult GenTAC participants without AoD at baseline. Standardized core laboratory tests classified TAA etiology and measured aortic size. Follow-up was performed for AoD.\n\n\nRESULTS\nBicuspid aortic valve (BAV) (39%) and Marfan syndrome (MFS) (22%) were the leading diagnoses in the studied GenTAC participants (n = 1,991). AoD occurred in 1.6% over 3.6 ± 2.0 years; 61% of AoD occurred in patients with MFS. Cumulative AoD incidence was 6-fold higher among patients with MFS (4.5%) versus others (0.7%; p < 0.001). MFS event rates were similarly elevated versus those in patients with BAV (0.3%; p < 0.001). AoD originated in the distal arch or descending aorta in 71%; 52% of affected patients, including 68% with MFS, had previously undergone aortic grafting. In patients with proximal aortic surgery, distal aortic size (descending thoracic, abdominal aorta) was larger among patients with AoD versus those without AoD (both p < 0.05), whereas the ascending aorta size was similar. Conversely, in patients without previous surgery, aortic root size was greater in patients with subsequent AoD (p < 0.05), whereas distal aortic segments were of similar size. MFS (odds ratio: 7.42; 95% confidence interval: 3.43 to 16.82; p < 0.001) and maximal aortic size (1.86 per cm; 95% confidence interval: 1.26 to 2.67; p = 0.001) were independently associated with AoD. Only 4 of 31 (13%) patients with AoD had pre-dissection images that fulfilled size criteria for prophylactic TAA surgery at a subsequent AoD site.\n\n\nCONCLUSIONS\nAmong patients with genetically associated TAA, MFS augments risk for AoD even after TAA grafting. Although increased aortic size is a risk factor for subsequent AoD, events typically occur below established thresholds for prophylactic TAA repair."
},
{
"pmid": "28720123",
"title": "Cardiovascular magnetic resonance in an adult human population: serial observations from the multi-ethnic study of atherosclerosis.",
"abstract": "The Multi-Ethnic Study of Atherosclerosis (MESA) is the first large-scale multi-ethnic population study in the U.S. to use advanced cardiovascular magnetic resonance (CMR) imaging. MESA participants were free of cardiovascular disease at baseline between 2000 and 2002, and were followed up between 2009 and 2011 with repeated CMR examinations as part of MESA. CMR allows the clinician to visualize and accurately quantify volume and dimensions of all four cardiac chambers; measure systolic and diastolic ventricular function; assess myocardial fibrosis; assess vessel lumen size, vessel wall morphology, and vessel stiffness. CMR has a number of advantages over other imaging modalities such as echocardiography, computed tomography, and invasive angiography, and has been proposed as a diagnostic strategy for high-risk populations. MESA has been extensively evaluating CMR imaging biomarkers, as markers of subclinical disease, in the last 15 years for low-risk populations. On a more practical level, some of the imaging biomarkers developed and studied are translatable to at-risk populations. In this review, we discuss the progression of subclinical cardiovascular disease and the mechanisms responsible for the transition to symptomatic clinical outcomes based on our findings from MESA."
},
{
"pmid": "28641372",
"title": "Comparison of Sociodemographic and Health-Related Characteristics of UK Biobank Participants With Those of the General Population.",
"abstract": "The UK Biobank cohort is a population-based cohort of 500,000 participants recruited in the United Kingdom (UK) between 2006 and 2010. Approximately 9.2 million individuals aged 40-69 years who lived within 25 miles (40 km) of one of 22 assessment centers in England, Wales, and Scotland were invited to enter the cohort, and 5.5% participated in the baseline assessment. The representativeness of the UK Biobank cohort was investigated by comparing demographic characteristics between nonresponders and responders. Sociodemographic, physical, lifestyle, and health-related characteristics of the cohort were compared with nationally representative data sources. UK Biobank participants were more likely to be older, to be female, and to live in less socioeconomically deprived areas than nonparticipants. Compared with the general population, participants were less likely to be obese, to smoke, and to drink alcohol on a daily basis and had fewer self-reported health conditions. At age 70-74 years, rates of all-cause mortality and total cancer incidence were 46.2% and 11.8% lower, respectively, in men and 55.5% and 18.1% lower, respectively, in women than in the general population of the same age. UK Biobank is not representative of the sampling population; there is evidence of a \"healthy volunteer\" selection bias. Nonetheless, valid assessment of exposure-disease relationships may be widely generalizable and does not require participants to be representative of the population at large."
},
{
"pmid": "10403851",
"title": "Association of aortic-valve sclerosis with cardiovascular mortality and morbidity in the elderly.",
"abstract": "BACKGROUND\nAlthough aortic-valve stenosis is clearly associated with adverse cardiovascular outcomes, it is unclear whether valve sclerosis increases the risk of cardiovascular events.\n\n\nMETHODS\nWe assessed echocardiograms obtained at base line from 5621 men and women 65 years of age or older who were enrolled in a population-based prospective study. On echocardiography, the aortic valve was normal in 70 percent (3919 subjects), sclerotic without outflow obstruction in 29 percent (1610), and stenotic in 2 percent (92). The subjects were followed for a mean of 5.0 years to assess the risk of death from any cause and of death from cardiovascular causes. Cardiovascular morbidity was defined as new episodes of myocardial infarction, angina pectoris, congestive heart failure, or stroke.\n\n\nRESULTS\nThere was a stepwise increase in deaths from any cause (P for trend, <0.001) and deaths from cardiovascular causes (P for trend, <0.001) with increasing aortic-valve abnormality; the respective rates were 14.9 and 6.1 percent in the group with normal aortic valves, 21.9 and 10.1 percent in the group with aortic sclerosis, and 41.3 and 19.6 percent in the group with aortic stenosis. The relative risk of death from cardiovascular causes among subjects without coronary heart disease at base line was 1.66 (95 percent confidence interval, 1.23 to 2.23) for those with sclerotic valves as compared with those with normal valves, after adjustment for age and sex. The relative risk remained elevated after further adjustment for clinical factors associated with sclerosis (relative risk, 1.52; 95 percent confidence interval, 1.12 to 2.05). The relative risk of myocardial infarction was 1.40 (95 percent confidence interval, 1.07 to 1.83) among subjects with aortic sclerosis, as compared with those with normal aortic valves.\n\n\nCONCLUSIONS\nAortic sclerosis is common in the elderly and is associated with an increase of approximately 50 percent in the risk of death from cardiovascular causes and the risk of myocardial infarction, even in the absence of hemodynamically significant obstruction of left ventricular outflow."
},
{
"pmid": "23714095",
"title": "Imaging in population science: cardiovascular magnetic resonance in 100,000 participants of UK Biobank - rationale, challenges and approaches.",
"abstract": "UK Biobank is a prospective cohort study with 500,000 participants aged 40 to 69. Recently an enhanced imaging study received funding. Cardiovascular magnetic resonance (CMR) will be part of a multi-organ, multi-modality imaging visit in 3-4 dedicated UK Biobank imaging centres that will acquire and store imaging data from 100,000 participants (subject to successful piloting). In each of UK Biobank's dedicated bespoke imaging centres, it is proposed that 15-20 participants will undergo a 2 to 3 hour visit per day, seven days a week over a period of 5-6 years. The imaging modalities will include brain MRI at 3 Tesla, CMR and abdominal MRI at 1.5 Tesla, carotid ultrasound and DEXA scans using carefully selected protocols. We reviewed the rationale, challenges and proposed approaches for concise phenotyping using CMR on such a large scale. Here, we discuss the benefits of this imaging study and review existing and planned population based cardiovascular imaging in prospective cohort studies. We will evaluate the CMR protocol, feasibility, process optimisation and costs. Procedures for incidental findings, quality control and data processing and analysis are also presented. As is the case for all other data in the UK Biobank resource, this database of images and related information will be made available through UK Biobank's Access Procedures to researchers (irrespective of their country of origin and whether they are academic or commercial) for health-related research that is in the public interest."
},
{
"pmid": "19234262",
"title": "Cardiovascular applications of phase-contrast MRI.",
"abstract": "OBJECTIVE\nThe purpose of this study was to review and illustrate various clinical applications of phase-contrast MRI.\n\n\nCONCLUSION\nCardiac MRI has emerged as a valuable noninvasive clinical tool for evaluation of the cardiovascular system. Phase-contrast MRI has a variety of established applications in quantifying blood flow and velocity and several emerging applications, such as evaluation of diastolic function and myocardial dyssynchrony."
},
{
"pmid": "24451178",
"title": "Cardiac magnetic resonance imaging of congenital bicuspid aortic valves and associated aortic pathologies in adults.",
"abstract": "AIMS\nBicuspid aortic valve (BAV) represents the most frequent congenital cardiac abnormality resulting in premature valvular degeneration and aortic dilatation. In a large series of consecutive patients, we evaluated the distribution of BAV types and the associated valvular and aortic abnormalities.\n\n\nMETHODS AND RESULTS\nWe investigated 266 patients (58 ± 14 years) with BAV using a 1.5 T cardiac magnetic resonance (CMR) scanner. Valve morphology was described according to the Sievers classification. The aortic valve orifice area, aortic regurgitation (AR) fraction, and aortic dilation were quantified. Two hundred and forty-two data sets were available for analysis; 24% had BAV without a valvular lesion. The predominant valvular lesion was aortic stenosis (AS) with 51%. Lone AR was found in 17%. A combined lesion of AS and AR was found in 9%. Those with AS were older than the overall average (64 ± 12 vs. 57 ± 15 years, P < 0.001). The patients with AR and those without valvular abnormality were younger than average (49 ± 13 and 50 ± 12 years vs. 57 ± 15 years, P < 0.01 respectively). Comparing two observers Kappa coefficient was 0.77 for differentiation of six valve morphologies and 0.80 for the differentiation of bicuspid and tricuspid valve. Aortic dilatation was found in 39% of cases with no discernible preference for any specific BAV-type and mainly affecting the ascending aorta.\n\n\nCONCLUSION\nCMR can non-invasively differentiate various morphologies in BAV with low inter-observer variability. Valvular pathologies vary across age. Aortic dilatation is frequent in BAV independent from valvular morphology or lesion. In future CMR might help to guide management in patients with BAV."
},
{
"pmid": "28299607",
"title": "Comprehensive 4-stage categorization of bicuspid aortic valve leaflet morphology by cardiac MRI in 386 patients.",
"abstract": "Bicuspid aortic valve (BAV) disease is heterogeneous and related to valve dysfunction and aortopathy. Appropriate follow up and surveillance of patients with BAV may depend on correct phenotypic categorization. There are multiple classification schemes, however a need exists to comprehensively capture commissure fusion, leaflet asymmetry, and valve orifice orientation. Our aim was to develop a BAV classification scheme for use at MRI to ascertain the frequency of different phenotypes and the consistency of BAV classification. The BAV classification scheme builds on the Sievers surgical BAV classification, adding valve orifice orientation, partial leaflet fusion and leaflet asymmetry. A single observer successfully applied this classification to 386 of 398 Cardiac MRI studies. Repeatability of categorization was ascertained with intraobserver and interobserver kappa scores. Sensitivity and specificity of MRI findings was determined from operative reports, where available. Fusion of the right and left leaflets accounted for over half of all cases. Partial leaflet fusion was seen in 46% of patients. Good interobserver agreement was seen for orientation of the valve opening (κ = 0.90), type (κ = 0.72) and presence of partial fusion (κ = 0.83, p < 0.0001). Retrospective review of operative notes showed sensitivity and specificity for orientation (90, 93%) and for Sievers type (73, 87%). The proposed BAV classification schema was assessed by MRI for its reliability to classify valve morphology in addition to illustrating the wide heterogeneity of leaflet size, orifice orientation, and commissural fusion. The classification may be helpful in further understanding the relationship between valve morphology, flow derangement and aortopathy."
},
{
"pmid": "22705287",
"title": "Strategies for improved interpretation of computer-aided detections for CT colonography utilizing distributed human intelligence.",
"abstract": "Computer-aided detection (CAD) systems have been shown to improve the diagnostic performance of CT colonography (CTC) in the detection of premalignant colorectal polyps. Despite the improvement, the overall system is not optimal. CAD annotations on true lesions are incorrectly dismissed, and false positives are misinterpreted as true polyps. Here, we conduct an observer performance study utilizing distributed human intelligence in the form of anonymous knowledge workers (KWs) to investigate human performance in classifying polyp candidates under different presentation strategies. We evaluated 600 polyp candidates from 50 patients, each case having at least one polyp ≥6 mm, from a large database of CTC studies. Each polyp candidate was labeled independently as a true or false polyp by 20 KWs and an expert radiologist. We asked each labeler to determine whether the candidate was a true polyp after looking at a single 3D-rendered image of the candidate and after watching a video fly-around of the candidate. We found that distributed human intelligence improved significantly when presented with the additional information in the video fly-around. We noted that performance degraded with increasing interpretation time and increasing difficulty, but distributed human intelligence performed better than our CAD classifier for \"easy\" and \"moderate\" polyp candidates. Further, we observed numerous parallels between the expert radiologist and the KWs. Both showed similar improvement in classification moving from single-image to video interpretation. Additionally, difficulty estimates obtained from the KWs using an expectation maximization algorithm correlated well with the difficulty rating assigned by the expert radiologist. Our results suggest that distributed human intelligence is a powerful tool that will aid in the development of CAD for CTC."
},
{
"pmid": "22274839",
"title": "Distributed human intelligence for colonic polyp classification in computer-aided detection for CT colonography.",
"abstract": "PURPOSE\nTo assess the diagnostic performance of distributed human intelligence for the classification of polyp candidates identified with computer-aided detection (CAD) for computed tomographic (CT) colonography.\n\n\nMATERIALS AND METHODS\nThis study was approved by the institutional Office of Human Subjects Research. The requirement for informed consent was waived for this HIPAA-compliant study. CT images from 24 patients, each with at least one polyp of 6 mm or larger, were analyzed by using CAD software to identify 268 polyp candidates. Twenty knowledge workers (KWs) from a crowdsourcing platform labeled each polyp candidate as a true or false polyp. Two trials involving 228 KWs were conducted to assess reproducibility. Performance was assessed by comparing the area under the receiver operating characteristic curve (AUC) of KWs with the AUC of CAD for polyp classification.\n\n\nRESULTS\nThe detection-level AUC for KWs was 0.845 ± 0.045 (standard error) in trial 1 and 0.855 ± 0.044 in trial 2. These were not significantly different from the AUC for CAD, which was 0.859 ± 0.043. When polyp candidates were stratified by difficulty, KWs performed better than CAD on easy detections; AUCs were 0.951 ± 0.032 in trial 1, 0.966 ± 0.027 in trial 2, and 0.877 ± 0.048 for CAD (P = .039 for trial 2). KWs who participated in both trials showed a significant improvement in performance going from trial 1 to trial 2; AUCs were 0.759 ± 0.052 in trial 1 and 0.839 ± 0.046 in trial 2 (P = .041).\n\n\nCONCLUSION\nThe performance of distributed human intelligence is not significantly different from that of CAD for colonic polyp classification."
},
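A rough sketch of the kind of comparison reported above: computing AUC for a classifier-style score and for a crowd score formed by averaging many noisy binary labels. All quantities below (the CAD score, the 20 simulated workers, their accuracy rates) are made up for illustration and do not reproduce the study's data.

```python
# Illustrative AUC comparison (assumed data): CAD-style score versus the mean
# vote of simulated knowledge workers on the same polyp candidates.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 268
y = rng.integers(0, 2, size=n)                       # true polyp vs. false positive

cad_score = y * 0.8 + rng.normal(0, 0.6, n)          # hypothetical CAD classifier output
# 20 noisy binary labels per candidate, aggregated by their mean ("crowd score")
kw_labels = (rng.random((20, n)) < (0.25 + 0.55 * y)).astype(float)
crowd_score = kw_labels.mean(axis=0)

print("CAD AUC:  ", round(roc_auc_score(y, cad_score), 3))
print("Crowd AUC:", round(roc_auc_score(y, crowd_score), 3))
```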
{
"pmid": "12180402",
"title": "Training products of experts by minimizing contrastive divergence.",
"abstract": "It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual \"expert\" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called \"contrastive divergence\" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data."
},
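The contrastive-divergence recipe is concrete enough to sketch directly. The snippet below performs CD-1 updates for a tiny binary restricted Boltzmann machine, one common product-of-experts instance; layer sizes, learning rate, and the synthetic data are arbitrary choices, not values from the paper.

```python
# Minimal CD-1 update for a tiny binary RBM, following the standard
# contrastive-divergence recipe; shapes and rates are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One contrastive-divergence (k=1) parameter update from a batch of visible vectors."""
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)                       # p(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden states
    pv1 = sigmoid(h0 @ W.T + b_v)                     # reconstruction p(v=1 | h0)
    ph1 = sigmoid(pv1 @ W + b_h)                      # p(h=1 | reconstruction)
    W   += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)  # approximate likelihood gradient
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

data = (rng.random((32, n_vis)) < 0.3).astype(float)
for _ in range(100):
    cd1_step(data)
print("learned weight range:", W.min().round(3), W.max().round(3))
```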
{
"pmid": "25024921",
"title": "scikit-image: image processing in Python.",
"abstract": "scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org."
},
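A short usage sketch of the library described above, using only functions from the public scikit-image API (a bundled test image, Sobel edge detection, Otsu thresholding, and connected-component labelling).

```python
# Short scikit-image usage sketch on a bundled sample image (illustrative only;
# any 2-D grayscale array would do).
from skimage import data, filters, measure

img = data.camera()                      # built-in 512x512 grayscale test image
edges = filters.sobel(img)               # gradient-magnitude edge map
thresh = filters.threshold_otsu(img)     # global Otsu threshold
mask = img > thresh
labels = measure.label(mask)             # connected-component labelling

print(img.shape, edges.dtype, "Otsu threshold:", thresh, "regions:", labels.max())
```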
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
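To make the gating mechanism concrete, here is one LSTM cell step written out in NumPy with the standard input, forget, and output gates; the weight shapes and the random sequence are illustrative only, and a real application would use a deep-learning framework's fused implementation.

```python
# A single LSTM cell step in NumPy, mirroring the standard gate equations
# (dimensions and initialization are arbitrary).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, params):
    """x: (n_in,), h_prev/c_prev: (n_hid,); returns new hidden and cell states."""
    Wx, Wh, b = params                      # Wx: (4*n_hid, n_in), Wh: (4*n_hid, n_hid)
    z = Wx @ x + Wh @ h_prev + b
    n = len(h_prev)
    i = sigmoid(z[0*n:1*n])                 # input gate
    f = sigmoid(z[1*n:2*n])                 # forget gate
    o = sigmoid(z[2*n:3*n])                 # output gate
    g = np.tanh(z[3*n:4*n])                 # candidate cell update
    c = f * c_prev + i * g                  # constant-error-carousel style memory update
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 5, 8
params = (rng.standard_normal((4*n_hid, n_in)) * 0.1,
          rng.standard_normal((4*n_hid, n_hid)) * 0.1,
          np.zeros(4*n_hid))
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(10):                         # unroll over a short random sequence
    h, c = lstm_step(rng.standard_normal(n_in), h, c, params)
print(h.round(3))
```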
{
"pmid": "20858131",
"title": "Deep, big, simple neural nets for handwritten digit recognition.",
"abstract": "Good old online backpropagation for plain multilayer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark. All we need to achieve this best result so far are many hidden layers, many neurons per layer, numerous deformed training images to avoid overfitting, and graphics cards to greatly speed up learning."
},
{
"pmid": "28092576",
"title": "Multiple-Instance Learning for Medical Image and Video Analysis.",
"abstract": "Multiple-instance learning (MIL) is a recent machine-learning paradigm that is particularly well suited to medical image and video analysis (MIVA) tasks. Based solely on class labels assigned globally to images or videos, MIL algorithms learn to detect relevant patterns locally in images or videos. These patterns are then used for classification at a global level. Because supervision relies on global labels, manual segmentations are not needed to train MIL algorithms, unlike traditional single-instance learning (SIL) algorithms. Consequently, these solutions are attracting increasing interest from the MIVA community: since the term was coined by Dietterich et al. in 1997, 73 research papers about MIL have been published in the MIVA literature. This paper reviews the existing strategies for modeling MIVA tasks as MIL problems, recommends general-purpose MIL algorithms for each type of MIVA tasks, and discusses MIVA-specific MIL algorithms. Various experiments performed in medical image and video datasets are compiled in order to back up these discussions. This meta-analysis shows that, besides being more convenient than SIL solutions, MIL algorithms are also more accurate in many cases. In other words, MIL is the ideal solution for many MIVA tasks. Recent trends are discussed, and future directions are proposed for this emerging paradigm."
},
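A minimal sketch of one common MIL design discussed in this literature: score every instance in a bag and max-pool, so that training needs only the bag-level label. The architecture, feature size, and data below are assumptions for illustration, not a recommendation from the review.

```python
# Minimal multiple-instance learning sketch (assumed architecture): per-instance
# scores are max-pooled so only the bag-level label supervises training.
import torch
import torch.nn as nn

class MaxPoolMIL(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.instance_scorer = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, bag):                  # bag: (n_instances, n_features)
        scores = self.instance_scorer(bag)   # (n_instances, 1)
        return scores.max()                  # bag logit = most suspicious instance

model = MaxPoolMIL(n_features=16)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

bag = torch.randn(50, 16)                    # e.g. 50 patches from one image
label = torch.tensor(1.0)                    # bag-level label only, no patch annotations
loss = criterion(model(bag), label)
loss.backward()
optimizer.step()
print(float(loss))
```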
{
"pmid": "24637156",
"title": "Weakly supervised histopathology cancer image segmentation and classification.",
"abstract": "Labeling a histopathology image as having cancerous regions or not is a critical task in cancer diagnosis; it is also clinically important to segment the cancer tissues and cluster them into various classes. Existing supervised approaches for image classification and segmentation require detailed manual annotations for the cancer pixels, which are time-consuming to obtain. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL) (along the line of weakly supervised learning) for histopathology image segmentation. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), medical image segmentation (cancer vs. non-cancer tissue), and patch-level clustering (different classes). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to performing the above three tasks in an integrated framework. In addition, we introduce contextual constraints as a prior for MCIL, which further reduces the ambiguity in MIL. Experimental results on histopathology colon cancer images and cytology images demonstrate the great advantage of MCIL over the competing methods."
}
] |
Frontiers in Computational Neuroscience | 31354463 | PMC6629969 | 10.3389/fncom.2019.00045 | A Game-Theoretical Network Formation Model for C. elegans Neural Network | Studying and understanding the structure and function of the human brain has become one of the most challenging problems in neuroscience. The mammalian nervous system, however, contains hundreds of millions of neurons and billions of synapses, and this complexity makes it impossible to reconstruct such a huge nervous system in the laboratory. Most researchers therefore focus on the C. elegans neural network, the only biological neural network that has been fully mapped. It is the simplest nervous system known, yet many fundamental behaviors, such as movement, emerge from this basic network. These features make C. elegans a convenient model for studying nervous systems. Many studies have proposed network formation models for the C. elegans neural network, but none of them captures all of its characteristics, in particular the significant factors that play a role in its formation, so new models are needed to explain all aspects of this network. In this paper, a new model based on game theory is proposed to identify the factors affecting the formation of nervous systems while reproducing the characteristics of the C. elegans frontal neural network. In this model, neurons are treated as agents, and each neuron's strategy consists of making or removing links to other neurons. After choosing the basic network, a utility function is built from structural and functional factors, and the coefficients of these factors are found using linear programming. Finally, the output network is compared with the C. elegans frontal neural network and with previous models. The results indicate that the proposed game-theoretical model predicts the factors influencing the formation of the C. elegans neural network better than previous models. | 2. Related Works: Coelenterates such as Cnidaria were the first species to possess a neuronal network (Spencer and Satterlie, 1980). Their neural network was a two-dimensional regular, or lattice, network (Watson and Augustine, 1982); that is, it formed a regular network (Bergström and Nevanlinna, 1972). In regular networks, neighbors are well connected and there are no links between distant nodes (Jerauld et al., 1984). This type of network can still be seen in two-dimensional neuronal structures such as the retina and the cortical and sub-cortical layered structures (Bassett and Bullmore, 2017). However, regular networks fail to describe more complex neuronal networks in which the wiring results from a combination of genetic information, stochastic processes, and learning mechanisms (Walters and Byrne, 1983). Some studies have proposed random networks, such as Erdős-Rényi and random scale-free networks, to simulate, model, or analyze biological neural networks such as the macaque cortical connectome or the C. elegans frontal ganglia connectome (Prettejohn et al., 2011; Cannistraci et al., 2013). To identify the factors affecting network formation, Itzhack and Louzoun proposed a random model based on the distances between neurons (Itzhack and Louzoun, 2010). The model is a random network built from the Euclidean distance between each pair of neurons and is compared against the C. elegans neural network. While the average shortest path lengths of this model and the real network are similar, their clustering coefficients differ greatly. The model proposed by Itzhack and Louzoun can thus describe some characteristics of the C. elegans neural network; however, neurons separated by long distances sometimes form synaptic links. To explain the formation of links between distant neurons, Kaiser and colleagues investigated the role of neuronal birth time and demonstrated its effect on the formation of neuronal networks (Varier and Kaiser, 2011).
Although these models can describe some characteristics of neuronal networks, they fail to capture all the factors that affect the network formation process: each of them considers only one or two structural or functional factors in isolation. Both structural and functional factors play major roles in the formation of neural networks, and network formation models should take all of these factors into account. In this paper, a game-theoretical network formation model for the C. elegans frontal neural network is proposed that includes both structural and functional factors. | [
"10521342",
"27655008",
"23563618",
"3981252",
"28950589",
"20081220",
"22837521",
"22149674",
"11497662",
"21441986",
"28659782",
"6101612",
"16201007",
"25678023",
"21253561",
"6294834",
"7122273",
"9623998",
"30508808"
] | [
{
"pmid": "10521342",
"title": "Emergence of scaling in random networks",
"abstract": "Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems."
},
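The preferential-attachment mechanism summarized above is easy to reproduce with networkx; the graph size and attachment parameter below are arbitrary, and the printed degree counts simply show the heavy tail.

```python
# Illustrative preferential-attachment (scale-free) graph with networkx,
# plus a quick look at the heavy-tailed degree distribution; sizes are arbitrary.
import networkx as nx
from collections import Counter

G = nx.barabasi_albert_graph(n=1000, m=2, seed=0)   # each new node attaches to 2 existing nodes
degrees = [d for _, d in G.degree()]
hist = Counter(degrees)

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("max degree:", max(degrees))
print("ten most common degrees:", hist.most_common(10))
```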
{
"pmid": "27655008",
"title": "Small-World Brain Networks Revisited.",
"abstract": "It is nearly 20 years since the concept of a small-world network was first quantitatively defined, by a combination of high clustering and short path length; and about 10 years since this metric of complex network topology began to be widely applied to analysis of neuroimaging and other neuroscience data as part of the rapid growth of the new field of connectomics. Here, we review briefly the foundational concepts of graph theoretical estimation and generation of small-world networks. We take stock of some of the key developments in the field in the past decade and we consider in some detail the implications of recent studies using high-resolution tract-tracing methods to map the anatomical networks of the macaque and the mouse. In doing so, we draw attention to the important methodological distinction between topological analysis of binary or unweighted graphs, which have provided a popular but simple approach to brain network analysis in the past, and the topology of weighted graphs, which retain more biologically relevant information and are more appropriate to the increasingly sophisticated data on brain connectivity emerging from contemporary tract-tracing and other imaging studies. We conclude by highlighting some possible future trends in the further development of weighted small-worldness as part of a deeper and broader understanding of the topology and the functional value of the strong and weak links between areas of mammalian cortex."
},
{
"pmid": "23563618",
"title": "Fabrication and characterization of fully flattened carbon nanotubes: a new graphene nanoribbon analogue.",
"abstract": "Graphene nanoribbons (GNR) are one of the most promising candidates for the fabrication of graphene-based nanoelectronic devices such as high mobility field effect transistors (FET). Here, we report a high-yield fabrication of a high quality another type of GNR analogue, fully flattened carbon nanotubes (flattened CNTs), using solution-phase extraction of inner tubes from large-diameter multi-wall CNTs (MWCNTs). Transmission electron microscopy (TEM) observations show that flattened CNTs have width of typically 20 nm and a barbell-like cross section. Measurements of the low-bias conductance of isolated flattened CNTs as a function of gate voltage shows that the flattened CNTs display ambipolar conduction which is different from those of MWCNTs. The estimated gap based on temperature dependence of conductivity measurements of isolated flattened CNTs is 13.7 meV, which is probably caused by the modified electronic structure due to the flattening."
},
{
"pmid": "3981252",
"title": "The neural circuit for touch sensitivity in Caenorhabditis elegans.",
"abstract": "The neural pathways for touch-induced movement in Caenorhabditis elegans contain six touch receptors, five pairs of interneurons, and 69 motor neurons. The synaptic relationships among these cells have been deduced from reconstructions from serial section electron micrographs, and the roles of the cells were assessed by examining the behavior of animals after selective killing of precursors of the cells by laser microsurgery. This analysis revealed that there are two pathways for touch-mediated movement for anterior touch (through the AVD and AVB interneurons) and a single pathway for posterior touch (via the PVC interneurons). The anterior touch circuitry changes in two ways as the animal matures. First, there is the formation of a neural network of touch cells as the three anterior touch cells become coupled by gap junctions. Second, there is the addition of the AVB pathway to the pre-existing AVD pathway. The touch cells also synapse onto many cells that are probably not involved in the generation of movement. Such synapses suggest that stimulation of these receptors may modify a number of behaviors."
},
{
"pmid": "28950589",
"title": "Frequency-difference-dependent stochastic resonance in neural systems.",
"abstract": "Biological neurons receive multiple noisy oscillatory signals, and their dynamical response to the superposition of these signals is of fundamental importance for information processing in the brain. Here we study the response of neural systems to the weak envelope modulation signal, which is superimposed by two periodic signals with different frequencies. We show that stochastic resonance occurs at the beat frequency in neural systems at the single-neuron as well as the population level. The performance of this frequency-difference-dependent stochastic resonance is influenced by both the beat frequency and the two forcing frequencies. Compared to a single neuron, a population of neurons is more efficient in detecting the information carried by the weak envelope modulation signal at the beat frequency. Furthermore, an appropriate fine-tuning of the excitation-inhibition balance can further optimize the response of a neural ensemble to the superimposed signal. Our results thus introduce and provide insights into the generation and modulation mechanism of the frequency-difference-dependent stochastic resonance in neural systems."
},
{
"pmid": "20081220",
"title": "Random distance dependent attachment as a model for neural network generation in the Caenorhabditis elegans.",
"abstract": "MOTIVATION\nThe topology of the network induced by the neurons connectivity's in the Caenorhabditis elegans differs from most common random networks. The neurons positions of the C.elegans have been previously explained as being optimal to induce the required network wiring. We here propose a complementary explanation that the network wiring is the direct result of a local stochastic synapse formation process.\n\n\nRESULTS\nWe show that a model based on the physical distance between neurons can explain the C.elegans neural network structure, specifically, we demonstrate that a simple model based on a geometrical synapse formation probability and the inhibition of short coherent cycles can explain the properties of the C.elegans' neural network. We suggest this model as an initial framework to discuss neural network generation and as a first step toward the development of models for more advanced creatures. In order to measure the circle frequency in the network, a novel graph-theory circle length measurement algorithm is proposed."
},
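The distance-dependent wiring idea can be sketched in a few lines: scatter points in the plane and connect pairs with a probability that decays exponentially with Euclidean distance. This is only in the spirit of the model above; the paper's exact attachment rule, cycle-inhibition step, and parameters are not reproduced here.

```python
# Toy distance-dependent wiring model (illustrative, not the paper's exact rule):
# connect point pairs with a probability that decays with their distance.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, scale = 200, 0.1                            # number of "neurons", decay length (arbitrary)
pos = rng.random((n, 2))

G = nx.Graph()
G.add_nodes_from(range(n))
for i in range(n):
    for j in range(i + 1, n):
        d = np.linalg.norm(pos[i] - pos[j])
        if rng.random() < np.exp(-d / scale):  # nearby pairs far more likely to connect
            G.add_edge(i, j)

print("edges:", G.number_of_edges())
print("average clustering:", round(nx.average_clustering(G), 3))
```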
{
"pmid": "22837521",
"title": "The connectome of a decision-making neural network.",
"abstract": "In order to understand the nervous system, it is necessary to know the synaptic connections between the neurons, yet to date, only the wiring diagram of the adult hermaphrodite of the nematode Caenorhabditis elegans has been determined. Here, we present the wiring diagram of the posterior nervous system of the C. elegans adult male, reconstructed from serial electron micrograph sections. This region of the male nervous system contains the sexually dimorphic circuits for mating. The synaptic connections, both chemical and gap junctional, form a neural network with four striking features: multiple, parallel, short synaptic pathways directly connecting sensory neurons to end organs; recurrent and reciprocal connectivity among sensory neurons; modular substructure; and interneurons acting in feedforward loops. These features help to explain how the network robustly and rapidly selects and executes the steps of a behavioral program on the basis of the inputs from multiple sensory neurons."
},
{
"pmid": "22149674",
"title": "Evolution and development of brain networks: from Caenorhabditis elegans to Homo sapiens.",
"abstract": "Neural networks show a progressive increase in complexity during the time course of evolution. From diffuse nerve nets in Cnidaria to modular, hierarchical systems in macaque and humans, there is a gradual shift from simple processes involving a limited amount of tasks and modalities to complex functional and behavioral processing integrating different kinds of information from highly specialized tissue. However, studies in a range of species suggest that fundamental similarities, in spatial and topological features as well as in developmental mechanisms for network formation, are retained across evolution. 'Small-world' topology and highly connected regions (hubs) are prevalent across the evolutionary scale, ensuring efficient processing and resilience to internal (e.g. lesions) and external (e.g. environment) changes. Furthermore, in most species, even the establishment of hubs, long-range connections linking distant components, and a modular organization, relies on similar mechanisms. In conclusion, evolutionary divergence leads to greater complexity while following essential developmental constraints."
},
{
"pmid": "11497662",
"title": "Random graphs with arbitrary degree distributions and their applications.",
"abstract": "Recent work on the structure of social networks and the internet has focused attention on graphs with distributions of vertex degree that are significantly different from the Poisson degree distributions that have been widely studied in the past. In this paper we develop in detail the theory of random graphs with arbitrary degree distributions. In addition to simple undirected, unipartite graphs, we examine the properties of directed and bipartite graphs. Among other results, we derive exact expressions for the position of the phase transition at which a giant component first forms, the mean component size, the size of the giant component if there is one, the mean number of vertices a certain distance away from a randomly chosen vertex, and the average vertex-vertex distance within a graph. We apply our theory to some real-world graphs, including the world-wide web and collaboration graphs of scientists and Fortune 1000 company directors. We demonstrate that in some cases random graphs with appropriate distributions of vertex degree predict with surprising accuracy the behavior of the real world, while in others there is a measurable discrepancy between theory and reality, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph."
},
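Random graphs with a prescribed degree sequence, as analyzed above, can be generated with the networkx configuration model; the heavy-tailed degree sequence below is just an example input, not data from the paper.

```python
# Sketch of a random graph with a prescribed degree sequence (configuration model);
# the power-law-like degree sequence here is only an example.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
degrees = np.clip(rng.zipf(a=2.5, size=500), 1, 50)   # heavy-tailed target degrees
if degrees.sum() % 2 == 1:                            # degree sum must be even
    degrees[0] += 1

G = nx.configuration_model(degrees.tolist(), seed=0)
G = nx.Graph(G)                                       # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))             # drop self-loops

giant = max(nx.connected_components(G), key=len)
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges(),
      "giant component:", len(giant))
```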
{
"pmid": "21441986",
"title": "Methods for generating complex networks with selected structural properties for simulations: a review and tutorial for neuroscientists.",
"abstract": "Many simulations of networks in computational neuroscience assume completely homogenous random networks of the Erdös-Rényi type, or regular networks, despite it being recognized for some time that anatomical brain networks are more complex in their connectivity and can, for example, exhibit the \"scale-free\" and \"small-world\" properties. We review the most well known algorithms for constructing networks with given non-homogeneous statistical properties and provide simple pseudo-code for reproducing such networks in software simulations. We also review some useful mathematical results and approximations associated with the statistics that describe these network models, including degree distribution, average path length, and clustering coefficient. We demonstrate how such results can be used as partial verification and validation of implementations. Finally, we discuss a sometimes overlooked modeling choice that can be crucially important for the properties of simulated networks: that of network directedness. The most well known network algorithms produce undirected networks, and we emphasize this point by highlighting how simple adaptations can instead produce directed networks."
},
{
"pmid": "28659782",
"title": "Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function.",
"abstract": "The lack of a formal link between neural network structure and its emergent function has hampered our understanding of how the brain processes information. We have now come closer to describing such a link by taking the direction of synaptic transmission into account, constructing graphs of a network that reflect the direction of information flow, and analyzing these directed graphs using algebraic topology. Applying this approach to a local network of neurons in the neocortex revealed a remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities."
},
{
"pmid": "6101612",
"title": "Electrical and dye coupling in an identified group of neurons in a coelenterate.",
"abstract": "A compressed network of \"giant\" neurons, lying within the inner nerve-ring of the hydrozoan jellyfish Polyorchis, functions as the overall pattern generator and the motor neuron system for the subumbrellar swimming musculature. The neurons that form the network are all electrically coupled. The coupling is tight, so that action potentials and slow membrane-potential oscillations are synchronous throughout the network. The fluorescent dye Lucifer Yellow CH passes throughout the network following iontophoretic injection into a single neuron. The sites of both current and dye passage are presumably the numerous gap junctions which are found where the giants run together. Based on the morphological identification of the giant network from the dye injections and ultrastructural studies, the electrophysiological data on the firing pattern and input--output relations of the network, and its position relative to other neurons in the inner nerve-ring, the giant network can be considered an identified neuronal group."
},
{
"pmid": "16201007",
"title": "The human connectome: A structural description of the human brain.",
"abstract": "The connection matrix of the human brain (the human \"connectome\") represents an indispensable foundation for basic and applied neurobiological research. However, the network of anatomical connections linking the neuronal elements of the human brain is still largely unknown. While some databases or collations of large-scale anatomical connection patterns exist for other mammalian species, there is currently no connection matrix of the human brain, nor is there a coordinated research effort to collect, archive, and disseminate this important information. We propose a research strategy to achieve this goal, and discuss its potential impact."
},
{
"pmid": "25678023",
"title": "The relation between structural and functional connectivity patterns in complex brain networks.",
"abstract": "OBJECTIVE\nAn important problem in systems neuroscience is the relation between complex structural and functional brain networks. Here we use simulations of a simple dynamic process based upon the susceptible-infected-susceptible (SIS) model of infection dynamics on an empirical structural brain network to investigate the extent to which the functional interactions between any two brain areas depend upon (i) the presence of a direct structural connection; and (ii) the degree product of the two areas in the structural network.\n\n\nMETHODS\nFor the structural brain network, we used a 78×78 matrix representing known anatomical connections between brain regions at the level of the AAL atlas (Gong et al., 2009). On this structural network we simulated brain dynamics using a model derived from the study of epidemic processes on networks. Analogous to the SIS model, each vertex/brain region could be in one of two states (inactive/active) with two parameters β and δ determining the transition probabilities. First, the phase transition between the fully inactive and partially active state was investigated as a function of β and δ. Second, the statistical interdependencies between time series of node states were determined (close to and far away from the critical state) with two measures: (i) functional connectivity based upon the correlation coefficient of integrated activation time series; and (ii) effective connectivity based upon conditional co-activation at different time intervals.\n\n\nRESULTS\nWe find a phase transition between an inactive and a partially active state for a critical ratio τ=β/δ of the transition rates in agreement with the theory of SIS models. Slightly above the critical threshold, node activity increases with degree, also in line with epidemic theory. The functional, but not the effective connectivity matrix closely resembled the underlying structural matrix. Both functional connectivity and, to a lesser extent, effective connectivity were higher for connected as compared to disconnected (i.e.: not directly connected) nodes. Effective connectivity scaled with the degree product. For functional connectivity, a weaker scaling relation was only observed for disconnected node pairs. For random networks with the same degree distribution as the original structural network, similar patterns were seen, but the scaling exponent was significantly decreased especially for effective connectivity.\n\n\nCONCLUSIONS\nEven with a very simple dynamical model it can be shown that functional relations between nodes of a realistic anatomical network display clear patterns if the system is studied near the critical transition. The detailed nature of these patterns depends on the properties of the functional or effective connectivity measure that is used. While the strength of functional interactions between any two nodes clearly depends upon the presence or absence of a direct connection, this study has shown that the degree product of the nodes also plays a large role in explaining interaction strength, especially for disconnected nodes and in combination with an effective connectivity measure. The influence of degree product on node interaction strength probably reflects the presence of large numbers of indirect connections."
},
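A toy version of the two-state dynamics described above: SIS-style activation and deactivation on a graph, where a node with k active neighbours becomes active with probability 1 - (1 - beta)^k. The Watts-Strogatz graph below is only a stand-in for the 78-region anatomical network, and beta and delta are placeholder rates.

```python
# Toy SIS (susceptible-infected-susceptible) dynamics on a graph; the network
# and the rates are placeholders, not the study's atlas or parameters.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(78, 6, 0.1, seed=0)   # stand-in for a 78-region connectome
beta, delta, steps = 0.05, 0.2, 500               # activation and deactivation rates

state = np.zeros(G.number_of_nodes(), dtype=int)  # 0 = inactive, 1 = active
state[rng.choice(len(state), 5, replace=False)] = 1
activity = np.zeros((steps, len(state)))

for t in range(steps):
    new_state = state.copy()
    for v in G.nodes():
        if state[v] == 1:
            if rng.random() < delta:                      # spontaneous deactivation
                new_state[v] = 0
        else:
            k = sum(state[u] for u in G.neighbors(v))     # active neighbours
            if rng.random() < 1 - (1 - beta) ** k:        # activation via any active neighbour
                new_state[v] = 1
    state = new_state
    activity[t] = state

print("mean fraction active:", activity.mean().round(3))
```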
{
"pmid": "21253561",
"title": "Neural development features: spatio-temporal development of the Caenorhabditis elegans neuronal network.",
"abstract": "The nematode Caenorhabditis elegans, with information on neural connectivity, three-dimensional position and cell linage, provides a unique system for understanding the development of neural networks. Although C. elegans has been widely studied in the past, we present the first statistical study from a developmental perspective, with findings that raise interesting suggestions on the establishment of long-distance connections and network hubs. Here, we analyze the neuro-development for temporal and spatial features, using birth times of neurons and their three-dimensional positions. Comparisons of growth in C. elegans with random spatial network growth highlight two findings relevant to neural network development. First, most neurons which are linked by long-distance connections are born around the same time and early on, suggesting the possibility of early contact or interaction between connected neurons during development. Second, early-born neurons are more highly connected (tendency to form hubs) than later-born neurons. This indicates that the longer time frame available to them might underlie high connectivity. Both outcomes are not observed for random connection formation. The study finds that around one-third of electrically coupled long-range connections are late forming, raising the question of what mechanisms are involved in ensuring their accuracy, particularly in light of the extremely invariant connectivity observed in C. elegans. In conclusion, the sequence of neural network development highlights the possibility of early contact or interaction in securing long-distance and high-degree connectivity."
},
{
"pmid": "6294834",
"title": "Associative conditioning of single sensory neurons suggests a cellular mechanism for learning.",
"abstract": "A cellular analog of associative learning has been demonstrated in individual sensory neurons of the tail withdrawal reflex of Aplysia. Sensory cells activated by intracellular current injection shortly before a sensitizing shock to the animal's tail display significantly more facilitation of their monosynaptic connections to a tail motor neuron than cells trained either with intracellular stimulation unpaired to tail shock or with tail shock alone. This associative effect is acquired rapidly and is expressed as a temporally specific amplification of heterosynaptic facilitation. The results suggest that activity-dependent neuromodulation may be a mechanism underlying associative information storage and point to aspects of subcellular processes that might be involved in the formation of neural associations."
},
{
"pmid": "7122273",
"title": "Peptide and amine modulation of the Limulus heart: a simple neural network and its target tissue.",
"abstract": "The Limulus heart consists of a relatively simple neural network, the cardiac ganglion, and its target tissue, cardiac muscle. The large size and exceptional in vitro viability of this system has made it relatively easy to extract, purify, and identify endogenous compounds which alter cardiac function. These agents included peptides, such as protolin and Limulus chromatophorotropic factor, and amines such as dopamine, epinephrine, norepinephrine, octopamine, and serotonin. The accessibility and simple organization of the cardiac ganglion has also permitted clear identification of the sites of action of these amines and peptides. The Limulus heart is thus a very favorable system for studying peptide and amine neurohormones at the network, cellular and molecular levels."
},
{
"pmid": "9623998",
"title": "Collective dynamics of 'small-world' networks.",
"abstract": "Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation. The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices."
},
{
"pmid": "30508808",
"title": "Heterogeneity of synaptic input connectivity regulates spike-based neuronal avalanches.",
"abstract": "Our mysterious brain is believed to operate near a non-equilibrium point and generate critical self-organized avalanches in neuronal activity. A central topic in neuroscience is to elucidate the underlying circuitry mechanisms of neuronal avalanches in the brain. Recent experimental evidence has revealed significant heterogeneity in both synaptic input and output connectivity, but whether the structural heterogeneity participates in the regulation of neuronal avalanches remains poorly understood. By computational modeling, we predict that different types of structural heterogeneity contribute distinct effects on avalanche neurodynamics. In particular, neuronal avalanches can be triggered at an intermediate level of input heterogeneity, but heterogeneous output connectivity cannot evoke avalanche dynamics. In the criticality region, the co-emergence of multi-scale cortical activities is observed, and both the avalanche dynamics and neuronal oscillations are modulated by the input heterogeneity. Remarkably, we show similar results can be reproduced in networks with various types of in- and out-degree distributions. Overall, these findings not only provide details on the underlying circuitry mechanisms of nonrandom synaptic connectivity in the regulation of neuronal avalanches, but also inspire testable hypotheses for future experimental studies."
}
] |
Micromachines | 31163692 | PMC6630831 | 10.3390/mi10060371 | Tight Evaluation of Real-Time Task Schedulability for Processor’s DVS and Nonvolatile Memory Allocation | A power-saving approach for real-time systems that combines processor voltage scaling and task placement in hybrid memory is presented. The proposed approach incorporates the task’s memory placement problem between the DRAM (dynamic random access memory) and NVRAM (nonvolatile random access memory) into the task model of the processor’s voltage scaling and adopts power-saving techniques for processor and memory selectively without violating the deadline constraints. Unlike previous work, our model tightly evaluates the worst-case execution time of a task, considering the time delay that may overlap between the processor and memory, thereby reducing the power consumption of real-time systems by 18–88%. | 4. Related Works 4.1. Hybrid Memory Technologies Recently, hybrid memory technologies consisting of DRAM and NVRAM have been attracting increasing interest. As NVRAM is byte-accessible, similar to DRAM, but consumes less energy and provides higher scalability than DRAM, it is anticipated to be adopted in the main memory hierarchy of future computer systems. Mogul et al. suggest an efficient memory management policy for DRAM and PRAM hybrid memory [4]. Their policy tries to place read-only pages in PRAM and writable pages in DRAM, thereby reducing slow PRAM writes [4]. Dhiman et al. propose a hybrid memory architecture consisting of PRAM and DRAM, which dynamically moves data between PRAM and DRAM in order to balance the write count of PRAM [5]. Qureshi et al. propose a hierarchical memory architecture consisting of DRAM and PRAM [7]. Specifically, they use DRAM as the write buffer of PRAM in order to prolong the lifespan of PRAM and hide its slow write performance. Lee et al. propose the CLOCK-DWF (clock with dirty bits and write frequency) policy for a hybrid memory architecture consisting of DRAM and PRAM [6]. They allocate read-intensive pages to PRAM and write-intensive pages to DRAM by online characterization of memory access patterns. Zhou et al. propose a hierarchical memory architecture consisting of DRAM and PRAM [8]. In particular, they propose a page replacement policy that tries to reduce both the cache misses and the write-backs from DRAM. Narayan et al. propose a page allocation approach for hybrid memory architectures at the memory object level [13]. They characterize memory objects and allocate them to their best-fit memory module to improve performance and energy efficiency. Kannan et al. propose heterogeneous memory management in virtualized systems [14]. They designed a heterogeneity-aware guest operating system (OS) that places data in the appropriate memory, thereby avoiding page migrations. They also present migration policies for performance-critical pages and memory sharing policies for guest machines. 4.2. Low-Power Techniques for Real-time Scheduling Many studies have been performed on DVS in order to reduce power consumption in real-time systems [15,16,17,18]. Pillai and Shin propose a mechanism for selecting the lowest operating frequency that will meet deadlines for a given task set [19]. They propose three algorithms for DVS: Static DVS, cycle-conserving DVS, and look-ahead DVS.
Static DVS selects the voltage of a processor statically, whereas cycle-conserving DVS uses reclaimed cycles to lower the voltage when the actual execution time of a task is shorter than its worst-case execution time. Look-ahead DVS lowers the voltage further by determining future computation requirements and deferring task execution accordingly. Lee et al. use the slack time to lower the processor’s voltage [1]. Specifically, initial voltages can be dynamically switched upon reclaiming unused clock cycles when a task completes before its deadline. Lin et al. point out that a memory mapping problem arises when heterogeneous memory types are used [10]. They use dynamic programming and greedy approximation to solve the problem. Zhang et al. propose task placement in hybrid memory to reduce energy consumption [20]. In their scheme, tasks are placed in NVRAM one by one and schedulability is checked after each placement. This procedure is repeated until the locations of all tasks are determined. Ghor and Aggoune propose a slack-based method to find the lowest-voltage schedule for real-time tasks [16]. They stretch task execution times through off-line computation and schedule tasks as late as possible without missing their deadlines. | [] | [] |
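The static DVS policy summarized in the record above picks the lowest operating frequency at which the task set is still schedulable. Below is a minimal sketch of that idea for periodic tasks under EDF, assuming period equals deadline; the task set, the available frequency levels, and the function name are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    wcet: float    # worst-case execution time at the maximum frequency
    period: float  # period, assumed equal to the relative deadline

def static_edf_dvs_frequency(tasks, freq_levels):
    """Return the lowest normalized frequency alpha = f/f_max for which the
    EDF test still holds when every WCET is scaled by 1/alpha, i.e.
    sum(C_i / (alpha * T_i)) <= 1, which is equivalent to U <= alpha."""
    utilization = sum(t.wcet / t.period for t in tasks)
    for alpha in sorted(freq_levels):
        if utilization <= alpha:
            return alpha
    return None  # not schedulable even at the maximum frequency

# Illustrative task set and frequency levels (U = 1/8 + 2/10 + 1/14 ~= 0.40).
tasks = [Task(1.0, 8.0), Task(2.0, 10.0), Task(1.0, 14.0)]
print(static_edf_dvs_frequency(tasks, [0.5, 0.75, 1.0]))  # -> 0.5
```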
Frontiers in Neurorobotics | 31354467 | PMC6636604 | 10.3389/fnbot.2019.00051 | Machine Learning for Haptics: Inferring Multi-Contact Stimulation From Sparse Sensor Configuration | Robust haptic sensation systems are essential for building dexterous robots. Currently, we have solutions for small surface areas, such as fingers, but affordable and robust techniques for covering large areas of an arbitrary 3D surface are still missing. Here, we introduce a general machine learning framework to infer multi-contact haptic forces on a 3D robot's limb surface from internal deformation measured by only a few physical sensors. The general idea of this framework is to first predict the whole surface deformation pattern from the sparsely placed sensors and then to infer the number, locations, and force magnitudes of unknown contact points. We show how this can be done, even if training data can only be obtained for single-contact points, by using transfer learning on the example of a modified limb of the Poppy robot. With only 10 strain-gauge sensors we obtain high accuracy also for multiple contact points. The method can be applied to arbitrarily shaped surfaces and physical sensor types, as long as training data can be obtained. | 2. Related Work In order to make it easier to understand state-of-the-art large-surface haptic applications, we gathered a representative set of approaches: array-shaped sensors, optical sensors, anisotropic electrical impedance tomography (aEIT) based sensors, and sensor systems with a sparse sensor configuration. HEX-o-SKIN by Mittendorfer and Cheng (2011) integrates a proximity sensor, an accelerometer, three normal force sensors and a temperature sensor on one 15 × 15 mm hexagonal printed circuit. It allows covering a surface, e.g., of a robot exoskeleton, with multiple HEX-o-SKIN chips forming a dense array. In this way a large surface can be covered; however, ensuring the robustness of such a system can be challenging. TacCylinder by Ward-Cherrier et al. (2018) is a camera-based system. It is shaped as a cylinder with a tube through its center, which holds a camera and a bulky catadioptric mirror system to capture the whole limb deformation pattern internally. The sensor has dimensions of 63 × 63 × 82 mm and delivers comprehensive information about the deformation of the soft cylindrical surface. The surface shape is restricted, and a new shape requires an adaptation of the optical system. Additionally, the inside of the robotic part needs to be empty for this method to be applicable. Lee (2017) uses stretchable conductive materials (skin) with a few electrodes assembled on the skin boundary and measures all combinations of pairwise conductivities. The force location is determined by anisotropic electrical impedance tomography (aEIT). Only 16 electrodes are required on the skin boundary for a skin size of 40 × 100 mm. However, large computational costs arise, requiring special hardware. In a previous work, we proposed HapDef (Sun and Martius, 2018), which employs machine learning for single-contact force prediction from a sparse sensor configuration. With this method, contact position and force magnitude can be inferred with sufficient precision on a robot shin with a surface of about 200 × 120 mm equipped with only 10 strain gauge sensors (8 × 5 mm each). The positions of the sensors are optimized using different optimization criteria.
Taking the same physical setup as a basis, in this paper we explore its potential for more precise measurement and for the extension to multiple contact points. We elaborate on the HapDef design choices in section 3. To put the multi-contact tactile spatial accuracy into perspective, we compare it with the acuity of human tactile sensation quantified by the “two point discrimination” criterion, which is widely used to assess tactile perception in clinical settings (Shooter, 2005; Blumenfeld, 2010). It describes the ability to perceive two nearby stimulations on the skin as two distinct contacts rather than one. In the human body, this ability varies considerably from body part to body part (Bickley et al., 2017). We compare against the acuity of the fingertip, palm and shin (a minimal regression sketch of the sparse-sensor prediction idea follows this record's reference list). | [
"23482014",
"28120886",
"29442240",
"16176227",
"29297773"
] | [
{
"pmid": "23482014",
"title": "Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators.",
"abstract": "Most image sensors are planar, opaque, and inflexible. We present a novel image sensor that is based on a luminescent concentrator (LC) film which absorbs light from a specific portion of the spectrum. The absorbed light is re-emitted at a lower frequency and transported to the edges of the LC by total internal reflection. The light transport is measured at the border of the film by line scan cameras. With these measurements, images that are focused onto the LC surface can be reconstructed. Thus, our image sensor is fully transparent, flexible, scalable and, due to its low cost, potentially disposable."
},
{
"pmid": "28120886",
"title": "Soft Nanocomposite Based Multi-point, Multi-directional Strain Mapping Sensor Using Anisotropic Electrical Impedance Tomography.",
"abstract": "The practical utilization of soft nanocomposites as a strain mapping sensor in tactile sensors and artificial skins requires robustness for various contact conditions as well as low-cost fabrication process for large three dimensional surfaces. In this work, we propose a multi-point and multi-directional strain mapping sensor based on multiwall carbon nanotube (MWCNT)-silicone elastomer nanocomposites and anisotropic electrical impedance tomography (aEIT). Based on the anisotropic resistivity of the sensor, aEIT technique can reconstruct anisotropic resistivity distributions using electrodes around the sensor boundary. This strain mapping sensor successfully estimated stretch displacements (error of 0.54 ± 0.53 mm), surface normal forces (error of 0.61 ± 0.62 N), and multi-point contact locations (error of 1.88 ± 0.95 mm in 30 mm × 30 mm area for a planar shaped sensor and error of 4.80 ± 3.05 mm in 40 mm × 110 mm area for a three dimensional contoured sensor). In addition, the direction of lateral stretch was also identified by reconstructing anisotropic distributions of electrical resistivity. Finally, a soft human-machine interface device was demonstrated as a practical application of the developed sensor."
},
{
"pmid": "29442240",
"title": "Review of emerging surgical robotic technology.",
"abstract": "BACKGROUND\nThe use of laparoscopic and robotic procedures has increased in general surgery. Minimally invasive robotic surgery has made tremendous progress in a relatively short period of time, realizing improvements for both the patient and surgeon. This has led to an increase in the use and development of robotic devices and platforms for general surgery. The purpose of this review is to explore current and emerging surgical robotic technologies in a growing and dynamic environment of research and development.\n\n\nMETHODS\nThis review explores medical and surgical robotic endoscopic surgery and peripheral technologies currently available or in development. The devices discussed here are specific to general surgery, including laparoscopy, colonoscopy, esophagogastroduodenoscopy, and thoracoscopy. Benefits and limitations of each technology were identified and applicable future directions were described.\n\n\nRESULTS\nA number of FDA-approved devices and platforms for robotic surgery were reviewed, including the da Vinci Surgical System, Sensei X Robotic Catheter System, FreeHand 1.2, invendoscopy E200 system, Flex® Robotic System, Senhance, ARES, the Single-Port Instrument Delivery Extended Research (SPIDER), and the NeoGuide Colonoscope. Additionally, platforms were reviewed which have not yet obtained FDA approval including MiroSurge, ViaCath System, SPORT™ Surgical System, SurgiBot, Versius Robotic System, Master and Slave Transluminal Endoscopic Robot, Verb Surgical, Miniature In Vivo Robot, and the Einstein Surgical Robot.\n\n\nCONCLUSIONS\nThe use and demand for robotic medical and surgical platforms is increasing and new technologies are continually being developed. New technologies are increasingly implemented to improve on the capabilities of previously established systems. Future studies are needed to further evaluate the strengths and weaknesses of each robotic surgical device and platform in the operating suite."
},
{
"pmid": "16176227",
"title": "Use of two-point discrimination as a nerve repair assessment tool: preliminary report.",
"abstract": "BACKGROUND\nTwo-point discrimination, static and dynamic, has long been used as an assessment tool for tactile gnosis, and to assess recovery after repair of a peripheral nerve. While use of a bent paperclip with a specified intertip distance as the assessment device has been described, no research has been performed on the accuracy of setting this distance by hand and eye alone. The aim of the present study was to demonstrate this accuracy.\n\n\nMETHODS\nFive orthopaedic registrars, four residents and three clinic nurses performed static and dynamic two-point discrimination testing on each other. They set the tip distance by hand and eye by bending a paperclip such that the distance between the two ends was their best approximation of 5 mm and then 10 mm. The testing was repeated after 7 days, n = 264 for each tip distance.\n\n\nRESULTS\nTwo-sample t-tests showed no significant difference (P > 0.53-0.93) between tip distance setting performed by registrars, nurses and residents; while single sample t-test showed a statistically significant difference (P < 0.0001) between the attempted tip distance and the overall mean tip distance achieved at 5 mm and 10 mm.\n\n\nCONCLUSION\nStatistical analysis showed that the single sample t-test could be discarded. Static and dynamic two-point discrimination testing with a paperclip set by hand and eye is therefore an accurate and reproducible test capable of being administered by both medical and non-medical staff, and is suitable for inclusion in a peripheral nerve repair testing protocol."
},
{
"pmid": "29297773",
"title": "The TacTip Family: Soft Optical Tactile Sensors with 3D-Printed Biomimetic Morphologies.",
"abstract": "Tactile sensing is an essential component in human-robot interaction and object manipulation. Soft sensors allow for safe interaction and improved gripping performance. Here we present the TacTip family of sensors: a range of soft optical tactile sensors with various morphologies fabricated through dual-material 3D printing. All of these sensors are inspired by the same biomimetic design principle: transducing deformation of the sensing surface via movement of pins analogous to the function of intermediate ridges within the human fingertip. The performance of the TacTip, TacTip-GR2, TacTip-M2, and TacCylinder sensors is here evaluated and shown to attain submillimeter accuracy on a rolling cylinder task, representing greater than 10-fold super-resolved acuity. A version of the TacTip sensor has also been open-sourced, enabling other laboratories to adopt it as a platform for tactile sensing and manipulation research. These sensors are suitable for real-world applications in tactile perception, exploration, and manipulation, and will enable further research and innovation in the field of soft tactile sensing."
}
] |
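The Frontiers in Neurorobotics record above describes a two-stage pipeline: first predict the whole surface deformation pattern from a few strain-gauge readings, then infer contact locations and force magnitudes from that pattern. The sketch below illustrates the idea with plain ridge regression on synthetic stand-in data; the array shapes, the regularization strength, and the peak-picking heuristic are assumptions for illustration and not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_sensors, n_nodes = 2000, 10, 500   # 10 gauges -> deformation at 500 surface nodes

# Synthetic stand-ins for recorded training data (sensor readings X, deformation maps Y).
X = rng.normal(size=(n_train, n_sensors))
W_true = rng.normal(size=(n_sensors, n_nodes))
Y = X @ W_true + 0.01 * rng.normal(size=(n_train, n_nodes))

# Stage 1: ridge regression from sparse sensor readings to the full deformation pattern.
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_sensors), X.T @ Y)

def predict_deformation(sensor_readings):
    return sensor_readings @ W

# Stage 2 (very rough): treat strong local deformation as contact candidates and
# use the predicted magnitude as a force proxy.
def infer_contacts(deformation, rel_threshold=0.5):
    idx = np.flatnonzero(deformation > rel_threshold * deformation.max())
    return idx, deformation[idx]

nodes, forces = infer_contacts(predict_deformation(X[0]))
```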
BMC Medical Informatics and Decision Making | 31315618 | PMC6637616 | 10.1186/s12911-019-0842-8 | Two-stage framework for optic disc localization and glaucoma classification in retinal fundus images using deep learning | Background: With the advancement of powerful image processing and machine learning techniques, Computer Aided Diagnosis has become ever more prevalent in all fields of medicine including ophthalmology. These methods continue to provide reliable and standardized large-scale screening of various image modalities to assist clinicians in identifying diseases. Since the optic disc is the most important part of the retinal fundus image for glaucoma detection, this paper proposes a two-stage framework that first detects and localizes the optic disc and then classifies it as healthy or glaucomatous. Methods: The first stage is based on Regions with Convolutional Neural Network (RCNN) and is responsible for localizing and extracting the optic disc from a retinal fundus image, while the second stage uses a Deep Convolutional Neural Network to classify the extracted disc as healthy or glaucomatous. Unfortunately, none of the publicly available retinal fundus image datasets provides any bounding box ground truth required for disc localization. Therefore, in addition to the proposed solution, we also developed a rule-based semi-automatic ground truth generation method that provides the necessary annotations for training the RCNN-based model for automated disc localization. Results: The proposed method is evaluated on seven publicly available datasets for disc localization and on the ORIGA dataset, which is the largest publicly available dataset with healthy and glaucoma labels, for glaucoma classification. The results of automatic localization set a new state of the art on six datasets, with accuracy reaching 100% on four of them. For glaucoma classification we achieved an Area Under the Receiver Operating Characteristic Curve equal to 0.874, which is a 2.7% relative improvement over the state-of-the-art results previously obtained for classification on the ORIGA dataset. Conclusion: Once trained on carefully annotated data, Deep Learning based methods for optic disc detection and localization are not only robust, accurate and fully automated but also eliminate the need for dataset-dependent heuristic algorithms. Our empirical evaluation of glaucoma classification on ORIGA reveals that reporting only the Area Under the Curve, for datasets with class imbalance and without pre-defined train and test splits, does not portray a true picture of the classifier’s performance and calls for additional performance metrics to substantiate the results. | Related work Early diagnosis of glaucoma is vital for timely treatment of patients. Medical practitioners have proposed a number of criteria for early diagnosis, and these criteria mostly focus on or around the OD region. If the position, centre, and size of the OD are calculated accurately, they can greatly help in further automated analysis of the image modality. The rest of this subsection discusses various image processing and machine learning approaches making use of these diagnosis criteria for disc localization and glaucoma identification. Localization of optic disc: Although the optic disc can be spotted manually as a round bright spot in a retinal image, large-scale manual screening can prove tiresome, time consuming, and prone to human fatigue and bias. CAD can provide an efficient and reliable alternative with near-human accuracy (as shown in Table 4).
Usually the disc is the brightest region in the image. However, if ambient light finds its way into the image while capturing the photo, it can look brighter than the optic disc. Furthermore, occasionally some shiny reflective areas appear in the fundus image during image capturing. These shiny reflections can also look very bright and mislead a heuristic algorithm into considering them as candidate regions of interest. Researchers have laid out many approaches for OD localization that exploit different image characteristics. Some of these approaches are briefly covered below. Intensity variations in the image can help locate the optic disc in fundus images. To make use of this variation, the image contrast is first improved using locally adaptive transforms. The OD is then identified by its rapid variation in intensity, as the disc contains dark blood vessels alongside bright nerve fibres. The image is normalized and the average intensity variance is calculated within a window of size roughly equal to the expected disc size. The disc centre is marked at the point where the highest intensity variation is found. Eswaran et al. [12] used such an intensity-variation-based approach. They applied a 25 × 35 averaging filter with equal weights of 1 on the image to smooth it, suppress low intensity variations, and preserve the ROI. Chràstek et al. [13] used a 31 × 31 averaging filter, with the ROI assumed to be 130 × 130 pixels. They used the Canny Edge Detector [14] to extract the edges in the image. To localize the optic disc region they used only the green channel of the RGB image. Abràmoff et al. [15] proposed that the optic disc can be selected by taking only the top 5% brightest pixels with hue values in the yellow range. The surrounding pixels are then clustered to constitute a candidate region. Clusters below a certain threshold are discarded. Liu et al. [16] used a similar approach. They first divided the image into an 8 × 8 pixel grid and selected the block with the maximum number of the top 5% brightest pixels as the centre of the disc. Nyúl [17] employed adaptive thresholding with a window whose size is determined to approximately match the vessel thickness. A mean filter with a large kernel is then used with threshold probing for rough localization. Another extensively used approach is threshold-based localization. A quick look at a retinal image shows that the optic disc is usually the brightest region in the image. This observation has been exploited by many, including Siddalingaswamy and Prabhu [18]. It has also been noted that the green channel of RGB has the greatest contrast compared to the red and blue channels [19–21]; however, the red channel has also been used [22] because it contains fewer blood vessels that could confuse a rule-based localization algorithm. The optimal threshold is chosen based upon an approximation of the image histogram. The histogram of the image is gradually scanned from a high intensity value I1, slowly decreasing the intensity until it reaches a lower value I2 that produces at least 1000 pixels with the same intensity. This results in a subset of the histogram. The optimal threshold is taken as the mean of the two intensities I1 and I2. Applying this threshold produces a number of connected candidate regions. The region with the highest number of pixels is taken as the optic disc (a minimal sketch of this brightest-region idea is given after this record's reference list). Dashtbozorg et al. [23] used the Sliding Band Filter (SBF) [24] on downsampled versions of high resolution images, since the SBF is computationally very expensive.
They apply this SBF first to a larger region of interest on the downsampled image to get a rough localization. The position of this roughly estimated ROI is then used to establish a smaller ROI on the original-sized image for a second application of the SBF. The maximum filter responses result in k candidates pointing to potential OD regions. They then use a regression algorithm to smooth the disc boundary. Zhang et al. [25] proposed a fast method to detect the optic disc. Three vessel distribution features are used to calculate possible horizontal coordinates of the disc. These features are local vessel density, compactness of the vessels and their uniformity. The vertical coordinates of the disc are calculated using the Hough Transform according to the global vessel direction characteristics. The Hough Transform (HT) has been widely utilized to detect the OD [25–27] due to the disc’s inherent circular shape and bright intensity. The technique is applied to binary images after they have undergone morphological operations to remove noise or reflections of light from the ocular fundus that may interfere with the calculation of Hough circles. The HT maps any point (x, y) in the image to a circle in a parameter space that is characterized by centre (a, b) and radius r, and passes through the point (x, y), following the equation of a circle. Consequently, the set of all feature points in the binary image is associated with circles that may be almost concentric around a circular shape in the image for some given value of radius r. This value of r should be known a priori from experience or experiments (a minimal Hough-circle sketch is given after this record's reference list). Akyol et al. [28] presented an automatic method to localize the OD from retinal images. They employ keypoint detectors to extract discriminative information about the image and the Structural Similarity (SSIM) index for textural analysis. They then used a visual dictionary and a random forest classifier [29] to detect the disc location. Glaucoma classification: Automatic detection and classification of glaucoma has also been widely studied by researchers for a long time. A brief overview of some of the current works is presented below. For a thorough coverage of glaucoma detection techniques, [30–32] may be consulted. Fuente-Arriaga et al. [33] proposed measuring blood vessel displacement within the disc for glaucoma detection. They first segment the vascular bundle in the OD to set a reference point on the temporal side of the cup. Centroid positions of the inferior, superior, and nasal vascular bundles are then determined and used to calculate the L1 distance between each centroid and the normal position of the vascular bundles. They applied their method on a set of 67 images carefully selected from a private dataset for retinal clarity and quality, and report 91.34% overall accuracy. Ahmad et al. [34] and Khan et al. [35] have used similar techniques to detect glaucoma. They calculate the CDR and the ISNT quadrants and classify an image as glaucomatous if the CDR is greater than 0.5 and it violates the ISNT rule. Ahmad et al. applied the method on 80 images taken from the DMED dataset, the FAU data library, and the Messidor dataset and reported 97.5% accuracy, whereas Khan et al. used 50 images taken from the above-mentioned datasets and reported 94% accuracy.
Though the accuracies reported by the aforementioned researchers are well above 90%, their test images are handpicked and so few in number that the results are not statistically significant and cannot be reliably generalized to large-scale public datasets. ORIGA [36] is a publicly available dataset of 650 retinal fundus images for benchmarking computer-aided segmentation and classification. Xu et al. [37] formulated a reconstruction-based method for localizing and classifying optic discs. They generate a codebook by random sampling from manually labelled images. This codebook is then used to calculate OD parameters based on their similarity to the input and their contribution towards the reconstruction of the input image. They report an AUC of 0.823 for glaucoma diagnosis. Noting that classification-based approaches perform better than segmentation-based approaches for glaucoma detection, Li et al. [38] proposed to integrate local features with holistic features to improve glaucoma classification. They ran various CNNs such as AlexNet, VGG-16 and VGG-19 [39] and found that combining holistic and local features with AlexNet as the classifier gives the highest AUC of 0.8384 using 10-fold cross validation, while manual classification gives an AUC equal to 0.8390 on the ORIGA dataset. Chen et al. [6] also used a deep convolutional network based approach for glaucoma classification on the ORIGA dataset. Their method inserts micro neural networks within more complex models so that the receptive field has a more abstract representation of the data. They also make use of a contextualization network to get a hierarchical and discriminative representation of images. Their achieved AUC is 0.838 with 99 randomly selected training images and the rest for testing. In another of their publications, Chen et al. [5] used a six-layer CNN to detect glaucoma from ORIGA images. They used the same strategy of taking 99 random images for training and the rest for testing, and obtained an AUC of 0.831. Recently, Al-Bander et al. [40] used a deep learning approach to segment the optic cup and OD from fundus images. Their segmentation model has a U-shaped architecture inspired by U-Net [41], with densely connected convolutional blocks inspired by DenseNet [42]. They outperformed state-of-the-art segmentation results on various fundus datasets including ORIGA. For glaucoma diagnosis, however, in spite of combining the commonly used vertical CDR with the horizontal CDR, they were only able to achieve an AUC of 0.778. Similarly, Fu et al. [43] proposed a U-Net-like architecture for joint segmentation of the optic cup and OD, named M-Net. They added a multi-scale input layer that takes the input image at various scales and provides receptive fields of corresponding sizes. The main U-shaped convolutional network learns a hierarchical representation. The so-called side-output layers generate prediction maps for early layers. These side-output layers not only relieve the vanishing gradient problem by back-propagating the side-output loss directly to the early layers but also help achieve better output by supervising the output maps at each scale. For glaucoma screening on the ORIGA dataset, they trained their model on 325 images and tested on the remaining 325 images. Using the vertical CDR of their segmented discs and cups, they achieved an AUC of 0.851 (a minimal sketch of the vertical CDR computation is given after this record's reference list). | [
"23787338",
"22275207",
"21156389",
"14518729",
"25464343",
"25361515",
"24530536",
"15084075",
"18534830",
"22588616",
"20562037"
] | [
{
"pmid": "23787338",
"title": "Representation learning: a review and new perspectives.",
"abstract": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning."
},
{
"pmid": "22275207",
"title": "Retinal imaging and image analysis.",
"abstract": "Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships."
},
{
"pmid": "21156389",
"title": "Detection of new vessels on the optic disc using retinal photographs.",
"abstract": "Proliferative diabetic retinopathy is a rare condition likely to lead to severe visual impairment. It is characterized by the development of abnormal new retinal vessels. We describe a method for automatically detecting new vessels on the optic disc using retinal photography. Vessel-like candidate segments are first detected using a method based on watershed lines and ridge strength measurement. Fifteen feature parameters, associated with shape, position, orientation, brightness, contrast and line density are calculated for each candidate segment. Based on these features, each segment is categorized as normal or abnormal using a support vector machine (SVM) classifier. The system was trained and tested by cross-validation using 38 images with new vessels and 71 normal images from two diabetic retinal screening centers and one hospital eye clinic. The discrimination performance of the fifteen features was tested against a clinical reference standard. Fourteen features were found to be effective and used in the final test. The area under the receiver operator characteristic curve was 0.911 for detecting images with new vessels on the disc. This accuracy may be sufficient for it to play a useful clinical role in an automated retinopathy analysis system."
},
{
"pmid": "14518729",
"title": "A novel approach to diagnose diabetes based on the fractal characteristics of retinal images.",
"abstract": "A novel diagnostic scheme to develop quantitative indexes of diabetes is introduced in this paper. The fractal dimension of the vascular distribution is estimated because we discovered that the fractal dimension of a severe diabetic patient's retinal vascular distribution appears greater than that of a normal human's. The issue of how to yield an accurate fractal dimension is to use high-quality images. To achieve a better image-processing result, an appropriate image-processing algorithm is adopted in this paper. Another important fractal feature introduced in this paper is the measure of lacunarity, which describes the characteristics of fractals that have the same fractal dimension but different appearances. For those vascular distributions in the same fractal dimension, further classification can be made using the degree of lacunarity. In addition to the image-processing technique, the resolution of original image is also discussed here. In this paper, the influence of the image resolution upon the fractal dimension is explored. We found that a low-resolution image cannot yield an accurate fractal dimension. Therefore, an approach for examining the lower bound of image resolution is also proposed in this paper. As for the classification of diagnosis results, four different approaches are compared to achieve higher accuracy. In this study, the fractal dimension and the measure of lacunarity have shown their significance in the classification of diabetes and are adequate for use as quantitative indexes."
},
{
"pmid": "25464343",
"title": "Optic disc segmentation using the sliding band filter.",
"abstract": "BACKGROUND\nThe optic disc (OD) centre and boundary are important landmarks in retinal images and are essential for automating the calculation of health biomarkers related with some prevalent systemic disorders, such as diabetes, hypertension, cerebrovascular and cardiovascular diseases.\n\n\nMETHODS\nThis paper presents an automatic approach for OD segmentation using a multiresolution sliding band filter (SBF). After the preprocessing phase, a low-resolution SBF is applied on a downsampled retinal image and the locations of maximal filter response are used for focusing the analysis on a reduced region of interest (ROI). A high-resolution SBF is applied to obtain a set of pixels associated with the maximum response of the SBF, giving a coarse estimation of the OD boundary, which is regularized using a smoothing algorithm.\n\n\nRESULTS\nOur results are compared with manually extracted boundaries from public databases (ONHSD, MESSIDOR and INSPIRE-AVR datasets) outperforming recent approaches for OD segmentation. For the ONHSD, 44% of the results are classified as Excellent, while the remaining images are distributed between the Good (47%) and Fair (9%) categories. An average overlapping area of 83%, 89% and 85% is achieved for the images in ONHSD, MESSIDOR and INSPIR-AVR datasets, respectively, when comparing with the manually delineated OD regions.\n\n\nDISCUSSION\nThe evaluation results on the images of three datasets demonstrate the better performance of the proposed method compared to recently published OD segmentation approaches and prove the independence of this method when from changes in image characteristics such as size, quality and camera field of view."
},
{
"pmid": "25361515",
"title": "Novel Accurate and Fast Optic Disc Detection in Retinal Images With Vessel Distribution and Directional Characteristics.",
"abstract": "A novel accurate and fast optic disc (OD) detection method is proposed by using vessel distribution and directional characteristics. A feature combining three vessel distribution characteristics, i.e., local vessel density, compactness, and uniformity, is designed to find possible horizontal coordinate of OD. Then, according to the global vessel direction characteristic, a General Hough Transformation is introduced to identify the vertical coordinate of OD. By confining the possible OD vertical range and by simplifying vessel structure with blocks, we greatly decrease the computational cost of the algorithm. Four public datasets have been tested. The OD localization accuracy lies from 93.8% to 99.7%, when 8-20% vessel detection results are adopted to achieve OD detection. Average computation times for STARE images are about 3.4-11.5 s, which relate to image size. The proposed method shows satisfactory robustness on both normal and diseased images. It is better than many previous methods with respect to accuracy and efficiency."
},
{
"pmid": "24530536",
"title": "Application of vascular bundle displacement in the optic disc for glaucoma detection using fundus images.",
"abstract": "This paper presents a methodology for glaucoma detection based on measuring displacements of blood vessels within the optic disc (vascular bundle) in human retinal images. The method consists of segmenting the region of the vascular bundle in an optic disc to set a reference point in the temporal side of the cup, determining the position of the centroids of the superior, inferior, and nasal vascular bundle segmented zones located within the segmented region, and calculating the displacement from normal position using the chessboard distance metric. The method was successful in 62 images out of 67, achieving 93.02% sensitivity, 91.66% specificity, and 91.34% global accuracy in pre-diagnosis."
},
{
"pmid": "15084075",
"title": "Ridge-based vessel segmentation in color images of the retina.",
"abstract": "A method is presented for automated segmentation of vessels in two-dimensional color images of the retina. This method can be used in computer analyses of retinal images, e.g., in automated screening for diabetic retinopathy. The system is based on extraction of image ridges, which coincide approximately with vessel centerlines. The ridges are used to compose primitives in the form of line elements. With the line elements an image is partitioned into patches by assigning each image pixel to the closest line element. Every line element constitutes a local coordinate frame for its corresponding patch. For every pixel, feature vectors are computed that make use of properties of the patches and the line elements. The feature vectors are classified using a kappaNN-classifier and sequential forward feature selection. The algorithm was tested on a database consisting of 40 manually labeled images. The method achieves an area under the receiver operating characteristic curve of 0.952. The method is compared with two recently published rule-based methods of Hoover et al. and Jiang et al. The results show that our method is significantly better than the two rule-based methods (p < 0.01). The accuracy of our method is 0.944 versus 0.947 for a second observer."
},
{
"pmid": "18534830",
"title": "Identification of the optic nerve head with genetic algorithms.",
"abstract": "OBJECTIVE\nThis work proposes creating an automatic system to locate and segment the optic nerve head (ONH) in eye fundus photographic images using genetic algorithms.\n\n\nMETHODS AND MATERIAL\nDomain knowledge is used to create a set of heuristics that guide the various steps involved in the process. Initially, using an eye fundus colour image as input, a set of hypothesis points was obtained that exhibited geometric properties and intensity levels similar to the ONH contour pixels. Next, a genetic algorithm was used to find an ellipse containing the maximum number of hypothesis points in an offset of its perimeter, considering some constraints. The ellipse thus obtained is the approximation to the ONH. The segmentation method is tested in a sample of 110 eye fundus images, belonging to 55 patients with glaucoma (23.1%) and eye hypertension (76.9%) and random selected from an eye fundus image base belonging to the Ophthalmology Service at Miguel Servet Hospital, Saragossa (Spain).\n\n\nRESULTS AND CONCLUSIONS\nThe results obtained are competitive with those in the literature. The method's generalization capability is reinforced when it is applied to a different image base from the one used in our study and a discrepancy curve is obtained very similar to the one obtained in our image base. In addition, the robustness of the method proposed can be seen in the high percentage of images obtained with a discrepancy delta<5 (96% and 99% in our and a different image base, respectively). The results also confirm the hypothesis that the ONH contour can be properly approached with a non-deformable ellipse. Another important aspect of the method is that it directly provides the parameters characterising the shape of the papilla: lengths of its major and minor axes, its centre of location and its orientation with regard to the horizontal position."
},
{
"pmid": "22588616",
"title": "Fast localization and segmentation of optic disk in retinal images using directional matched filtering and level sets.",
"abstract": "The optic disk (OD) center and margin are typically requisite landmarks in establishing a frame of reference for classifying retinal and optic nerve pathology. Reliable and efficient OD localization and segmentation are important tasks in automatic eye disease screening. This paper presents a new, fast, and fully automatic OD localization and segmentation algorithm developed for retinal disease screening. First, OD location candidates are identified using template matching. The template is designed to adapt to different image resolutions. Then, vessel characteristics (patterns) on the OD are used to determine OD location. Initialized by the detected OD center and estimated OD radius, a fast, hybrid level-set model, which combines region and local gradient information, is applied to the segmentation of the disk boundary. Morphological filtering is used to remove blood vessels and bright regions other than the OD that affect segmentation in the peripapillary region. Optimization of the model parameters and their effect on the model performance are considered. Evaluation was based on 1200 images from the publicly available MESSIDOR database. The OD location methodology succeeded in 1189 out of 1200 images (99% success). The average mean absolute distance between the segmented boundary and the reference standard is 10% of the estimated OD radius for all image sizes. Its efficiency, robustness, and accuracy make the OD localization and segmentation scheme described herein suitable for automatic retinal disease screening in a variety of clinical settings."
},
{
"pmid": "20562037",
"title": "Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques.",
"abstract": "Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s. with a standard deviation of 0.14 s. On the other hand, the segmentation algorithm rendered an average common area overlapping between automated segmentations and true OD regions of 86%. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion on advantages and disadvantages of the models more generally used for OD segmentation is also presented in this paper."
}
] |
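The BMC record above describes brightest-region OD localization: keep roughly the top 5% brightest pixels of the green channel and take the largest connected cluster as the disc candidate. A minimal sketch of that idea is given below; the function name and the exact percentile are illustrative assumptions rather than a specific published implementation.

```python
import numpy as np
from scipy import ndimage

def localize_od_bright(image_rgb, top_fraction=0.05):
    """Rough OD localization: threshold the brightest pixels of the green
    channel and return the centroid of the largest connected region."""
    green = image_rgb[:, :, 1].astype(float)
    thresh = np.quantile(green, 1.0 - top_fraction)   # keep the top 5% brightest pixels
    binary = green >= thresh
    labels, n_regions = ndimage.label(binary)         # connected candidate regions
    if n_regions == 0:
        return None
    sizes = np.bincount(labels.ravel())[1:]           # pixel count per region (skip background)
    largest = int(np.argmax(sizes)) + 1
    cy, cx = ndimage.center_of_mass(binary, labels, largest)
    return int(cy), int(cx)                           # estimated disc centre (row, col)
```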
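The same record also describes Hough Transform based OD detection, where the expected disc radius range is assumed a priori. A minimal OpenCV sketch of that step is shown below; the blur kernel and the HoughCircles parameters are illustrative and would need tuning per dataset.

```python
import cv2

def localize_od_hough(image_bgr, min_radius=40, max_radius=90):
    """Rough OD localization with the circular Hough Transform.
    Returns (x, y, r) of the strongest circle, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress vessel and reflection noise
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0] // 2,
        param1=100, param2=30, minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return None
    x, y, r = circles[0][0]         # strongest accumulator peak
    return int(x), int(y), int(r)
```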
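Several of the works reviewed in this record screen for glaucoma with the vertical cup-to-disc ratio (CDR), flagging an image when the CDR exceeds roughly 0.5. Below is a minimal sketch of computing the vertical CDR from binary disc and cup masks; the masks are assumed to come from some segmentation step, and the 0.5 threshold is the rule of thumb quoted above, not a calibrated value.

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks."""
    disc_height = np.count_nonzero(disc_mask.any(axis=1))  # rows containing disc pixels
    cup_height = np.count_nonzero(cup_mask.any(axis=1))    # rows containing cup pixels
    return cup_height / max(disc_height, 1)

def is_glaucoma_suspect(disc_mask, cup_mask, threshold=0.5):
    return vertical_cdr(disc_mask, cup_mask) > threshold
```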
Scientific Reports | 31320740 | PMC6639365 | 10.1038/s41598-019-46939-6 | Discovering Links Between Side Effects and Drugs Using a Diffusion Based Method | Identifying the unintended effects of drugs (side effects) is a very important issue in pharmacological studies. The laboratory verification of associations between drugs and side effects requires costly, time-intensive research. Thus, an approach to predicting drug side effects based on known side effects, using a computational model, is highly desirable. To provide such a model, we used openly available data resources to model drugs and side effects as a bipartite graph. The drug-drug network is constructed using the word2vec model, where the edges between drugs represent the semantic similarity between them. We integrated the bipartite graph and the semantic similarity graph using a matrix factorization method and a diffusion-based model. Our results show the effectiveness of this integration by computing weighted (i.e., ranked) predictions of initially unknown links between side effects and drugs. | Related Work In a biological context, diffusion-based approaches for predicting relations between diseases and genes are well studied. Network propagation has become a popular technique in computational systems biology with a focus on protein function prediction and disease-gene prioritization20. Many methods that rely on biological information use protein targets as features. The assumption underlying these approaches is that drugs with similar in vitro protein-binding profiles tend to exhibit similar side effects21. Some methods have been developed to determine associations between ADRs and perturbed biological pathways, because these pathways share the proteins that the drugs target. Li et al.22 describe a chemical systems biology approach to identifying the off-targets of drugs. However, these approaches are based on the accessibility of gene-expression data collected during the chemical perturbations produced by the drugs. The success of these methods depends upon the availability of 3D protein structures, which limits their usability because of the high cost involved. Cowen et al.23 have claimed that network-based propagation is a powerful data transformation method of broad utility in biomedical research. Different variants of network propagation have been proposed, such as random walk24 and PageRank search25 algorithms applied to biological problems. Nitsch et al.20 showed that heat diffusion algorithms have the potential to help prioritize disease-gene associations and perform best among all network-based diffusion approaches. Finding associations between side effects and a drug is a link prediction problem. Matrix factorization is widely used for link prediction, where the networks are represented as matrices whose cells encode relationships. Therefore, according to Menon et al.26, link prediction can be treated as a problem of matrix completion. For example, low-rank matrix decomposition based on Singular Value Decomposition (SVD)27 has been used for this purpose. Another variant of matrix factorization, called Non-negative Matrix Factorization (NMF)28, has also been used in link prediction tasks29. One of the advantages of NMF-based matrix factorization is that it can easily integrate heterogeneous information30 and that its non-negativity makes the factors interpretable. For multi-relational link prediction, tensor-based factorization is prominently used.
The strength of tensors is that a multi-relational graph can be expressed as a higher-order tensor, which can be easily factorized. Unlike graphical models such as Markov Logic Networks (MLN) or Bayesian Networks31, these models do not require a priori knowledge that would otherwise need to be inferred from the data. In recent studies, the node2vec32 approach was used to analyze different network neighborhoods and embed nodes, based on the assumptions of homophily and structural equivalence, for link prediction in a homogeneous network with a single edge type. Due to their high accuracy, node embedding techniques33 are preferred, but they also have some limitations. These methods require learning steps which might be infeasible for large-scale networks with millions of nodes34. Similarity-based propagation methods are also well studied for predicting links in bipartite networks. The classic network-based propagation used in recommender systems to predict the most relevant objects for users35 predicts links between two dissimilar node types. Our diffusion approach differs from the methods mentioned above in two important ways. First, the heat diffusion-based approaches described above are applied to a homogeneous network, where nodes and edges are of the same type, whereas we consider heterogeneous networks and integrate them in an effective way. Second, we used two different networks: the first for learning the seed nodes that carry side effect information in a drug-drug similarity network, and the second to predict the associations between side effects and drugs. More specifically, we integrated NMF and heat diffusion methods to effectively handle the two different networks. | [
"9554902",
"12136375",
"16243262",
"29943160",
"21613989",
"19434832",
"23157436",
"23593264",
"26610385",
"25832646",
"16875881",
"20840752",
"16370374",
"19436720",
"28854195",
"30496261",
"28881986",
"17995171",
"21173440",
"30724742",
"30537965",
"22538619",
"27415801"
] | [
{
"pmid": "12136375",
"title": "Admissions caused by adverse drug events to internal medicine and emergency departments in hospitals: a longitudinal population-based study.",
"abstract": "OBJECTIVE\nTo estimate incidence rates of drug-related hospitalizations (DRHs) in a longitudinal population-based study with prospective event assessment.\n\n\nDESIGN\nCohort study and time-trend analysis.\n\n\nSETTING\nAll departments of internal medicine and emergency departments in the urban regions of Jena and Rostock, Germany, serving about 520,000 residents.\n\n\nPARTICIPANTS\nAll patients admitted between October 1997 and March 2000. Patients with severe cutaneous reactions were excluded.\n\n\nMAIN OUTCOME MEASURES\nIncidence of DRH was defined by symptoms or diagnoses at admission that were very likely, likely, or possibly caused by prescription medications, according to a standardized assessment.\n\n\nRESULTS\nThe incidence of DRH was 9.4 admissions per 10,000 treated patients [95% confidence interval (CI) 9.0-9.9]. Rates were highest for antithrombotics with 26.9 admissions per 10,000 treated patients (95% CI 23.6, 30.1). Most frequent events were gastroduodenal lesions and bleeding (45%). Digitalis preparations showed a linearly increasing trend from 2/10,000 to 14/10,000 during ten quarters ( P<0.0001), which was exclusively attributable to digitoxin, the major source of digitalis in the study area (93%). The incidence of DRH increased with age (4/10,000 to 20/10,000). The mean length of stays in patients with DRH was 13+/-10.6 days. Cumulative direct costs for hospitalization were Euro 4 million in the two urban study areas. The annual direct costs for Germany were estimated to be Euro 400 million.\n\n\nCONCLUSIONS\nDRHs are a considerable public health and economic burden. A longitudinal design can observe changes in population-based incidence over time. This approach can be used for public-health planning or to evaluate outcomes of quality management programs designed to reduce drug-induced illness."
},
{
"pmid": "16243262",
"title": "Keynote review: in vitro safety pharmacology profiling: an essential tool for successful drug development.",
"abstract": "Broad-scale in vitro pharmacology profiling of new chemical entities during early phases of drug discovery has recently become an essential tool to predict clinical adverse effects. Modern, relatively inexpensive assay technologies and rapidly expanding knowledge about G-protein coupled receptors, nuclear receptors, ion channels and enzymes have made it possible to implement a large number of assays addressing possible clinical liabilities. Together with other in vitro assays focusing on toxicology and bioavailability, they provide a powerful tool to aid drug development. In this article, we review the development of this tool for drug discovery, its appropriate use and predictive value."
},
{
"pmid": "29943160",
"title": "Inferring potential small molecule-miRNA association based on triple layer heterogeneous network.",
"abstract": "Recently, many biological experiments have indicated that microRNAs (miRNAs) are a newly discovered small molecule (SM) drug targets that play an important role in the development and progression of human complex diseases. More and more computational models have been developed to identify potential associations between SMs and target miRNAs, which would be a great help for disease therapy and clinical applications for known drugs in the field of medical research. In this study, we proposed a computational model of triple layer heterogeneous network based small molecule-MiRNA association prediction (TLHNSMMA) to uncover potential SM-miRNA associations by integrating integrated SM similarity, integrated miRNA similarity, integrated disease similarity, experimentally verified SM-miRNA associations and miRNA-disease associations into a heterogeneous graph. To evaluate the performance of TLHNSMMA, we implemented global and two types of local leave-one-out cross validation as well as fivefold cross validation to compare TLHNSMMA with one previous classical computational model (SMiR-NBI). As a result, for Dataset 1, TLHNSMMA obtained the AUCs of 0.9859, 0.9845, 0.7645 and 0.9851 ± 0.0012, respectively; for Dataset 2, the AUCs are in turn 0.8149, 0.8244, 0.6057 and 0.8168 ± 0.0022. As the result of case studies shown, among the top 10, 20 and 50 potential SM-related miRNAs, there were 2, 7 and 14 SM-miRNA associations confirmed by experiments, respectively. Therefore, TLHNSMMA could be effectively applied to the prediction of SM-miRNA associations."
},
{
"pmid": "21613989",
"title": "Predicting adverse drug reactions using publicly available PubChem BioAssay data.",
"abstract": "Adverse drug reactions (ADRs) can have severe consequences, and therefore the ability to predict ADRs prior to market introduction of a drug is desirable. Computational approaches applied to preclinical data could be one way to inform drug labeling and marketing with respect to potential ADRs. Based on the premise that some of the molecular actors of ADRs involve interactions that are detectable in large, and increasingly public, compound screening campaigns, we generated logistic regression models that correlate postmarketing ADRs with screening data from the PubChem BioAssay database. These models analyze ADRs at the level of organ systems, using the system organ classes (SOCs). Of the 19 SOCs under consideration, nine were found to be significantly correlated with preclinical screening data. With regard to six of the eight established drugs for which we could retropredict SOC-specific ADRs, prior knowledge was found that supports these predictions. We conclude this paper by predicting that SOC-specific ADRs will be associated with three unapproved or recently introduced drugs."
},
{
"pmid": "19434832",
"title": "Gaining insight into off-target mediated effects of drug candidates with a comprehensive systems chemical biology analysis.",
"abstract": "We present a workflow that leverages data from chemogenomics based target predictions with Systems Biology databases to better understand off-target related toxicities. By analyzing a set of compounds that share a common toxic phenotype and by comparing the pathways they affect with pathways modulated by nontoxic compounds we are able to establish links between pathways and particular adverse effects. We further link these predictive results with literature data in order to explain why a certain pathway is predicted. Specifically, relevant pathways are elucidated for the side effects rhabdomyolysis and hypotension. Prospectively, our approach is valuable not only to better understand toxicities of novel compounds early on but also for drug repurposing exercises to find novel uses for known drugs."
},
{
"pmid": "23157436",
"title": "Drug side-effect prediction based on the integration of chemical and biological spaces.",
"abstract": "Drug side-effects, or adverse drug reactions, have become a major public health concern and remain one of the main causes of drug failure and of drug withdrawal once they have reached the market. Therefore, the identification of potential severe side-effects is a challenging issue. In this paper, we develop a new method to predict potential side-effect profiles of drug candidate molecules based on their chemical structures and target protein information on a large scale. We propose several extensions of kernel regression model for multiple responses to deal with heterogeneous data sources. The originality lies in the integration of the chemical space of drug chemical structures and the biological space of drug target proteins in a unified framework. As a result, we demonstrate the usefulness of the proposed method on the simultaneous prediction of 969 side-effects for approved drugs from their chemical substructure and target protein profiles and show that the prediction accuracy consistently improves owing to the proposed regression model and integration of chemical and biological information. We also conduct a comprehensive side-effect prediction for uncharacterized drug molecules stored in DrugBank and confirm interesting predictions using independent information sources. The proposed method is expected to be useful at many stages of the drug development process."
},
{
"pmid": "23593264",
"title": "Drug target prediction and repositioning using an integrated network-based approach.",
"abstract": "The discovery of novel drug targets is a significant challenge in drug development. Although the human genome comprises approximately 30,000 genes, proteins encoded by fewer than 400 are used as drug targets in the treatment of diseases. Therefore, novel drug targets are extremely valuable as the source for first in class drugs. On the other hand, many of the currently known drug targets are functionally pleiotropic and involved in multiple pathologies. Several of them are exploited for treating multiple diseases, which highlights the need for methods to reliably reposition drug targets to new indications. Network-based methods have been successfully applied to prioritize novel disease-associated genes. In recent years, several such algorithms have been developed, some focusing on local network properties only, and others taking the complete network topology into account. Common to all approaches is the understanding that novel disease-associated candidates are in close overall proximity to known disease genes. However, the relevance of these methods to the prediction of novel drug targets has not yet been assessed. Here, we present a network-based approach for the prediction of drug targets for a given disease. The method allows both repositioning drug targets known for other diseases to the given disease and the prediction of unexploited drug targets which are not used for treatment of any disease. Our approach takes as input a disease gene expression signature and a high-quality interaction network and outputs a prioritized list of drug targets. We demonstrate the high performance of our method and highlight the usefulness of the predictions in three case studies. We present novel drug targets for scleroderma and different types of cancer with their underlying biological processes. Furthermore, we demonstrate the ability of our method to identify non-suspected repositioning candidates using diabetes type 1 as an example."
},
{
"pmid": "26610385",
"title": "Early identification of adverse drug reactions from search log data.",
"abstract": "The timely and accurate identification of adverse drug reactions (ADRs) following drug approval is a persistent and serious public health challenge. Aggregated data drawn from anonymized logs of Web searchers has been shown to be a useful source of evidence for detecting ADRs. However, prior studies have been based on the analysis of established ADRs, the existence of which may already be known publically. Awareness of these ADRs can inject existing knowledge about the known ADRs into online content and online behavior, and thus raise questions about the ability of the behavioral log-based methods to detect new ADRs. In contrast to previous studies, we investigate the use of search logs for the early detection of known ADRs. We use a large set of recently labeled ADRs and negative controls to evaluate the ability of search logs to accurately detect ADRs in advance of their publication. We leverage the Internet Archive to estimate when evidence of an ADR first appeared in the public domain and adjust the index date in a backdated analysis. Our results demonstrate how search logs can be used to detect new ADRs, the central challenge in pharmacovigilance."
},
{
"pmid": "25832646",
"title": "A survey of current trends in computational drug repositioning.",
"abstract": "Computational drug repositioning or repurposing is a promising and efficient tool for discovering new uses from existing drugs and holds the great potential for precision medicine in the age of big data. The explosive growth of large-scale genomic and phenotypic data, as well as data of small molecular compounds with granted regulatory approval, is enabling new developments for computational repositioning. To achieve the shortest path toward new drug indications, advanced data processing and analysis strategies are critical for making sense of these heterogeneous molecular measurements. In this review, we show recent advancements in the critical areas of computational drug repositioning from multiple aspects. First, we summarize available data sources and the corresponding computational repositioning strategies. Second, we characterize the commonly used computational techniques. Third, we discuss validation strategies for repositioning studies, including both computational and experimental methods. Finally, we highlight potential opportunities and use-cases, including a few target areas such as cancers. We conclude with a brief discussion of the remaining challenges in computational drug repositioning."
},
{
"pmid": "16875881",
"title": "Measures of semantic similarity and relatedness in the biomedical domain.",
"abstract": "Measures of semantic similarity between concepts are widely used in Natural Language Processing. In this article, we show how six existing domain-independent measures can be adapted to the biomedical domain. These measures were originally based on WordNet, an English lexical database of concepts and relations. In this research, we adapt these measures to the SNOMED-CT ontology of medical concepts. The measures include two path-based measures, and three measures that augment path-based measures with information content statistics from corpora. We also derive a context vector measure based on medical corpora that can be used as a measure of semantic relatedness. These six measures are evaluated against a newly created test bed of 30 medical concept pairs scored by three physicians and nine medical coders. We find that the medical coders and physicians differ in their ratings, and that the context vector measure correlates most closely with the physicians, while the path-based measures and one of the information content measures correlates most closely with the medical coders. We conclude that there is a role both for more flexible measures of relatedness based on information derived from corpora, as well as for measures that rely on existing ontological structures."
},
{
"pmid": "20840752",
"title": "Candidate gene prioritization by network analysis of differential expression using machine learning approaches.",
"abstract": "BACKGROUND\nDiscovering novel disease genes is still challenging for diseases for which no prior knowledge--such as known disease genes or disease-related pathways--is available. Performing genetic studies frequently results in large lists of candidate genes of which only few can be followed up for further investigation. We have recently developed a computational method for constitutional genetic disorders that identifies the most promising candidate genes by replacing prior knowledge by experimental data of differential gene expression between affected and healthy individuals.To improve the performance of our prioritization strategy, we have extended our previous work by applying different machine learning approaches that identify promising candidate genes by determining whether a gene is surrounded by highly differentially expressed genes in a functional association or protein-protein interaction network.\n\n\nRESULTS\nWe have proposed three strategies scoring disease candidate genes relying on network-based machine learning approaches, such as kernel ridge regression, heat kernel, and Arnoldi kernel approximation. For comparison purposes, a local measure based on the expression of the direct neighbors is also computed. We have benchmarked these strategies on 40 publicly available knockout experiments in mice, and performance was assessed against results obtained using a standard procedure in genetics that ranks candidate genes based solely on their differential expression levels (Simple Expression Ranking). Our results showed that our four strategies could outperform this standard procedure and that the best results were obtained using the Heat Kernel Diffusion Ranking leading to an average ranking position of 8 out of 100 genes, an AUC value of 92.3% and an error reduction of 52.8% relative to the standard procedure approach which ranked the knockout gene on average at position 17 with an AUC value of 83.7%.\n\n\nCONCLUSION\nIn this study we could identify promising candidate genes using network based machine learning approaches even if no knowledge is available about the disease or phenotype."
},
{
"pmid": "16370374",
"title": "Analysis of drug-induced effect patterns to link structure and side effects of medicines.",
"abstract": "The high failure rate of experimental medicines in clinical trials accentuates inefficiencies of current drug discovery processes caused by a lack of tools for translating the information exchange between protein and organ system networks. Recently, we reported that biological activity spectra (biospectra), derived from in vitro protein binding assays, provide a mechanism for assessing a molecule's capacity to modulate the function of protein-network components. Herein we describe the translation of adverse effect data derived from 1,045 prescription drug labels into effect spectra and show their utility for diagnosing drug-induced effects of medicines. In addition, notwithstanding the limitation imposed by the quality of drug label information, we show that biospectrum analysis, in concert with effect spectrum analysis, provides an alignment between preclinical and clinical drug-induced effects. The identification of this alignment provides a mechanism for forecasting clinical effect profiles of medicines."
},
{
"pmid": "19436720",
"title": "Drug discovery using chemical systems biology: identification of the protein-ligand binding network to explain the side effects of CETP inhibitors.",
"abstract": "Systematic identification of protein-drug interaction networks is crucial to correlate complex modes of drug action to clinical indications. We introduce a novel computational strategy to identify protein-ligand binding profiles on a genome-wide scale and apply it to elucidating the molecular mechanisms associated with the adverse drug effects of Cholesteryl Ester Transfer Protein (CETP) inhibitors. CETP inhibitors are a new class of preventive therapies for the treatment of cardiovascular disease. However, clinical studies indicated that one CETP inhibitor, Torcetrapib, has deadly off-target effects as a result of hypertension, and hence it has been withdrawn from phase III clinical trials. We have identified a panel of off-targets for Torcetrapib and other CETP inhibitors from the human structural genome and map those targets to biological pathways via the literature. The predicted protein-ligand network is consistent with experimental results from multiple sources and reveals that the side-effect of CETP inhibitors is modulated through the combinatorial control of multiple interconnected pathways. Given that combinatorial control is a common phenomenon observed in many biological processes, our findings suggest that adverse drug effects might be minimized by fine-tuning multiple off-target interactions using single or multiple therapies. This work extends the scope of chemogenomics approaches and exemplifies the role that systems biology has in the future of drug discovery."
},
{
"pmid": "28854195",
"title": "Link prediction based on non-negative matrix factorization.",
"abstract": "With the rapid expansion of internet, the complex networks has become high-dimensional, sparse and redundant. Besides, the problem of link prediction in such networks has also obatined increasingly attention from different types of domains like information science, anthropology, sociology and computer sciences. It makes requirements for effective link prediction techniques to extract the most essential and relevant information for online users in internet. Therefore, this paper attempts to put forward a link prediction algorithm based on non-negative matrix factorization. In the algorithm, we reconstruct the correlation between different types of matrix through the projection of high-dimensional vector space to a low-dimensional one, and then use the similarity between the column vectors of the weight matrix as the scoring matrix. The experiment results demonstrate that the algorithm not only reduces data storage space but also effectively makes the improvements of the prediction performance during the process of sustaining a low time complexity."
},
{
"pmid": "30496261",
"title": "A unified framework for link prediction based on non-negative matrix factorization with coupling multivariate information.",
"abstract": "Many link prediction methods have been developed to infer unobserved links or predict missing links based on the observed network structure that is always incomplete and subject to interfering noise. Thus, the performance of existing methods is usually limited in that their computation depends only on input graph structures, and they do not consider external information. The effects of social influence and homophily suggest that both network structure and node attribute information should help to resolve the task of link prediction. This work proposes SASNMF, a link prediction unified framework based on non-negative matrix factorization that considers not only graph structure but also the internal and external auxiliary information, which refers to both the node attributes and the structural latent feature information extracted from the network. Furthermore, three different combinations of internal and external information are proposed and input into the framework to solve the link prediction problem. Extensive experimental results on thirteen real networks, five node attribute networks and eight non-attribute networks show that the proposed framework has competitive performance compared with benchmark methods and state-of-the-art methods, indicating the superiority of the presented algorithm."
},
{
"pmid": "28881986",
"title": "Predicting multicellular function through multi-layer tissue networks.",
"abstract": "MOTIVATION\nUnderstanding functions of proteins in specific human tissues is essential for insights into disease diagnostics and therapeutics, yet prediction of tissue-specific cellular function remains a critical challenge for biomedicine.\n\n\nRESULTS\nHere, we present OhmNet , a hierarchy-aware unsupervised node feature learning approach for multi-layer networks. We build a multi-layer network, where each layer represents molecular interactions in a different human tissue. OhmNet then automatically learns a mapping of proteins, represented as nodes, to a neural embedding-based low-dimensional space of features. OhmNet encourages sharing of similar features among proteins with similar network neighborhoods and among proteins activated in similar tissues. The algorithm generalizes prior work, which generally ignores relationships between tissues, by modeling tissue organization with a rich multiscale tissue hierarchy. We use OhmNet to study multicellular function in a multi-layer protein interaction network of 107 human tissues. In 48 tissues with known tissue-specific cellular functions, OhmNet provides more accurate predictions of cellular function than alternative approaches, and also generates more accurate hypotheses about tissue-specific protein actions. We show that taking into account the tissue hierarchy leads to improved predictive power. Remarkably, we also demonstrate that it is possible to leverage the tissue hierarchy in order to effectively transfer cellular functions to a functionally uncharacterized tissue. Overall, OhmNet moves from flat networks to multiscale models able to predict a range of phenotypes spanning cellular subsystems.\n\n\nAVAILABILITY AND IMPLEMENTATION\nSource code and datasets are available at http://snap.stanford.edu/ohmnet .\n\n\nCONTACT\[email protected]."
},
{
"pmid": "17995171",
"title": "Heat conduction process on community networks as a recommendation model.",
"abstract": "Using heat conduction mechanism on a social network we develop a systematic method to predict missing values as recommendations. This method can treat very large matrices that are typical of internet communities. In particular, with an innovative, exact formulation that accommodates arbitrary boundary condition, our method is easy to use in real applications. The performance is assessed by comparing with traditional recommendation methods using real data."
},
{
"pmid": "21173440",
"title": "Graph Regularized Nonnegative Matrix Factorization for Data Representation.",
"abstract": "Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation,which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems."
},
{
"pmid": "30724742",
"title": "Detecting Potential Adverse Drug Reactions Using a Deep Neural Network Model.",
"abstract": "BACKGROUND\nAdverse drug reactions (ADRs) are common and are the underlying cause of over a million serious injuries and deaths each year. The most familiar method to detect ADRs is relying on spontaneous reports. Unfortunately, the low reporting rate of spontaneous reports is a serious limitation of pharmacovigilance.\n\n\nOBJECTIVE\nThe objective of this study was to identify a method to detect potential ADRs of drugs automatically using a deep neural network (DNN).\n\n\nMETHODS\nWe designed a DNN model that utilizes the chemical, biological, and biomedical information of drugs to detect ADRs. This model aimed to fulfill two main purposes: identifying the potential ADRs of drugs and predicting the possible ADRs of a new drug. For improving the detection performance, we distributed representations of the target drugs in a vector space to capture the drug relationships using the word-embedding approach to process substantial biomedical literature. Moreover, we built a mapping function to address new drugs that do not appear in the dataset.\n\n\nRESULTS\nUsing the drug information and the ADRs reported up to 2009, we predicted the ADRs of drugs recorded up to 2012. There were 746 drugs and 232 new drugs, which were only recorded in 2012 with 1325 ADRs. The experimental results showed that the overall performance of our model with mean average precision at top-10 achieved is 0.523 and the rea under the receiver operating characteristic curve (AUC) score achieved is 0.844 for ADR prediction on the dataset.\n\n\nCONCLUSIONS\nOur model is effective in identifying the potential ADRs of a drug and the possible ADRs of a new drug. Most importantly, it can detect potential ADRs irrespective of whether they have been reported in the past."
},
{
"pmid": "30537965",
"title": "A heterogeneous label propagation approach to explore the potential associations between miRNA and disease.",
"abstract": "BACKGROUND\nResearch on microRNAs (miRNAs) has attracted increasingly worldwide attention over recent years as growing experimental results have made clear that miRNA correlates with masses of critical biological processes and the occurrence, development, and diagnosis of human complex diseases. Nonetheless, the known miRNA-disease associations are still insufficient considering plenty of human miRNAs discovered now. Therefore, there is an urgent need for effective computational model predicting novel miRNA-disease association prediction to save time and money for follow-up biological experiments.\n\n\nMETHODS\nIn this study, considering the insufficiency of the previous computational methods, we proposed the model named heterogeneous label propagation for MiRNA-disease association prediction (HLPMDA), in which a heterogeneous label was propagated on the multi-network of miRNA, disease and long non-coding RNA (lncRNA) to infer the possible miRNA-disease association. The strength of the data about lncRNA-miRNA association and lncRNA-disease association enabled HLPMDA to produce a better prediction.\n\n\nRESULTS\nHLPMDA achieved AUCs of 0.9232, 0.8437 and 0.9218 ± 0.0004 based on global and local leave-one-out cross validation and 5-fold cross validation, respectively. Furthermore, three kinds of case studies were implemented and 47 (esophageal neoplasms), 49 (breast neoplasms) and 46 (lymphoma) of top 50 candidate miRNAs were proved by experiment reports.\n\n\nCONCLUSIONS\nAll the results adequately showed that HLPMDA is a recommendable miRNA-disease association prediction method. We anticipated that HLPMDA could help the follow-up investigations by biomedical researchers."
},
{
"pmid": "22538619",
"title": "Drug-target interaction prediction by random walk on the heterogeneous network.",
"abstract": "Predicting potential drug-target interactions from heterogeneous biological data is critical not only for better understanding of the various interactions and biological processes, but also for the development of novel drugs and the improvement of human medicines. In this paper, the method of Network-based Random Walk with Restart on the Heterogeneous network (NRWRH) is developed to predict potential drug-target interactions on a large scale under the hypothesis that similar drugs often target similar target proteins and the framework of Random Walk. Compared with traditional supervised or semi-supervised methods, NRWRH makes full use of the tool of the network for data integration to predict drug-target associations. It integrates three different networks (protein-protein similarity network, drug-drug similarity network, and known drug-target interaction networks) into a heterogeneous network by known drug-target interactions and implements the random walk on this heterogeneous network. When applied to four classes of important drug-target interactions including enzymes, ion channels, GPCRs and nuclear receptors, NRWRH significantly improves previous methods in terms of cross-validation and potential drug-target interaction prediction. Excellent performance enables us to suggest a number of new potential drug-target interactions for drug development."
},
{
"pmid": "27415801",
"title": "NLLSS: Predicting Synergistic Drug Combinations Based on Semi-supervised Learning.",
"abstract": "Fungal infection has become one of the leading causes of hospital-acquired infections with high mortality rates. Furthermore, drug resistance is common for fungus-causing diseases. Synergistic drug combinations could provide an effective strategy to overcome drug resistance. Meanwhile, synergistic drug combinations can increase treatment efficacy and decrease drug dosage to avoid toxicity. Therefore, computational prediction of synergistic drug combinations for fungus-causing diseases becomes attractive. In this study, we proposed similar nature of drug combinations: principal drugs which obtain synergistic effect with similar adjuvant drugs are often similar and vice versa. Furthermore, we developed a novel algorithm termed Network-based Laplacian regularized Least Square Synergistic drug combination prediction (NLLSS) to predict potential synergistic drug combinations by integrating different kinds of information such as known synergistic drug combinations, drug-target interactions, and drug chemical structures. We applied NLLSS to predict antifungal synergistic drug combinations and showed that it achieved excellent performance both in terms of cross validation and independent prediction. Finally, we performed biological experiments for fungal pathogen Candida albicans to confirm 7 out of 13 predicted antifungal synergistic drug combinations. NLLSS provides an efficient strategy to identify potential synergistic antifungal combinations."
}
] |
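A minimal sketch of the NMF-plus-heat-diffusion idea summarized in the related-work text above. This is not the authors' implementation: the similarity matrix S, the association matrix A, the factorization rank, the diffusion rate alpha and the number of steps are all hypothetical placeholders, and the snippet only conveys the two-network structure (a drug-drug similarity network for propagation and a bipartite drug-side-effect network for prediction).

    import numpy as np
    from sklearn.decomposition import NMF

    # Hypothetical toy inputs (placeholders, not the paper's data):
    # S: symmetric drug-drug similarity matrix, A: known drug-side-effect links (0/1).
    rng = np.random.default_rng(0)
    n_drugs, n_effects = 50, 20
    S = rng.random((n_drugs, n_drugs))
    S = (S + S.T) / 2.0
    A = (rng.random((n_drugs, n_effects)) > 0.9).astype(float)

    # Step 1 (assumed): a low-rank NMF of the known associations gives smoothed
    # "seed" scores that carry side-effect information for every drug.
    nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(A)   # drug factors
    H = nmf.components_        # side-effect factors
    F = W @ H                  # seed score matrix (drugs x side effects)

    # Step 2 (assumed): propagate the seeds over the drug-drug similarity network
    # with explicit heat-diffusion steps dF/dt = -L F, where L = D - S is the
    # graph Laplacian of the similarity network.
    L = np.diag(S.sum(axis=1)) - S
    alpha, n_steps = 0.01, 20
    for _ in range(n_steps):
        F = F - alpha * (L @ F)

    # Rank the pairs that are not yet known associations by their diffused score.
    cand = np.where(A == 0, F, 0.0)
    rows, cols = np.unravel_index(np.argsort(cand, axis=None)[::-1][:10], cand.shape)
    # rows[i], cols[i] now index the ten highest-scoring unobserved (drug, side-effect) pairs.

The actual method ties the seed construction and the diffusion parameters to the paper's own networks and normalization; this sketch only shows the general shape of the computation.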
Frontiers in Medicine | 31380377 | PMC6646468 | 10.3389/fmed.2019.00162 | Learning Domain-Invariant Representations of Histological Images | Histological images present high appearance variability due to inconsistent latent parameters related to the preparation and scanning procedure of histological slides, as well as the inherent biological variability of tissues. Machine-learning models are trained with images from a limited set of domains, and are expected to generalize to images from unseen domains. Methodological design choices have to be made in order to yield domain invariance and proper generalization. In digital pathology, standard approaches focus either on ad-hoc normalization of the latent parameters based on prior knowledge, such as staining normalization, or aim at anticipating new variations of these parameters via data augmentation. Since every histological image originates from a unique data distribution, we propose to consider every histological slide of the training data as a domain and investigated the alternative approach of domain-adversarial training to learn features that are invariant to this available domain information. We carried out a comparative analysis with staining normalization and data augmentation on two different tasks: generalization to images acquired in unseen pathology labs for mitosis detection and generalization to unseen organs for nuclei segmentation. We report that the utility of each method depends on the type of task and type of data variability present at training and test time. The proposed framework for domain-adversarial training is able to improve generalization performances on top of conventional methods. | 2. Related Work: Machine learning models for histopathology image analysis that directly tackle the appearance variability can be grouped in two main categories: (1) methods that rely on pre-processing of the image data and (2) methods that directly modify the machine learning model and/or training procedure. The first group of methods includes a variety of staining normalization techniques (3, 4). Some image processing pipelines handle the variability problem via extensive data augmentation strategies, often involving color transformations (2, 5–8). Hybrid strategies that perturb the staining distributions on top of a staining normalization procedure have also been investigated (9–12). The second group of methods is dominated by domain adaptation approaches. Domain adaptation assumes the model representation learned from a source domain can be adapted to a new target domain. Fine-tuning and domain-transfer solutions were proposed for deep learning models (13–16), and with applications to digital pathology (17–19). Another approach consists in considering the convolutional filters of the CNN as domain-invariant parameters whereas the domain variability can be captured with the Batch Normalization (BN) parameters (20, 21). Adaptation to new domains can be achieved by fine-tuning a new set of BN parameters dedicated to these new domains (21). Adversarial training of CNNs was proposed to achieve domain adaptation from a source domain of annotated data to a single target domain from which unlabeled data is available (22). Adversarial approaches aim at learning a shared representation that is invariant to the source and target domains via a discriminator CNN that is used to penalize the model for learning domain-specific features (22–26).
This type of method has been successfully applied and adapted to the field of medical image analysis (27). These methods, however, require that data from the target domains be available at training time, which is not a constraint of our approach, and they have not been investigated on tasks involving histological images. Finally, in Lafarge et al. (2) we proposed a similar approach that enforces the model to learn a domain-agnostic representation with respect to the extensive domain variability present within the training data, and we investigated its ability to perform on new, unseen domains (a gradient-reversal sketch illustrating this kind of adversarial scheme follows the reference entries below). | [
"28287963",
"27529701",
"26863654",
"30631215",
"30081241",
"29994060",
"28410112",
"12405945",
"25547073",
"23000897",
"11531144"
] | [
{
"pmid": "28287963",
"title": "A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.",
"abstract": "Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images."
},
{
"pmid": "27529701",
"title": "Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method.",
"abstract": "BACKGROUND\nTumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far.\n\n\nMETHODS\nThe development of automatic mitosis detection methods has received large interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an \"external\" dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method.\n\n\nRESULTS\nThe object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects from smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial agreement with human experts."
},
{
"pmid": "26863654",
"title": "Locality Sensitive Deep Learning for Detection and Classification of Nuclei in Routine Colon Cancer Histology Images.",
"abstract": "Detection and classification of cell nuclei in histopathology images of cancerous tissue stained with the standard hematoxylin and eosin stain is a challenging task due to cellular heterogeneity. Deep learning approaches have been shown to produce encouraging results on histopathology images in various studies. In this paper, we propose a Spatially Constrained Convolutional Neural Network (SC-CNN) to perform nucleus detection. SC-CNN regresses the likelihood of a pixel being the center of a nucleus, where high probability values are spatially constrained to locate in the vicinity of the centers of nuclei. For classification of nuclei, we propose a novel Neighboring Ensemble Predictor (NEP) coupled with CNN to more accurately predict the class label of detected cell nuclei. The proposed approaches for detection and classification do not require segmentation of nuclei. We have evaluated them on a large dataset of colorectal adenocarcinoma images, consisting of more than 20,000 annotated nuclei belonging to four different classes. Our results show that the joint detection and classification of the proposed SC-CNN and NEP produces the highest average F1 score as compared to other recently published approaches. Prospectively, the proposed methods could offer benefit to pathology practice in terms of quantitative analysis of tissue constituents in whole-slide images, and potentially lead to a better understanding of cancer."
},
{
"pmid": "30631215",
"title": "Sparse Autoencoder for Unsupervised Nucleus Detection and Representation in Histopathology Images.",
"abstract": "We propose a sparse Convolutional Autoencoder (CAE) for simultaneous nucleus detection and feature extraction in histopathology tissue images. Our CAE detects and encodes nuclei in image patches in tissue images into sparse feature maps that encode both the location and appearance of nuclei. A primary contribution of our work is the development of an unsupervised detection network by using the characteristics of histopathology image patches. The pretrained nucleus detection and feature extraction modules in our CAE can be fine-tuned for supervised learning in an end-to-end fashion. We evaluate our method on four datasets and achieve state-of-the-art results. In addition, we are able to achieve comparable performance with only 5% of the fully- supervised annotation cost."
},
{
"pmid": "30081241",
"title": "Segmentation of glandular epithelium in colorectal tumours to automatically compartmentalise IHC biomarker quantification: A deep learning approach.",
"abstract": "In this paper, we propose a method for automatically annotating slide images from colorectal tissue samples. Our objective is to segment glandular epithelium in histological images from tissue slides submitted to different staining techniques, including usual haematoxylin-eosin (H&E) as well as immunohistochemistry (IHC). The proposed method makes use of Deep Learning and is based on a new convolutional network architecture. Our method achieves better performances than the state of the art on the H&E images of the GlaS challenge contest, whereas it uses only the haematoxylin colour channel extracted by colour deconvolution from the RGB images in order to extend its applicability to IHC. The network only needs to be fine-tuned on a small number of additional examples to be accurate on a new IHC dataset. Our approach also includes a new method of data augmentation to achieve good generalisation when working with different experimental conditions and different IHC markers. We show that our methodology enables to automate the compartmentalisation of the IHC biomarker analysis, results concurring highly with manual annotations."
},
{
"pmid": "29994060",
"title": "Beyond Sharing Weights for Deep Domain Adaptation.",
"abstract": "The performance of a classifier trained on data coming from a specific domain typically degrades when applied to a related but different one. While annotating many samples from the new domain would address this issue, it is often too expensive or impractical. Domain Adaptation has therefore emerged as a solution to this problem; It leverages annotated data from a source domain, in which it is abundant, to train a classifier to operate in a target domain, in which it is either sparse or even lacking altogether. In this context, the recent trend consists of learning deep architectures whose weights are shared for both domains, which essentially amounts to learning domain invariant features. Here, we show that it is more effective to explicitly model the shift from one domain to the other. To this end, we introduce a two-stream architecture, where one operates in the source domain and the other in the target domain. In contrast to other approaches, the weights in corresponding layers are related but not shared. We demonstrate that this both yields higher accuracy than state-of-the-art methods on several object recognition and detection tasks and consistently outperforms networks with shared weights in both supervised and unsupervised settings."
},
{
"pmid": "28410112",
"title": "Epithelium-Stroma Classification via Convolutional Neural Networks and Unsupervised Domain Adaptation in Histopathological Images.",
"abstract": "Epithelium-stroma classification is a necessary preprocessing step in histopathological image analysis. Current deep learning based recognition methods for histology data require collection of large volumes of labeled data in order to train a new neural network when there are changes to the image acquisition procedure. However, it is extremely expensive for pathologists to manually label sufficient volumes of data for each pathology study in a professional manner, which results in limitations in real-world applications. A very simple but effective deep learning method, that introduces the concept of unsupervised domain adaptation to a simple convolutional neural network (CNN), has been proposed in this paper. Inspired by transfer learning, our paper assumes that the training data and testing data follow different distributions, and there is an adaptation operation to more accurately estimate the kernels in CNN in feature extraction, in order to enhance performance by transferring knowledge from labeled data in source domain to unlabeled data in target domain. The model has been evaluated using three independent public epithelium-stroma datasets by cross-dataset validations. The experimental results demonstrate that for epithelium-stroma classification, the proposed framework outperforms the state-of-the-art deep neural network model, and it also achieves better performance than other existing deep domain adaptation methods. The proposed model can be considered to be a better option for real-world applications in histopathological image analysis, since there is no longer a requirement for large-scale labeled data in each specified domain."
},
{
"pmid": "25547073",
"title": "Assessment of algorithms for mitosis detection in breast cancer histopathology images.",
"abstract": "The proliferative activity of breast tumors, which is routinely estimated by counting of mitotic figures in hematoxylin and eosin stained histology sections, is considered to be one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole slide images in pathology labs, automatic image analysis has been proposed as a potential solution for these issues. In this paper, the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand annotated mitotic figures by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top performing method has an error rate that is comparable to the inter-observer agreement among pathologists."
},
{
"pmid": "23000897",
"title": "Comprehensive molecular portraits of human breast tumours.",
"abstract": "We analysed primary breast cancers by genomic DNA copy number arrays, DNA methylation, exome sequencing, messenger RNA arrays, microRNA sequencing and reverse-phase protein arrays. Our ability to integrate information across platforms provided key insights into previously defined gene expression subtypes and demonstrated the existence of four main breast cancer classes when combining data from five platforms, each of which shows significant molecular heterogeneity. Somatic mutations in only three genes (TP53, PIK3CA and GATA3) occurred at >10% incidence across all breast cancers; however, there were numerous subtype-associated and novel gene mutations including the enrichment of specific mutations in GATA3, PIK3CA and MAP3K1 with the luminal A subtype. We identified two novel protein-expression-defined subgroups, possibly produced by stromal/microenvironmental elements, and integrated analyses identified specific signalling pathways dominant in each molecular subtype including a HER2/phosphorylated HER2/EGFR/phosphorylated EGFR signature within the HER2-enriched expression subtype. Comparison of basal-like breast tumours with high-grade serous ovarian tumours showed many molecular commonalities, indicating a related aetiology and similar therapeutic opportunities. The biological finding of the four main breast cancer subtypes caused by different subsets of genetic and epigenetic abnormalities raises the hypothesis that much of the clinically observable plasticity and heterogeneity occurs within, and not across, these major biological subtypes of breast cancer."
},
{
"pmid": "11531144",
"title": "Quantification of histochemical staining by color deconvolution.",
"abstract": "OBJECTIVE\nTo develop a flexible method of separation and quantification of immunohistochemical staining by means of color image analysis.\n\n\nSTUDY DESIGN\nAn algorithm was developed to deconvolve the color information acquired with red-green-blue (RGB) cameras and to calculate the contribution of each of the applied stains based on stain-specific RGB absorption. The algorithm was tested using different combinations of diaminobenzidine, hematoxylin and eosin at different staining levels.\n\n\nRESULTS\nQuantification of the different stains was not significantly influenced by the combination of multiple stains in a single sample. The color deconvolution algorithm resulted in comparable quantification independent of the stain combinations as long as the histochemical procedures did not influence the amount of stain in the sample due to bleaching because of stain solubility and saturation of staining was prevented.\n\n\nCONCLUSION\nThis image analysis algorithm provides a robust and flexible method for objective immunohistochemical analysis of samples stained with up to three different stains using a laboratory microscope, standard RGB camera setup and the public domain program NIH Image."
}
] |
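A minimal sketch of the domain-adversarial training idea referenced in the related-work text above, in which a discriminator branch with reversed gradients discourages domain-specific features in the shared representation. It is written against PyTorch and is not the code accompanying the paper; the network sizes, the number of domains (for example, one label per training slide or pathology lab), the lambda weight and the optimizer settings are illustrative assumptions.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; scales gradients by -lambda in the backward pass.
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lamb * grad_output, None

    n_domains = 8                                   # assumption: e.g., one label per slide or lab
    feature_extractor = nn.Sequential(              # shared representation
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())
    task_head = nn.Linear(16, 2)                    # e.g., mitosis vs. non-mitosis
    domain_head = nn.Linear(16, n_domains)          # adversarial domain classifier

    criterion = nn.CrossEntropyLoss()
    params = (list(feature_extractor.parameters())
              + list(task_head.parameters()) + list(domain_head.parameters()))
    optimizer = torch.optim.SGD(params, lr=1e-2)

    def training_step(images, task_labels, domain_labels, lamb=0.1):
        z = feature_extractor(images)
        task_loss = criterion(task_head(z), task_labels)
        # Reversed gradients push the shared features to become uninformative about the domain.
        domain_loss = criterion(domain_head(GradReverse.apply(z, lamb)), domain_labels)
        loss = task_loss + domain_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return float(loss)

A training loop would call training_step on mini-batches in which every image carries both a task label and a domain label; at test time only feature_extractor and task_head are used.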
BMC Medical Informatics and Decision Making | 31331309 | PMC6647294 | 10.1186/s12911-019-0839-3 | Multi-part quality evaluation of a customized mobile application for monitoring elderly patients with functional loss and helping caregivers | Background: The challenges faced by caregivers of the elderly with chronic diseases are always complex. In this context, mobile technologies have been used with promising results, but they often have restricted functionality, are difficult to use, or do not provide the necessary support to the caregiver, which leads to declining usage over time. Therefore, we developed the Mobile System for Elderly Monitoring, SMAI. The purpose of SMAI is to monitor patients with functional loss and to improve support for caregivers' communication with the health team professionals, informing them of data related to the patients' daily lives, while providing the health team with better tools. Method: SMAI is composed of mobile applications developed for the caregivers and health team, and a web portal that supports management activities. Caregivers use an Android application to send information and receive care advice and feedback from the health team. The system was constructed using a refinement stage approach. Each stage involved caregivers and the health team in prototype release-test-assessment-refinement cycles. SMAI was evaluated during 18 months. We studied which features were being used the most, and their use pattern throughout the week. We also studied the users' qualitative perceptions. Finally, the caregiver application was also evaluated for usability. Results: SMAI functionalities proved to be very useful or useful to caregivers and health professionals. The Focus Group interviews revealed that among caregivers the use of the application gave them the sensation of being connected to the health team. The usability evaluation identified that the interface design and associated tasks were easy to use, and the System Usability Scale, SUS, presented very good results. Conclusions: In general, the use of SMAI represented a positive change for the family caregivers and for the NAI health team. The overall qualitative results indicate that the approach used to construct the system was appropriate to achieve the objectives. | Related work: There is a significant amount of interest in using telemedicine for controlling chronic diseases and for supporting health care systems [11–14]. In some of these studies, the main focus is on notifications and reminders to patients in order to improve treatment adherence [13, 15]. Arif et al. [16] stress the importance of using specific technologies for telemonitoring and medical strategies to support the elderly living with chronic diseases. However, they emphasize that solutions must address the specific needs of this population in order to have a significant impact on improving their quality of life. There are currently different health applications being used to monitor glycemia, blood pressure, daily exercise and nutritional support. However, after installation, many of these applications end up being discarded by users due to usability problems or system instability [17]. According to Jin and Kim [18], most of these applications are developed with no previous requirements elicitation and with no clinical effectiveness assessed afterwards.
In this context, Cook, Ellis and Hildebrand [19] emphasize that most mobile health applications are created without medical expert involvement and with inaccurate content, resulting in a risk of harm to the patient. Systematic reviews regarding technologies to assist older people and the aging population have verified the factors that influence acceptance [20], highlighting the difference in acceptance between pre- and post-implementation, as well as the barriers to adoption [21, 22]. The analysis of the systems focusing on chronic diseases selected from the systematic review developed by Khosravi et al. [23] shows results that range from "no effect" in one of the systems to "increased quality of life" and "social functioning" or a "decrease in the number of hospital readmissions" in most of the systems. In that regard, Nicholas et al. [24] evaluated different e-Health technologies and identified several recurring problems as reasons for discontinuing use of the e-Health systems: insufficient incentives, loss of interest, weak perception of real benefits, and other reactions. More recently, Lee et al. [25] reviewed behavioral intervention strategies using mobile applications for chronic disease management. Their work found that features such as text reminders and improved communication between patients and healthcare providers result in "enhanced self-management in patients with chronic conditions". However, the relation between user engagement and outcome improvements was not conclusive. | [
"23479138",
"24050614",
"16620167",
"23611639",
"26431261",
"24529817",
"27573318",
"26216463",
"27185508",
"22564332",
"28663162",
"29331248",
"29195701",
"29602428",
"28550996"
] | [
{
"pmid": "23479138",
"title": "Randomized controlled clinical trial of \"virtual house calls\" for Parkinson disease.",
"abstract": "IMPORTANCE\nThe burden of neurological disorders is increasing, but access to care is limited. Providing specialty care to patients via telemedicine could help alleviate this growing problem.\n\n\nOBJECTIVE\nTo evaluate the feasibility, effectiveness, and economic benefits of using web-based videoconferencing (telemedicine) to provide specialty care to patients with Parkinson disease in their homes.\n\n\nDESIGN\nA 7-month, 2-center, randomized controlled clinical trial.\n\n\nSETTING\nPatients' homes and outpatient clinics at 2 academic medical centers.\n\n\nPARTICIPANTS\nTwenty patients with Parkinson disease with Internet access at home.\n\n\nINTERVENTION\nCare from a specialist delivered remotely at home or in person in the clinic.\n\n\nMAIN OUTCOME MEASURES\nThe primary outcome variable was feasibility, as measured by the percentage of telemedicine visits completed as scheduled. Secondary outcome measures included clinical benefit, as measured by the 39-item Parkinson Disease Questionnaire, and economic value, as measured by time and travel.\n\n\nRESULTS\nTwenty participants enrolled in the study and were randomly assigned to telemedicine (n = 9) or in-person care (n = 11). Of the 27 scheduled telemedicine visits, 25 (93%) were completed, and of the 33 scheduled in-person visits, 30 (91%) were completed (P = .99). In this small study, the change in quality of life did not differ for those randomly assigned to telemedicine compared with those randomly assigned to in-person care (4.0-point improvement vs 6.4-point improvement; P = .61). Compared with in-person visits, each telemedicine visit saved participants, on average, 100 miles of travel and 3 hours of time.\n\n\nCONCLUSION AND RELEVANCE\nUsing web-based videoconferencing to provide specialty care at home is feasible, provides value to patients, and may offer similar clinical benefit to that of in-person care. Larger studies are needed to determine whether the clinical benefits are indeed comparable to those of in-person care and whether the results observed are generalizable.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT01476306."
},
{
"pmid": "24050614",
"title": "Medication reminder service for mobile phones: an open feasibility study in patients with Parkinson's disease.",
"abstract": "Parkinson's disease (PD) is a neurodegenerative disorder in which drug dosing regimens become increasingly complicated with the progression of the disease. This poses a significant risk of nonadherence to drug dosing and a failure in treatment response. We hypothesized that a medication reminder delivered by short message service (SMS) could be one way to ameliorate the problem of medication errors. We conducted an open feasibility study in 50 patients with advanced PD. The subjects set up the process to receive reminders by a Web tool, after which they started to receive automatically transmitted text messages as a medication reminder for 4 weeks. In total, 35 of 50 subjects (70.0%) were able to set up the reminder system without any help. The majority (69%) of the subjects rated the set-up process as \"very easy\" or \"easy.\" Almost all (41 subjects, 91%) felt that SMS reminders worked well for them, and only 4 subjects (9%) felt that SMS texts were totally valueless. Almost half of the subjects (22 of 45, 49%) considered that there were clear benefits, and an additional 17 subjects (38%) enjoyed some benefits in using the medication reminder system. Our results indicate that an SMS medication reminder system is a feasible method, even in subjects with advanced PD."
},
{
"pmid": "16620167",
"title": "Home telehealth improves clinical outcomes at lower cost for home healthcare.",
"abstract": "Patient outcomes and cost were compared when home healthcare was delivered by telemedicine or by traditional means for patients receiving skilled nursing care at home. A randomized controlled trial was established using three groups. The first group, control group C, received traditional skilled nursing care at home. The second group, video intervention group V, received traditional skilled nursing care at home and virtual visits using videoconferencing technology. The third group, monitoring intervention group M, received traditional skilled nursing care at home, virtual visits using videoconferencing technology, and physiologic monitoring for their underlying chronic condition. Discharge to a higher level of care (hospital, nursing home) within 6 months of study participation was 42% for C subjects, 21% for V subjects, and 15% for M subjects. There was no difference in mortality between the groups. Morbidity, as evaluated by changes in the knowledge, behavior and status scales of the Omaha Assessment Tool, showed no differences between groups except for increased scores for activities of daily living at study discharge in the V and M groups. The average visit costs were $48.27 for face-to-face home visits, $22.11 for average virtual visits (video group), and $32.06 and $38.62 for average monitoring group visits for congestive heart failure and chronic obstructive pulmonary disease subjects, respectively. This study has demonstrated that virtual visits between a skilled home healthcare nurse and chronically ill patients at home can improve patient outcome at lower cost than traditional skilled face-to-face home healthcare visits."
},
{
"pmid": "23611639",
"title": "Electronic reminders to patients within an interactive patient health record.",
"abstract": "Keeping patients with complex medical illnesses up to date with their preventive care and chronic disease management services, such as lipid testing and retinal exam in patients with diabetes, is challenging. Within a commercially available electronic health record (EHR) with a secure personal health record (PHR), we developed a system that sends up to three weekly reminders to patients who will soon be due for preventive care services. The reminder messages reside within the secure PHR, which is linked to the EHR, and are displayed on a screen where patients can also send to the physician's office an electronic message to request appointments for the needed services. The reminder messages stop when the patient logs on to review the reminders. The system, designed with patient input, groups together all services that will be due in the next 3 months to avoid repeatedly messaging the patient. After 2 months, the cycle of reminders begins again. This system, which is feasible and economical to build, has the potential to improve care and compliance with quality measures."
},
{
"pmid": "26431261",
"title": "Development and Evaluation of an Evaluation Tool for Healthcare Smartphone Applications.",
"abstract": "INTRODUCTION\nVarious types of healthcare smartphone applications (apps) have been released in recent years, making it possible for people to manage their health anytime and anywhere. As a healthcare provider, who has the responsibility to provide guidance as to which apps can be used? The purpose of this study was to develop and evaluate an evaluation tool for the various aspects of healthcare smartphone apps.\n\n\nMATERIALS AND METHODS\nIn the first phase, a provisional version of an evaluation tool for healthcare smartphone apps was developed from a review of previous studies. In the second phase, the provisional tool was modified and edited after verification by five experts with regard to its content validity. In the third phase, from September 25 to October 4, 2013, 200 responses were collected to verify the construct validity and reliability of the tool.\n\n\nRESULTS\nThe edited tool had 23 evaluating items with three evaluating factors along with seven subevaluating factors as a result of confirmatory factor analysis. The reliability was found to be high (0.905).\n\n\nCONCLUSIONS\nThis study is meaningful because it demonstrates a healthcare smartphone app evaluation tool that is proven in terms of its validity and reliability. The evaluation tool developed and tested in this study is an appropriate and widely applicable tool with which to evaluate healthcare smartphone apps to determine if they are reliable and useful. However, this evaluation tool represents the beginning of the research in this area."
},
{
"pmid": "24529817",
"title": "Factors influencing acceptance of technology for aging in place: a systematic review.",
"abstract": "PURPOSE\nTo provide an overview of factors influencing the acceptance of electronic technologies that support aging in place by community-dwelling older adults. Since technology acceptance factors fluctuate over time, a distinction was made between factors in the pre-implementation stage and factors in the post-implementation stage.\n\n\nMETHODS\nA systematic review of mixed studies. Seven major scientific databases (including MEDLINE, Scopus and CINAHL) were searched. Inclusion criteria were as follows: (1) original and peer-reviewed research, (2) qualitative, quantitative or mixed methods research, (3) research in which participants are community-dwelling older adults aged 60 years or older, and (4) research aimed at investigating factors that influence the intention to use or the actual use of electronic technology for aging in place. Three researchers each read the articles and extracted factors.\n\n\nRESULTS\nSixteen out of 2841 articles were included. Most articles investigated acceptance of technology that enhances safety or provides social interaction. The majority of data was based on qualitative research investigating factors in the pre-implementation stage. Acceptance in this stage is influenced by 27 factors, divided into six themes: concerns regarding technology (e.g., high cost, privacy implications and usability factors); expected benefits of technology (e.g., increased safety and perceived usefulness); need for technology (e.g., perceived need and subjective health status); alternatives to technology (e.g., help by family or spouse), social influence (e.g., influence of family, friends and professional caregivers); and characteristics of older adults (e.g., desire to age in place). When comparing these results to qualitative results on post-implementation acceptance, our analysis showed that some factors are persistent while new factors also emerge. Quantitative results showed that a small number of variables have a significant influence in the pre-implementation stage. Fourteen out of the sixteen included articles did not use an existing technology acceptance framework or model.\n\n\nCONCLUSIONS\nAcceptance of technology in the pre-implementation stage is influenced by multiple factors. However, post-implementation research on technology acceptance by community-dwelling older adults is scarce and most of the factors in this review have not been tested by using quantitative methods. Further research is needed to determine if and how the factors in this review are interrelated, and how they relate to existing models of technology acceptance."
},
{
"pmid": "27573318",
"title": "Older people, assistive technologies, and the barriers to adoption: A systematic review.",
"abstract": "BACKGROUND\nOlder people generally prefer to continue living in their own homes rather than move into residential age care institutions. Assistive technologies and sensors in the home environment and/or bodily worn systems that monitor people's movement might contribute to an increased sense of safety and security at home. However, their use can raise ethical anxieties as little is known about how older persons perceive assistive and monitoring technologies.\n\n\nOBJECTIVES\nTo review the main barriers to the adoption of assistive technologies (ATs) by older adults in order to uncover issues of concern from empirical studies and to arrange these issues from the most critical to the least critical.\n\n\nMETHOD\nA 4-step systematic review was conducted using empirical studies: locating and identifying relevant articles; screening of located articles; examination of full text articles for inclusion/exclusion; and detail examination of the 44 articles included.\n\n\nRESULTS\nPrivacy is a top critical concern to older adults, registering a 34% of the total articles examined. Two other equally potent barriers to the adoption of ATs were trust and functionality/added value representing 27 and 25 per cent each respectively of the total studies examined. Also of serious concerns are cost of ATs and ease of use and suitability for daily use (23%) each respectively, perception of \"no need\" (20%), stigma (18%), and fear of dependence and lack of training (16%) each respectively. These underlying factors are generation/cohort effects and physical decline relating to aging, and negative attitudes toward technologies such as the so-called \"gerontechnologies\" specifically targeting older adults. However, more and more older adults adopt different kinds of ATs in order to fit in with the society.\n\n\nCONCLUSIONS\nThe identified underlying factors are generation/cohort effects and physical decline relating to aging, and negative attitudes toward technologies. The negative attitudes that are most frequently associated with technologies such as the so-called \"gerontechnologies\" specifically targeting older adults contain stigmatizing symbolism that might prevent them from adopting them."
},
{
"pmid": "26216463",
"title": "Investigating the effectiveness of technologies applied to assist seniors: A systematic literature review.",
"abstract": "BACKGROUND\nRecently, a number of Information and Communication Technologies have emerged with the aim to provide innovative and efficient ways to help seniors in their daily life and to reduce the cost of healthcare. Studies have been conducted to introduce an assistive technology to support seniors and to investigate the acceptance of these assistive technologies; however, research illustrating the effectiveness of assistive technologies is scant.\n\n\nMETHOD\nThis study undertakes a systematic literature review of ScienceDirect, PubMed, ProQuest and IEEE Explore databases to investigate current empirical studies on the assistive technologies applied in aged care. Our systematic review of an initial set of 2035 studies published from 2000 to 2014 examines the role of assistive technologies in seniors' daily lives, from enhancements in their mobility to improvements in the social connectedness and decreases in readmission to hospitals.\n\n\nRESULTS\nThis study found eight key issues in aged care that have been targeted by researchers from different disciplines (e.g., ICT, health and social science), namely, dependent living, fall risk, chronic disease, dementia, social isolation, depression, poor well-being, and poor medication management. This paper also identified the assistive technologies that have been proposed to overcome those problems, and we categorised these assistive technologies into six clusters, namely, general ICT, robotics, telemedicine, sensor technology, medication management applications, and video games. In addition, we analyzed the effectiveness of the identified technologies and noted that some technologies can change and enhance seniors' daily lives and relieve their problems. Our analysis showed a significant growth in the number of publications in this area in the past few years. It also showed that most of the studies in this area have been conducted in North America.\n\n\nCONCLUSION\nAssistive technologies are a reality and can be applied to improve quality of life, especially among older age groups. This study identified various assistive technologies proposed by ICT researchers to assist the elderly. We also identified the effectiveness of the proposed technologies. This review shows that, although assistive technologies have been positively evaluated, more studies are needed regarding the outcome and effectiveness of these technologies."
},
{
"pmid": "27185508",
"title": "Smart homes and home health monitoring technologies for older adults: A systematic review.",
"abstract": "BACKGROUND\nAround the world, populations are aging and there is a growing concern about ways that older adults can maintain their health and well-being while living in their homes.\n\n\nOBJECTIVES\nThe aim of this paper was to conduct a systematic literature review to determine: (1) the levels of technology readiness among older adults and, (2) evidence for smart homes and home-based health-monitoring technologies that support aging in place for older adults who have complex needs.\n\n\nRESULTS\nWe identified and analyzed 48 of 1863 relevant papers. Our analyses found that: (1) technology-readiness level for smart homes and home health monitoring technologies is low; (2) the highest level of evidence is 1b (i.e., one randomized controlled trial with a PEDro score ≥6); smart homes and home health monitoring technologies are used to monitor activities of daily living, cognitive decline and mental health, and heart conditions in older adults with complex needs; (3) there is no evidence that smart homes and home health monitoring technologies help address disability prediction and health-related quality of life, or fall prevention; and (4) there is conflicting evidence that smart homes and home health monitoring technologies help address chronic obstructive pulmonary disease.\n\n\nCONCLUSIONS\nThe level of technology readiness for smart homes and home health monitoring technologies is still low. The highest level of evidence found was in a study that supported home health technologies for use in monitoring activities of daily living, cognitive decline, mental health, and heart conditions in older adults with complex needs."
},
{
"pmid": "22564332",
"title": "Design of an mHealth app for the self-management of adolescent type 1 diabetes: a pilot study.",
"abstract": "BACKGROUND\nThe use of mHealth apps has shown improved health outcomes in adult populations with type 2 diabetes mellitus. However, this has not been shown in the adolescent type 1 population, despite their predisposition to the use of technology. We hypothesized that a more tailored approach and a strong adherence mechanism is needed for this group.\n\n\nOBJECTIVE\nTo design, develop, and pilot an mHealth intervention for the management of type 1 diabetes in adolescents.\n\n\nMETHODS\nWe interviewed adolescents with type 1 diabetes and their family caregivers. Design principles were derived from a thematic analysis of the interviews. User-centered design was then used to develop the mobile app bant. In the 12-week evaluation phase, a pilot group of 20 adolescents aged 12-16 years, with a glycated hemoglobin (HbA(1c)) of between 8% and 10% was sampled. Each participant was supplied with the bant app running on an iPhone or iPod Touch and a LifeScan glucometer with a Bluetooth adapter for automated transfers to the app. The outcome measure was the average daily frequency of blood glucose measurement during the pilot compared with the preceding 12 weeks.\n\n\nRESULTS\nThematic analysis findings were the role of data collecting rather than decision making; the need for fast, discrete transactions; overcoming decision inertia; and the need for ad hoc information sharing. Design aspects of the resultant app emerged through the user-centered design process, including simple, automated transfer of glucometer readings; the use of a social community; and the concept of gamification, whereby routine behaviors and actions are rewarded in the form of iTunes music and apps. Blood glucose trend analysis was provided with immediate prompting of the participant to suggest both the cause and remedy of the adverse trend. The pilot evaluation showed that the daily average frequency of blood glucose measurement increased 50% (from 2.4 to 3.6 per day, P = .006, n = 12). A total of 161 rewards (average of 8 rewards each) were distributed to participants. Satisfaction was high, with 88% (14/16 participants) stating that they would continue to use the system. Demonstrating improvements in HbA(1c) will require a properly powered study of sufficient duration.\n\n\nCONCLUSIONS\nThis mHealth diabetes app with the use of gamification incentives showed an improvement in the frequency of blood glucose monitoring in adolescents with type 1 diabetes. Extending this to improved health outcomes will require the incentives to be tied not only to frequency of blood glucose monitoring but also to patient actions and decision making based on those readings such that glycemic control can be improved."
},
{
"pmid": "28663162",
"title": "Developing and Evaluating Digital Interventions to Promote Behavior Change in Health and Health Care: Recommendations Resulting From an International Workshop.",
"abstract": "Devices and programs using digital technology to foster or support behavior change (digital interventions) are increasingly ubiquitous, being adopted for use in patient diagnosis and treatment, self-management of chronic diseases, and in primary prevention. They have been heralded as potentially revolutionizing the ways in which individuals can monitor and improve their health behaviors and health care by improving outcomes, reducing costs, and improving the patient experience. However, we are still mainly in the age of promise rather than delivery. Developing and evaluating these digital interventions presents new challenges and new versions of old challenges that require use of improved and perhaps entirely new methods for research and evaluation. This article discusses these challenges and provides recommendations aimed at accelerating the rate of progress in digital behavior intervention research and practice. Areas addressed include intervention development in a rapidly changing technological landscape, promoting user engagement, advancing the underpinning science and theory, evaluating effectiveness and cost-effectiveness, and addressing issues of regulatory, ethical, and information governance. This article is the result of a two-day international workshop on how to create, evaluate, and implement effective digital interventions in relation to health behaviors. It was held in London in September 2015 and was supported by the United Kingdom's Medical Research Council (MRC), the National Institute for Health Research (NIHR), the Methodology Research Programme (PI Susan Michie), and the Robert Wood Johnson Foundation of the United States (PI Kevin Patrick). Important recommendations to manage the rapid pace of change include considering using emerging techniques from data science, machine learning, and Bayesian approaches and learning from other disciplines including computer science and engineering. With regard to assessing and promoting engagement, a key conclusion was that sustained engagement is not always required and that for each intervention it is useful to establish what constitutes \"effective engagement,\" that is, sufficient engagement to achieve the intended outcomes. The potential of digital interventions for testing and advancing theories of behavior change by generating ecologically valid, real-time objective data was recognized. Evaluations should include all phases of the development cycle, designed for generalizability, and consider new experimental designs to make the best use of rich data streams. Future health economics analyses need to recognize and model the complex and potentially far-reaching costs and benefits of digital interventions. In terms of governance, developers of digital behavior interventions should comply with existing regulatory frameworks, but with consideration for emerging standards around information governance, ethics, and interoperability."
},
{
"pmid": "29331248",
"title": "Usability evaluation of a commercial inpatient portal.",
"abstract": "OBJECTIVES\nPatient portals designed for inpatients have potential to increase patient engagement. However, little is known about how patients use inpatient portals. To address this gap, we aimed to understand how users 1) interact with, 2) learn to use, and 3) communicate with their providers through an inpatient portal.\n\n\nMATERIALS AND METHODS\nWe conducted a usability evaluation using think-aloud protocol to study user interactions with a commercially available inpatient portal - MyChart Bedside (MCB). Study participants (n=19) were given a tablet that had MCB installed. They explored MCB and completed eight assigned tasks. Each session's recordings were coded and analyzed. We analyzed task completion, errors, and user feedback. We categorized errors into operational errors, system errors, and tablet-related errors, and indicated their violations of Nielsen's ten heuristic principles.\n\n\nRESULTS\nParticipants frequently made operational errors with most in navigation and assuming non-existent functionalities. We also noted that participants' learning styles varied, with age as a potential factor that influenced how they learned MCB. Also, participants preferred to individually message providers and wanted feedback on status.\n\n\nCONCLUSION\nThe design of inpatient portals can greatly impact how patients navigate and comprehend information in inpatient portals; poor design can result in a frustrating user experience. For inpatient portals to be effective in promoting patient engagement, it remains critical for technology developers and hospital administrators to understand how users interact with this technology and the resources that may be necessary to support its use."
},
{
"pmid": "29195701",
"title": "Adequacy of UTAUT in clinician adoption of health information systems in developing countries: The case of Cameroon.",
"abstract": "PURPOSE\nDespite the great potential Health Information Systems (HIS) have in improving the quality of healthcare delivery services, very few studies have been carried out on the adoption of such systems in developing countries. This article is concerned with investigating the adequacy of UTAUT1 in determining factors that influence the adoption of HIS by clinicians in developing countries, based on the case of Cameroon.\n\n\nMETHODS\nA paper-based questionnaire was distributed to clinicians in 4 out of 7 major public hospitals in Cameroon. A modified UTAUT was tested using structural equation modeling (SEM) method to identify the determinants of clinicians' intention to use HIS. Self-efficacy and cost-effectiveness were determinants used to extend the original UTAUT.\n\n\nRESULTS\n228 out of 286 questionnaires were validated for this study. The original UTAUT performed poorly, explaining 12% of the variance in clinicians' intention to use HIS. Age was the only significant moderating factor, improving the model to 46%. Self-efficacy and cost effectiveness has no direct significant effect on HIS adoption in the context of this study.\n\n\nCONCLUSIONS\nThe original UTAUT is not adequate in identifying factors that influence the adoption of HIS by clinicians in developing countries. Simplifying the model by using age as the only moderating factor significantly increases the model's ability to predict HIS adoption in this context. Thus, the younger clinicians are more likely and ready to adopt HIS than the older ones. Context-specific should also be used to increase the explanatory power of UTAUT in any given context."
},
{
"pmid": "29602428",
"title": "Diagnostic concordance between mobile interfaces and conventional workstations for emergency imaging assessment.",
"abstract": "INTRODUCTION\nMobile devices and software are now available with sufficient computing power, speed and complexity to allow for real-time interpretation of radiology exams. In this paper, we perform a multivariable user study that investigates concordance of image-based diagnoses provided using mobile devices on the one hand and conventional workstations on the other hand.\n\n\nMETHODS\nWe performed a between-subjects task-analysis using CT, MRI and radiography datasets. Moreover, we investigated the adequacy of the screen size, image quality, usability and the availability of the tools necessary for the analysis. Radiologists, members of several teams, participated in the experiment under real work conditions. A total of 64 studies with 93 main diagnoses were analyzed.\n\n\nRESULTS\nOur results showed that 56 cases were classified with complete concordance (87.69%), 5 cases with almost complete concordance (7.69%) and 1 case (1.56%) with partial concordance. Only 2 studies presented discordance between the reports (3.07%). The main reason to explain the cause of those disagreements was the lack of multiplanar reconstruction tool in the mobile viewer. Screen size and image quality had no direct impact on the mobile diagnosis process.\n\n\nCONCLUSION\nWe concluded that for images from emergency modalities, a mobile interface provides accurate interpretation and swift response, which could benefit patients' healthcare."
},
{
"pmid": "28550996",
"title": "Web-based health interventions for family caregivers of elderly individuals: A Scoping Review.",
"abstract": "BACKGROUND\nFor the growing proportion of elders globally, aging-related illnesses are primary causes of morbidity causing reliance on family members for support in the community. Family caregivers experience poorer physical and mental health than their non-caregiving counterparts. Web-based interventions can provide accessible support to family caregivers to offset declines in their health and well-being. Existing reviews focused on web-based interventions for caregivers have been limited to single illness populations and have mostly focused on the efficacy of the interventions. We therefore have limited insight into how web-based interventions for family caregiver have been developed, implemented and evaluated across aging-related illness.\n\n\nOBJECTIVES\nTo describe: a) theoretical underpinnings of the literature; b) development, content and delivery of web-based interventions; c) caregiver usage of web-based interventions; d) caregiver experience with web-based interventions and e) impact of web-based interventions on caregivers' health outcomes.\n\n\nMETHODS\nWe followed Arksey and O'Malley's methodological framework for conducting scoping reviews which entails setting research questions, selecting relevant studies, charting the data and synthesizing the results in a report.\n\n\nRESULTS\nFifty-three publications representing 32 unique web-based interventions were included. Over half of the interventions were targeted at dementia caregivers, with the rest targeting caregivers to the stroke, cancer, diabetes and general frailty populations. Studies used theory across the intervention trajectory. Interventions aimed to improve a range of health outcomes for caregivers through static and interactive delivery methods Caregivers were satisfied with the usability and accessibility of the websites but usage was generally low and declined over time. Depression and caregiver burden were the most common outcomes evaluated. The interventions ranged in their impact on health and social outcomes but reductions in perception of caregiver burden were consistently observed.\n\n\nCONCLUSIONS\nCaregivers value interactive interventions that are tailored to their unique needs and the illness context. However, usage of the interventions was sporadic and declined over time, indicating that future interventions should address stage-specific needs across the caregiving trajectory. A systematic review has the potential to be conducted given the consistency in caregiver burden and depression as outcomes."
}
] |
Frontiers in Psychology | 31379657 | PMC6650763 | 10.3389/fpsyg.2019.01593 | Mindful Learning Experience Facilitates Mastery Experience Through Heightened Flow and Self-Efficacy in Game-Based Creativity Learning | This study was performed within the limited framework of computer-game-based educational programs designed to enhance creativity. Furthermore, the utilization of mindful learning and moderators such as flow, mastery experience, and self-efficacy, brings this research to the forefront of modern educational practices. The present researchers developed a comprehensive game-based creativity learning program for fifth and sixth grade pupils. Further analyses presented relationship trends between mindful learning experience, flow experience, self-efficacy, and mastery experience. Eighty-three 5th and 6th grade participants undertook the six-week game-based creativity learning program. Upon completion of the experimental instruction, self-evaluation revealed that participants with higher scores on the concerned variables improved more in both creative ability and confidence than their counterparts. Additionally, path model analysis revealed that mindful learning experience was a powerful predictor of both mastery experience and flow experience; it also influenced mastery experience through flow experience and self-efficacy. The findings support the effectiveness of the game-based learning program developed in this study. Moreover, this study contributes to the theoretical construction of how game-based learning can be designed to facilitate mindful learning experience, flow experience, self-efficacy, and mastery experience during creativity. Some additional enhancement mechanisms utilized in the program were: rewards for high-quality performance, challenging tasks, a variety of design components, immediate feedback, and idea sharing. The theoretical design of this study provides support for the ongoing scientific investigation of new applications of mindful learning in educational programs concerning the learning of creativity. | Related WorkMindful Learning and Mastery Experience of Creativity in Game-Based LearningThe literature on mindfulness has focused on two principal schools of thought: one promoted by Kabat-Zinn and his associates (e.g., Kabat-Zinn, 2003), which is based on Buddhist meditation practices and often regarded as an Eastern approach to mindfulness, and the other presented by Langer and her colleagues (e.g., Langer, 1989), which is considered a Western view on mindfulness (Ivtzan and Hart, 2016). Langerian concepts serve as the backbone of the present research involving mindfulness and mindful learning. Langer and Moldoveanu (2000) defined mindfulness as the process of drawing novel distinctions, where the pertinent action is to stay in the present moment by noticing new things. This mindful behavior inspires greater sensitivity to the surrounding environment, openness to new information, concepts of new categories or perceptions, and enhanced awareness of various perspectives within problem solving (Langer and Moldoveanu, 2000; Davenport and Pagnini, 2016). Moreover, mindfulness increases flexibility pertaining to attitude, valence, perceived experience, perceived control, and self-efficacy by altering cognitive, affective, or behavioral factors (Gärtner, 2013). When encountering negative emotions, mindfulness may bolster coping abilities that help decrease negative thoughts which undermine self-efficacy (Gärtner, 2013).More recently, Bercovitz et al. 
(2017) noted that cognitive flexibility, novelty production, novelty seeking, and openness are all central components of both Langerian mindfulness and creativity. When problem solving, the mindful learner exercises divergent thinking by imagining various perspectives to find multiple solutions (Langer, 1993). These parallels between Langerian mindfulness and creativity have additionally been explored in school settings. Study findings (Davenport and Pagnini, 2016) revealed that classroom implementation of inquiry-based mindful learning strategies, presented in the three stages of exploration, expression, and exposition, provided significant opportunities for students to exercise skills in creativity; as classrooms devoted 4 weeks to each of these three stages, teachers guided students through activities that required creativity, communication, collaboration, and critical thinking. Given these empirical results, we presuppose that mindful participants may encounter mastery experience during game-based creativity learning.

Mastery experience is the personal experience of success (Bandura, 1997). In this study, mastery experience pertains to the ability and confidence in solving problems during game-based creativity learning. Mindful learning may directly influence the process of developing mastery experience by improving the acquisition of knowledge, keeping learners open to feedback, and enhancing focus and awareness. These mindful learning techniques were found to be effective in improving elementary school students' mastery experience in reading, science, and math (Anglin et al., 2008; Bakosh et al., 2016). Therefore, when mindful learning is implemented by the student, creativity may be enhanced (Davenport and Pagnini, 2016), and this improved performance is likely to contribute to the feeling of success, otherwise known as mastery experience (Bandura, 1997). The researchers of the present study drew connections between mindful learning and mastery experience to investigate possible use within digital creativity enhancement games.

Indirect Influences of Mindful Learning on Mastery Experience

Mindful learning may contribute to the improvement of mastery experience in creativity games through flow experience and self-efficacy. Flow refers to an optimal experience in which individuals are completely absorbed or engaged in an activity (Bellanca and Brandt, 2010). There are nine elements of flow: challenge-skill balance, action-awareness merging, clear goals, unambiguous feedback, concentration on the task at hand, sense of control, loss of self-consciousness, transformation of time, and internally driven experience (Beard, 2015). When experiencing flow, the mind performs at an elevated level, balancing task complexities with strategies, fully engaged, and intrinsically motivated (Csikszentmihalyi, 1990, 1993, 2014), a state that may result from mindful learning. It was found that when attainable goals gradually increased in difficulty, a consistent mastery curve and overall enjoyment of playing on a personal or even social level appeared (Starks, 2014). Cognitive science research also suggests that flow is achieved when technical skills are in harmony with the complexities of the task (Bellanca and Brandt, 2010) or in situations where attention is directed towards goal achievement (Reid, 2011).
Accordingly, flow experience should contribute to the achievement of mastery experience. On the other hand, self-efficacy refers to individuals' confidence in their own abilities to execute actions with a desired outcome. People with self-efficacy act with forethought, self-reactiveness, and self-reflectiveness (Bandura, 2001); they also persevere in their goals and demonstrate resiliency to attain the desired outcome. In addition, self-efficacy provides foundations for predicting behaviors pertaining to attention or motivational processes in learning or education (Bandura, 2012). Notably, self-efficacy can mediate the quality of the products of creative endeavors in education or in the workplace (Liao et al., 2010; Wang et al., 2018). Similarly, creative self-efficacy refers to efficacy that is specific to the belief in one's ability to produce creative outcomes (Tierney and Farmer, 2002), and it is thought to be a mechanism of creativity (Wang et al., 2018). It may also reflect intrinsic motivation to exhibit creative behaviors (Gong et al., 2009).

Research has demonstrated that people who score higher on creative self-efficacy tend to be more creative (Tierney and Farmer, 2002; Gong et al., 2009; Wang et al., 2018). Creative self-efficacy has also been observed to mediate employees' creative performance (Tierney and Farmer, 2011). This relationship between employee creative self-efficacy and creative performance has additionally been replicated in student learning: students who scored higher on creative self-efficacy were less likely to cease efforts and disengage from a creative project or process (Liu et al., 2016). This personality characteristic, which can be taught and improved upon (Tierney and Farmer, 2011), has a clear relationship to creative performance. The aforementioned research provides a basis for the current study, which explores the role of creative self-efficacy in facilitating mastery experience in computer-based creativity training.

Flow and mindfulness are positively correlated (Beard, 2015; Kaufman et al., 2018), and both increase attention and focus (e.g., Bakosh et al., 2016). Athletes who scored higher in mindfulness demonstrated elevated abilities in challenge-skill balance, synthesizing action and awareness, goal setting, loss of self-consciousness, concentration, attention control, emotional control, and self-talk. These findings imply that mindfulness can amplify flow dispositions and mental skills (Kee and Wang, 2008). When mindful learning practices are used to bolster flow and self-efficacy, these factors are likely to improve mastery experience. Additionally, a review article on mindfulness highlighted the association between mindfulness and self-efficacy (Caldwell et al., 2010). Researchers (Greason and Cashwell, 2011) observed that mindfulness was a significant predictor of counseling self-efficacy and that attention mediated that relationship. In the same vein, an empirical study found that the negative influence of abusive supervision on employee self-efficacy can be buffered by employee mindfulness (Zheng and Liu, 2017).
In light of the positive relationships between mindfulness, flow, self-efficacy, and learning outcomes that have been found in various domains, the current researchers presumed that there would be a positive connection between mindfulness, flow experience, self-efficacy, and mastery experience within game-based creativity learning.

The Present Study and Hypotheses

To date, game-based learning programs that include comprehensive creativity skills and disposition training are still very limited. Based on a previously developed training program for 3rd and 4th graders (Yeh and Lin, 2018), the current program for 5th and 6th graders provides more challenging story content and tasks. This study first examined the learning effects of the developed program, the Digital Game-based Learning of Creativity (DGLC-B), and further explored the relationships among mindful learning experience, flow experience, self-efficacy, and mastery experience. Experimental instruction was delivered, and it was hypothesized that pupils would improve their creativity upon completion of the game-based creativity training; moreover, it was hypothesized that mindful learning experience would directly influence mastery experience and self-efficacy, as well as indirectly influence mastery experience through flow experience and self-efficacy in the game-based creativity learning. The hypothesized theoretical model with an integrated literature review is illustrated in Figure 1.

Figure 1. Theoretical model for proposed hypotheses.
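The mediation structure hypothesized above, with mindful learning experience influencing mastery experience both directly and indirectly through flow experience and self-efficacy, can be written down as an ordinary path model. The sketch below is only an illustration of how such a model might be specified and fitted in Python with the semopy package; the variable names, the data file, the exact set of arrows, and the choice of package are assumptions for illustration and do not reproduce the authors' actual analysis.

```python
# Hypothetical sketch: a path (mediation) model in lavaan-style syntax, fitted with semopy.
# Column names and the CSV file are placeholders, not the study's real data.
import pandas as pd
from semopy import Model

# Each row = one pupil; columns = scale scores for the four constructs (assumed names).
data = pd.read_csv("dglc_b_scores.csv")  # hypothetical file

model_desc = """
# Direct and indirect paths from mindful learning experience to mastery experience
flow ~ mindful
self_efficacy ~ mindful + flow
mastery ~ mindful + flow + self_efficacy
"""

model = Model(model_desc)
model.fit(data)          # likelihood-based estimation of the path coefficients
print(model.inspect())   # estimates, standard errors, and p-values for each path
```

Indirect effects (for example, mindful → flow → mastery) could then be computed as products of the relevant path coefficients, which is the usual way such mediation hypotheses are examined.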
"20304755",
"27672377",
"20561898",
"19575609",
"21395198",
"24550858",
"20954756",
"28955285"
] | [
{
"pmid": "20304755",
"title": "Developing mindfulness in college students through movement-based courses: effects on self-regulatory self-efficacy, mood, stress, and sleep quality.",
"abstract": "OBJECTIVE\nThis study examined whether mindfulness increased through participation in movement-based courses and whether changes in self-regulatory self-efficacy, mood, and perceived stress mediated the relationship between increased mindfulness and better sleep.\n\n\nPARTICIPANTS\n166 college students enrolled in the 2007-2008 academic year in 15 week classes in Pilates, Taiji quan, or GYROKINESIS.\n\n\nMETHODS\nAt beginning, middle, and end of the semester, participants completed measures of mindfulness, self-regulatory self-efficacy, mood, perceived stress, and sleep quality.\n\n\nRESULTS\nTotal mindfulness scores and mindfulness subscales increased overall. Greater changes in mindfulness were directly related to better sleep quality at the end of the semester after adjusting for sleep disturbance at the beginning. Tiredness, Negative Arousal, Relaxation, and Perceived Stress mediated the effect of increased mindfulness on improved sleep.\n\n\nCONCLUSIONS\nMovement-based courses can increase mindfulness. Increased mindfulness accounts for changes in mood and perceived stress, which explain, in part, improved sleep quality."
},
{
"pmid": "27672377",
"title": "Mindful Learning: A Case Study of Langerian Mindfulness in Schools.",
"abstract": "The K-12 classroom applications of mindfulness as developed by Ellen Langer are discussed in a case study of a first-year charter school. Langerian Mindfulness, which is the act of drawing distinctions and noticing novelty, is deeply related to well-being and creativity, yet its impact has yet to be tested at the primary or secondary school level. The objective of the article is to display how Langerian Mindfulness strategies could increase 21st century skills and Social-Emotional Learning in primary classrooms. The New School San Francisco, an inquiry-based, socioeconomically and racially integrated charter school, serves as a model for mindful teaching and learning strategies. It is concluded that when mindful strategies are implemented, students have significant opportunities to exercise the 21st century skills of creativity, collaboration, communication and critical thinking. Langerian Mindfulness is also considered as a tool for increasing Social-Emotional Learning in integrated classrooms. It is recommended that mindful interventions be further investigated in the primary and secondary school context."
},
{
"pmid": "20561898",
"title": "Enhancing creativity by means of cognitive stimulation: evidence from an fMRI study.",
"abstract": "Cognitive stimulation via the exposure to ideas of other people is an effective tool in stimulating creativity in group-based creativity techniques. In this fMRI study, we investigate whether creative cognition can be enhanced through idea sharing and how performance improvements are reflected in brain activity. Thirty-one participants had to generate alternative uses of everyday objects during fMRI recording. Additionally, participants performed this task after a time period in which they had to reflect on their ideas or in which they were confronted with stimulus-related ideas of others. Cognitive stimulation was effective in improving originality, and this performance improvement was associated with activation increases in a neural network including right-hemispheric temporo-parietal, medial frontal, and posterior cingulate cortices, bilaterally. Given the involvement of these brain areas in semantic integration, memory retrieval, and attentional processes, cognitive stimulation could have resulted in a modulation of bottom-up attention enabling participants to produce more original ideas."
},
{
"pmid": "19575609",
"title": "Creativity.",
"abstract": "The psychological study of creativity is essential to human progress. If strides are to be made in the sciences, humanities, and arts, we must arrive at a far more detailed understanding of the creative process, its antecedents, and its inhibitors. This review, encompassing most subspecialties in the study of creativity and focusing on twenty-first-century literature, reveals both a growing interest in creativity among psychologists and a growing fragmentation in the field. To be sure, research into the psychology of creativity has grown theoretically and methodologically sophisticated, and researchers have made important contributions from an ever-expanding variety of disciplines. But this expansion has not come without a price. Investigators in one subfield often seem unaware of advances in another. Deeper understanding requires more interdisciplinary research, based on a systems view of creativity that recognizes a variety of interrelated forces operating at multiple levels."
},
{
"pmid": "21395198",
"title": "Mindfulness and flow in occupational engagement: presence in doing.",
"abstract": "BACKGROUND\nFlow is a psychological state that might be viewed as desirable, and it occurs when a person is aware of his or her actions but is not being aware of his or her awareness. Mindfulness is viewed not as the achievement of any particular state, but as intentional awareness of what is, being aware of awareness.\n\n\nPURPOSE\nTo examine theoretical perspectives and empirical research on flow and mindfulness, and offer suggestions about the relevance of these concepts to occupational engagement.\n\n\nKEY ISSUES\nBoth flow and mindfulness involve being present, actively engaged, and attentive. The experience and practice of flow and mindfulness are relevant to the experience of occupational engagement.\n\n\nIMPLICATIONS\nUnderstanding flow and mindfulness may help occupational therapists improve the therapeutic occupational engagement process with their clients through enhancing depth and meaning of occupational experiences, as well as health and well-being."
},
{
"pmid": "24550858",
"title": "Cognitive behavioral game design: a unified model for designing serious games.",
"abstract": "Video games have a unique ability to engage, challenge, and motivate, which has led teachers, psychology specialists, political activists and health educators to find ways of using them to help people learn, grow and change. Serious games, as they are called, are defined as games that have a primary purpose other than entertainment. However, it is challenging to create games that both educate and entertain. While game designers have embraced some psychological concepts such as flow and mastery, understanding how these concepts work together within established psychological theory would assist them in creating effective serious games. Similarly, game design professionals have understood the propensity of video games to teach while lamenting that educators do not understand how to incorporate educational principles into game play in a way that preserves the entertainment. Bandura (2006) social cognitive theory (SCT) has been used successfully to create video games that create positive behavior outcomes, and teachers have successfully used Gardner's (1983) theory of multiple intelligences (MIs) to create engaging, immersive learning experiences. Cognitive behavioral game design is a new framework that incorporates SCT and MI with game design principles to create a game design blueprint for serious games."
},
{
"pmid": "20954756",
"title": "Creative self-efficacy development and creative performance over time.",
"abstract": "Building from an established framework of self-efficacy development, this study provides a longitudinal examination of the development of creative self-efficacy in an ongoing work context. Results show that increases in employee creative role identity and perceived creative expectation from supervisors over a 6-month time period were associated with enhanced sense of employee capacity for creative work. Contrary to what was expected, employees who experienced increased requirements for creativity in their jobs actually reported a decreased sense of efficaciousness for creative work. Results show that increases in creative self-efficacy corresponded with increases in creative performance as well."
},
{
"pmid": "28955285",
"title": "The Buffering Effect of Mindfulness on Abusive Supervision and Creative Performance: A Social Cognitive Framework.",
"abstract": "Our research draws upon social cognitive theory and incorporates a regulatory approach to investigate why and when abusive supervision influences employee creative performance. The analyses of data from multiple time points and multiple sources reveal that abusive supervision hampers employee self-efficacy at work, which in turn impairs employee creative performance. Further, employee mindfulness buffers the negative effects of abusive supervision on employee self-efficacy at work as well as the indirect effects of abusive supervision on employee creative performance. Our findings have implications for both theory and practice. Limitations and directions for future research are also discussed."
}
] |
Materials | 31269641 | PMC6651616 | 10.3390/ma12132123 | Influence of Electrical Field Collector Positioning and Motion Scheme on Electrospun Bifurcated Vascular Graft Membranes | Currently, electrospinning membranes for vascular graft applications has been limited, due to random fiber alignment, to use in mandrel-spun, straight tubular shapes. However, straight, circular tubes with constant diameters are rare in the body. This study presents a method to fabricate curved, non-circular, and bifurcated vascular grafts based on electrospinning. In order to create a system capable of electrospinning membranes to meet specific patient needs, this study focused on characterizing the influence of fiber source, electrical field collector position (inside vs. outside the mandrel), and the motion scheme of the mandrel (rotation vs. rotation and tilting) on the vascular graft membrane morphology and mechanical properties. Given the extensive use of poly(ε-caprolactone) (PCL) in tubular vascular graft membranes, the same material was used here to facilitate a comparison. Our results showed that the best morphology was obtained using orthogonal sources and collector positioning, and a well-timed rotation and tilting motion scheme. In terms of mechanical properties, our bifurcated vascular graft membranes showed burst pressure comparable to that of tubular vascular graft membranes previously reported, with values up to 5126 mmHg. However, the suture retention strength shown by the bifurcated vascular graft membranes was less than desired, not clinically viable values. Process improvements are being contemplated to introduce these devices into the clinical range. | 1.2. Related WorkSynthetic materials, such as poly(ε-caprolactone) (PCL), have been extensively employed in the research of vascular graft tissue engineering [3,5] because of its excellent biocompatibility, bioactivity, non-toxicity and, in some configurations, high elasticity and degradability. However, the long degradation period of pure PCL, usually more than 18 months in vivo, could act as a barrier to tissue regeneration if the healing window is missed. PCL is resistant to degradation because of its hydrophobic nature and high level of crystallinity [2,3]. On the other hand, if it fails mechanically before it resorbs, small, broken-up pieces can become distributed throughout the regeneration site, further preventing adequate tissue regeneration.One of the principle tissue engineered vascular graft perquisites is that the grafts have similar mechanical properties to the native tissue at the placement site, before tissue regeneration and remodeling has taken place. A mechanical mismatch is acknowledged as a key determinant in the loss of long-term patency, resulting in aneurysm formation and implant failure [5]. Different approaches and techniques have been explored to produce a clinically viable vascular graft. These can be classified into different categories: vascular graft membrane-based, self-assembly processes [6], 3D printing, solvent casting, phase separation spinning, and electrospinning [9]. However, electrospinning is the most widely studied and used for the fabrication of vascular grafts.The electrospinning process uses an electric field to direct a jet of polymer solution from a syringe’s capillary tip toward a target for deposition [10,11]. Under the influence of a strong electrostatic field, charges are induced in the solution and the charged polymer is accelerated towards the grounded metal collector. 
At low electrostatic field strength, the pendant drop emerging from the tip of the pipette is prevented from dripping by the surface tension of the solution. It is therefore critical to control the syringe pump so that the polymer is released within a narrow flow-rate window: below this window, too little solution is delivered to form a continuous thread, while above it the drop overcomes the surface tension before the electric field can engage it and simply drips. As the intensity of the electric field is increased, the induced charges on the liquid surface repel each other and create shear stresses [12]. In the electrospinning of vascular graft membranes, fiber diameter, orientation, alignment (mandrels increase the possibility of alignment), and pore size have a significant impact on the final functionality of the vascular conduit [13]. According to the standard procedures for setting up a basic electrospinning process [14], once the polymer is selected, the literature recommends two principles for selecting the solvent: (a) the polymer should be completely soluble in it, and (b) the boiling point of the solvent should be moderate in order to allow evaporation during the trajectory of the solution between the needle tip and the collector. Some of the most common solvents used in combination with PCL are hexafluoropropanol (HFP), chloroform, acetone, and dimethylformamide (DMF) [15].

The most important point in vascular tissue engineering is simulating the native tri-layered structure and recovering vascular function on placement and throughout the regenerative process. As previously noted, electrospinning methods have been primarily used to generate the internal membrane. The diameter and orientation of the fibers and the pore geometry and permeability are essential to rapid cell attachment and endothelialization. It is equally important that the outer layers provide a mechanically long-lasting and functionally sustainable tissue-engineered vascular graft for clinical application [16]. Reports of earlier relevant studies (see Table 1) show the careful consideration given to vascular graft morphology and mechanical characteristics. Techniques like electrospinning allow the construction of tubular vascular graft membranes on a rotating mandrel, which acts as an internal collector to attract the fibers.

Efforts to emulate complex geometries such as bifurcations date back to the 1970s, when a graft made of Dacron fiber was used as an arterial bypass graft in 135 patients [25]. Later, a Y-graft tailored from bovine pericardium was created from two tubes of pericardium [26]. These bovine grafts were closer in size to the human aorta and common iliac arteries than other available standard bifurcated grafts. Most progress in tissue-engineered vascular grafts has been concentrated on tubular shapes (see Table 1). More recently, tissue engineering approaches have used different synthetic materials to create customizable bifurcated grafts, or have combined technologies such as 3D printing, electrospinning, and custom-designed mandrels derived from patient-specific cardiovascular magnetic resonance imaging (MRI) datasets [27], aiming to emulate the original tissue in terms of mechanical and morphological characteristics. Despite this progress, there is still a need for a bifurcated vascular graft based on resorbable materials and with appropriate mechanical properties. This study focused on producing these types of complex shapes and on their resulting mechanical properties.
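Because the workable flow-rate window described above is bounded on both sides, it can be convenient to encode the measured limits as an explicit check when preparing a run. The sketch below is a minimal, hypothetical illustration of such a pre-run parameter check in Python; every numeric threshold and parameter name is a placeholder to be replaced with values measured for a given polymer solution and rig, not values reported in this study.

```python
# Minimal sketch of a pre-run parameter check for an electrospinning setup.
# All numeric limits are hypothetical placeholders; they must be measured for the
# specific polymer solution (e.g., PCL concentration and solvent) and rig geometry.
from dataclasses import dataclass

@dataclass
class ProcessWindow:
    min_flow_ml_h: float   # below this, too little solution to sustain a continuous jet
    max_flow_ml_h: float   # above this, the drop overcomes surface tension and drips
    min_voltage_kv: float  # minimum field needed to draw a jet from the pendant drop
    max_voltage_kv: float  # upper limit chosen for the particular setup

def check_parameters(window: ProcessWindow, flow_ml_h: float, voltage_kv: float) -> list:
    """Return a list of warnings for parameters outside the operating window."""
    warnings = []
    if not (window.min_flow_ml_h <= flow_ml_h <= window.max_flow_ml_h):
        warnings.append(f"Flow rate {flow_ml_h} mL/h is outside "
                        f"[{window.min_flow_ml_h}, {window.max_flow_ml_h}] mL/h.")
    if not (window.min_voltage_kv <= voltage_kv <= window.max_voltage_kv):
        warnings.append(f"Voltage {voltage_kv} kV is outside "
                        f"[{window.min_voltage_kv}, {window.max_voltage_kv}] kV.")
    return warnings

# Example usage with placeholder limits:
window = ProcessWindow(min_flow_ml_h=0.5, max_flow_ml_h=2.0,
                       min_voltage_kv=10.0, max_voltage_kv=20.0)
for msg in check_parameters(window, flow_ml_h=2.5, voltage_kv=15.0):
    print("WARNING:", msg)
```

Such a check does not replace empirical tuning; it simply documents the window found for one solution so that later runs start from parameters already known to produce a stable jet.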
"28754230",
"24501590",
"26447530",
"28289246",
"28447487",
"16771634",
"20566037",
"20890639",
"16701837",
"18400292",
"27111627",
"4266083",
"29361303",
"28867371",
"11309796",
"26862364"
] | [
{
"pmid": "28754230",
"title": "Tissue-engineered vascular grafts for congenital cardiac disease: Clinical experience and current status.",
"abstract": "Congenital heart disease is a leading cause of death in the newborn period, and man-made grafts currently used for reconstruction are associated with multiple complications. Tissue engineering can provide an alternative source of vascular tissue in congenital cardiac surgery. Clinical trials have been successful overall, but the most frequent complication is graft stenosis. Recent studies in animal models have indicated the important role of the recipient׳s immune response in neotissue formation, and that modulating the immune response can reduce the incidence of stenosis."
},
{
"pmid": "24501590",
"title": "Polymer scaffolds for small-diameter vascular tissue engineering.",
"abstract": "To better engineer small-diameter blood vessels, a few types of novel scaffolds were fabricated from biodegradable poly(L-lactic acid) (PLLA) by means of thermally induced phase separation (TIPS) techniques. By utilizing the differences in thermal conductivities of the mold materials, the scaffolds with oriented gradient microtubular structures in axial or radial direction were created using benzene as the solvent. The porosity, tubular size, and the orientation direction of the microtubules can be controlled by polymer concentration, TIPS temperature, and materials of different thermal conductivities. The gradient microtubular structure was intended to facilitate cell seeding and mass transfer for cell growth and function. We also developed nanofibrous scaffolds with oriented and interconnected micro-tubular pore network by a one-step TIPS method using benzene/tetrahydrofuran mixture as the solvent without using porogen materials. The structural features of such scaffolds can be conveniently adjusted by varying the solvent ratio, phase separation temperature and polymer concentration to mimic the nanofibrous feature of extracellular matrix. These scaffolds were fabricated for the tissue engineering of small-diameter blood vessels by utilizing their advantageous structural features to facilitate blood vessel regeneration."
},
{
"pmid": "26447530",
"title": "The Tissue-Engineered Vascular Graft-Past, Present, and Future.",
"abstract": "Cardiovascular disease is the leading cause of death worldwide, with this trend predicted to continue for the foreseeable future. Common disorders are associated with the stenosis or occlusion of blood vessels. The preferred treatment for the long-term revascularization of occluded vessels is surgery utilizing vascular grafts, such as coronary artery bypass grafting and peripheral artery bypass grafting. Currently, autologous vessels such as the saphenous vein and internal thoracic artery represent the gold standard grafts for small-diameter vessels (<6 mm), outperforming synthetic alternatives. However, these vessels are of limited availability, require invasive harvest, and are often unsuitable for use. To address this, the development of a tissue-engineered vascular graft (TEVG) has been rigorously pursued. This article reviews the current state of the art of TEVGs. The various approaches being explored to generate TEVGs are described, including scaffold-based methods (using synthetic and natural polymers), the use of decellularized natural matrices, and tissue self-assembly processes, with the results of various in vivo studies, including clinical trials, highlighted. A discussion of the key areas for further investigation, including graft cell source, mechanical properties, hemodynamics, integration, and assessment in animal models, is then presented."
},
{
"pmid": "28289246",
"title": "The Heart and Great Vessels.",
"abstract": "Cardiovascular disease is the leading cause of mortality worldwide. We have made large strides over the past few decades in management, but definitive therapeutic options to address this health-care burden are still limited. Given the ever-increasing need, much effort has been spent creating engineered tissue to replaced diseased tissue. This article gives a general overview of this work as it pertains to the development of great vessels, myocardium, and heart valves. In each area, we focus on currently studied methods, limitations, and areas for future study."
},
{
"pmid": "28447487",
"title": "Tissue engineered vascular grafts: current state of the field.",
"abstract": "Conventional synthetic vascular grafts are limited by the inability to remodel, as well as issues of patency at smaller diameters. Tissue-engineered vascular grafts (TEVGs), constructed from biologically active cells and biodegradable scaffolds have the potential to overcome these limitations, and provide growth capacity and self-repair. Areas covered: This article outlines the TEVG design, biodegradable scaffolds, TEVG fabrication methods, cell seeding, drug delivery, strategies to reduce wait times, clinical trials, as well as a 5-year view with expert commentary. Expert commentary: TEVG technology has progressed significantly with advances in scaffold material and design, graft design, cell seeding and drug delivery. Strategies have been put in place to reduce wait times and improve 'off-the-shelf' capability of TEVGs. More recently, clinical trials have been conducted to investigate the clinical applications of TEVGs."
},
{
"pmid": "16771634",
"title": "Electrospinning of polymeric nanofibers for tissue engineering applications: a review.",
"abstract": "Interest in electrospinning has recently escalated due to the ability to produce materials with nanoscale properties. Electrospun fibers have been investigated as promising tissue engineering scaffolds since they mimic the nanoscale properties of native extracellular matrix. In this review, we examine electrospinning by providing a brief description of the theory behind the process, examining the effect of changing the process parameters on fiber morphology, and discussing the potential applications and impacts of electrospinning on the field of tissue engineering."
},
{
"pmid": "20566037",
"title": "Heparin-Conjugated PCL Scaffolds Fabricated by Electrospinning and Loaded with Fibroblast Growth Factor 2.",
"abstract": "A biodegradable poly(ε-caprolactone) (PCL) was synthesized by ring-opening polymerization of ε-caprolactone catalyzed by Sn(Oct)2/BDO, followed by the heparin conjugation using EDC/NHS chemistry. The structure of the heparin-PCL conjugate was characterized by (1)H-NMR and GPC. The results of static contact angle and water uptake ratio measurements also confirmed the conjugation of heparin with the polyester. Its in vitro anticoagulation time was substantially extended, as evidenced by activated partial thromboplastin time (APTT) testing. Afterwards the conjugate was electrospun into small-diameter tubular scaffolds and loaded with Fibroblast Growth Factor 2 (FGF2) in aqueous solution. The loading efficiency was assayed by enzyme-linked immunosorbent assay (ELISA); the results indicated that the conjugate holds a higher loading efficiency than the blank polyester. The viability of released FGF2 was evaluated by MTT and cell adhesion tests. The amount and morphology of cells were significantly improved after FGF2 loading onto the electrospun heparin-PCL vascular scaffolds."
},
{
"pmid": "20890639",
"title": "Electrospinning of small diameter 3-D nanofibrous tubular scaffolds with controllable nanofiber orientations for vascular grafts.",
"abstract": "The control of nanofiber orientation in nanofibrous tubular scaffolds can benefit the cell responses along specific directions. For small diameter tubular scaffolds, however, it becomes difficult to engineer nanofiber orientation. This paper reports a novel electrospinning technique for the fabrication of 3-D nanofibrous tubular scaffolds with controllable nanofiber orientations. Synthetic absorbable poly-ε-caprolactone (PCL) was used as the model biomaterial to demonstrate this new electrospinning technique. Electrospun 3-D PCL nanofibrous tubular scaffolds of 4.5 mm in diameter with different nanofiber orientations (viz. circumferential, axial, and combinations of circumferential and axial directions) were successfully fabricated. The degree of nanofiber alignment in the electrospun 3-D tubular scaffolds was quantified by using the fast Fourier transform (FFT) analysis. The results indicated that excellent circumferential nanofiber alignment could be achieved in the 3-D nanofibrous PCL tubular scaffolds. The nanofibrous tubular scaffolds with oriented nanofibers had not only directional mechanical property but also could facilitate the orientation of the endothelial cell attachment on the fibers. Multiple layers of aligned nanofibers in different orientations can produce 3-D nanofibrous tubular scaffolds of different macroscopic properties."
},
{
"pmid": "16701837",
"title": "Design of scaffolds for blood vessel tissue engineering using a multi-layering electrospinning technique.",
"abstract": "Aiming to develop a scaffold architecture mimicking morphological and mechanically that of a blood vessel, a sequential multi-layering electrospinning (ME) was performed on a rotating mandrel-type collector. A bi-layered tubular scaffold composed of a stiff and oriented PLA outside fibrous layer and a pliable and randomly oriented PCL fibrous inner layer (PLA/PCL) was fabricated. Control over the level of fibre orientation of the different layers was achieved through the rotation speed of the collector. The structural and mechanical properties of the scaffolds were examined using scanning electron microscopy (SEM) and tensile testing. To assess their capability to support cell attachment, proliferation and migration, 3T3 mouse fibroblasts and later human venous myofibroblasts (HVS) were cultured, expanded and seeded on the scaffolds. In both cases, the cell-polymer constructs were cultured under static conditions for up to 4 weeks. Environmental-scanning electron microscopy (SEM), confocal laser scanning microscopy (CLSM), histological examination and biochemical assays for cell proliferation (DNA) and extracellular matrix production (collagen and glycosaminoglycans) were performed. The findings suggest the feasibility of ME to design scaffolds with a hierarchical organization through a layer-by-layer process and control over fibre orientation. The resulting scaffolds achieved the desirable levels of pliability (elastic up to 10% strain) and proved to be capable to promote cell growth and proliferation. The electrospun PLA/PCL bi-layered tube presents appropriate characteristics to be considered a candidate scaffold for blood vessel tissue engineering."
},
{
"pmid": "18400292",
"title": "Development of a composite vascular scaffolding system that withstands physiological vascular conditions.",
"abstract": "Numerous scaffolds that possess ideal characteristics for vascular grafts have been fabricated for clinical use. However, many of these scaffolds may not show consistent properties when they are exposed to physiologic vascular environments that include high pressure and flow, and they may eventually fail due to unexpected rapid degradation and low resistance to shear stress. There is a demand to develop a more durable scaffold that could withstand these conditions until vascular tissue matures in vivo. In this study, vascular scaffolds composed of poly(epsilon-caprolactone) (PCL) and collagen were fabricated by electrospinning. Morphological, biomechanical, and biological properties of these composite scaffolds were examined. The PCL/collagen composite scaffolds, with fiber diameters of approximately 520 nm, possessed appropriate tensile strength (4.0+/-0.4 MPa) and adequate elasticity (2.7+/-1.2 MPa). The burst pressure of the composite scaffolds was 4912+/-155 mmHg, which is much greater than that of the PCL-only scaffolds (914+/-130 mmHg) and native vessels. The composite scaffolds seeded with bovine endothelial cells (bECs) and smooth muscle cells (bSMCs) showed the formation of a confluent layer of bECs on the lumen and bSMCs on the outer surface of the scaffold. The PCL/collagen composite scaffolds are biocompatible, possess biomechanical properties that resist high degrees of pressurized flow over long term, and provide a favorable environment that supports the growth of vascular cells."
},
{
"pmid": "27111627",
"title": "The mechanical performance of weft-knitted/electrospun bilayer small diameter vascular prostheses.",
"abstract": "Cardiovascular disease (CVD) accounts for a significant mortality rate worldwide. Autologous vessels, such as the saphenous vein and the internal mammary artery, are currently the gold standard materials for by-pass surgery. However, they may not always be available due to aging, previous harvesting or the pre-existing arterial disease. Synthetic commercial ePTFE and polyester (PET) are not suitable for small diameter vascular grafts (<6mm), mainly due to their poor circumferential compliance, rapid thrombus formation and low endothelialization. In order to reduce thrombogenicity and improve cell proliferation, we developed a collagen/elastin knitted/electrospun bilayer graft made of biodegradable and biocompatible poly(lactic acid) (PLA) and poly(lactide-co-caprolactone) (PLCL) polymers to mimic the multilayer structure of native arteries. We also designed the prostheses to provide some of the required mechanical properties. While the bilayer structure had excellent circumferential tensile strength, bursting strength and suture retention resistance, the radial compliance did not show any observable improvement."
},
{
"pmid": "29361303",
"title": "Virtual surgical planning, flow simulation, and 3-dimensional electrospinning of patient-specific grafts to optimize Fontan hemodynamics.",
"abstract": "BACKGROUND\nDespite advances in the Fontan procedure, there is an unmet clinical need for patient-specific graft designs that are optimized for variations in patient anatomy. The objective of this study is to design and produce patient-specific Fontan geometries, with the goal of improving hepatic flow distribution (HFD) and reducing power loss (Ploss), and manufacturing these designs by electrospinning.\n\n\nMETHODS\nCardiac magnetic resonance imaging data from patients who previously underwent a Fontan procedure (n = 2) was used to create 3-dimensional models of their native Fontan geometry using standard image segmentation and geometry reconstruction software. For each patient, alternative designs were explored in silico, including tube-shaped and bifurcated conduits, and their performance in terms of Ploss and HFD probed by computational fluid dynamic (CFD) simulations. The best-performing options were then fabricated using electrospinning.\n\n\nRESULTS\nCFD simulations showed that the bifurcated conduit improved HFD between the left and right pulmonary arteries, whereas both types of conduits reduced Ploss. In vitro testing with a flow-loop chamber supported the CFD results. The proposed designs were then successfully electrospun into tissue-engineered vascular grafts.\n\n\nCONCLUSIONS\nOur unique virtual cardiac surgery approach has the potential to improve the quality of surgery by manufacturing patient-specific designs before surgery, that are also optimized with balanced HFD and minimal Ploss, based on refinement of commercially available options for image segmentation, computer-aided design, and flow simulations."
},
{
"pmid": "28867371",
"title": "The suture retention test, revisited and revised.",
"abstract": "A systematic investigation of the factors affecting the suture retention test is performed. The specimen width w and the distance a of the suture bite from the specimen free edge emerge as the most influential geometrical parameters. A conservative approach for the quantification of suture retention strength is identified, based on the use of a camera to monitor the incipient failure and detect the instant of earliest crack propagation. The corresponding critical force, called break starting strength, is extremely robust against test parameter variations and its dependence on the specimen geometry becomes negligible when a≥ 2mm and w≥ 10mm. Comparison of suture retention and mode I crack opening tests reveals a linear correlation between break starting strength and tearing energy. This suggests that the defect created by the needle and the load applied by the suture thread lead to a fracture mechanics problem, which dominates the initiation of failure."
},
{
"pmid": "11309796",
"title": "Effects of carbodiimide crosslinking conditions on the physical properties of laminated intestinal submucosa.",
"abstract": "Functional tissue engineering of load-bearing repair tissues requires the design and production of biomaterials that provide a remodelable scaffold for host infiltration and tissue regeneration while maintaining the repair function throughout the remodeling process. Layered constructs have been fabricated from chemically and mechanically cleaned porcine intestinal collagen using ethyl-3(3-dimethylamino) propyl carbodiimide (EDC) and an acetone solvent. By varying the concentration of the crosslinker from 1 to 10 mM and the solvent from 0 to 90% acetone, the strength, stiffness, maximum strain, thermal stability, lamination strength, and suture retention strength can be adjusted. These parameters have either functional importance or the potential to modify the remodeling kinetics, or they have both. This study investigates the interdependence of these parameters, the specific effects that variations in concentration can achieve, and how the two crosslinking variables interact. The results demonstrate that there is substantial latitude in the design of these constructs by these straightforward crosslinking modifications. These data provide the basis for studying the in vivo response to crosslinking conditions that will supply the requisite strength while still allowing host cell infiltration and remodeling."
},
{
"pmid": "26862364",
"title": "Biliary and pancreatic stenting: Devices and insertion techniques in therapeutic endoscopic retrograde cholangiopancreatography and endoscopic ultrasonography.",
"abstract": "Stents are tubular devices made of plastic or metal. Endoscopic stenting is the most common treatment for obstruction of the common bile duct or of the main pancreatic duct, but also employed for the treatment of bilio-pancreatic leakages, for preventing post- endoscopic retrograde cholangiopancreatography pancreatitis and to drain the gallbladder and pancreatic fluid collections. Recent progresses in techniques of stent insertion and metal stent design are represented by new, fully-covered lumen apposing metal stents. These stents are specifically designed for transmural drainage, with a saddle-shape design and bilateral flanges, to provide lumen-to-lumen anchoring, reducing the risk of migration and leakage. This review is an update of the technique of stent insertion and metal stent deployment, of the most recent data available on stent types and characteristics and the new applications for biliopancreatic stents."
}
] |
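
To make the process constraints described in the related-work field above a little more concrete, the following minimal Python sketch encodes an electrospinning configuration and checks it against the two solvent-selection principles and the pump-speed window mentioned there. The numeric bounds (`bp_range`, `window`) and the example setup values are illustrative assumptions only; they are not parameters reported by the study.

```python
from dataclasses import dataclass

@dataclass
class ElectrospinningSetup:
    polymer: str
    solvent: str
    polymer_soluble: bool        # principle (a): polymer fully soluble in the solvent
    solvent_bp_celsius: float    # principle (b): moderate boiling point
    flow_rate_ml_h: float        # syringe pump speed
    voltage_kv: float

def solvent_ok(s: ElectrospinningSetup, bp_range=(40.0, 160.0)) -> bool:
    """Check the two solvent-selection principles (bounds are placeholders)."""
    return s.polymer_soluble and bp_range[0] <= s.solvent_bp_celsius <= bp_range[1]

def flow_rate_in_window(s: ElectrospinningSetup, window=(0.5, 3.0)) -> bool:
    """Pump speed must be high enough to sustain a jet but low enough that
    dripping does not overcome the surface tension (window is a placeholder)."""
    return window[0] <= s.flow_rate_ml_h <= window[1]

setup = ElectrospinningSetup("PCL", "chloroform", True, 61.2, 1.0, 15.0)
print(solvent_ok(setup), flow_rate_in_window(setup))
```

In practice, the acceptable flow-rate window depends on the polymer concentration, solvent, and applied voltage, so these bounds would have to be established experimentally for a given setup.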
JMIR mHealth and uHealth | 31140439 | PMC6660123 | 10.2196/12293 | Design and Implementation of a Novel System for Correcting Posture Through the Use of a Wearable Necklace Sensor | Background: To our knowledge, few studies have examined the use of wearable sensing devices to effectively integrate information communication technologies and apply them to health care issues (particularly those pertaining to posture correction). Objective: A novel system for posture correction involving the application of wearable sensing technology was developed in this study. The system was created with the aim of preventing the unconscious development of bad postures (as well as potential spinal diseases over the long term). Methods: The newly developed system consists of a combination of 3 subsystems, namely, a smart necklace, notebook computer, and smartphone. The notebook computer is enabled to use a depth camera to read the relevant data, to identify the skeletal structure and joint reference points of a user, and to compute calculations relating to those reference points, after which the computer then sends signals to the smart necklace to enable calibration of the smart necklace's standard values (base values for posture assessment). The gravitational acceleration data of the user are collected and analyzed by a microprocessor unit-6050 sensor housed in the smart necklace when the smart necklace is worn, with those data being used by the smart necklace to determine the user's body posture. When poor posture is detected by the smart necklace, the smart necklace sends the user's smartphone a reminder to correct his or her posture; a mobile app that was also developed as part of the study allows the smart necklace to transmit such messages to the smartphone. Results: The system effectively enables a user to monitor and correct his or her own posture, which in turn will assist the user in preventing spine-related diseases and, consequently, in living a healthier life. Conclusions: The proposed system makes it possible for (1) the user to self-correct his or her posture without resorting to the use of heavy, thick, or uncomfortable corrective clothing; (2) the smart necklace's standard values to be quickly calibrated via the use of posture imaging; and (3) the need for complex wiring to be eliminated through the effective application of the Internet of Things as well as by implementing wireless communication between the smart necklace, notebook computer, and smartphone. | Related Work
In recent years, the availability of increasingly smaller chips and greater computer power has accelerated the pace of development for wearable sensing devices. These developments have increased the applicability of such devices in the health monitoring and medical care field. Furthermore, wearable sensing devices also offer a considerably wide variety of application possibilities in other fields, such as force feedback devices, solutions for communication between people, environmental obstacle detection, and human-machine interface control.
In the health monitoring and medical care field, a study by Fallahzadeh et al [1] has examined the use of socks (embedded with accelerometers and flexible stretch sensors) to detect ankle edema. In a study by Dauz et al [2], sleep quality was monitored by measuring skin potential activity, body temperature changes, and heart rates, and electrocutaneous stimulation was applied to the skin during the slow wave sleep stage to improve sleep quality.
Sundaravadivel et al [3] and Boateng et al [4] explored the use of triple axis accelerometer readings to monitor the daily physical activity levels of individuals. In the study by Sundaravadivel et al [3], the data that were obtained were used to generate peak, mean, and standard deviation values, which were in turn used to differentiate between activities such as walking, stair climbing, and lying down. In the study by Boateng et al [4], a lightweight machine learning algorithm was used to analyze the obtained data and differentiate between the activities performed by users, so as to monitor and encourage physical activity among users. Liu et al [5] utilized accelerometers to measure electrocardiography signals and analyze user behavior. Martin and Voix [6] measured heart and respiratory rates by detecting sounds generated in the ear canal. Surrel et al [7] proposed a set of wearable devices capable of detecting sleep apnea. Takei et al [8] developed a wearable sensing device that can provide microelectric stimulation to the muscles and monitor muscle activity. Durbhaka [9] utilized shirts and pants embedded with triple axis accelerometers to analyze and assess human posture. Moreover, Gia et al [10] proposed a low-cost health monitoring system that involves the combined application of the Internet of Things with energy-saving sensor nodes and fog layers. This system utilized fog computing to automate services such as data sorting and channel management, allowing physicians to remotely monitor their patient’s physical conditions.When wearable sensing devices are used as force feedback devices, they can enhance the virtual reality experience and convey the data collected by the sensors of robots as tactile feedback to users. For example, Chinello et al [11] designed a wearable fingertip cutaneous device that utilized 3 servo and vibration motors to create a tactile sensation at the fingertips of users. Through the use of motor-driven belts, Meli et al [12] were able to design a wearable sensing device for the upper limb that was capable of controlling the movements of a robotic arm and of receiving movement resistance feedback from the robotic arm. Wearable sensing devices can also be used to address communication problems between people. Goncharenko et al [13] analyzed images to identify sign language and enable the instant translation of the said language into a textual or auditory output, such that a user is able to communicate with a deaf-mute individual. Furthermore, wearable sensing devices can be used to detect environmental obstacles. Through the use of shoes equipped with ultrasonic sensors and vibration motors, Patil et al [14] were able to develop a navigational aid for individuals suffering from amblyopia and blindness. When a user’s foot comes too close to an obstacle, the shoes’ vibration motors will alert the user to the situation. With respect to the use of human-machine interfaces to change traditional mouse-based controls for computers, Zhang et al [15] examined the use of pressure sensors to detect the distribution of wrist strength and control cursor movements.However, although all of these studies [1-15] did make significant contributions in various respects, none examined the use of wearable sensing devices to effectively integrate information communication technologies and apply them to health care issues (particularly those pertaining to posture correction). | [
"28678697",
"29993894",
"28945602"
] | [
{
"pmid": "28678697",
"title": "In-Ear Audio Wearable: Measurement of Heart and Breathing Rates for Health and Safety Monitoring.",
"abstract": "OBJECTIVE\nThis paper examines the integration of a noninvasive vital sign monitoring feature into the workers' hearing protection devices (HPDs) by using a microphone positioned within the earcanal under the HPD.\n\n\nMETHODS\n25 test-subjects were asked to breathe at various rhythms and intensities and these realistic sound events were recorded in the earcanal. Digital signal processing algorithms were then developed to assess heart and breathing rates. Finally, to test the robustness of theses algorithms in noisy work environments, industrial noise was added to the in-ear recorded signals and an adaptive denoising filter was used.\n\n\nRESULTS\nThe developed algorithms show an absolute mean error of 4.3 beats per minute (BPM) and 2.7 cycles per minute (CPM). The mean difference estimate is -0.44 BPM with a limit of agreement (LoA) interval of -14.3 to 13.4 BPM and 2.40 CPM with a LoA interval of -2.62 to 7.48 CPM. Excellent denoising is achieved with the adaptive filter, able to cope with ambient sound pressure levels of up to 110 dB SPL, resulting in a small error for heart rate detection, but a much larger error for breathing rate detection.\n\n\nCONCLUSION\nExtraction of the heart and breathing rates from an acoustical measurement in the occluded earcanal under an HPD is possible and can even be conducted in the presence of a high level of ambient noise.\n\n\nSIGNIFICANCE\nThis proof of concept enables the development of a wide range of noninvasive health and safety monitoring audio wearables for industrial workplaces and life-critical applications where HPDs are used."
},
{
"pmid": "29993894",
"title": "Online Obstructive Sleep Apnea Detection on Medical Wearable Sensors.",
"abstract": "Obstructive Sleep Apnea (OSA) is one of the main under-diagnosed sleep disorder. It is an aggravating factor for several serious cardiovascular diseases, including stroke. There is, however, a lack of medical devices for long-term ambulatory monitoring of OSA since current systems are rather bulky, expensive, intrusive, and cannot be used for long-term monitoring in ambulatory settings. In this paper, we propose a wearable, accurate, and energy efficient system for monitoring obstructive sleep apnea on a long-term basis. As an embedded system for Internet of Things, it reduces the gap between home health-care and professional supervision. Our approach is based on monitoring the patient using a single-channel electrocardiogram signal. We develop an efficient time-domain analysis to meet the stringent resources constraints of embedded systems to compute the sleep apnea score. Our system, for a publicly available database (PhysioNet Apnea-ECG), has a classification accuracy of up to 88.2% for our new online and patient-specific analysis, which takes the distinct profile of each patient into account. While accurate, our approach is also energy efficient and can achieve a battery lifetime of 46 days for continuous screening of OSA."
},
{
"pmid": "28945602",
"title": "A Three Revolute-Revolute-Spherical Wearable Fingertip Cutaneous Device for Stiffness Rendering.",
"abstract": "We present a novel three Revolute-Revolute-Spherical (3RRS) wearable fingertip device for the rendering of stiffness information. It is composed of a static upper body and a mobile end-effector. The upper body is located on the nail side of the finger, supporting three small servo motors, and the mobile end-effector is in contact with the finger pulp. The two parts are connected by three articulated legs, actuated by the motors. The end-effector can move toward the user's fingertip and rotate it to simulate contacts with arbitrarily-oriented surfaces. Moreover, a vibrotactile motor placed below the end-effector conveys vibrations to the fingertip. The proposed device weights 25 g for 35 x 50 x 48 mm dimensions. To test the effectiveness of our wearable haptic device and its level of wearability, we carried out two experiments, enrolling 30 human subjects in total. The first experiment tested the capability of our device in differentiating stiffness information, while the second one focused on evaluating its applicability in an immersive virtual reality scenario. Results showed the effectiveness of the proposed wearable solution, with a JND for stiffness of 208.5 17.2 N/m. Moreover, all subjects preferred the virtual interaction experience when provided with wearable cutaneous feedback, even if results also showed that subjects found our device still a bit difficult to use."
}
] |
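
As a rough illustration of the tilt-versus-baseline comparison described in the abstract above, the sketch below estimates a forward-tilt angle from a 3-axis accelerometer reading and flags deviations from a calibrated baseline. The axis convention, the 15-degree tolerance, and the sample values are hypothetical; in the actual system the baseline is calibrated from depth-camera skeleton data and the readings come from the MPU-6050-class sensor in the necklace.

```python
import math

def pitch_from_accel(ax: float, ay: float, az: float) -> float:
    """Forward-tilt angle (degrees) estimated from a static 3-axis
    accelerometer reading, using gravity as the reference direction."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def check_posture(sample, baseline_pitch: float, tolerance_deg: float = 15.0) -> bool:
    """Return True if the reading deviates from the calibrated baseline
    by more than the tolerance, i.e. posture looks poor."""
    pitch = pitch_from_accel(*sample)
    return abs(pitch - baseline_pitch) > tolerance_deg

# calibration: baseline captured while the user holds a good posture
baseline = pitch_from_accel(0.05, 0.02, 0.99)

# stream of (ax, ay, az) samples in units of g (illustrative values)
for sample in [(0.06, 0.01, 0.99), (0.45, 0.05, 0.88)]:
    if check_posture(sample, baseline):
        print("poor posture detected -> send reminder to smartphone")
```

The second sample tilts roughly 24 degrees past the baseline, so it would trigger the smartphone reminder; the first stays within tolerance and is ignored.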
Scientific Reports | 31358874 | PMC6662829 | 10.1038/s41598-019-47535-4 | From curved spacetime to spacetime-dependent local unitaries over the honeycomb and triangular Quantum Walks | A discrete-time Quantum Walk (QW) is an operator driving the evolution of a single particle on the lattice, through local unitaries. In a previous paper, we showed that QWs over the honeycomb and triangular lattices can be used to simulate the Dirac equation. We apply a spacetime coordinate transformation upon the lattice of this QW, and show that it is equivalent to introducing spacetime-dependent local unitaries, whilst keeping the lattice fixed. By exploiting this duality between changes in geometry, and changes in local unitaries, we show that the spacetime-dependent QW simulates the Dirac equation in (2 + 1)–dimensional curved spacetime. Interestingly, the duality crucially relies on the fact that the three preferred directions of the honeycomb and triangular lattices are not linearly independent: the same construction would fail for the square lattice. At the practical level, this result opens the possibility to simulate field theories on curved manifolds, via the quantum walk on different kinds of lattices. | Related works
It is already well known that QWs can simulate the Dirac equation [3,4,8,20–23], the Klein-Gordon equation [24–26] and the Schrödinger equation [27,28], and that they are a minimal setting in which to simulate particles in some inhomogeneous background field [29–33], with the difficult topic of interactions initiated in [34,35]. Eventually, the systematic study of the impact of inhomogeneous local unitaries also gave rise to QW models of particles propagating in curved spacetime. This line of research was initiated by QW simulations of the curved Dirac equation in (1 + 1) dimensions, for synchronous coordinates [30,36], later extended in [37] to any spacetime metric, and generalized to further spatial and spin dimensions in [38,39]. A related work, from a slightly different perspective, can be found in [40]. All of these models were on the square lattice: to the best of our knowledge, no one had modeled fermionic transport over non-square lattices. The present paper shows that over the honeycomb and triangular lattices the problem becomes considerably simpler, and the solution elegant.
In a recent work [41], quantum transport over curved spacetime has been compared to electronic transport in deformed graphene, where a pseudo-magnetic field emulates an effective curvature in the tight-binding Hamiltonian (see also [42]). Back on the quantum computing side, the Grover search has been expressed as a QW over the honeycomb lattice [43] (see also [44] for a continuous-time approach). Reference [45] evaluates the use of graphene nanoribbons as a substrate to build quantum gates. | [
"22304249",
"28937180"
] | [
{
"pmid": "22304249",
"title": "Two-particle bosonic-fermionic quantum walk via integrated photonics.",
"abstract": "Quantum walk represents one of the most promising resources for the simulation of physical quantum systems, and has also emerged as an alternative to the standard circuit model for quantum computing. Here we investigate how the particle statistics, either bosonic or fermionic, influences a two-particle discrete quantum walk. Such an experiment has been realized by exploiting polarization entanglement to simulate the bunching-antibunching feature of noninteracting bosons and fermions. To this scope a novel three-dimensional geometry for the waveguide circuit is introduced, which allows accurate polarization independent behavior, maintaining remarkable control on both phase and balancement."
},
{
"pmid": "28937180",
"title": "Loop Quantum Gravity.",
"abstract": "The problem of finding the quantum theory of the gravitational field, and thus understanding what is quantum spacetime, is still open. One of the most active of the current approaches is loop quantum gravity. Loop quantum gravity is a mathematically well-defined, non-perturbative and background independent quantization of general relativity, with its conventional matter couplings. Research in loop quantum gravity today forms a vast area, ranging from mathematical foundations to physical applications. Among the most significant results obtained are: (i)The computation of the physical spectra of geometrical quantities such as area and volume, which yields quantitative predictions on Planck-scale physics.(ii)A derivation of the Bekenstein-Hawking black hole entropy formula.(iii)An intriguing physical picture of the microstructure of quantum physical space, characterized by a polymer-like Planck scale discreteness. This discreteness emerges naturally from the quantum theory and provides a mathematically well-defined realization of Wheeler's intuition of a spacetime \"foam\". Long standing open problems within the approach (lack of a scalar product, over-completeness of the loop basis, implementation of reality conditions) have been fully solved. The weak part of the approach is the treatment of the dynamics: at present there exist several proposals, which are intensely debated. Here, I provide a general overview of ideas, techniques, results and open problems of this candidate theory of quantum gravity, and a guide to the relevant literature."
}
] |
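
To illustrate the idea of a discrete-time quantum walk driven by spacetime-dependent local unitaries, the following self-contained sketch implements a coined QW on a one-dimensional ring with a position-dependent coin angle. This is a deliberately simplified (1 + 1)-dimensional toy, not the honeycomb or triangular construction of the paper; the coin profile `thetas` is an arbitrary illustrative choice standing in for an inhomogeneous metric.

```python
import numpy as np

def coin(theta: float) -> np.ndarray:
    """2x2 unitary coin; theta may vary with position (and, if desired, time)."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def qw_step(psi: np.ndarray, thetas: np.ndarray) -> np.ndarray:
    """One QW step on a ring: apply the local coin at each site,
    then shift the two coin components in opposite directions."""
    out = np.zeros_like(psi)
    for x in range(psi.shape[0]):
        out[x] = coin(thetas[x]) @ psi[x]
    shifted = np.zeros_like(out)
    shifted[:, 0] = np.roll(out[:, 0], 1)    # component 0 moves right
    shifted[:, 1] = np.roll(out[:, 1], -1)   # component 1 moves left
    return shifted

n_sites, n_steps = 64, 30
psi = np.zeros((n_sites, 2), dtype=complex)
psi[n_sites // 2] = np.array([1, 1j]) / np.sqrt(2)               # localized initial state
thetas = 0.3 + 0.2 * np.sin(2 * np.pi * np.arange(n_sites) / n_sites)  # inhomogeneous coin profile
for _ in range(n_steps):
    psi = qw_step(psi, thetas)
print("total probability:", np.sum(np.abs(psi) ** 2))
```

Because each step is the product of a unitary coin and a permutation shift, the total probability printed at the end remains 1, which is the sense in which local unitaries drive the evolution.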
BMC Medical Informatics and Decision Making | 31357998 | PMC6664803 | 10.1186/s12911-019-0874-0 | On the interpretability of machine learning-based model for predicting hypertension | Background: Although complex machine learning models commonly outperform traditional simple interpretable models, clinicians find it hard to understand and trust these complex models due to the lack of intuition and explanation of their predictions. The aim of this study is to demonstrate the utility of various model-agnostic explanation techniques for machine learning models, with a case study analyzing the outcomes of a random forest machine learning model for predicting the individuals at risk of developing hypertension based on cardiorespiratory fitness data. Methods: The dataset used in this study contains information on 23,095 patients who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Five global interpretability techniques (Feature Importance, Partial Dependence Plot, Individual Conditional Expectation, Feature Interaction, Global Surrogate Models) and two local interpretability techniques (Local Surrogate Models, Shapley Value) have been applied to demonstrate how interpretability techniques can assist clinical staff in gaining a better understanding of, and more trust in, the outcomes of machine learning-based predictions. Results: Several experiments have been conducted and reported. The results show that different interpretability techniques shed light on different aspects of the model's behavior: global interpretations enable clinicians to understand the entire conditional distribution modeled by the trained response function, whereas local interpretations promote the understanding of small parts of the conditional distribution for specific instances. Conclusions: Interpretability techniques can vary in the explanations they give for the behavior of a machine learning model. Global interpretability techniques have the advantage that they can generalize over the entire population, while local interpretability techniques focus on giving explanations at the level of individual instances. Both kinds of methods can be equally valid depending on the application need, and both are effective in assisting clinicians in the medical decision process; however, clinicians will always retain the final say on accepting or rejecting the outcome of the machine learning models and their explanations, based on their domain expertise. | Related work
The volume of research on machine learning interpretability has grown rapidly over the last few years. One way to explain complex machine learning models is to use interpretable models, such as linear models and decision trees, to approximate their behavior. The LIME technique, for example, explains the prediction of a complex model by fitting an interpretable model on perturbed data in the neighborhood of the instance to be explained. Decision trees have been used intensively as proxy models to explain complex models, and they have several desirable properties [29]. First, their graphical presentation allows users to easily obtain an overview of a complex model. Second, the most important features affecting the model prediction appear nearer the top of the tree, which conveys the relative importance of the features in the prediction.
A considerable body of work has explored decomposing neural networks into decision trees, with the main focus on shallow networks [30, 31]. Decision rules have also been used intensively to mimic the behavior of a black-box model globally or locally, given that the training data are available when providing local explanations [32]. Koh and Liang [33] used influence functions to find the most influential training examples that lead to a particular decision. This method requires access to the training dataset used in training the black-box model. Anchors [34] is an extension of LIME that uses a bandit algorithm to generate decision rules with high precision and coverage. Another notable rule-extraction technique is the MofN algorithm [35], which tries to extract rules that explain single neurons by clustering and ignoring the least significant neurons. The FERNN algorithm [36] is another interpretability technique that uses a decision tree and identifies the meaningful hidden neurons and inputs to a particular network. Another common interpretability technique is the saliency map, which aims to explain neural network models by identifying the significance of individual input elements as an overlay on the original input [37]. Saliency-based interpretability techniques are a popular means of visualizing the importance of large numbers of features, as in image and text data. Saliency maps can be computed efficiently when neural network parameters can be inspected by computing the input gradient [38]. Derivatives may miss some essential aspects of the information that flows through the network being explained, and hence some other approaches have considered propagating quantities other than the gradient through the network [39–41]. Visualization has also been used extensively to interpret black-box models [42–44]. Several tools have been designed to explain the importance of features for random forest predictions [45]; however, these tools are model-specific and cannot be generalized to other models. The authors of [46, 47] discussed several methods for extracting rules from neural networks. Poulet [48] presented a methodology for explaining model predictions by assigning a contribution value to each feature using a visualization technique. However, this work was only able to handle linear additive models. Strumbelj et al. [49] provided insights for explaining the predictions of breast cancer recurrence by assigning a contribution value to each feature, which could be positive, negative, or zero. A positive contribution means that the feature supports the prediction of the class of interest, a negative contribution means that the feature argues against it, and zero means that the feature has no influence on the prediction of the class of interest. Caruana et al. [50] presented an explanation technique based on selecting the training instances most similar to the instance to be explained. This type of explanation is called case-based explanation and uses the k-nearest neighbors (KNN) algorithm to find the k examples closest to the instance to be explained, based on a particular distance metric such as the Euclidean distance [51]. | [
"11470218",
"26572668",
"27682033",
"26864406",
"23591286",
"24076748",
"26804774",
"26044081",
"23676796",
"28657867",
"29668729",
"17079822",
"26161953",
"25138770",
"23251303",
"25520327",
"10904013",
"15581336"
] | [
{
"pmid": "11470218",
"title": "Machine learning for medical diagnosis: history, state of the art and perspective.",
"abstract": "The paper provides an overview of the development of intelligent data analysis in medicine from a machine learning perspective: a historical view, a state-of-the-art view, and a view on some future trends in this subfield of applied artificial intelligence. The paper is not intended to provide a comprehensive overview but rather describes some subareas and directions which from my personal point of view seem to be important for applying machine learning in medical diagnosis. In the historical overview, I emphasize the naive Bayesian classifier, neural networks and decision trees. I present a comparison of some state-of-the-art systems, representatives from each branch of machine learning, when applied to several medical diagnostic tasks. The future trends are illustrated by two case studies. The first describes a recently developed method for dealing with reliability of decisions of classifiers, which seems to be promising for intelligent data analysis in medicine. The second describes an approach to using machine learning in order to verify some unexplained phenomena from complementary medicine, which is not (yet) approved by the orthodox medical community but could in the future play an important role in overall medical diagnosis and treatment."
},
{
"pmid": "26572668",
"title": "Machine Learning in Medicine.",
"abstract": "Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games - tasks that would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning attention to problems in health care. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades, and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus, part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and discuss how these might be overcome."
},
{
"pmid": "23591286",
"title": "An automated model using electronic medical record data identifies patients with cirrhosis at high risk for readmission.",
"abstract": "BACKGROUND & AIMS\nPatients with cirrhosis have 1-month rates of readmission as high as 35%. Early identification of high-risk patients could permit interventions to reduce readmission. The aim of our study was to construct an automated 30-day readmission risk model for cirrhotic patients using electronic medical record (EMR) data available early during hospitalization.\n\n\nMETHODS\nWe identified patients with cirrhosis admitted to a large safety-net hospital from January 2008 through December 2009. A multiple logistic regression model for 30-day rehospitalization was developed using medical and socioeconomic factors available within 48 hours of admission and tested on a validation cohort. Discrimination was assessed using receiver operator characteristic curve analysis.\n\n\nRESULTS\nWe identified 836 cirrhotic patients with 1291 unique admission encounters. Rehospitalization occurred within 30 days for 27% of patients. Significant predictors of 30-day readmission included the number of address changes in the prior year (odds ratio [OR], 1.13; 95% confidence interval [CI], 1.05-1.21), number of admissions in the prior year (OR, 1.14; 95% CI, 1.05-1.24), Medicaid insurance (OR, 1.53; 95% CI, 1.10-2.13), thrombocytopenia (OR, 0.50; 95% CI, 0.35-0.72), low level of alanine aminotransferase (OR, 2.56; 95% CI, 1.09-6.00), anemia (OR, 1.63; 95% CI, 1.17-2.27), hyponatremia (OR, 1.78; 95% CI, 1.14-2.80), and Model for End-stage Liver Disease score (OR, 1.04; 95% CI, 1.01-1.06). The risk model predicted 30-day readmission, with c-statistics of 0.68 (95% CI, 0.64-0.72) and 0.66 (95% CI, 0.59-0.73) in the derivation and validation cohorts, respectively.\n\n\nCONCLUSIONS\nClinical and social factors available early during admission and extractable from an EMR predicted 30-day readmission in cirrhotic patients with moderate accuracy. Decision support tools that use EMR-automated data are useful for risk stratification of patients with cirrhosis early during hospitalization."
},
{
"pmid": "24076748",
"title": "Mining high-dimensional administrative claims data to predict early hospital readmissions.",
"abstract": "BACKGROUND\nCurrent readmission models use administrative data supplemented with clinical information. However, the majority of these result in poor predictive performance (area under the curve (AUC)<0.70).\n\n\nOBJECTIVE\nTo develop an administrative claim-based algorithm to predict 30-day readmission using standardized billing codes and basic admission characteristics available before discharge.\n\n\nMATERIALS AND METHODS\nThe algorithm works by exploiting high-dimensional information in administrative claims data and automatically selecting empirical risk factors. We applied the algorithm to index admissions in two types of hospitalized patient: (1) medical patients and (2) patients with chronic pancreatitis (CP). We trained the models on 26,091 medical admissions and 3218 CP admissions from The Johns Hopkins Hospital (a tertiary research medical center) and tested them on 16,194 medical admissions and 706 CP admissions from Johns Hopkins Bayview Medical Center (a hospital that serves a more general patient population), and vice versa. Performance metrics included AUC, sensitivity, specificity, positive predictive values, negative predictive values, and F-measure.\n\n\nRESULTS\nFrom a pool of up to 5665 International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) diagnoses, 599 ICD-9-CM procedures, and 1815 Current Procedural Terminology codes observed, the algorithm learned a model consisting of 18 attributes from the medical patient cohort and five attributes from the CP cohort. Within-site and across-site validations had an AUC≥0.75 for the medical patient cohort and an AUC≥0.65 for the CP cohort.\n\n\nCONCLUSIONS\nWe have created an algorithm that is widely applicable to various patient cohorts and portable across institutions. The algorithm performed similarly to state-of-the-art readmission models that require clinical data."
},
{
"pmid": "26804774",
"title": "Current depressive symptoms but not history of depression predict hospital readmission or death after discharge from medical wards: a multisite prospective cohort study.",
"abstract": "OBJECTIVE\nAlthough death or readmission shortly after hospital discharge is frequent, identifying inpatients at higher risk is difficult. We evaluated whether in-hospital depressive symptoms (hereafter \"depression\") are associated with short-term readmission or mortality after discharge from medical wards.\n\n\nMETHODS\nDepression was assessed at discharge in a prospective inpatient cohort from 2 Canadian hospitals (7 medical wards) and defined as scores ≥ 11 on the 27-point Patient Health Questionnaire (PHQ-9). Primary outcome was all-cause readmission or mortality 90 days postdischarge.\n\n\nRESULTS\nOf 495 medical patients [median age 64 years, 51% women, top 3 admitting diagnoses heart failure (10%), pneumonia (10%) and chronic obstructive pulmonary disease (8%)], 127 (26%) screened positive for depression at discharge. Compared with nondepressed patients, those with depression were more frequently readmitted or died: 27/127 (21%) vs. 58/368 (16%) within 30 days and 46 (36%) vs. 91 (25%) within 90 days [adjusted odds ratio (aOR) 2.00, 95% confidence interval 1.25-3.17, P=.004, adjusted for age, sex and readmission/death prediction scores]. History of depression did not predict 90-day events (aOR 1.05, 95% CI 0.64-1.72, P=.84). Depression persisted in 40% of patients at 30 days and 17% at 90 days.\n\n\nCONCLUSIONS\nDepression was common, underrecognized and often persisted postdischarge. Current symptoms of depression, but not history, identified greater risk of short-term events independent of current risk prediction rules."
},
{
"pmid": "26044081",
"title": "A comparison of models for predicting early hospital readmissions.",
"abstract": "Risk sharing arrangements between hospitals and payers together with penalties imposed by the Centers for Medicare and Medicaid (CMS) are driving an interest in decreasing early readmissions. There are a number of published risk models predicting 30day readmissions for particular patient populations, however they often exhibit poor predictive performance and would be unsuitable for use in a clinical setting. In this work we describe and compare several predictive models, some of which have never been applied to this task and which outperform the regression methods that are typically applied in the healthcare literature. In addition, we apply methods from deep learning to the five conditions CMS is using to penalize hospitals, and offer a simple framework for determining which conditions are most cost effective to target."
},
{
"pmid": "23676796",
"title": "Predictive models to assess risk of type 2 diabetes, hypertension and comorbidity: machine-learning algorithms and validation using national health data from Kuwait--a cohort study.",
"abstract": "OBJECTIVE\nWe build classification models and risk assessment tools for diabetes, hypertension and comorbidity using machine-learning algorithms on data from Kuwait. We model the increased proneness in diabetic patients to develop hypertension and vice versa. We ascertain the importance of ethnicity (and natives vs expatriate migrants) and of using regional data in risk assessment.\n\n\nDESIGN\nRetrospective cohort study. Four machine-learning techniques were used: logistic regression, k-nearest neighbours (k-NN), multifactor dimensionality reduction and support vector machines. The study uses fivefold cross validation to obtain generalisation accuracies and errors.\n\n\nSETTING\nKuwait Health Network (KHN) that integrates data from primary health centres and hospitals in Kuwait.\n\n\nPARTICIPANTS\n270 172 hospital visitors (of which, 89 858 are diabetic, 58 745 hypertensive and 30 522 comorbid) comprising Kuwaiti natives, Asian and Arab expatriates.\n\n\nOUTCOME MEASURES\nIncident type 2 diabetes, hypertension and comorbidity.\n\n\nRESULTS\nClassification accuracies of >85% (for diabetes) and >90% (for hypertension) are achieved using only simple non-laboratory-based parameters. Risk assessment tools based on k-NN classification models are able to assign 'high' risk to 75% of diabetic patients and to 94% of hypertensive patients. Only 5% of diabetic patients are seen assigned 'low' risk. Asian-specific models and assessments perform even better. Pathological conditions of diabetes in the general population or in hypertensive population and those of hypertension are modelled. Two-stage aggregate classification models and risk assessment tools, built combining both the component models on diabetes (or on hypertension), perform better than individual models.\n\n\nCONCLUSIONS\nData on diabetes, hypertension and comorbidity from the cosmopolitan State of Kuwait are available for the first time. This enabled us to apply four different case-control models to assess risks. These tools aid in the preliminary non-intrusive assessment of the population. Ethnicity is seen significant to the predictive models. Risk assessments need to be developed using regional data as we demonstrate the applicability of the American Diabetes Association online calculator on data from Kuwait."
},
{
"pmid": "29668729",
"title": "Using machine learning on cardiorespiratory fitness data for predicting hypertension: The Henry Ford ExercIse Testing (FIT) Project.",
"abstract": "This study evaluates and compares the performance of different machine learning techniques on predicting the individuals at risk of developing hypertension, and who are likely to benefit most from interventions, using the cardiorespiratory fitness data. The dataset of this study contains information of 23,095 patients who underwent clinician- referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. The variables of the dataset include information on vital signs, diagnosis and clinical laboratory measurements. Six machine learning techniques were investigated: LogitBoost (LB), Bayesian Network classifier (BN), Locally Weighted Naive Bayes (LWB), Artificial Neural Network (ANN), Support Vector Machine (SVM) and Random Tree Forest (RTF). Using different validation methods, the RTF model has shown the best performance (AUC = 0.93) and outperformed all other machine learning techniques examined in this study. The results have also shown that it is critical to carefully explore and evaluate the performance of the machine learning models using various model evaluation methods as the prediction accuracy can significantly differ."
},
{
"pmid": "17079822",
"title": "Linear regression model for predicting patient-specific total skeletal spongiosa volume for use in molecular radiotherapy dosimetry.",
"abstract": "UNLABELLED\nThe toxicity of red bone marrow is widely considered to be a key factor in restricting the activity administered in molecular radiotherapy to suboptimal levels. The assessment of marrow toxicity requires an assessment of the dose absorbed by red bone marrow which, in many cases, requires knowledge of the total red bone marrow mass in a given patient. Previous studies demonstrated, however, that a close surrogate-spongiosa volume (combined tissues of trabecular bone and marrow)-can be used to accurately scale reference patient red marrow dose estimates and that these dose estimates are predictive of marrow toxicity. Consequently, a predictive model of the total skeletal spongiosa volume (TSSV) would be a clinically useful tool for improving patient specificity in skeletal dosimetry.\n\n\nMETHODS\nIn this study, 10 male and 10 female cadavers were subjected to whole-body CT scans. Manual image segmentation was used to estimate the TSSV in all 13 active marrow-containing skeletal sites within the adult skeleton. The age, total body height, and 14 CT-based skeletal measurements were obtained for each cadaver. Multiple regression was used with the dependent variables to develop a model to predict the TSSV.\n\n\nRESULTS\nOs coxae height and width were the 2 skeletal measurements that proved to be the most important parameters for prediction of the TSSV. The multiple R(2) value for the statistical model with these 2 parameters was 0.87. The analysis revealed that these 2 parameters predicted the estimated the TSSV to within approximately +/-10% for 15 of the 20 cadavers and to within approximately +/-20% for all 20 cadavers in this study.\n\n\nCONCLUSION\nAlthough the utility of spongiosa volume in estimating patient-specific active marrow mass has been shown, estimation of the TSSV in active marrow-containing skeletal sites via patient-specific image segmentation is not a simple endeavor. However, the alternate approach demonstrated in this study is fairly simple to implement in a clinical setting, as the 2 input measurements (os coxae height and width) can be made with either pelvic CT scanning or skeletal radiography."
},
{
"pmid": "26161953",
"title": "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.",
"abstract": "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package."
},
{
"pmid": "25138770",
"title": "Rationale and design of the Henry Ford Exercise Testing Project (the FIT project).",
"abstract": "Although physical fitness is a powerful prognostic marker in clinical medicine, most cardiovascular population-based studies do not have a direct measurement of cardiorespiratory fitness. In line with the call from the National Heart Lung and Blood Institute for innovative, low-cost, epidemiologic studies leveraging electronic medical record (EMR) data, we describe the rationale and design of the Henry Ford ExercIse Testing Project (The FIT Project). The FIT Project is unique in its combined use of directly measured clinical exercise data retrospective collection of medical history and medication treatment data at the time of the stress test, retrospective supplementation of supporting clinical data using the EMR and administrative databases and epidemiologic follow-up for cardiovascular events and total mortality via linkage with claims files and the death registry. The FIT Project population consists of 69 885 consecutive physician-referred patients (mean age, 54 ± 10 years; 54% males) who underwent Bruce protocol treadmill stress testing at Henry Ford Affiliated Hospitals between 1991 and 2009. Patients were followed for the primary outcomes of death, myocardial infarction, and need for coronary revascularization. The median estimated peak metabolic equivalent (MET) level was 10, with 17% of the patients having a severely reduced fitness level (METs < 6). At the end of the follow-up duration, 15.9%, 5.6%, and 6.7% of the patients suffered all-cause mortality, myocardial infarction, or revascularization procedures, respectively. The FIT Project is the largest study of physical fitness to date. With its use of modern electronic clinical epidemiologic techniques, it is poised to answer many clinically relevant questions related to exercise capacity and prognosis."
},
{
"pmid": "23251303",
"title": "Blood pressure in relation to age and frailty.",
"abstract": "BACKGROUND AND PURPOSE\nOn average, systolic blood pressure (SBP) rises with age, while diastolic blood pressure (DBP) increases to age 50 and then declines. As elevated blood pressure is associated with cardiovascular disease and mortality, it also might be linked to frailty. We assessed the association between blood pressure, age, and frailty in a representative population-based cohort.\n\n\nMETHODS\nIndividuals from the second clinical examination of the Canadian Study of Health and Aging (n = 2305, all 70+ years) were separated into four groups: history of hypertension ± antihypertensive medication, and no history of hypertension ± antihypertensive medication. Frailty was quantified as deficits accumulated in a frailty index (FI).\n\n\nRESULTS\nSBP and DBP changed little in relation to age, except in untreated hypertension, where SBP declined in individuals >85 years. In contrast, SBP declined in all groups up to an FI of 0.55, and then rose sharply. DBP changed little in relation to FI. The slope of the line relating FI and age was highest in untreated individuals without a history of hypertension, indicating the highest physiological reserve.\n\n\nCONCLUSIONS\nSBP declined as frailty increased in older adults, except at the highest FI levels. SBP and age had little or no relationship."
},
{
"pmid": "25520327",
"title": "Physical fitness and hypertension in a population at risk for cardiovascular disease: the Henry Ford ExercIse Testing (FIT) Project.",
"abstract": "BACKGROUND\nIncreased physical fitness is protective against cardiovascular disease. We hypothesized that increased fitness would be inversely associated with hypertension.\n\n\nMETHODS AND RESULTS\nWe examined the association of fitness with prevalent and incident hypertension in 57 284 participants from The Henry Ford ExercIse Testing (FIT) Project (1991–2009). Fitness was measured during a clinician‐referred treadmill stress test. Incident hypertension was defined as a new diagnosis of hypertension on 3 separate consecutive encounters derived from electronic medical records or administrative claims files. Analyses were performed with logistic regression or Cox proportional hazards models and were adjusted for hypertension risk factors. The mean age overall was 53 years, with 49% women and 29% black. Mean peak metabolic equivalents (METs) achieved was 9.2 (SD, 3.0). Fitness was inversely associated with prevalent hypertension even after adjustment (≥12 METs versus <6 METs; OR: 0.73; 95% CI: 0.67, 0.80). During a median follow‐up period of 4.4 years (interquartile range: 2.2 to 7.7 years), there were 8053 new cases of hypertension (36.4% of 22 109 participants without baseline hypertension). The unadjusted 5‐year cumulative incidences across categories of METs (<6, 6 to 9, 10 to 11, and ≥12) were 49%, 41%, 30%, and 21%. After adjustment, participants achieving ≥12 METs had a 20% lower risk of incident hypertension compared to participants achieving <6 METs (HR: 0.80; 95% CI: 0.72, 0.89). This relationship was preserved across strata of age, sex, race, obesity, resting blood pressure, and diabetes.\n\n\nCONCLUSIONS\nHigher fitness is associated with a lower probability of prevalent and incident hypertension independent of baseline risk factors."
},
{
"pmid": "10904013",
"title": "Hypertension in black patients: an emerging role of the endothelin system in salt-sensitive hypertension.",
"abstract": "The prevalence of essential hypertension in blacks is much higher than that in whites. In addition, the pathogenesis of hypertension appears to be different in black patients. For example, black patients present with a salt-sensitive hypertension characterized by low renin levels. Racial differences in renal physiology and socioeconomic factors have been suggested as possible causes of this difference, but reasons for this difference remain unclear. Endothelial cells are important in the regulation of vascular tonus and homeostasis, in part through the secretion of vasoactive substances. One of these factors, endothelin-1 (ET-1), is a 21 amino acid residue peptide with potent vasopressor actions. In addition to its contractile effects, it has been shown to stimulate mitogenesis in a number of cell types. Moreover, ET-1 displays modulatory effects on the endocrine system, including stimulation of angiotensin II and aldosterone production and inhibition of antidiuretic hormone in the kidney. Recent data from several laboratories indicate that ET-1 is overexpressed in the vasculature in several salt-sensitive models of experimental hypertension. Moreover, circulating plasma ET-1 levels are significantly increased in black hypertensives compared with white hypertensives. Thus, the ET system might be particularly important in the development or maintenance of hypertension in this population."
},
{
"pmid": "15581336",
"title": "Is hypertensive response in treadmill testing better identified with correction for working capacity? A study with clinical, echocardiographic and ambulatory blood pressure correlates.",
"abstract": "Hypertensive response in treadmill testing is associated with the development of hypertension, but it is still unclear if it is better identified by systolic or diastolic response, and measured directly or corrected by working capacity. We investigated 75 patients with normal office blood pressure through a treadmill testing, ambulatory blood pressure (ABP) monitoring, and two-dimensional Doppler echocardiogram. Characteristics associated with systolic blood pressure (SBP) response corrected by the estimated metabolic equivalent (MET) were identified in multiple linear regression models. SBP response was associated more consistently with age, body mass index (BMI), systolic ABP and left ventricular posterior wall thickness (p < 0.001) than diastolic response in the bivariate analysis, especially when corrected by MET. Age, BMI and nightly SBP were independently associated with SBP response corrected by MET in the multivariate analysis. Individuals from the top tertile of SBP response corrected by MET (> or =11.3 mmHg/MET) were older and had higher BMI, ABP and left ventricular septal and posterior wall thickness than individuals classified in the lower tertiles. These differences were more pronounced than the differences observed between individuals with and without a peak exercise blood pressure higher than 210 mmHg. We concluded that individuals with a high blood pressure response in treadmill testing have higher BMI, left ventricular posterior wall thickness and SBP measured by ABP monitoring than individuals without such a response. These differences were stronger when the variation of blood pressure during exercise was corrected by the amount of work performed."
}
] |
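The two machine-learning abstracts above (the Kuwait diabetes/hypertension risk models and the FIT Project hypertension models) rest on the same evaluation pattern: several classifier families compared through cross-validated accuracy or AUC. The sketch below illustrates that pattern only; the synthetic data, the particular classifiers, the five-fold split, and all hyperparameters are illustrative assumptions, not the pipelines used in either study.

# Minimal sketch of cross-validated AUC comparison across classifier families.
# Synthetic stand-in data; the real studies used clinical records (vitals, labs, history).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical cohort: 2,000 "patients", 12 non-laboratory features, binary outcome.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "k_nearest_neighbors": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=15)),
    "support_vector_machine": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # fivefold cross-validation
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name:24s} AUC = {auc.mean():.3f} +/- {auc.std():.3f}")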
Frontiers in Neurorobotics | 31396071 | PMC6668554 | 10.3389/fnbot.2019.00056 | Action Generation Adapted to Low-Level and High-Level Robot-Object Interaction States | Our daily environments are complex, composed of objects with different features. These features can be categorized into low-level features, e.g., an object position or temperature, and high-level features resulting from a pre-processing of low-level features for decision purposes, e.g., a binary value saying if it is too hot to be grasped. Besides, our environments are dynamic, i.e., object states can change at any moment. Therefore, robots performing tasks in these environments must have the capacity to (i) identify the next action to execute based on the available low-level and high-level object states, and (ii) dynamically adapt their actions to state changes. We introduce a method named Interaction State-based Skill Learning (IS2L), which builds skills to solve tasks in realistic environments. A skill is a Bayesian Network that infers actions composed of a sequence of movements of the robot's end-effector, which locally adapt to spatio-temporal perturbations using a dynamical system. In the current paper, an external agent performs one or more kinesthetic demonstrations of an action, generating a dataset of high-level and low-level states of the robot and the environment objects. First, the method transforms each interaction to represent (i) the relationship between the robot and the object and (ii) the next robot end-effector movement to perform at consecutive instants of time. Then, the skill is built, i.e., the Bayesian network is learned. While generating an action, this skill relies on the robot and object states to infer the next movement to execute. This movement selection is inspired by a type of predictive model for action selection usually called affordances. The main contribution of this paper is combining the main features of dynamical systems and affordances in a unique method to build skills that solve tasks in realistic scenarios. More precisely, it combines the low-level movement generation of dynamical systems, which adapts to local perturbations, with a next-movement selection based simultaneously on high-level and low-level states. This contribution was assessed in three experiments in realistic environments using both high-level and low-level states. The built skills solved the respective tasks by relying on both types of states and adapting to external perturbations. | 2. Related Work
There is certainly a lack of works in the robotics literature combining action selection (using high-level states) with adaptive action execution (using low-level states). To the best of the authors' knowledge, Kroemer et al. (2012) is the only work combining these features. In this work, a pouring task experiment is executed, in which a robotic arm grasps a watering can and pours water into a glass. The main objective of this experiment is to use affordance knowledge to learn predictive models mapping subparts of objects to motion primitives based on direct perception. The main difference between our work and the one presented by Kroemer et al. is that they focus on the low-level features of an object, i.e., its shape acquired using a point cloud, to select the next action to apply; whereas our work uses a simpler low-level representation of the object, i.e., its location represented as a position, combined with other high-level object features for the action selection.
A positive aspect of their work is that the method directly uses sensor information as input, providing richer object information, which can help to generate accurate interactions with the objects. However, in order to handle high-level features the method should be combined with another method working in parallel, adding relevant complexity to the system.
The remainder of the section introduces works related to either selecting the next action to perform (based on predictive models) or building a skill to reproduce an action (based on imitation learning and motor control techniques) using either anthropomorphic robots or robotic arms.
2.1. Selecting the Next Action To Perform
In the works introduced in this section, action selection either relies on affordance knowledge or is based on non-linear mappings from raw images to robot motor actions. Actions are usually considered as built-in knowledge, externally tailored by a designer, and they are executed in an open loop. These works are only robust to spatial perturbations before the execution of an action, i.e., to the object position, not adapting the action to spatial and/or temporal perturbations during its execution. This offline spatial adaptation is usually externally hard-coded by the experiment designer. This low adaptation capability can result in the inability to scale up the executed experiments to realistic setups.
The works depicted in Table 1 are categorized based on the classification available in Jamone et al. (2016). The relevant categories for the current work are Pioneering works, representing those first studies where the initial insights to learn the relation between objects and actions were identified; Representing the effects, the category with the most related works, including IS2L, which extends the previous action-object relations to take into account the corresponding effect; Multi-object interaction, which represents affordances among several objects; and finally Multi-step prediction, which represents the use of affordances in high-level task planners to solve complex tasks.
Table 1. Comparison of actions used within the affordance literature, where * represents ambiguous information.
Type | Publication | Affordance learning method | AA | OffSP | OnSP | TP | BA | RA
Pioneering works | Krotkov, 1995 | – | – | No | No | No | Yes | Poke
Pioneering works | May et al., 2007 | – | – | No | No | No | No | Random
Pioneering works | Metta and Fitzpatrick, 2003; Fitzpatrick and Metta, 2003 | – | – | Object position | No | No | Yes | Tap
Pioneering works | Fitzpatrick et al., 2003 | PI | – | Object position | No | No | Yes | Tap
Pioneering works | Stoytchev, 2005 | DT | – | Object position | No | No | No | Random
Representing the effects | Demiris and Dearden, 2005 | BN | – | Object position | No | No | No | Random
Representing the effects | Hart et al., 2005 | DRN | – | Object position | No | No | Yes | Grasp
Representing the effects | Lopes et al., 2007; Montesano et al., 2008; Osório et al., 2010 | BN | – | Object position | No | No | Yes | Grasp, Tap, Touch
Representing the effects | Ugur et al., 2009, 2011 | SVM | – | Object position | No | No | Yes | Push
Representing the effects | Ridge et al., 2010 | NN | – | No | No | No | Yes | Push
Representing the effects | Kopicki et al., 2011 | LWPR | – | Object position | No | No | No | Push
Representing the effects | Ugur et al., 2012, 2015a | SVM | – | Object position | No | No | No | Grasp, Hit, Drop, Tap
Representing the effects | Mugan and Kuipers, 2012 | DBN | – | Object position | No | No | Yes | Grasp
Representing the effects | Hermans et al., 2013 | SVR | – | Object position and orientation | No | No | Yes | Push
Representing the effects | Finn et al., 2016; Finn and Levine, 2017 | LSTM | – | Object position and orientation | No | No | No | Push
Representing the effects | Ebert et al., 2017 | LSTM | – | Object position and orientation | No | No | Yes, No | Lift, Push
Representing the effects | Hangl et al., 2016 | MMR | – | Object position and orientation | No | No | Yes | Push, Flip
Representing the effects | Chavez-Garcia et al., 2017 | GBN | – | Object position | No | No | Yes | Push, Grasp
Representing the effects | This work | BN | LH | Object position | Yes | Yes | No | Push, Grasp, Press
Multi-object interaction | Jain and Inamura, 2013 | BN | – | Object position | No | No | Yes | Push, Pull
Multi-object interaction | Goncalves et al., 2014 | BN | – | No* | No | No | Yes | Tap, Push, Pull
Multi-object interaction | Dehban et al., 2016, 2017 | DA | – | No* | No | No | Yes | Push, Pull
Multi-step predictions | Omrčen et al., 2008; Krüger et al., 2011 | NN | – | Object position and orientation | No | No | Yes | Poke, Push, Grasp
Multi-step predictions | Ugur et al., 2015b; Ugur and Piater, 2015 | SVM | – | Object position | No | No | Yes | Pick, Place, Poke, Stack
Multi-step predictions | Antunes et al., 2016 | BN | – | No* | No | No | Yes | Grasp, Release, Pull
Works are categorized based on the classification available in Jamone et al. (2016) (see column Type). They are described based on the following features: Affordance learning method; AA, Action adaptation; OffSP, Offline Spatial Perturbation; OnSP, Online Spatial Perturbation; TP, Temporal Perturbation; BA, Built-in actions; RA, Repertoire of actions. The affordance learning methods are PI, Probabilistic Inference; DT, Decision Tree; BN, Bayesian Network; DRN, Relational Dependency Network; SVM, Support Vector Machine; NN, Neural Network; LWPR, Locally Weighted Projection Regression; DBN, Dynamic Bayesian Network; SVR, Support Vector Regression; LSTM, Long Short-term Memory; MMR, Maximum Margin Regression; GBN, Gaussian Bayesian Network; DA, Denoising Autoencoder.
The goal of the pioneering works (Krotkov, 1995; Fitzpatrick and Metta, 2003; Metta and Fitzpatrick, 2003; May et al., 2007) was identifying affordances by observing the result obtained when applying an action on an object, e.g., rollability. Later works (Fitzpatrick et al., 2003; Stoytchev, 2005) made the first attempts to learn the relation between the action and the obtained result, trying to choose the best action to reproduce it. However, actions and effects were very simple. In contrast, the works representing the effects focus on learning an inverse model to reproduce a previously observed effect on an object. Dearden and Demiris (2005), Demiris and Dearden (2005), and Hart et al. (2005) are the first works to propose representing the forward and inverse models using Bayesian Networks (BN) in this context, used to play imitation games. Inspired by the previous works, Lopes et al. (2007), Montesano et al. (2008), Osório et al. (2010), and Chavez-Garcia et al. (2016) define an affordance as a BN representing the relation between action, object, and effect. They provide built-in grasp, tap, and touch actions to also play imitation games. Similarly, other works also use built-in actions with different methods to learn affordances, such as classification techniques (Ugur et al., 2009, 2011; Hermans et al., 2013), regression methods (Kopicki et al., 2011; Hermans et al., 2013; Hangl et al., 2016), neural networks (Ridge et al., 2010), and dynamical BN (Mugan and Kuipers, 2012), among others. Multi-object interaction has gathered much research attention during the last years, mainly focused on the use of tools to reproduce effects on objects. Jain and Inamura (2011), Jain and Inamura (2013), Goncalves et al. (2014), and Goncalves et al. (2014) use a BN to model affordances to push and pull objects using tools with different features, whereas Dehban et al. (2016) and Dehban et al. (2017) use Denoising Autoencoders. In contrast to tool use, Szedmak et al. (2014) proposes to model the interactions of 83 objects with different features assisted by a human expert. In the previous works, a repertoire of built-in actions was available for the affordance learning. Nevertheless, a couple of works by Ugur and his collaborators built this repertoire beforehand (Ugur et al., 2012, 2015a). In these works a built-in generic swipe action is available, which executes a trajectory of a robot's end-effector from a fixed initial position to the position of a close object.
Therefore, for different object positions different trajectories are built. Nevertheless, the shape of these trajectories does not differ much among them, because of the use of the same heuristic to generate them. Other works in the same vein are Finn et al. (2016), Finn and Levine (2017), and Ebert et al. (2017), which use a deep learning technique called convolutional LSTM (Hochreiter and Schmidhuber, 1997) in order to predict the visual output of an action. These works build a repertoire of continuous push actions based on an exploration performing thousands of interactions of a robotic arm with a set of objects (see Wong, 2016 for a recent survey about applying deep learning techniques in robotics).
2.2. Reproducing an Action
A robot can learn from demonstration all the actions required to reach a task goal. This section presents some of the most relevant works building skills, also called motion primitives, reproducing an action from one or more demonstrations. Table 2 provides a comparison of these works. The variables selected for the comparison represent the capability of a skill to adapt to low-level (L) and high-level (H) states, together with the main features studied within the motor control literature: mechanisms to be robust to spatio-temporal low-level perturbations, the stability of a motion primitive, the number of examples needed for the learning, and the combination of different primitives to reproduce an unseen action.
Table 2. Comparison of methods generating adaptive skills.
Type | Publication | MP learning method | AA | Spatial perturbation | Temporal perturbation | TD | St | NE | C
Trajectory-based | Ijspeert et al., 2002; Ijspeert et al., 2013 | DMP | L | Final position | No | Yes | Yes | 1 | No
Trajectory-based | Pastor et al., 2009; Kober et al., 2010 | DMP | L | Final position and velocity | No | Yes | Yes | 1 | No
Trajectory-based | Muelling et al., 2013 | MoMP | L | Final position and velocity | No | Yes | Yes | 1 | Yes
Trajectory-based | Paraschos et al., 2013; Paraschos et al., 2017 | ProMP | L | All positions and velocities | Yes | No | Yes | M | Yes
Trajectory-based | Perrin and Schlehuber-Caissier, 2016 | Diffeomorphism | L | Final position | Yes | No | Yes | 1 | No
State-based | Calinon et al., 2007 | GMR-DS | L | No | Yes | No | No | M | –
State-based | Calinon et al., 2010; Calinon et al., 2011 | HMM + GMR | L | Final position | Yes | No | No | M | –
State-based | Khansari-Zadeh and Billard, 2011; Khansari-Zadeh and Billard, 2014; Kim et al., 2014 | SEDS | L | Final position | Yes | No | Yes | M | –
State-based | Calinon, 2016 | TP-GMM | L | All positions and orientations | Yes | No | Yes | M | –
State-based | This work | IS2L | HL | All positions | Yes | No | No | M | –
Works are categorized based on the classification available in Paraschos et al. (2017) (see column Type). They are described based on the following features: MP, Motion primitive; AA, Action adaptation; SP, Spatial perturbation; TP, Temporal perturbation; TD, Time-dependency; St, Stable; NE, Number of examples; C, Combination of MPs.
Paraschos categorizes motion primitives as "trajectory-based representations, which typically use time as the driving force of the movement requiring simple controllers, and state-based representations, which do not require the knowledge of a time step but often need to use more complex, non-linear policies" (Paraschos et al., 2017, p. 2). On the one hand, trajectory-based primitives are based on dynamical systems representing motion as time-independent functions. The principal disadvantage of dynamical systems is that they do not ensure the stability of the system. In order to address this issue, an external time-based stabilizer is used to generate stable motion (e.g., DMPs, Ijspeert et al., 2002, 2013; Pastor et al., 2009; Muelling et al., 2013). Therefore, actions are always executed following a specific time frame.
A more recent approach called ProMP (Paraschos et al., 2013, 2017) avoids the previous constraint by generating time-independent stable primitives.
On the other hand, state-based motion primitives are time-independent by definition; their states use continuous values and are represented by Gaussian functions. For a specific position of the robot's end-effector, weights are computed using Hidden Markov Models (HMM) to identify the next state based on the current state. Once the state is available, the motion is computed using Gaussian Mixture Regression (GMR). The initial works (Calinon et al., 2007, 2010, 2011) do not generate stable actions, but this has been addressed in later studies by a method called Stable Estimator of Dynamical Systems (SEDS) (Khansari-Zadeh and Billard, 2011, 2014; Kim et al., 2014), which ensures stability through the computation of Lyapunov candidates (Slotine and Li, 1991). However, SEDS can only handle spatial perturbations at the final position of the demonstrated trajectories. This feature is improved in Calinon (2016), which handles spatial perturbations at any position of the trajectory through the generation of a set of waypoints around the trajectory with different reference frames.
As aforementioned, works in the literature focus either on selecting the next action to perform a task based on high-level states using predefined or constrained actions, or on reproducing, with local adaptation, the trajectories of a complex action using low-level object states. Therefore, the skills built by IS2L are unique in inferring actions with local adaptation based simultaneously on both types of states. | [
"17416157",
"14599314",
"9377276",
"23148415"
] | [
{
"pmid": "17416157",
"title": "On learning, representing, and generalizing a task in a humanoid robot.",
"abstract": "We present a programming-by-demonstration framework for generically extracting the relevant features of a given task and for addressing the problem of generalizing the acquired knowledge to different contexts. We validate the architecture through a series of experiments, in which a human demonstrator teaches a humanoid robot simple manipulatory tasks. A probability-based estimation of the relevance is suggested by first projecting the motion data onto a generic latent space using principal component analysis. The resulting signals are encoded using a mixture of Gaussian/Bernoulli distributions (Gaussian mixture model/Bernoulli mixture model). This provides a measure of the spatio-temporal correlations across the different modalities collected from the robot, which can be used to determine a metric of the imitation performance. The trajectories are then generalized using Gaussian mixture regression. Finally, we analytically compute the trajectory which optimizes the imitation metric and use this to generalize the skill to different contexts."
},
{
"pmid": "14599314",
"title": "Grounding vision through experimental manipulation.",
"abstract": "Experimentation is crucial to human progress at all scales, from society as a whole to a young infant in its cradle. It allows us to elicit learning episodes suited to our own needs and limitations. This paper develops active strategies for a robot to acquire visual experience through simple experimental manipulation. The experiments are oriented towards determining what parts of the environment are physically coherent--that is, which parts will move together, and which are more or less independent. We argue that following causal chains of events out from the robot's body into the environment allows for a very natural developmental progression of visual competence, and relate this idea to results in neuroscience."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "23148415",
"title": "Dynamical movement primitives: learning attractor models for motor behaviors.",
"abstract": "Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform those into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term. Both point attractors and limit cycle attractors of almost arbitrary complexity can be generated. We explain the design principle of our approach and evaluate its properties in several example applications in motor control and robotics."
}
] |
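The last reference above describes dynamical movement primitives (DMPs), and the related-work text of this entry contrasts such trajectory-based primitives with state-based ones. As a worked illustration of the underlying idea, the sketch below integrates a standard one-dimensional discrete DMP (a critically damped point attractor plus a phase-gated forcing term); the gain values, basis-function layout, and random forcing weights are illustrative assumptions, not parameters from the cited paper.

# Minimal sketch of a 1-D discrete dynamical movement primitive (DMP),
# following the standard formulation popularized by Ijspeert and colleagues.
import numpy as np

def gaussian_basis(x, centers, widths):
    """Radial basis activations of the canonical phase variable x."""
    return np.exp(-widths * (x - centers) ** 2)

def rollout(w, y0=0.0, g=1.0, tau=1.0, dt=0.001,
            alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
    n_basis = len(w)
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # spread in phase space
    widths = n_basis ** 1.5 / centers                            # heuristic widths (assumption)
    x, y, z = 1.0, y0, 0.0                                       # phase, position, scaled velocity
    trajectory = []
    for _ in range(int(tau / dt)):
        psi = gaussian_basis(x, centers, widths)
        forcing = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)  # phase-gated forcing term
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + forcing)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                            # canonical system decays to 0
        trajectory.append(y)
    return np.array(trajectory)

# Zero weights give a plain point attractor; non-zero weights shape the path to the goal.
plain = rollout(np.zeros(20))
shaped = rollout(np.random.default_rng(0).normal(scale=50.0, size=20))
print(plain[-1], shaped[-1])  # both end near the goal g = 1.0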
Frontiers in Oncology | 31403032 | PMC6669791 | 10.3389/fonc.2019.00677 | A Full-Image Deep Segmenter for CT Images in Breast Cancer Radiotherapy Treatment | Radiation therapy is one of the key cancer treatment options. To avoid adverse effects in the healthy tissue, the treatment plan needs to be based on accurate anatomical models of the patient. In this work, an automatic segmentation solution for both female breasts and the heart is constructed using deep learning. Our newly developed deep neural networks perform better than the current state-of-the-art neural networks while improving inference speed by an order of magnitude. While manual segmentation by clinicians takes around 20 min, our automatic segmentation takes less than a second, with an average of 3 min manual correction time. Thus, our proposed solution can have a huge impact on the workload of clinical staff and on the standardization of care. | 1.1. Related Work
Atlas methods are successful in image segmentation of the brain (7, 8) and the breast (5). In atlas methods, the patient is registered to an atlas patient and the segmentations of the atlas patient are transformed to the patient coordinate space. One hindrance here is that for each segmentation a patient needs to be chosen from a patient library whose anatomy is similar. In addition, 5 to 10% of the volume still needs editing (5).
Another approach is the combination of locally adaptive filters with heuristic rules (6). This has been done using a tunable Gabor filter, yielding a robust and accurate segmentation on axial MR slices. One advantage of their method over deep learning is that it is independent of any training data. With deep learning, the network is generally only applicable to cases that are similar to the data it has seen, so robustness cannot be ensured. In addition, machine learning approaches are always in need of a large number of training cases, whereas tunable filters are designed independently of the training cases. One advantage of our approach, however, is that the model architecture can be easily applied to different organs and different anatomical sites, provided the training data are available. Thus, the tedious work of defining heuristic rules and exploring filter options for each organ can be omitted.
Deep neural networks for segmentation typically use a structure similar to auto-encoders, in the sense that a dimension reduction is followed by a reconstruction network. Differences exist, however, in whether the spatial information is completely omitted, such as in the anatomically constrained neural network (ACNN) (9), or the spatial resolution is only reduced, as for example in the U-Net (10). The latter has been used for segmentation of CT images of pancreatic tumor (11) and liver (12). However, those approaches either use a 2D U-Net or are in need of another neural network on top of the U-Net. The skip connections from the downward path to the reconstruction path of the U-Net are an important improvement over convolutional neural networks (CNNs), as they help preserve more detailed spatial information for the reconstruction. Additionally, Drozdal et al. (13) propose the use of short skip connections, which improve the segmentation quality.
In a recent approach, the proposed network uses the shape of a U-Net but includes residual blocks both in the downward path and the reconstruction part (14). Additionally, a fully connected layer is constructed parallel to the lowest resolution level.
The advantage is that through this approach the benefits of Ronneberger's U-Net and Oktay's ACNN are combined. One downside, however, is that, due to the fully connected layer, the input size cannot be adapted during inference to the size of the CT image. Additionally, the inference is performed slice-wise. Even though this allows processing the image in full resolution, it deteriorates the inference speed compared to full-image processing. However, this particular model has a large capacity and can handle 21 organs with one inference.
The goal of this work is to improve the inference speed of a deep neural network for the segmentation of the organs needed for radiotherapy treatment planning, while maintaining state-of-the-art segmentation quality. We focus on the ipsi- and contra-lateral breasts and the heart. The approach here is to replace the patch-wise or slice-wise processing by a full-image processing approach. The proposed network structure is a combination of the U-Net and the ResNet. | [
"30322819",
"11483349",
"16165237",
"19215827",
"18804333",
"25912987",
"11832223",
"19857578",
"28961105",
"29891089"
] | [
{
"pmid": "30322819",
"title": "Contouring workload in adjuvant breast cancer radiotherapy.",
"abstract": "PURPOSE\nTo measure the impact of contouring on worktime in the adjuvant radiation treatment of breast cancer, and to identify factors that might affect the measurements.\n\n\nMATERIAL AND METHODS\nThe dates and times of contouring clinical target volumes and organs at risk were recorded by a senior and by two junior radiation oncologists. Outcome measurements were contour times and the time from start to approval. The factors evaluated were patient age, type of surgery, radiation targets and setup, operator, planning station, part of the day and day of the week on which the contouring started. The Welch test was used to comparatively assess the measurements.\n\n\nRESULTS\nTwo hundred and three cases were included in the analysis. The mean contour time per patient was 34minutes for a mean of 4.72 structures, with a mean of 7.1minutes per structure. The clinical target volume and organs at risk times did not differ significantly. The mean time from start to approval per patient was 29.4hours. Factors significantly associated with longer contour times were breast-conserving surgery (P=0.026), prone setup (P=0.002), junior operator (P<0.0001), Pinnacle planning station (P=0.026), contouring start in the morning (P=0.001), and contouring start by the end of the week (P<0.0001). Factors significantly associated with time from start to approval were age (P=0.038), junior operator (P<0.0001), planning station (P=0.016), and contouring start by the end of the week (P=0.004).\n\n\nCONCLUSION\nContouring is a time-consuming process. Each delineated structure influences worktime, and many factors may be targeted for optimization of the workflow. These preliminary data will serve as basis for future prospective studies to determine how to establish a cost-effective solution."
},
{
"pmid": "11483349",
"title": "Variability in target volume delineation on CT scans of the breast.",
"abstract": "PURPOSE\nTo determine the intra- and interobserver variation in delineation of the target volume of breast tumors on computed tomography (CT) scans in order to perform conformal radiotherapy.\n\n\nMATERIALS AND METHODS\nThe clinical target volume (CTV) of the breast was delineated in CT slices by four radiation oncologists on our clinically used delineation system. The palpable glandular breast tissue was marked with a lead wire on 6 patients before CT scanning, whereas 4 patients were scanned without a lead wire. The CTV was drawn by each observer on three separate occasions. Planning target volumes (PTVs) were constructed by expanding the CTV by 7 mm in each direction, except toward the skin. The deviation in the PTV extent from the average extent was quantified in each orthogonal direction for each patient to find a possible directional dependence in the observer variations. In addition, the standard deviation of the intra- and interobserver variation in the PTV volume was quantified. For each patient, the common volumes delineated by all observers and the smallest volume encompassing all PTVs were also calculated.\n\n\nRESULTS\nThe patient-averaged deviations in PTV extent were larger in the posterior (42 mm), cranial (28 mm), and medial (24 mm) directions than in the anterior (6 mm), caudal (15 mm), and lateral (8 mm) directions. The mean intraobserver variation in volume percentage (5.5%, 1 SD) was much smaller than the interobserver variation (17.5%, 1 SD). The average ratio between the common and encompassing volume for the four observers separately was 0.82, 0.74, 0.82, and 0.80. A much lower combined average ratio of 0.43 was found because of the large interobserver variations. For the observer who placed the lead wire, the intraobserver variation in volume was decreased by a factor of 4 on scans made with a lead wire in comparison to scans made without a lead wire. For the other observers, no improvement was seen. Based on these results, an improved delineation protocol was designed.\n\n\nCONCLUSIONS\nIntra- and especially interobserver variation in the delineation of breast target volume on CT scans can be rather large. A detailed delineation protocol making use of CT scans with lead wires placed on the skin around the palpable breast by the delineating observer reduces the intraobserver variation. To reduce the interobserver variation, better imaging techniques and pathology studies relating glandular breast tissue to imaging may be needed to provide more information on the extent of the clinical target volume."
},
{
"pmid": "16165237",
"title": "Interobserver variability of clinical target volume delineation of glandular breast tissue and of boost volume in tangential breast irradiation.",
"abstract": "BACKGROUND AND PURPOSE\nTo determine the interobserver variability of clinical target volume delineation of glandular breast tissue and of boost volume in tangential breast irradiation.\n\n\nPATIENTS AND METHODS\nEighteen consecutive patients with left sided breast cancer treated by breast conserving surgery agreed to participate in our study. Volumes of the glandular breast tissue (CTV breast) and of the boost (CTV boost) were delineated by five observers. We determined 'conformity indices' (CI) and the ratio between the volume of each CTV and the mean volume of all CTVs (CTV ratio). Subsequently we determined the most medial, lateral, anterior, posterior, cranial and caudal extensions both of CTV breast and CTV boost for all observers separately.\n\n\nRESULTS\nThe mean CI breast was 0.87. For one observer we noted the highest CTV ratio in 17 out of 18 cases. No association was noted between CI breast and menopausal status. The mean CI boost was 0.56. We did not find a relation between the presence or absence of clips and the CI boost. For another observer we noted the lowest CTV boost ratio in 10 out of 17 cases.\n\n\nCONCLUSIONS\nWe recommend that each institute should determine its interobserver variability with respect to CTV breast and CTV boost before implementing the delineation of target volumes by planning CT in daily practice."
},
{
"pmid": "19215827",
"title": "Variability of target and normal structure delineation for breast cancer radiotherapy: an RTOG Multi-Institutional and Multiobserver Study.",
"abstract": "PURPOSE\nTo quantify the multi-institutional and multiobserver variability of target and organ-at-risk (OAR) delineation for breast-cancer radiotherapy (RT) and its dosimetric impact as the first step of a Radiation Therapy Oncology Group effort to establish a breast cancer atlas.\n\n\nMETHODS AND MATERIALS\nNine radiation oncologists specializing in breast RT from eight institutions independently delineated targets (e.g., lumpectomy cavity, boost planning target volume, breast, supraclavicular, axillary and internal mammary nodes, chest wall) and OARs (e.g., heart, lung) on the same CT images of three representative breast cancer patients. Interobserver differences in structure delineation were quantified regarding volume, distance between centers of mass, percent overlap, and average surface distance. Mean, median, and standard deviation for these quantities were calculated for all possible combinations. To assess the impact of these variations on treatment planning, representative dosimetric plans based on observer-specific contours were generated.\n\n\nRESULTS\nVariability in contouring the targets and OARs between the institutions and observers was substantial. Structure overlaps were as low as 10%, and volume variations had standard deviations up to 60%. The large variability was related both to differences in opinion regarding target and OAR boundaries and approach to incorporation of setup uncertainty and dosimetric limitations in target delineation. These interobserver differences result in substantial variations in dosimetric planning for breast RT.\n\n\nCONCLUSIONS\nDifferences in target and OAR delineation for breast irradiation between institutions/observers appear to be clinically and dosimetrically significant. A systematic consensus is highly desirable, particularly in the era of intensity-modulated and image-guided RT."
},
{
"pmid": "18804333",
"title": "Automatic segmentation of whole breast using atlas approach and deformable image registration.",
"abstract": "PURPOSE\nTo compare interobserver variations in delineating the whole breast for treatment planning using two contouring methods.\n\n\nMETHODS AND MATERIALS\nAutosegmented contours were generated by a deformable image registration-based breast segmentation method (DEF-SEG) by mapping the whole breast clinical target volume (CTVwb) from a template case to a new patient case. Eight breast radiation oncologists modified the autosegmented contours as necessary to achieve a clinically appropriate CTVwb and then recontoured the same case from scratch for comparison. The times to complete each approach, as well as the interobserver variations, were analyzed. The template case was also mapped to 10 breast cancer patients with a body mass index of 19.1-35.9 kg/m(2). The three-dimensional surface-to-surface distances and volume overlapping analyses were computed to quantify contour variations.\n\n\nRESULTS\nThe median time to edit the DEF-SEG-generated CTVwb was 12.9 min (range, 3.4-35.9) compared with 18.6 min (range, 8.9-45.2) to contour the CTVwb from scratch (30% faster, p = 0.028). The mean surface-to-surface distance was noticeably reduced from 1.6 mm among the contours generated from scratch to 1.0 mm using the DEF-SEG method (p = 0.047). The deformed contours in 10 patients achieved 94% volume overlap before correction and required editing of 5% (range, 1-10%) of the contoured volume.\n\n\nCONCLUSION\nSignificant interobserver variations suggested a lack of consensus regarding the CTVwb, even among breast cancer specialists. Using the DEF-SEG method produced more consistent results and required less time. The DEF-SEG method can be successfully applied to patients with different body mass indexes."
},
{
"pmid": "25912987",
"title": "Automated breast-region segmentation in the axial breast MR images.",
"abstract": "PURPOSE\nThe purpose of this study was to develop a robust breast-region segmentation method independent from the visible contrast between the breast region and surrounding chest wall and skin.\n\n\nMATERIALS AND METHODS\nA fully-automated method for segmentation of the breast region in the axial MR images is presented relying on the edge map (EM) obtained by applying a tunable Gabor filter which sets its parameters according to the local MR image characteristics to detect non-visible transitions between different tissues having a similar MRI signal intensity. The method applies the shortest-path search technique by incorporating a novel cost function using the EM information within the border-search area obtained based on the border information from the adjacent slice. It is validated on 52 MRI scans covering the full American College of Radiology Breast Imaging-Reporting and Data System (BI-RADS) breast-density range.\n\n\nRESULTS\nThe obtained results indicate that the method is robust and applicable for the challenging cases where a part of the fibroglandular tissue is connected to the chest wall and/or skin with no visible contrast, i.e. no fat presence, between them compared to the literature methods proposed for the axial MR images. The overall agreement between automatically- and manually-obtained breast-region segmentations is 96.1% in terms of the Dice Similarity Coefficient, and for the breast-chest wall and breast-skin border delineations it is 1.9mm and 1.2mm, respectively, in terms of the Mean-Deviation Distance.\n\n\nCONCLUSION\nThe accuracy, robustness and applicability for the challenging cases of the proposed method show its potential to be incorporated into computer-aided analysis systems to support physicians in their decision making."
},
{
"pmid": "11832223",
"title": "Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain.",
"abstract": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease."
},
{
"pmid": "19857578",
"title": "Fast and robust multi-atlas segmentation of brain magnetic resonance images.",
"abstract": "We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity normalised images can be used instead of standard normalised mutual information in registration without compromising the accuracy but leading to threefold decrease in the computation time. We study and validate also different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity modelling based on segmentation using expectation maximisation (EM) and optimisation via graph cuts. The segmentation pipeline is evaluated with two data cohorts: IBSR data (N=18, six subcortial structures: thalamus, caudate, putamen, pallidum, hippocampus, amygdala) and ADNI data (N=60, hippocampus). The average similarity index between automatically and manually generated volumes was 0.849 (IBSR, six subcortical structures) and 0.880 (ADNI, hippocampus). The correlation coefficient for hippocampal volumes was 0.95 with the ADNI data. The computation time using a standard multicore PC computer was about 3-4 min. Our results compare favourably with other recently published results."
},
{
"pmid": "28961105",
"title": "Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation.",
"abstract": "Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques such as CNN-based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac data sets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies."
},
{
"pmid": "29891089",
"title": "Fully automatic and robust segmentation of the clinical target volume for radiotherapy of breast cancer using big data and deep learning.",
"abstract": "PURPOSE\nTo train and evaluate a very deep dilated residual network (DD-ResNet) for fast and consistent auto-segmentation of the clinical target volume (CTV) for breast cancer (BC) radiotherapy with big data.\n\n\nMETHODS\nDD-ResNet was an end-to-end model enabling fast training and testing. We used big data comprising 800 patients who underwent breast-conserving therapy for evaluation. The CTV were validated by experienced radiation oncologists. We performed a fivefold cross-validation to test the performance of the model. The segmentation accuracy was quantified by the Dice similarity coefficient (DSC) and the Hausdorff distance (HD). The performance of the proposed model was evaluated against two different deep learning models: deep dilated convolutional neural network (DDCNN) and deep deconvolutional neural network (DDNN).\n\n\nRESULTS\nMean DSC values of DD-ResNet (0.91 and 0.91) were higher than the other two networks (DDCNN: 0.85 and 0.85; DDNN: 0.88 and 0.87) for both right-sided and left-sided BC. It also has smaller mean HD values of 10.5 mm and 10.7 mm compared with DDCNN (15.1 mm and 15.6 mm) and DDNN (13.5 mm and 14.1 mm). Mean segmentation time was 4 s, 21 s and 15 s per patient with DDCNN, DDNN and DD-ResNet, respectively. The DD-ResNet was also superior with regard to results in the literature.\n\n\nCONCLUSIONS\nThe proposed method could segment the CTV accurately with acceptable time consumption. It was invariant to the body size and shape of patients and could improve the consistency of target delineation and streamline radiotherapy workflows."
}
] |
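The related-work passage of this entry describes segmentation networks that keep the U-Net encoder-decoder shape while using residual blocks, as in the DD-ResNet reference above. The PyTorch sketch below shows one way such pieces can be wired together for per-pixel organ labels; the depth, channel counts, two-level layout, and four output classes (e.g., background, two breasts, heart) are illustrative assumptions, not the architecture actually proposed in the paper.

# Minimal sketch of a residual U-Net-style segmenter (not the paper's exact network).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a (possibly projected) identity shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x) + self.skip(x))

class TinyResUNet(nn.Module):
    """Two-level U-Net with residual blocks; full-image input, per-pixel class logits out."""
    def __init__(self, in_ch=1, n_classes=4, base=16):
        super().__init__()
        self.enc1 = ResidualBlock(in_ch, base)
        self.enc2 = ResidualBlock(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = ResidualBlock(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ResidualBlock(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ResidualBlock(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                              # full resolution
        e2 = self.enc2(self.pool(e1))                  # 1/2 resolution
        b = self.bottleneck(self.pool(e2))             # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # long skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # long skip connection
        return self.head(d1)                           # per-pixel class logits

if __name__ == "__main__":
    logits = TinyResUNet()(torch.randn(1, 1, 128, 128))
    print(logits.shape)  # -> torch.Size([1, 4, 128, 128])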
Journal of Clinical Medicine | 31284687 | PMC6678612 | 10.3390/jcm8070986 | Artificial Intelligence-Based Classification of Multiple Gastrointestinal Diseases Using Endoscopy Videos for Clinical Diagnosis | Various techniques using artificial intelligence (AI) have resulted in a significant contribution to the field of medical image and video-based diagnoses, such as radiology, pathology, and endoscopy, including the classification of gastrointestinal (GI) diseases. Most previous studies on the classification of GI diseases use only spatial features, which demonstrate low performance in the classification of multiple GI diseases. Although there are a few previous studies using temporal features based on a three-dimensional convolutional neural network, only a specific part of the GI tract was involved, with a limited number of classes. To overcome these problems, we propose a comprehensive AI-based framework for the classification of multiple GI diseases by using endoscopic videos, which can simultaneously extract both spatial and temporal features to achieve better classification performance. Two different residual networks and a long short-term memory model are integrated in a cascaded mode to extract spatial and temporal features, respectively. Experiments were conducted on a combined dataset consisting of one of the largest endoscopic videos with 52,471 frames. The results demonstrate the effectiveness of the proposed classification framework for multi-GI diseases. The experimental results of the proposed model (97.057% area under the curve) demonstrate superior performance over the state-of-the-art methods and indicate its potential for clinical applications. | 2. Related Works
In recent years, the strength of deep learning-based algorithms has been utilized in the field of endoscopy, including capsule endoscopy (CE), esophagogastroduodenoscopy (EGD), and colonoscopy [6,7,8,9,10,11,12,13,14,15]. To assist physicians in the effective diagnosis of different GI lesions, several CNN-based CAD tools have been proposed in the literature. These CAD tools are capable of detecting and classifying even small lesions in the GI tract, which often remain imperceptible to the human visual system. Before the advent of deep learning methods, many previous studies focused on handcrafted feature-based methods, which mainly consider texture and color information.
Most of the previous studies have been carried out to perform the detection and classification of different types of GI polyps in the field of CE. Generally, these methods followed a common approach of feature extraction followed by classification to detect and classify GI polyps. In [19], Karargyris et al. proposed a geometric and texture feature-based method for the detection of small bowel polyps and ulcers in CE. Log Gabor filters and the SUSAN edge detector were used to preprocess the images and, finally, the geometric features were extracted to detect the polyp and ulcer region. Li et al. [20] utilized the advantages of a discrete wavelet transform and uniform local binary pattern (LBP) with a support vector machine (SVM) to classify the normal and abnormal tissues. In this feature extraction approach, wavelet transform combines the capability of multiresolution analysis and uniform LBP to provide robustness to illumination changes, which results in better performance.
Similarly, another texture features-based automatic tumor recognition framework was proposed in [6] for wireless CE images.
In this framework, a similar integrated approach was adopted, based on LBP and the discrete wavelet transform, to extract scale- and rotation-invariant texture features. Finally, the selected features were classified using an SVM. Yuan et al. [21] proposed an integrated polyp detection algorithm by combining the Bag of Features (BoF) method with a saliency map. In the first step, the BoF method characterizes local features by using scale-invariant feature transform (SIFT) feature vectors with k-means clustering. Saliency features were then obtained by generating a saliency map histogram. Finally, both BoF and saliency features were fed into an SVM to perform classification. Later, Yuan et al. [22] extended this approach with the addition of LBP, uniform LBP (ULBP), complete LBP (CLBP), and histogram of oriented gradients (HoG) features along with SIFT features to capture more discriminative texture information. Finally, these features were classified using SVM and Fisher's linear discriminant analysis (FLDA) classifiers, considering different combinations of local features. The combination of SIFT and CLBP features with an SVM classifier achieved the highest classification accuracy. Seguí et al. presented a deep CNN system for small intestine motility characterization [7]. This CNN-based method exploited a general representation of six different intestinal motility events by extracting deep features, which resulted in superior classification performance compared to other handcrafted feature-based methods. Another CNN-based CAD tool was presented in [15] to quantitatively analyze celiac disease in a fully automated manner using CE videos. The proposed method utilized a well-known CNN model (i.e., GoogLeNet) to distinguish between normal patients and abnormal patients (i.e., those diagnosed with celiac disease). The effective characterization of celiac disease thus enabled better diagnosis and treatment compared to the manual analysis of CE videos. In [12], a multistage deep CNN-based framework for hookworm (i.e., intestinal parasite) detection was proposed using CE images. Two different CNNs, an edge extraction network and a hookworm classification network, were unified to simultaneously characterize the visual and tubular patterns of hookworms.
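The handcrafted pipelines summarized above share a common shape: a texture descriptor (e.g., uniform LBP) followed by a classical classifier such as an SVM. Below is a minimal illustrative sketch of that shape using scikit-image and scikit-learn; the data, labels, and hyperparameters are placeholders, not the cited authors' code.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

P, R = 8, 1  # number of neighbors and radius for uniform LBP

def lbp_histogram(gray_image: np.ndarray) -> np.ndarray:
    """Uniform LBP histogram: a texture descriptor robust to monotonic illumination changes."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    # the 'uniform' mapping yields P + 2 distinct code values
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

def train_texture_svm(images, labels):
    """images: list of 2-D grayscale arrays; labels: e.g., 0 = normal, 1 = abnormal (placeholders)."""
    X = np.stack([lbp_histogram(img) for img in images])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                              stratify=labels, random_state=0)
    clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
    return clf
```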
A single-shot multibox detector (SSD) architecture was used to detect early and advanced stages of gastric cancer from EGD images. The proposed method demonstrated substantial detection capability, even for small lesions, compared to conventional methods. The results of this study illustrated its practical usability in clinical practice for better diagnosis and treatment. However, it had certain limitations, as only high-quality EGD images acquired with the same type of endoscope and endoscopic video system could be used. Generally, deep learning-based methods suffer from either over-fitting or under-fitting owing to the large number of network parameters and the limited amount of available training data. This problem degrades system performance in real-world scenarios. A similar problem also occurs in the domain of medical image analysis owing to the unavailability of sufficiently large training datasets. To address this issue, a transfer learning mechanism is often adopted in this domain. In the field of colonoscopy, Zhang et al. [10] used this approach for automatic detection and classification of colorectal polyps. A transfer learning approach was applied in which two different CNN models were first trained on the source domain (i.e., a nonmedical dataset) and then fine-tuned on the target domain (i.e., a medical dataset). Their method performed the polyp detection and classification tasks in two different stages. In the first stage, an image of interest (i.e., a polyp image) was selected using the CNN-based polyp detection model. In the second stage, another CNN model was used to categorize the detected polyp image as either a hyperplastic or an adenomatous colorectal polyp. The results of this study demonstrated that the CNN-based diagnoses achieved a higher accuracy and recall rate than endoscopist diagnoses. However, their method is not applicable to real-time colonoscopy image analysis owing to the use of multistage CNN models. Another study, by Byrne et al. [14], presented a single deep CNN-based real-time colorectal polyp classification framework using colonoscopy video images. In this study, a simple CNN model was trained to classify each input frame into one of four categories: hyperplastic polyp, adenomatous polyp, no polyp, or unsuitable. The end-to-end processing time of this CNN model was 50 ms per frame, making it applicable to the real-time classification of polyps. In another study [11], an offline and online three-dimensional (3D) deep CNN framework was proposed for automatic polyp detection. Two different 3D-CNNs, an offline 3D-CNN and an online 3D-CNN, were used together to learn a more general representation of features for effective polyp detection. In this framework, the offline 3D-CNN effectively reduced the number of false positives, whereas the online 3D-CNN was used to further improve polyp detection. The experimental results showed that the 3D fully convolutional network was capable of learning more representative spatiotemporal features from colonoscopy videos than handcrafted or two-dimensional (2D) CNN feature-based methods.
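The transfer-learning recipe described above (train on a large nonmedical source domain, then fine-tune on the smaller medical target domain) is commonly implemented by swapping the classification head of a pretrained backbone. The following is a minimal sketch in PyTorch/torchvision (a recent torchvision weights enum is assumed; the two-class head, learning rate, and initially frozen backbone are illustrative choices, not the cited configuration):

```python
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes: int, freeze_backbone: bool = True) -> nn.Module:
    """Load an ImageNet-pretrained ResNet and replace its classifier head so it can
    be fine-tuned on a (smaller) medical image dataset."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False  # keep source-domain features fixed at first
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head is trainable
    return model

model = build_finetune_model(num_classes=2)  # e.g., hyperplastic vs. adenomatous polyp
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch from the target (medical) domain."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```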
Endoscopy is a direct imaging modality that captures the internal structure of the human GI tract in the form of videos rather than still images. Therefore, it is possible to extract both spatial and temporal information from endoscopic data to enhance the diagnostic capability of deep CNN-based CAD tools. Most of the previous studies considered only spatial information for the classification and detection of different GI diseases, without considering temporal information. The loss of temporal information affects the overall performance of such CAD tools. In addition, the maximum number of classes handled in previous studies was limited to eight [9], covering only a few GI conditions, such as tumors or cancer. To address these issues, we considered 37 different categories in our proposed work, which included both normal and diseased cases related to different parts of the human GI tract. We propose a novel two-stage deep learning-based framework to enhance the classification performance for different GI diseases by considering both spatial and temporal information. Two models, a ResNet and an LSTM, were trained separately to extract the spatial and temporal features, respectively (a minimal sketch of this cascaded design is given after the reference entries below). In Table 1, the strengths and weaknesses of previous studies and our proposed method are summarized. | [
"26742998",
"30717268",
"30875745",
"30959798",
"29843416",
"22287246",
"27810622",
"29311619",
"29056541",
"28114049",
"29470172",
"29335825",
"29066576",
"28412572",
"21592915",
"9377276",
"19565683",
"17944619",
"18244442"
] | [
{
"pmid": "26742998",
"title": "Cancer statistics, 2016.",
"abstract": "Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data were collected by the National Cancer Institute (Surveillance, Epidemiology, and End Results [SEER] Program), the Centers for Disease Control and Prevention (National Program of Cancer Registries), and the North American Association of Central Cancer Registries. Mortality data were collected by the National Center for Health Statistics. In 2016, 1,685,210 new cancer cases and 595,690 cancer deaths are projected to occur in the United States. Overall cancer incidence trends (13 oldest SEER registries) are stable in women, but declining by 3.1% per year in men (from 2009-2012), much of which is because of recent rapid declines in prostate cancer diagnoses. The cancer death rate has dropped by 23% since 1991, translating to more than 1.7 million deaths averted through 2012. Despite this progress, death rates are increasing for cancers of the liver, pancreas, and uterine corpus, and cancer is now the leading cause of death in 21 states, primarily due to exceptionally large reductions in death from heart disease. Among children and adolescents (aged birth-19 years), brain cancer has surpassed leukemia as the leading cause of cancer death because of the dramatic therapeutic advances against leukemia. Accelerating progress against cancer requires both increased national investment in cancer research and the application of existing cancer control knowledge across all segments of the population."
},
{
"pmid": "30717268",
"title": "Artificial Intelligence vs. Natural Stupidity: Evaluating AI readiness for the Vietnamese Medical Information System.",
"abstract": "This review paper presents a framework to evaluate the artificial intelligence (AI) readiness for the healthcare sector in developing countries: a combination of adequate technical or technological expertise, financial sustainability, and socio-political commitment embedded in a healthy psycho-cultural context could bring about the smooth transitioning toward an AI-powered healthcare sector. Taking the Vietnamese healthcare sector as a case study, this paper attempts to clarify the negative and positive influencers. With only about 1500 publications about AI from 1998 to 2017 according to the latest Elsevier AI report, Vietnamese physicians are still capable of applying the state-of-the-art AI techniques in their research. However, a deeper look at the funding sources suggests a lack of socio-political commitment, hence the financial sustainability, to advance the field. The AI readiness in Vietnam's healthcare also suffers from the unprepared information infrastructure-using text mining for the official annual reports from 2012 to 2016 of the Ministry of Health, the paper found that the frequency of the word \"database\" actually decreases from 2012 to 2016, and the word has a high probability to accompany words such as \"lacking\", \"standardizing\", \"inefficient\", and \"inaccurate.\" Finally, manifestations of psycho-cultural elements such as the public's mistaken views on AI or the non-transparent, inflexible and redundant of Vietnamese organizational structures can impede the transition to an AI-powered healthcare sector."
},
{
"pmid": "30875745",
"title": "Global Evolution of Research in Artificial Intelligence in Health and Medicine: A Bibliometric Study.",
"abstract": "The increasing application of Artificial Intelligence (AI) in health and medicine has attracted a great deal of research interest in recent decades. This study aims to provide a global and historical picture of research concerning AI in health and medicine. A total of 27,451 papers that were published between 1977 and 2018 (84.6% were dated 2008⁻2018) were retrieved from the Web of Science platform. The descriptive analysis examined the publication volume, and authors and countries collaboration. A global network of authors' keywords and content analysis of related scientific literature highlighted major techniques, including Robotic, Machine learning, Artificial neural network, Artificial intelligence, Natural language process, and their most frequent applications in Clinical Prediction and Treatment. The number of cancer-related publications was the highest, followed by Heart Diseases and Stroke, Vision impairment, Alzheimer's, and Depression. Moreover, the shortage in the research of AI application to some high burden diseases suggests future directions in AI research. This study offers a first and comprehensive picture of the global efforts directed towards this increasingly important and prolific field of research and suggests the development of global and national protocols and regulations on the justification and adaptation of medical AI products."
},
{
"pmid": "30959798",
"title": "Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence.",
"abstract": "Medical-image-based diagnosis is a tedious task' and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in the previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. Recently, a medical doctor usually refers to various types of imaging modalities all together such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound, etc of various organs in order for the diagnosis and treatment of specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for the CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance for a massive collection of multimodal databases. Although there are a few previous studies on the use of deep features for classification, the number of classes is very small. To solve this problem, we propose the classification-based retrieval system of the multimodal medical images from various types of imaging modalities by using the technique of artificial intelligence, named as an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1.score by our method are respectively 81.51% and 82.42% which are higher than those by the previous method of CBMIR (the accuracy of 69.71% and F1.score of 69.63%)."
},
{
"pmid": "29843416",
"title": "Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.",
"abstract": "Population aging has become a worldwide phenomenon, which causes many serious problems. The medical issues related to degenerative brain disease have gradually become a concern. Magnetic Resonance Imaging is one of the most advanced methods for medical imaging and is especially suitable for brain scans. From the literature, although the automatic segmentation method is less laborious and time-consuming, it is restricted in several specific types of images. In addition, hybrid techniques segmentation improves the shortcomings of the single segmentation method. Therefore, this study proposed a hybrid segmentation combined with rough set classifier and wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image process method to enhance accuracy of brain disease classification. In the first stage, this study used the proposed hybrid segmentation algorithms to segment the brain ROI (region of interest). In the second stage, wavelet packet was used to conduct the image decomposition and calculate the feature values. In the final stage, the rough set classifier was utilized to identify the degenerative brain disease. In verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and compare with the TV-seg (total variation segmentation) algorithm, Discrete Cosine Transform, and the listing classifiers. Overall, the results indicated that the proposed method outperforms the listing methods."
},
{
"pmid": "22287246",
"title": "Tumor recognition in wireless capsule endoscopy images using textural features and SVM-based feature selection.",
"abstract": "Tumor in digestive tract is a common disease and wireless capsule endoscopy (WCE) is a relatively new technology to examine diseases for digestive tract especially for small intestine. This paper addresses the problem of automatic recognition of tumor for WCE images. Candidate color texture feature that integrates uniform local binary pattern and wavelet is proposed to characterize WCE images. The proposed features are invariant to illumination change and describe multiresolution characteristics of WCE images. Two feature selection approaches based on support vector machine, sequential forward floating selection and recursive feature elimination, are further employed to refine the proposed features for improving the detection accuracy. Extensive experiments validate that the proposed computer-aided diagnosis system achieves a promising tumor recognition accuracy of 92.4% in WCE images on our collected data."
},
{
"pmid": "27810622",
"title": "Generic feature learning for wireless capsule endoscopy analysis.",
"abstract": "The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer aided decision (CAD) systems to help physicians with video screening and, finally, with the diagnosis. Most CAD systems used in capsule endoscopy share a common system design, but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from the scratch. This makes the design of new CAD systems very time consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, it reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase)."
},
{
"pmid": "29311619",
"title": "In situ immune response and mechanisms of cell damage in central nervous system of fatal cases microcephaly by Zika virus.",
"abstract": "Zika virus (ZIKV) has recently caused a pandemic disease, and many cases of ZIKV infection in pregnant women resulted in abortion, stillbirth, deaths and congenital defects including microcephaly, which now has been proposed as ZIKV congenital syndrome. This study aimed to investigate the in situ immune response profile and mechanisms of neuronal cell damage in fatal Zika microcephaly cases. Brain tissue samples were collected from 15 cases, including 10 microcephalic ZIKV-positive neonates with fatal outcome and five neonatal control flavivirus-negative neonates that died due to other causes, but with preserved central nervous system (CNS) architecture. In microcephaly cases, the histopathological features of the tissue samples were characterized in three CNS areas (meninges, perivascular space, and parenchyma). The changes found were mainly calcification, necrosis, neuronophagy, gliosis, microglial nodules, and inflammatory infiltration of mononuclear cells. The in situ immune response against ZIKV in the CNS of newborns is complex. Despite the predominant expression of Th2 cytokines, other cytokines such as Th1, Th17, Treg, Th9, and Th22 are involved to a lesser extent, but are still likely to participate in the immunopathogenic mechanisms of neural disease in fatal cases of microcephaly caused by ZIKV."
},
{
"pmid": "29056541",
"title": "Application of Convolutional Neural Networks in the Diagnosis of Helicobacter pylori Infection Based on Endoscopic Images.",
"abstract": "BACKGROUND AND AIMS\nThe role of artificial intelligence in the diagnosis of Helicobacter pylori gastritis based on endoscopic images has not been evaluated. We constructed a convolutional neural network (CNN), and evaluated its ability to diagnose H. pylori infection.\n\n\nMETHODS\nA 22-layer, deep CNN was pre-trained and fine-tuned on a dataset of 32,208 images either positive or negative for H. pylori (first CNN). Another CNN was trained using images classified according to 8 anatomical locations (secondary CNN). A separate test data set (11,481 images from 397 patients) was evaluated by the CNN, and 23 endoscopists, independently.\n\n\nRESULTS\nThe sensitivity, specificity, accuracy, and diagnostic time were 81.9%, 83.4%, 83.1%, and 198s, respectively, for the first CNN, and 88.9%, 87.4%, 87.7%, and 194s, respectively, for the secondary CNN. These values for the 23 endoscopists were 79.0%, 83.2%, 82.4%, and 230±65min (85.2%, 89.3%, 88.6%, and 253±92min by 6 board-certified endoscopists), respectively. The secondary CNN had a significantly higher accuracy than endoscopists (by 5.3%; 95% CI, 0.3-10.2).\n\n\nCONCLUSION\nH. pylori gastritis could be diagnosed based on endoscopic images using CNN with higher accuracy and in a considerably shorter time compared to manual diagnosis by endoscopists."
},
{
"pmid": "28114049",
"title": "Integrating Online and Offline Three-Dimensional Deep Learning for Automated Polyp Detection in Colonoscopy Videos.",
"abstract": "Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way for colorectal cancer prevention and diagnosis. Traditional manual screening is time consuming, operator dependent, and error prone; hence, automated detection approach is highly demanded in clinical practice. However, automated polyp detection is very challenging due to high intraclass variations in polyp size, color, shape, and texture, and low interclass variations between polyps and hard mimics. In this paper, we propose a novel offline and online three-dimensional (3-D) deep learning integration framework by leveraging the 3-D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with the previous methods employing hand-crafted features or 2-D convolutional neural network, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos, and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve the detection performance. Extensive experiments on the dataset of MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method when compared with other competitors."
},
{
"pmid": "29470172",
"title": "Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning.",
"abstract": "As one of the most common human helminths, hookworm is a leading cause of maternal and child morbidity, which seriously threatens human health. Recently, wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection. Unfortunately, it remains a challenging task. In recent years, deep convolutional neural network (CNN) has demonstrated impressive performance in various image and video analysis tasks. In this paper, a novel deep hookworm detection framework is proposed for WCE images, which simultaneously models visual appearances and tubular patterns of hookworms. This is the first deep learning framework specifically designed for hookworm detection in WCE images. Two CNN networks, namely edge extraction network and hookworm classification network, are seamlessly integrated in the proposed framework, which avoid the edge feature caching and speed up the classification. Two edge pooling layers are introduced to integrate the tubular regions induced from edge extraction network and the feature maps from hookworm classification network, leading to enhanced feature maps emphasizing the tubular regions. Experiments have been conducted on one of the largest WCE datasets with WCE images, which demonstrate the effectiveness of the proposed hookworm detection framework. It significantly outperforms the state-of-the-art approaches. The high sensitivity and accuracy of the proposed method in detecting hookworms shows its potential for clinical application."
},
{
"pmid": "29335825",
"title": "Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images.",
"abstract": "BACKGROUND\nImage recognition using artificial intelligence with deep learning through convolutional neural networks (CNNs) has dramatically improved and been increasingly applied to medical fields for diagnostic imaging. We developed a CNN that can automatically detect gastric cancer in endoscopic images.\n\n\nMETHODS\nA CNN-based diagnostic system was constructed based on Single Shot MultiBox Detector architecture and trained using 13,584 endoscopic images of gastric cancer. To evaluate the diagnostic accuracy, an independent test set of 2296 stomach images collected from 69 consecutive patients with 77 gastric cancer lesions was applied to the constructed CNN.\n\n\nRESULTS\nThe CNN required 47 s to analyze 2296 test images. The CNN correctly diagnosed 71 of 77 gastric cancer lesions with an overall sensitivity of 92.2%, and 161 non-cancerous lesions were detected as gastric cancer, resulting in a positive predictive value of 30.6%. Seventy of the 71 lesions (98.6%) with a diameter of 6 mm or more as well as all invasive cancers were correctly detected. All missed lesions were superficially depressed and differentiated-type intramucosal cancers that were difficult to distinguish from gastritis even for experienced endoscopists. Nearly half of the false-positive lesions were gastritis with changes in color tone or an irregular mucosal surface.\n\n\nCONCLUSION\nThe constructed CNN system for detecting gastric cancer could process numerous stored endoscopic images in a very short time with a clinically relevant diagnostic ability. It may be well applicable to daily clinical practice to reduce the burden of endoscopists."
},
{
"pmid": "29066576",
"title": "Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model.",
"abstract": "BACKGROUND\nIn general, academic but not community endoscopists have demonstrated adequate endoscopic differentiation accuracy to make the 'resect and discard' paradigm for diminutive colorectal polyps workable. Computer analysis of video could potentially eliminate the obstacle of interobserver variability in endoscopic polyp interpretation and enable widespread acceptance of 'resect and discard'.\n\n\nSTUDY DESIGN AND METHODS\nWe developed an artificial intelligence (AI) model for real-time assessment of endoscopic video images of colorectal polyps. A deep convolutional neural network model was used. Only narrow band imaging video frames were used, split equally between relevant multiclasses. Unaltered videos from routine exams not specifically designed or adapted for AI classification were used to train and validate the model. The model was tested on a separate series of 125 videos of consecutively encountered diminutive polyps that were proven to be adenomas or hyperplastic polyps.\n\n\nRESULTS\nThe AI model works with a confidence mechanism and did not generate sufficient confidence to predict the histology of 19 polyps in the test set, representing 15% of the polyps. For the remaining 106 diminutive polyps, the accuracy of the model was 94% (95% CI 86% to 97%), the sensitivity for identification of adenomas was 98% (95% CI 92% to 100%), specificity was 83% (95% CI 67% to 93%), negative predictive value 97% and positive predictive value 90%.\n\n\nCONCLUSIONS\nAn AI model trained on endoscopic video can differentiate diminutive adenomas from hyperplastic polyps with high accuracy. Additional study of this programme in a live patient clinical trial setting to address resect and discard is planned."
},
{
"pmid": "28412572",
"title": "Quantitative analysis of patients with celiac disease by video capsule endoscopy: A deep learning method.",
"abstract": "BACKGROUND\nCeliac disease is one of the most common diseases in the world. Capsule endoscopy is an alternative way to visualize the entire small intestine without invasiveness to the patient. It is useful to characterize celiac disease, but hours are need to manually analyze the retrospective data of a single patient. Computer-aided quantitative analysis by a deep learning method helps in alleviating the workload during analysis of the retrospective videos.\n\n\nMETHOD\nCapsule endoscopy clips from 6 celiac disease patients and 5 controls were preprocessed for training. The frames with a large field of opaque extraluminal fluid or air bubbles were removed automatically by using a pre-selection algorithm. Then the frames were cropped and the intensity was corrected prior to frame rotation in the proposed new method. The GoogLeNet is trained with these frames. Then, the clips of capsule endoscopy from 5 additional celiac disease patients and 5 additional control patients are used for testing. The trained GoogLeNet was able to distinguish the frames from capsule endoscopy clips of celiac disease patients vs controls. Quantitative measurement with evaluation of the confidence was developed to assess the severity level of pathology in the subjects.\n\n\nRESULTS\nRelying on the evaluation confidence, the GoogLeNet achieved 100% sensitivity and specificity for the testing set. The t-test confirmed the evaluation confidence is significant to distinguish celiac disease patients from controls. Furthermore, it is found that the evaluation confidence may also relate to the severity level of small bowel mucosal lesions.\n\n\nCONCLUSIONS\nA deep convolutional neural network was established for quantitative measurement of the existence and degree of pathology throughout the small intestine, which may improve computer-aided clinical techniques to assess mucosal atrophy and other etiologies in real-time with videocapsule endoscopy."
},
{
"pmid": "21592915",
"title": "Detection of small bowel polyps and ulcers in wireless capsule endoscopy videos.",
"abstract": "Over the last decade, wireless capsule endoscopy (WCE) technology has become a very useful tool for diagnosing diseases within the human digestive tract. Physicians using WCE can examine the digestive tract in a minimally invasive way searching for pathological abnormalities such as bleeding, polyps, ulcers, and Crohn's disease. To improve effectiveness of WCE, researchers have developed software methods to automatically detect these diseases at a high rate of success. This paper proposes a novel synergistic methodology for automatically discovering polyps (protrusions) and perforated ulcers in WCE video frames. Finally, results of the methodology are given and statistical comparisons are also presented relevant to other works."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "19565683",
"title": "A power primer.",
"abstract": "One possible reason for the continued neglect of statistical power analysis in research in the behavioral sciences is the inaccessibility of or difficulty with the standard material. A convenient, although not comprehensive, presentation of required sample sizes is provided here. Effect-size indexes and conventional values for these are given for operationally defined small, medium, and large effects. The sample sizes necessary for .80 power to detect effects at these levels are tabled for eight standard statistical tests: (a) the difference between independent means, (b) the significance of a product-moment correlation, (c) the difference between independent rs, (d) the sign test, (e) the difference between independent proportions, (f) chi-square tests for goodness of fit and contingency tables, (g) one-way analysis of variance, and (h) the significance of a multiple or multiple partial correlation."
},
{
"pmid": "17944619",
"title": "Effect size, confidence interval and statistical significance: a practical guide for biologists.",
"abstract": "Null hypothesis significance testing (NHST) is the dominant statistical approach in biology, although it has many, frequently unappreciated, problems. Most importantly, NHST does not provide us with two crucial pieces of information: (1) the magnitude of an effect of interest, and (2) the precision of the estimate of the magnitude of that effect. All biologists should be ultimately interested in biological importance, which may be assessed using the magnitude of an effect, but not its statistical significance. Therefore, we advocate presentation of measures of the magnitude of effects (i.e. effect size statistics) and their confidence intervals (CIs) in all biological journals. Combined use of an effect size and its CIs enables one to assess the relationships within data more effectively than the use of p values, regardless of statistical significance. In addition, routine presentation of effect sizes will encourage researchers to view their results in the context of previous research and facilitate the incorporation of results into future meta-analysis, which has been increasingly used as the standard method of quantitative review in biology. In this article, we extensively discuss two dimensionless (and thus standardised) classes of effect size statistics: d statistics (standardised mean difference) and r statistics (correlation coefficient), because these can be calculated from almost all study designs and also because their calculations are essential for meta-analysis. However, our focus on these standardised effect size statistics does not mean unstandardised effect size statistics (e.g. mean difference and regression coefficient) are less important. We provide potential solutions for four main technical problems researchers may encounter when calculating effect size and CIs: (1) when covariates exist, (2) when bias in estimating effect size is possible, (3) when data have non-normal error structure and/or variances, and (4) when data are non-independent. Although interpretations of effect sizes are often difficult, we provide some pointers to help researchers. This paper serves both as a beginner's instruction manual and a stimulus for changing statistical practice for the better in the biological sciences."
},
{
"pmid": "18244442",
"title": "A comparison of methods for multiclass support vector machines.",
"abstract": "Support vector machines (SVMs) were originally designed for binary classification. How to effectively extend it for multiclass classification is still an ongoing research issue. Several methods have been proposed where typically we construct a multiclass classifier by combining several binary classifiers. Some authors also proposed methods that consider all classes at once. As it is computationally more expensive to solve multiclass problems, comparisons of these methods using large-scale problems have not been seriously conducted. Especially for methods solving multiclass SVM in one step, a much larger optimization problem is required so up to now experiments are limited to small data sets. In this paper we give decomposition implementations for two such \"all-together\" methods. We then compare their performance with three methods based on binary classifications: \"one-against-all,\" \"one-against-one,\" and directed acyclic graph SVM (DAGSVM). Our experiments indicate that the \"one-against-one\" and DAG methods are more suitable for practical use than the other methods. Results also show that for large problems methods by considering all data at once in general need fewer support vectors."
}
] |
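As noted in the related-work discussion of the entry above, the proposed cascade extracts per-frame spatial features with a ResNet and temporal features with an LSTM. Below is a minimal PyTorch sketch of such a cascade; the 37-class output matches the description, while the backbone choice, hidden size, and clip length are illustrative assumptions rather than the published configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTMClassifier(nn.Module):
    """Cascade: a pretrained ResNet encodes each endoscopic frame into a spatial
    feature vector, and an LSTM aggregates the per-frame features over time."""

    def __init__(self, num_classes: int = 37, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.feature_dim = backbone.fc.in_features      # 512 for ResNet-18
        backbone.fc = nn.Identity()                      # keep pooled features only
        self.backbone = backbone
        self.lstm = nn.LSTM(self.feature_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        frames = clips.reshape(b * t, c, h, w)
        feats = self.backbone(frames).reshape(b, t, self.feature_dim)  # spatial features
        _, (h_n, _) = self.lstm(feats)                                  # temporal features
        return self.head(h_n[-1])                                       # class logits

# Example: a batch of 2 clips, 16 frames each, 224x224 RGB
model = CNNLSTMClassifier()
logits = model(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 37])
```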
JMIR Public Health and Surveillance | 31165711 | PMC6682293 | 10.2196/11036 | Identifying Key Topics Bearing Negative Sentiment on Twitter: Insights Concerning the 2015-2016 Zika Epidemic | Background: To understand the public sentiment regarding the Zika virus, social media can be leveraged to understand how positive, negative, and neutral sentiments are expressed in society. Specifically, understanding the characteristics of negative sentiment could help inform federal disease control agencies' efforts to disseminate relevant information to the public about Zika-related issues. Objective: The purpose of this study was to analyze the public sentiment concerning Zika using posts on Twitter and determine the qualitative characteristics of positive, negative, and neutral sentiments expressed. Methods: Machine learning techniques and algorithms were used to analyze the sentiment of tweets concerning Zika. A supervised machine learning classifier was built to classify tweets into 3 sentiment categories: positive, neutral, and negative. Tweets in each category were then examined using a topic-modeling approach to determine the main topics for each category, with a focus on the negative category. Results: A total of 5303 tweets were manually annotated and used to train multiple classifiers. These performed moderately well (F1 score=0.48-0.68) with text-based feature extraction. All 48,734 tweets were then categorized into the sentiment categories. Overall, 10 topics for each sentiment category were identified using topic modeling, with a focus on the negative sentiment category. Conclusions: Our study demonstrates how sentiment expressed within discussions of epidemics on Twitter can be discovered. This allows public health officials to understand public sentiment regarding an epidemic and enables them to address specific elements of negative sentiment in real time. Our negative sentiment classifier was able to identify tweets concerning Zika with 3 broad themes: neural defects, Zika abnormalities, and reports and findings. These broad themes were based on domain expertise and on topics discussed in journals such as Morbidity and Mortality Weekly Report and Vaccine. As the majority of topics in the negative sentiment category concerned symptoms, officials should focus on spreading information about prevention and treatment research. | Related Works
Identifying sentiment on a specific topic was pioneered by Chen et al [5,6]. Since then, several studies have looked at sentiment analysis on a variety of topics. Overall, 2 studies focused on personal communication tweets only [7,8]. The study by Daniulaityte et al [7] collected 15,623,869 tweets from May to November 2015 using keywords related to synthetic cannabinoids, marijuana concentrates, marijuana edibles, and cannabis. They found that using personal communication tweets only, compared with all tweets, improved binary sentiment classification (negative and positive) but not multiclass classification (positive, negative, and neutral). A study by Ji et al [8] collected tweets concerning listeria from September 26 to 28 and October 9 to 10 in 2011. They also focused on personal communication tweets only for sentiment classification (negative and not negative) and found that the classifiers performed well after excluding nonpersonal communication (F1 score=0.82-0.88).
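The tweet sentiment classifiers discussed above (and the supervised three-class classifier described in the abstract) typically follow a text-feature-plus-classifier pattern. Below is a minimal illustrative sketch with scikit-learn, using TF-IDF features and a linear SVM as one plausible baseline; this is not the authors' actual model, and the data are placeholders.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def build_sentiment_pipeline() -> Pipeline:
    """TF-IDF over word uni/bigrams feeding a linear SVM: a common baseline for
    multiclass (positive/neutral/negative) tweet sentiment classification."""
    return Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2, sublinear_tf=True)),
        ("svm", LinearSVC(C=1.0)),
    ])

def evaluate(tweets, labels, folds: int = 5):
    """Report macro-averaged F1 via cross-validation, the style of metric used when
    comparing such classifiers (e.g., F1 score=0.48-0.68 in the abstract above)."""
    clf = build_sentiment_pipeline()
    return cross_val_score(clf, tweets, labels, cv=folds, scoring="f1_macro")

# Usage (with a real manually annotated set of tweet texts and sentiment labels):
# scores = evaluate(annotated_tweets, annotated_labels)
# print("macro F1 per fold:", scores)
```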
In our previous study [4], instead of focusing on personal communication tweets alone, we included all relevant tweets posted after the BBC article in which scientists declared Zika scarier than initially thought [3]. A study by Househ collected approximately 26 million tweets and Google News Trends data concerning the Ebola virus from September 30 to October 29, 2014 [9]. This study also influenced our decision to use all tweets rather than personal communication alone, as it found that news feeds were the largest Twitter influencers during the Ebola outbreak. Ghenai and Mejova [10] collected 13,728,215 tweets concerning Zika from January to August 2016. Tweets were annotated as debunking a rumor, supporting a rumor, or neither. They concluded that mainstream news websites may help spread misinformation and fear. A study by Seltzer et al [11] collected 500 images from Instagram from May to August 2016 using the keyword Zika. Of those 500 images, only 342 were related to Zika. Of those 342 images, 299 were coded as health and 193 were coded as public interest. Of the health images, the majority related to transmission and prevention, which is similar to what we found in our previous study on Twitter [4]. This shows that results can be corroborated across different social media platforms. Seltzer et al also found that many of the images portrayed negative sentiment and fear. Their study was limited to using images and was only concerned with negative sentiment. Our study uses tweets and includes positive, neutral, and negative sentiment. In many of these studies, the main topical content within each sentiment category was not explored. We take this additional step in our study to determine the topics of public concern regarding the Zika virus. We also used all tweets, including personal communication as well as news articles, because news articles can go viral and carry negative sentiment, as seen with the BBC article briefly described in the background section [3]. The phenomenon of news articles going viral and carrying negative sentiment is also discussed in our previous study [4]. | [
"28630032",
"27777215",
"25656678",
"28806618",
"23092060",
"27911847",
"27216759",
"27544795",
"27100826",
"26948433",
"28536443",
"2139079",
"14692574",
"25445654",
"9500320",
"28360135"
] | [
{
"pmid": "28630032",
"title": "What Are People Tweeting About Zika? An Exploratory Study Concerning Its Symptoms, Treatment, Transmission, and Prevention.",
"abstract": "BACKGROUND\nIn order to harness what people are tweeting about Zika, there needs to be a computational framework that leverages machine learning techniques to recognize relevant Zika tweets and, further, categorize these into disease-specific categories to address specific societal concerns related to the prevention, transmission, symptoms, and treatment of Zika virus.\n\n\nOBJECTIVE\nThe purpose of this study was to determine the relevancy of the tweets and what people were tweeting about the 4 disease characteristics of Zika: symptoms, transmission, prevention, and treatment.\n\n\nMETHODS\nA combination of natural language processing and machine learning techniques was used to determine what people were tweeting about Zika. Specifically, a two-stage classifier system was built to find relevant tweets about Zika, and then the tweets were categorized into 4 disease categories. Tweets in each disease category were then examined using latent Dirichlet allocation (LDA) to determine the 5 main tweet topics for each disease characteristic.\n\n\nRESULTS\nOver 4 months, 1,234,605 tweets were collected. The number of tweets by males and females was similar (28.47% [351,453/1,234,605] and 23.02% [284,207/1,234,605], respectively). The classifier performed well on the training and test data for relevancy (F1 score=0.87 and 0.99, respectively) and disease characteristics (F1 score=0.79 and 0.90, respectively). Five topics for each category were found and discussed, with a focus on the symptoms category.\n\n\nCONCLUSIONS\nWe demonstrate how categories of discussion on Twitter about an epidemic can be discovered so that public health officials can understand specific societal concerns within the disease-specific categories. Our two-stage classifier was able to identify relevant tweets to enable more specific analysis, including the specific aspects of Zika that were being discussed as well as misinformation being expressed. Future studies can capture sentiments and opinions on epidemic outbreaks like Zika virus in real time, which will likely inform efforts to educate the public at large."
},
{
"pmid": "27777215",
"title": "\"When 'Bad' is 'Good'\": Identifying Personal Communication and Sentiment in Drug-Related Tweets.",
"abstract": "BACKGROUND\nTo harness the full potential of social media for epidemiological surveillance of drug abuse trends, the field needs a greater level of automation in processing and analyzing social media content.\n\n\nOBJECTIVES\nThe objective of the study is to describe the development of supervised machine-learning techniques for the eDrugTrends platform to automatically classify tweets by type/source of communication (personal, official/media, retail) and sentiment (positive, negative, neutral) expressed in cannabis- and synthetic cannabinoid-related tweets.\n\n\nMETHODS\nTweets were collected using Twitter streaming Application Programming Interface and filtered through the eDrugTrends platform using keywords related to cannabis, marijuana edibles, marijuana concentrates, and synthetic cannabinoids. After creating coding rules and assessing intercoder reliability, a manually labeled data set (N=4000) was developed by coding several batches of randomly selected subsets of tweets extracted from the pool of 15,623,869 collected by eDrugTrends (May-November 2015). Out of 4000 tweets, 25% (1000/4000) were used to build source classifiers and 75% (3000/4000) were used for sentiment classifiers. Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machines (SVM) were used to train the classifiers. Source classification (n=1000) tested Approach 1 that used short URLs, and Approach 2 where URLs were expanded and included into the bag-of-words analysis. For sentiment classification, Approach 1 used all tweets, regardless of their source/type (n=3000), while Approach 2 applied sentiment classification to personal communication tweets only (2633/3000, 88%). Multiclass and binary classification tasks were examined, and machine-learning sentiment classifier performance was compared with Valence Aware Dictionary for sEntiment Reasoning (VADER), a lexicon and rule-based method. The performance of each classifier was assessed using 5-fold cross validation that calculated average F-scores. One-tailed t test was used to determine if differences in F-scores were statistically significant.\n\n\nRESULTS\nIn multiclass source classification, the use of expanded URLs did not contribute to significant improvement in classifier performance (0.7972 vs 0.8102 for SVM, P=.19). In binary classification, the identification of all source categories improved significantly when unshortened URLs were used, with personal communication tweets benefiting the most (0.8736 vs 0.8200, P<.001). In multiclass sentiment classification Approach 1, SVM (0.6723) performed similarly to NB (0.6683) and LR (0.6703). In Approach 2, SVM (0.7062) did not differ from NB (0.6980, P=.13) or LR (F=0.6931, P=.05), but it was over 40% more accurate than VADER (F=0.5030, P<.001). In multiclass task, improvements in sentiment classification (Approach 2 vs Approach 1) did not reach statistical significance (eg, SVM: 0.7062 vs 0.6723, P=.052). In binary sentiment classification (positive vs negative), Approach 2 (focus on personal communication tweets only) improved classification results, compared with Approach 1, for LR (0.8752 vs 0.8516, P=.04) and SVM (0.8800 vs 0.8557, P=.045).\n\n\nCONCLUSIONS\nThe study provides an example of the use of supervised machine learning methods to categorize cannabis- and synthetic cannabinoid-related tweets with fairly high accuracy. 
Use of these content analysis tools along with geographic identification capabilities developed by the eDrugTrends platform will provide powerful methods for tracking regional changes in user opinions related to cannabis and synthetic cannabinoids use over time and across different regions."
},
{
"pmid": "25656678",
"title": "Communicating Ebola through social media and electronic news media outlets: A cross-sectional study.",
"abstract": "Social media and electronic news media activity are an important source of information for the general public. Yet, there is a dearth of research exploring the use of Twitter and electronic news outlets during significant worldly events such as the recent Ebola Virus scare. The purpose of this article is to investigate the use of Twitter and electronic news media outlets in communicating Ebola Virus information. A cross-sectional survey of Twitter data and Google News Trend data from 30 September till 29 October, 2014 was conducted. Between 30 September and 29 October, there were approximately 26 million tweets (25,925,152) that contained the word Ebola. The highest number of correlated activity for Twitter and electronic news outlets occurred on 16 October 2014. Other important peaks in Twitter data occurred on 1 October, 6 October, 8 October, and 12 October, 2014. The main influencers of the Twitter feeds were news media outlets. The study reveals a relationship between electronic news media publishing and Twitter activity around significant events such as Ebola. Healthcare organizations should take advantage of the relationship between electronic news media and trending events on social media sites such as Twitter and should work on developing social media campaigns in co-operation with leading electronic news media outlets (e.g. CNN, Yahoo, Reuters) that can have an influence on social media activity."
},
{
"pmid": "28806618",
"title": "Public sentiment and discourse about Zika virus on Instagram.",
"abstract": "OBJECTIVE\nSocial media have strongly influenced the awareness and perceptions of public health emergencies, and a considerable amount of social media content is now shared through images, rather than text alone. This content can impact preparedness and response due to the popularity and real-time nature of social media platforms. We sought to explore how the image-sharing platform Instagram is used for information dissemination and conversation during the current Zika outbreak.\n\n\nSTUDY DESIGN\nThis was a retrospective review of publicly posted images about Zika on Instagram.\n\n\nMETHODS\nUsing the keyword '#zika' we identified 500 images posted on Instagram from May to August 2016. Images were coded by three reviewers and contextual information was collected for each image about sentiment, image type, content, audience, geography, reliability, and engagement.\n\n\nRESULTS\nOf 500 images tagged with #zika, 342 (68%) contained content actually related to Zika. Of the 342 Zika-specific images, 299 were coded as 'health' and 193 were coded 'public interest'. Some images had multiple 'health' and 'public interest' codes. Health images tagged with #zika were primarily related to transmission (43%, 129/299) and prevention (48%, 145/299). Transmission-related posts were more often mosquito-human transmission (73%, 94/129) than human-human transmission (27%, 35/129). Mosquito bite prevention posts outnumbered safe sex prevention; (84%, 122/145) and (16%, 23/145) respectively. Images with a target audience were primarily aimed at women (95%, 36/38). Many posts (60%, 61/101) included misleading, incomplete, or unclear information about the virus. Additionally, many images expressed fear and negative sentiment, (79/156, 51%).\n\n\nCONCLUSION\nInstagram can be used to characterize public sentiment and highlight areas of focus for public health, such as correcting misleading or incomplete information or expanding messages to reach diverse audiences."
},
{
"pmid": "23092060",
"title": "Interrater reliability: the kappa statistic.",
"abstract": "The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued use of percent agreement due to its inability to account for chance agreement. He introduced the Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from -1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested."
},
{
"pmid": "27911847",
"title": "Zika virus cell tropism in the developing human brain and inhibition by azithromycin.",
"abstract": "The rapid spread of Zika virus (ZIKV) and its association with abnormal brain development constitute a global health emergency. Congenital ZIKV infection produces a range of mild to severe pathologies, including microcephaly. To understand the pathophysiology of ZIKV infection, we used models of the developing brain that faithfully recapitulate the tissue architecture in early to midgestation. We identify the brain cell populations that are most susceptible to ZIKV infection in primary human tissue, provide evidence for a mechanism of viral entry, and show that a commonly used antibiotic protects cultured brain cells by reducing viral proliferation. In the brain, ZIKV preferentially infected neural stem cells, astrocytes, oligodendrocyte precursor cells, and microglia, whereas neurons were less susceptible to infection. These findings suggest mechanisms for microcephaly and other pathologic features of infants with congenital ZIKV infection that are not explained by neural stem cell infection alone, such as calcifications in the cortical plate. Furthermore, we find that blocking the glia-enriched putative viral entry receptor AXL reduced ZIKV infection of astrocytes in vitro, and genetic knockdown of AXL in a glial cell line nearly abolished infection. Finally, we evaluate 2,177 compounds, focusing on drugs safe in pregnancy. We show that the macrolide antibiotic azithromycin reduced viral proliferation and virus-induced cytopathic effects in glial cell lines and human astrocytes. Our characterization of infection in the developing human brain clarifies the pathogenesis of congenital ZIKV infection and provides the basis for investigating possible therapeutic strategies to safely alleviate or prevent the most severe consequences of the epidemic."
},
{
"pmid": "27544795",
"title": "Identifying the public's concerns and the Centers for Disease Control and Prevention's reactions during a health crisis: An analysis of a Zika live Twitter chat.",
"abstract": "The arrival of the Zika virus in the United States caused much concern among the public because of its ease of transmission and serious consequences for pregnant women and their newborns. We conducted a text analysis to examine original tweets from the public and responses from the Centers for Disease Control and Prevention (CDC) during a live Twitter chat hosted by the CDC. Both the public and the CDC expressed concern about the spread of Zika virus, but the public showed more concern about the consequences it had for women and babies, whereas the CDC focused more on symptoms and education."
},
{
"pmid": "26948433",
"title": "Guillain-Barré Syndrome outbreak associated with Zika virus infection in French Polynesia: a case-control study.",
"abstract": "BACKGROUND\nBetween October, 2013, and April, 2014, French Polynesia experienced the largest Zika virus outbreak ever described at that time. During the same period, an increase in Guillain-Barré syndrome was reported, suggesting a possible association between Zika virus and Guillain-Barré syndrome. We aimed to assess the role of Zika virus and dengue virus infection in developing Guillain-Barré syndrome.\n\n\nMETHODS\nIn this case-control study, cases were patients with Guillain-Barré syndrome diagnosed at the Centre Hospitalier de Polynésie Française (Papeete, Tahiti, French Polynesia) during the outbreak period. Controls were age-matched, sex-matched, and residence-matched patients who presented at the hospital with a non-febrile illness (control group 1; n=98) and age-matched patients with acute Zika virus disease and no neurological symptoms (control group 2; n=70). Virological investigations included RT-PCR for Zika virus, and both microsphere immunofluorescent and seroneutralisation assays for Zika virus and dengue virus. Anti-glycolipid reactivity was studied in patients with Guillain-Barré syndrome using both ELISA and combinatorial microarrays.\n\n\nFINDINGS\n42 patients were diagnosed with Guillain-Barré syndrome during the study period. 41 (98%) patients with Guillain-Barré syndrome had anti-Zika virus IgM or IgG, and all (100%) had neutralising antibodies against Zika virus compared with 54 (56%) of 98 in control group 1 (p<0.0001). 39 (93%) patients with Guillain-Barré syndrome had Zika virus IgM and 37 (88%) had experienced a transient illness in a median of 6 days (IQR 4-10) before the onset of neurological symptoms, suggesting recent Zika virus infection. Patients with Guillain-Barré syndrome had electrophysiological findings compatible with acute motor axonal neuropathy (AMAN) type, and had rapid evolution of disease (median duration of the installation and plateau phases was 6 [IQR 4-9] and 4 days [3-10], respectively). 12 (29%) patients required respiratory assistance. No patients died. Anti-glycolipid antibody activity was found in 13 (31%) patients, and notably against GA1 in eight (19%) patients, by ELISA and 19 (46%) of 41 by glycoarray at admission. The typical AMAN-associated anti-ganglioside antibodies were rarely present. Past dengue virus history did not differ significantly between patients with Guillain-Barré syndrome and those in the two control groups (95%, 89%, and 83%, respectively).\n\n\nINTERPRETATION\nThis is the first study providing evidence for Zika virus infection causing Guillain-Barré syndrome. Because Zika virus is spreading rapidly across the Americas, at risk countries need to prepare for adequate intensive care beds capacity to manage patients with Guillain-Barré syndrome.\n\n\nFUNDING\nLabex Integrative Biology of Emerging Infectious Diseases, EU 7th framework program PREDEMICS. and Wellcome Trust."
},
{
"pmid": "28536443",
"title": "Diagnostic Accuracy of Ultrasound Scanning for Prenatal Microcephaly in the context of Zika Virus Infection: A Systematic Review and Meta-analysis.",
"abstract": "To assess the accuracy of ultrasound measurements of fetal biometric parameters for prenatal diagnosis of microcephaly in the context of Zika virus (ZIKV) infection, we searched bibliographic databases for studies published until March 3rd, 2016. We extracted the numbers of true positives, false positives, true negatives, and false negatives and performed a meta-analysis to estimate group sensitivity and specificity. Predictive values for ZIKV-infected pregnancies were extrapolated from those obtained for pregnancies unrelated to ZIKV. Of 111 eligible full texts, nine studies met our inclusion criteria. Pooled estimates from two studies showed that at 3, 4 and 5 standard deviations (SDs) <mean, sensitivities were 84%, 68% and 58% for head circumference (HC); 76%, 58% and 58% for occipitofrontal diameter (OFD); and 94%, 85% and 59% for biparietal diameter (BPD). Specificities at 3, 4 and 5 SDs below the mean were 70%, 91% and 97% for HC; 84%, 97% and 97% for OFD; and 16%, 46% and 80% for BPD. No study including ZIKV-infected pregnant women was identified. OFD and HC were more consistent in specificity and sensitivity at lower thresholds compared to higher thresholds. Therefore, prenatal ultrasound appears more accurate in detecting the absence of microcephaly than its presence."
},
{
"pmid": "2139079",
"title": "Human IgG Fc receptor II mediates antibody-dependent enhancement of dengue virus infection.",
"abstract": "It is known that anti-dengue virus antibodies at subneutralizing concentrations augment dengue virus infection of IgG FcR (Fc gamma R)-positive cells, and this phenomenon is called antibody-dependent enhancement. This is caused by the uptake of dengue virus-antibody complexes by Fc gamma R. We previously reported that Fc gamma RI can mediate antibody-dependent enhancement. In this study we use an erythroleukemia cell line, K562, which has Fc gamma RII, but does not have Fc gamma RI or Fc gamma RIII, to determine if Fc gamma RII can mediate infection by dengue virus-antibody complexes. Polyclonal mouse anti-dengue virus antibody significantly augments dengue virus infection of K562 cells, whereas normal mouse serum does not. A mAb IV.3, which is specific for Fc gamma RII and is known to inhibit the binding of Ag-antibody complex to Fc gamma RII, inhibits dengue antibody-mediated augmentation of dengue virus infection. It has been reported that Fc gamma RII binds to mouse IgG1, but not to mouse IgG2a. A mouse IgG1 anti-dengue virus mAb (3H5) augments dengue virus infection of K562 cells, but a mouse IgG2a anti-dengue virus mAb (4G2) does not. 4G2 augments dengue virus infection of a human monocytic cell line, U937, which has Fc gamma RI. Based on these results we conclude that Fc gamma RII mediate antibody-dependent enhancement of dengue virus infection in addition to Fc gamma RI."
},
{
"pmid": "14692574",
"title": "Public perceptions of information sources concerning bioterrorism before and after anthrax attacks: an analysis of national survey data.",
"abstract": "This study examined data from six national surveys before and after the bioterrorist anthrax attacks in the fall of 2001. Public perceptions of information sources regarding bioterrorism were examined. The findings highlighted the importance of local television and radio and of cable and network news channels as information sources. The findings also showed the importance of national and local health officials as spokespersons in the event of bioterrorist incidents. Periodic surveys of public attitudes provide important, timely information for understanding audiences in communication planning."
},
{
"pmid": "25445654",
"title": "The hidden face of academic researches on classified highly pathogenic microorganisms.",
"abstract": "Highly pathogenic microorganisms and toxins are manipulated in academic laboratories for fundamental research purposes, diagnostics, drugs and vaccines development. Obviously, these infectious pathogens represent a potential risk for human and/or animal health and their accidental or intentional release (biosafety and biosecurity, respectively) is a major concern of governments. In the past decade, several incidents have occurred in laboratories and reported by media causing fear and raising a sense of suspicion against biologists. Some scientists have been ordered by US government to leave their laboratory for long periods of time following the occurrence of an incident involving infectious pathogens; in other cases laboratories have been shut down and universities have been forced to pay fines and incur a long-term ban on funding after gross negligence of biosafety/biosecurity procedures. Measures of criminal sanctions have also been taken to minimize the risk that such incidents can reoccur. As United States and many other countries, France has recently strengthened its legal measures for laboratories' protection. During the past two decades, France has adopted a series of specific restriction measures to better protect scientific discoveries with a potential economic/social impact and prevent their misuse by ill-intentioned people without affecting the progress of science through fundamental research. French legal regulations concerning scientific discoveries have progressively strengthened since 2001, until the publication in November 2011 of a decree concerning the \"PPST\" (for \"Protection du Potentiel Scientifique et Technique de la nation\", the protection of sensitive scientific data). Following the same logic of protection of sensitive scientific researches, regulations were also adopted in an order published in April 2012 concerning the biology and health field. The aim was to define the legal framework that precise the conditions for authorizing microorganisms and toxins experimentation in France; these regulations apply for any operation of production, manufacturing, transportation, import, export, possession, supply, transfer, acquisition and use of highly pathogenic microorganisms and toxins, referred to as \"MOT\" (for \"MicroOrganismes et Toxines hautement pathogènes\") by the French law. Finally, laboratories conducting researches on such infectious pathogens are henceforth classified restricted area or ZRR (for \"Zone à Régime Restrictif\"), according an order of July 2012. In terms of economic protection, biosafety and biosecurity, these regulations represent an undeniable progress as compared to the previous condition. However, the competitiveness of research laboratories handling MOTs is likely to suffer the side effects of these severe constraints. For example research teams working on MOTs can be drastically affected both by (i) the indirect costs generated by the security measure to be applied; (ii) the working time devoted to samples recording; (iii) the establishment of traceability and reporting to national security agency ANSM, (iv) the latency period required for staff members being officially authorized to conduct experiments on MOTs; (v) the consequent reduced attractiveness for recruiting new trainees whose work would be significantly hampered by theses administrative constraints; and (vi) the limitations in the exchange of material with external laboratories and collaborators. 
Importantly, there is a risk that French academic researchers gradually abandon research on MOTs in favor of other projects that are less subject to legal restrictions. This would reduce the acquisition of knowledge in the field of MOTs which, in the long term, could be highly detrimental to the country by increasing its vulnerability to natural epidemics due to pathogenic microorganisms that are classified as MOTs and, by reducing its preparedness against possible bioterrorist attacks that would use such microorganisms."
},
{
"pmid": "9500320",
"title": "Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children.",
"abstract": "BACKGROUND\nWe investigated a consecutive series of children with chronic enterocolitis and regressive developmental disorder.\n\n\nMETHODS\n12 children (mean age 6 years [range 3-10], 11 boys) were referred to a paediatric gastroenterology unit with a history of normal development followed by loss of acquired skills, including language, together with diarrhoea and abdominal pain. Children underwent gastroenterological, neurological, and developmental assessment and review of developmental records. Ileocolonoscopy and biopsy sampling, magnetic-resonance imaging (MRI), electroencephalography (EEG), and lumbar puncture were done under sedation. Barium follow-through radiography was done where possible. Biochemical, haematological, and immunological profiles were examined.\n\n\nFINDINGS\nOnset of behavioural symptoms was associated, by the parents, with measles, mumps, and rubella vaccination in eight of the 12 children, with measles infection in one child, and otitis media in another. All 12 children had intestinal abnormalities, ranging from lymphoid nodular hyperplasia to aphthoid ulceration. Histology showed patchy chronic inflammation in the colon in 11 children and reactive ileal lymphoid hyperplasia in seven, but no granulomas. Behavioural disorders included autism (nine), disintegrative psychosis (one), and possible postviral or vaccinal encephalitis (two). There were no focal neurological abnormalities and MRI and EEG tests were normal. Abnormal laboratory results were significantly raised urinary methylmalonic acid compared with age-matched controls (p=0.003), low haemoglobin in four children, and a low serum IgA in four children.\n\n\nINTERPRETATION\nWe identified associated gastrointestinal disease and developmental regression in a group of previously normal children, which was generally associated in time with possible environmental triggers."
},
{
"pmid": "28360135",
"title": "Enhancement of Zika virus pathogenesis by preexisting antiflavivirus immunity.",
"abstract": "Zika virus (ZIKV) is spreading rapidly into regions around the world where other flaviviruses, such as dengue virus (DENV) and West Nile virus (WNV), are endemic. Antibody-dependent enhancement has been implicated in more severe forms of flavivirus disease, but whether this also applies to ZIKV infection is unclear. Using convalescent plasma from DENV- and WNV-infected individuals, we found substantial enhancement of ZIKV infection in vitro that was mediated through immunoglobulin G engagement of Fcγ receptors. Administration of DENV- or WNV-convalescent plasma into ZIKV-susceptible mice resulted in increased morbidity-including fever, viremia, and viral loads in spinal cord and testes-and increased mortality. Antibody-dependent enhancement may explain the severe disease manifestations associated with recent ZIKV outbreaks and highlights the need to exert great caution when designing flavivirus vaccines."
}
] |
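One of the references listed in the record above (PMID 23092060) contrasts raw percent agreement with Cohen's chance-corrected kappa for inter-rater reliability. The sketch below is purely illustrative and is not code from any of the cited studies; the 2x2 counts are invented to show how kappa discounts the agreement expected by chance.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square inter-rater confusion matrix (counts)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_observed = np.trace(confusion) / total          # raw percent agreement
    # chance agreement: product of the two raters' marginal proportions, summed over categories
    p_chance = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical example: two coders labelling 100 images as Zika-related or not
# (rows = coder A, columns = coder B).
counts = [[40, 10],
          [5, 45]]
print(round(cohens_kappa(counts), 2))   # 0.7, even though raw agreement is 0.85
```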
Frontiers in Psychology | 31417452 | PMC6684742 | 10.3389/fpsyg.2019.01688 | Monocular Presentation Attenuates Change Blindness During the Use of Augmented Reality | Augmented reality (AR) is an emerging technology in which information is superimposed onto the real world directly in front of observers. AR images may behave as distractors because they are inside the observer’s field of view and may cause observers to overlook important information in the real world. This kind of overlooking of events or objects is known as “change blindness.” In change blindness, a distractor may cause someone to overlook a change between an original image and a modified image. In the present study, we investigated whether change blindness occurs when AR is used and whether the AR presentation method influences change blindness. An AR image was presented binocularly or monocularly as a distractor in a typical flicker paradigm. In the binocular presentation, the AR image was presented to both of the participants’ eyes, so it was not different from the typical flicker paradigm. By contrast, in the monocular presentation, the AR image was presented to only one eye. Therefore, it was hypothesized that if participants could observe the real-world image through the eye to which the AR image was not presented, change blindness would be avoided because the moment of change itself could be observed. In addition, the luminance of the AR image was expected to influence the ease of observing the real world because the AR image is somewhat translucent. Hence, the AR distractor had three luminance conditions (high, medium, and low), and we compared how many alternations were needed to detect changes among the conditions. Results revealed that more alternations were needed in the binocular presentation and in the high luminance condition. However, in all luminance conditions in the monocular presentation, the number of alternations needed to detect the change was not significantly different from that when the AR distractor was not presented. This result indicates that the monocular presentation could attenuate change blindness, and this might be because the observers’ visual attention is automatically attracted to the location where the change has occurred. | Related Work

As mentioned above, the flicker paradigm and the situations in which AR is used share some similarities. Hence, change blindness when AR is used should be investigated. In related work, Dixon et al. (2013) investigated how AR presentation influences “inattentional blindness.” Inattentional blindness (Mack and Rock, 2000; Jensen et al., 2011) is the phenomenon in which observers miss some distinct stimulus when they concentrate on another task, especially a visual task (Simons and Chabris, 1999). This overlooking occurs because of a lack of attention to the object or place, so there are some similarities to change blindness. Dixon et al. (2013) presented AR images to support medical surgery training, and during the training, some critical events occurred. They revealed that participants missed the critical events more often in the condition in which the AR information was superimposed onto the body image than in the control (no AR information) condition. This result indicates that AR information attracts attention and causes practical problems. However, in their study, Dixon et al. (2013) investigated only inattentional blindness, not change blindness.
Even though participants miss critical events because their attention is distributed elsewhere in both inattentional blindness and change blindness, the experimental procedures of the two phenomena differ in some respects. In a typical inattentional blindness task, participants concentrate on a main task and do not expect something unusual to occur; hence, they do not actively direct their attention toward finding the event. In addition, no blank or distractor is presented during the main task, so participants can observe the event itself when it is presented. On the other hand, in the typical flicker change blindness task, participants are fully aware that a change will occur, so they actively look out for something unusual. In addition, distractors or blanks are presented with the change, unlike in an inattentional blindness task. Therefore, the attention distribution strategies and the presentation methods of events and distractors are vastly different between inattentional blindness and change blindness. These differences mean that inattentional blindness and change blindness occur in different situations and for different reasons in actual AR use. For example, if a driver concentrates on reading information in AR, the driver may overlook a pedestrian because of not paying enough attention to the road. This is related to inattentional blindness. On the other hand, even if a driver concentrates on the driving task, the driver may still overlook the pedestrian if AR images pop up as notifications. This situation is related to change blindness. Therefore, change blindness in AR use, not inattentional blindness, should be investigated. In addition, the difference between the binocular and monocular presentations was not addressed by Dixon et al. (2013), whereas the comparison between the observation conditions is one of the main topics in the present study.
Steinicke and Hinrichs (2011) researched change blindness in virtual reality (VR) environments. They used a head-mounted display (HMD) to present stimuli and distractors for change blindness. They compared a monoscopic flicker condition, which is almost the same as the typical flicker paradigm, with a stereoscopic flicker condition, in which the stimuli and distractor were presented to the right eye and left eye in turn. However, the time course was the same as in the monoscopic flicker condition, so participants could not observe the moment of the change. In addition, there was a phase-shifted flicker condition, in which the stimuli were presented to the right eye and left eye in turn. As a result, stimuli were always presented to one of their eyes, so participants could observe the change itself. The results revealed that fewer alternations were needed to detect the change in the phase-shifted flicker condition because participants could observe the stimuli when the change occurred. This indicates that observing the period of the change is very important for detecting it, and that binocular observation of the stimuli is not needed to detect it. Nevertheless, although Steinicke and Hinrichs (2011) investigated the difference between the binocular presentation (stereoscopic condition) and monocular presentation (phase-shifted condition) in change blindness, the situation is somewhat far from that in the AR presentation, because in optical see-through AR, AR images are somewhat translucent. Therefore, users can see the real world through the AR images even in the binocular condition. In addition, in actual AR use, the eye to which the AR image is presented hardly ever switches, as it does in the phase-shifted condition. Therefore, the comparison between binocular and monocular AR presentations in change blindness has still not been investigated sufficiently. | [
"16368126",
"22833264",
"17402669",
"15161384",
"26302304",
"8986842",
"10078528",
"22232586",
"22167536",
"7367577",
"11752486",
"24436635",
"28189059",
"14769575",
"10694957",
"24303751",
"21301028",
"18456930",
"6238122",
"21791293"
] | [
{
"pmid": "16368126",
"title": "Exogenous attention and endogenous attention influence initial dominance in binocular rivalry.",
"abstract": "We investigated the influence of exogenous and endogenous attention on initial selection in binocular rivalry. Experiment 1 used superimposed +/-45 degrees gratings viewed dioptically for 3s, followed by a brief contrast increment in one of the gratings to direct exogenous attention to that grating. After a brief blank period, dichoptic stimuli were presented for various durations (100-700 ms). Exogenous attention strongly influenced which stimulus was initially dominant in binocular rivalry, replicating an earlier report (Mitchell, Stoner, & Reynolds. (2004). Object-based attention determines dominance in binocular rivalry. Nature, 429, 410-413). In Experiment 2, endogenous attention was manipulated by having participants track one of two oblique gratings both of which independently and continuously changed their orientations and spatial frequencies during a 5s period. The initially dominant grating was most often the one whose orientation matched the grating correctly tracked using endogenous attention. In Experiment 3, we measured the strength of both exogenous and endogenous attention by varying the contrast of one of two rival gratings when attention was previously directed to that grating. The contrast of the attended grating had to be reduced by an amount in the neighborhood of 0.3 log-units, to counteract attention's boost to initial dominance. Evidently both exogenous and endogenous attention can influence initial dominance of binocular rivalry, effectively boosting the stimulus strength of the attended rival stimulus."
},
{
"pmid": "22833264",
"title": "Surgeons blinded by enhanced navigation: the effect of augmented reality on attention.",
"abstract": "BACKGROUND\nAdvanced image-guidance systems allowing presentation of three-dimensional navigational data in real time are being developed enthusiastically for many medical procedures. Other industries, including aviation and the military, have noted that shifting attention toward such compelling assistance has detrimental effects. Using the detection rate of unexpected findings, we assess whether inattentional blindness is significant in a surgical context and evaluate the impact of on-screen navigational cuing with augmented reality.\n\n\nMETHODS\nSurgeons and trainees performed an endoscopic navigation exercise on a cadaveric specimen. The subjects were randomized to either a standard endoscopic view (control) or an AR view consisting of an endoscopic video fused with anatomic contours. Two unexpected findings were presented in close proximity to the target point: one critical complication and one foreign body (screw). Task completion time, accuracy, and recognition of findings were recorded.\n\n\nRESULTS\nDetection of the complication was 0/15 in the AR group versus 7/17 in the control group (p = 0.008). Detection of the screw was 1/15 (AR) and 7/17 (control) (p = 0.041). Recognition of either finding was 12/17 for the control group and 1/15 for the AR group (p < 0.001). Accuracy was greater for the AR group than for the control group, with the median distance from the target point measuring respectively 2.10 mm (interquartile range [IQR], 1.29-2.37) and 4.13 (IQR, 3.11-7.39) (p < 0.001).\n\n\nCONCLUSION\nInattentional blindness was evident in both groups. Although more accurate, the AR group was less likely to identify significant unexpected findings clearly within view. Advanced navigational displays may increase precision, but strategies to mitigate attentional costs need further investigation to allow safe implementation."
},
{
"pmid": "17402669",
"title": "The role of voluntary and involuntary attention in selecting perceptual dominance during binocular rivalry.",
"abstract": "When incompatible images are presented to corresponding regions of each eye, perception alternates between the two monocular views (binocular rivalry). In this study, we have investigated how involuntary (exogenous) and voluntary (endogenous) attention can influence the perceptual dominance of one rival image or the other during contour rivalry. Subjects viewed two orthogonal grating stimuli that were presented to both eyes. Involuntary attention was directed to one of the grating stimuli with a brief change in orientation. After a short period, the cued grating was removed from the image in one eye and the uncued grating was removed from the image in the other eye, generating binocular rivalry. Subjects usually reported dominance of the cued grating during the rivalry period. We found that the influence of the cue declined with the interval between its onset and the onset of binocular rivalry in a manner consistent with the effect of involuntary attention. Finally, we demonstrated that voluntary attention to a grating stimulus could also influence the ongoing changes in perceptual dominance that accompany longer periods of binocular rivalry Voluntary attention did not increase the mean dominance period of the attended grating, but rather decreased the mean dominance period of the non-attended grating. This pattern is analogous to increasing the perceived contrast of the attended grating. These results suggest that the competition during binocular rivalry might be an example of a more general attentional mechanism within the visual system."
},
{
"pmid": "15161384",
"title": "Constructing visual representations of natural scenes: the roles of short- and long-term visual memory.",
"abstract": "A \"follow-the-dot\" method was used to investigate the visual memory systems supporting accumulation of object information in natural scenes. Participants fixated a series of objects in each scene, following a dot cue from object to object. Memory for the visual form of a target object was then tested. Object memory was consistently superior for the two most recently fixated objects, a recency advantage indicating a visual short-term memory component to scene representation. In addition, objects examined earlier were remembered at rates well above chance, with no evidence of further forgetting when 10 objects intervened between target examination and test and only modest forgetting with 402 intervening objects. This robust prerecency performance indicates a visual long-term memory component to scene representation."
},
{
"pmid": "26302304",
"title": "Change blindness and inattentional blindness.",
"abstract": "Change blindness and inattentional blindness are both failures of visual awareness. Change blindness is the failure to notice an obvious change. Inattentional blindness is the failure to notice the existence of an unexpected item. In each case, we fail to notice something that is clearly visible once we know to look for it. Despite similarities, each type of blindness has a unique background and distinct theoretical implications. Here, we discuss the central paradigms used to explore each phenomenon in a historical context. We also outline the central findings from each field and discuss their implications for visual perception and attention. In addition, we examine the impact of task and observer effects on both types of blindness as well as common pitfalls and confusions people make while studying these topics. WIREs Cogni Sci 2011 2 529-546 DOI: 10.1002/wcs.130 For further resources related to this article, please visit the WIREs website."
},
{
"pmid": "8986842",
"title": "When the brain changes its mind: interocular grouping during binocular rivalry.",
"abstract": "The prevalent view of binocular rivalry holds that it is a competition between the two eyes mediated by reciprocal inhibition among monocular neurons. This view is largely due to the nature of conventional rivalry-inducing stimuli, which are pairs of dissimilar images with coherent patterns within each eye's image. Is it the eye of origin or the coherency of patterns that determines perceptual alternations between coherent percepts in binocular rivalry? We break the coherency of conventional stimuli and replace them by complementary patchworks of intermingled rivalrous images. Can the brain unscramble the pieces of the patchwork arriving from different eyes to obtain coherent percepts? We find that pattern coherency in itself can drive perceptual alternations, and the patchworks are reassembled into coherent forms by most observers. This result is in agreement with recent neurophysiological and psychophysical evidence demonstrating that there is more to binocular rivalry than mere eye competition."
},
{
"pmid": "22232586",
"title": "The riddle of style changes in the visual arts after interference with the right brain.",
"abstract": "We here analyze the paintings and films of several visual artists, who suffered from a well-defined neuropsychological deficit, visuo-spatial hemineglect, following vascular stroke to the right brain. In our analysis we focus in particular on the oeuvre of Lovis Corinth and Luchino Visconti as both major artists continued to be highly productive over many years after their right brain damage. We analyzed their post-stroke paintings and films, indicate several aspects that differ from their pre-stroke work (omissions, use of color, perseveration, deformation), and propose-although both artists come from different times, countries, genres, and styles-that their post-stroke oeuvre reveals important similarities in style. We argue that these changes may be associated with visuo-spatial hemineglect and the right brain. We discuss future avenues of how the neuropsychological investigation of visual artists with and without neglect may allow us to investigate the relationship between brain and art."
},
{
"pmid": "22167536",
"title": "Interocular conflict attracts attention.",
"abstract": "During binocular rivalry, perception alternates.between dissimilar images presented dichoptically. Since.its discovery, researchers have debated whether the phenomenon is subject to attentional control. While it is now clear that attentional control over binocular rivalry is possible, the opposite is less evident: Is interocular conflict (i.e., the situation leading to binocular rivalry) able to attract attention?In order to answer this question, we used a change blindness paradigm in which observers looked for salient changes in two alternating frames depicting natural scenes. Each frame contained two images: one for the left and one for the right eye. Changes occurring in a single image (monocular) were detected faster than those occurring in both images (binocular). In addition,monocular change detection was also faster than detection in fused versions of the changed and unchanged regions. These results show that interocular conflict is capable of attracting attention, since it guides visual attention toward salient changes that otherwise would remain unnoticed for longer. The results of a second experiment indicated that interocular conflict attracts attention during the first phase of presentation, a phase during which the stimulus is abnormally fused [added]."
},
{
"pmid": "11752486",
"title": "Change detection.",
"abstract": "Five aspects of visual change detection are reviewed. The first concerns the concept of change itself, in particular the ways it differs from the related notions of motion and difference. The second involves the various methodological approaches that have been developed to study change detection; it is shown that under a variety of conditions observers are often unable to see large changes directly in their field of view. Next, it is argued that this \"change blindness\" indicates that focused attention is needed to detect change, and that this can help map out the nature of visual attention. The fourth aspect concerns how these results affect our understanding of visual perception-for example, the implication that a sparse, dynamic representation underlies much of our visual experience. Finally, a brief discussion is presented concerning the limits to our current understanding of change detection."
},
{
"pmid": "24436635",
"title": "Directing driver attention with augmented reality cues.",
"abstract": "This simulator study evaluated the effects of augmented reality (AR) cues designed to direct the attention of experienced drivers to roadside hazards. Twenty-seven healthy middle-aged licensed drivers with a range of attention capacity participated in a 54 mile (1.5 hour) drive in an interactive fixed-base driving simulator. Each participant received AR cues to potential roadside hazards in six simulated straight (9 mile long) rural roadway segments. Drivers were evaluated on response time for detecting a potentially hazardous event, detection accuracy for target (hazard) and non-target objects, and headway with respect to the hazards. Results showed no negative outcomes associated with interference. AR cues did not impair perception of non-target objects, including for drivers with lower attentional capacity. Results showed near significant response time benefits for AR cued hazards. AR cueing increased response rate for detecting pedestrians and warning signs but not vehicles. AR system false alarms and misses did not impair driver responses to potential hazards."
},
{
"pmid": "28189059",
"title": "Augmented reality warnings in vehicles: Effects of modality and specificity on effectiveness.",
"abstract": "In the future, vehicles will be able to warn drivers of hidden dangers before they are visible. Specific warning information about these hazards could improve drivers' reactions and the warning effectiveness, but could also impair them, for example, by additional cognitive-processing costs. In a driving simulator study with 88 participants, we investigated the effects of modality (auditory vs. visual) and specificity (low vs. high) on warning effectiveness. For the specific warnings, we used augmented reality as an advanced technology to display the additional auditory or visual warning information. Part one of the study concentrates on the effectiveness of necessary warnings and part two on the drivers' compliance despite false alarms. For the first warning scenario, we found several positive main effects of specificity. However, subsequent effects of specificity were moderated by the modality of the warnings. The specific visual warnings were observed to have advantages over the three other warning designs concerning gaze and braking reaction times, passing speeds and collision rates. Besides the true alarms, braking reaction times as well as subjective evaluation after these warnings were still improved despite false alarms. The specific auditory warnings were revealed to have only a few advantages, but also several disadvantages. The results further indicate that the exact coding of additional information, beyond its mere amount and modality, plays an important role. Moreover, the observed advantages of the specific visual warnings highlight the potential benefit of augmented reality coding to improve future collision warnings."
},
{
"pmid": "14769575",
"title": "Augmented reality in surgery.",
"abstract": "OBJECTIVE\nTo evaluate the history and current knowledge of computer-augmented reality in the field of surgery and its potential goals in education, surgeon training, and patient treatment.\n\n\nDATA SOURCES\nNational Library of Medicine's database and additional library searches.\n\n\nSTUDY SELECTION\nOnly articles suited to surgical sciences with a well-defined aim of study, methodology, and precise description of outcome were included.\n\n\nDATA SYNTHESIS\nAugmented reality is an effective tool in executing surgical procedures requiring low-performance surgical dexterity; it remains a science determined mainly by stereotactic registration and ergonomics. Strong evidence was found that it is an effective teaching tool for training residents. Weaker evidence was found to suggest a significant influence on surgical outcome, both morbidity and mortality. No evidence of cost-effectiveness was found.\n\n\nCONCLUSIONS\nAugmented reality is a new approach in executing detailed surgical operations. Although its application is in a preliminary stage, further research is needed to evaluate its long-term clinical impact on patients, surgeons, and hospital administrators. Its widespread use and the universal transfer of such technology remains limited until there is a better understanding of registration and ergonomics."
},
{
"pmid": "10694957",
"title": "Gorillas in our midst: sustained inattentional blindness for dynamic events.",
"abstract": "With each eye fixation, we experience a richly detailed visual world. Yet recent work on visual integration and change direction reveals that we are surprisingly unaware of the details of our environment from one view to the next: we often do not detect large changes to objects and scenes ('change blindness'). Furthermore, without attention, we may not even perceive objects ('inattentional blindness'). Taken together, these findings suggest that we perceive and remember only those objects and details that receive focused attention. In this paper, we briefly review and discuss evidence for these cognitive forms of 'blindness'. We then present a new study that builds on classic studies of divided visual attention to examine inattentional blindness for complex objects and events in dynamic scenes. Our results suggest that the likelihood of noticing an unexpected object depends on the similarity of that object to other objects in the display and on how difficult the priming monitoring task is. Interestingly, spatial proximity of the critical unattended object to attended locations does not appear to affect detection, suggesting that observers attend to objects and events, not spatial positions. We discuss the implications of these results for visual representations and awareness of our visual environment."
},
{
"pmid": "24303751",
"title": "Change blindness in a dynamic scene due to endogenous override of exogenous attentional cues.",
"abstract": "Change blindness is a failure to detect changes if the change occurs during a mask or distraction. Without distraction, it is assumed that the visual transients associated with the change will automatically capture attention (exogenous control), leading to detection. However, visual transients are a defining feature of naturalistic dynamic scenes. Are artificial distractions needed to hide changes to a dynamic scene? Do the temporal demands of the scene instead lead to greater endogenous control that may result in viewers missing a change in plain sight? In the present study we pitted endogenous and exogenous factors against each other during a card trick. Complete change blindness was demonstrated even when a salient highlight was inserted coincident with the change. These results indicate strong endogenous control of attention during dynamic scene viewing and its ability to override exogenous influences even when it is to the detriment of accurate scene representation."
},
{
"pmid": "21301028",
"title": "Change Blindness Phenomena for Virtual Reality Display Systems.",
"abstract": "In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for semiimmersive VR systems, i.e., a passive and active stereoscopic projection system as well as an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment."
},
{
"pmid": "18456930",
"title": "Persisting effect of prior experience of change blindness.",
"abstract": "Most cognitive scientists know that an airplane tends to lose its engine when the display is flickering. How does such prior experience influence visual search? We recorded eye movements made by vision researchers while they were actively performing a change-detection task. In selected trials, we presented Rensink's familiar 'airplane' display, but with changes occurring at locations other than the jet engine. The observers immediately noticed that there was no change in the location where the engine had changed in the previous change-blindness demonstration. Nevertheless, eye-movement analyses indicated that the observers were compelled to look at the location of the unchanged engine. These results demonstrate the powerful effect of prior experience on eye movements, even when the observers are aware of the futility of doing so."
},
{
"pmid": "6238122",
"title": "Abrupt visual onsets and selective attention: evidence from visual search.",
"abstract": "The effect of temporal discontinuity on visual search was assessed by presenting a display in which one item had an abrupt onset, while other items were introduced by gradually removing line segments that camouflaged them. We hypothesized that an abrupt onset in a visual display would capture visual attention, giving this item a processing advantage over items lacking an abrupt leading edge. This prediction was confirmed in Experiment 1. We designed a second experiment to ensure that this finding was due to attentional factors rather than to sensory or perceptual ones. Experiment 3 replicated Experiment 1 and demonstrated that the procedure used to avoid abrupt onset--camouflage removal--did not require a gradual waveform. Implications of these findings for theories of attention are discussed."
},
{
"pmid": "21791293",
"title": "Binocular rivalry requires visual attention.",
"abstract": "An interocular conflict arises when different images are presented to each eye at the same spatial location. The visual system resolves this conflict through binocular rivalry: observers consciously perceive spontaneous alternations between the two images. Visual attention is generally important for resolving competition between neural representations. However, given the seemingly spontaneous and automatic nature of binocular rivalry, the role of attention in resolving interocular competition remains unclear. Here we test whether visual attention is necessary to produce rivalry. Using an EEG frequency-tagging method to track cortical representations of the conflicting images, we show that when attention was diverted away, rivalry stopped. The EEG data further suggested that the neural representations of the dichoptic images combined without attention. Thus, attention is necessary for dichoptic images to be engaged in sustained rivalry and may be generally required for resolving conflicting, potentially ambiguous input and giving a single interpretation access to consciousness."
}
] |
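The related-work text in the record above describes the flicker paradigm: an original and a modified image alternate, separated by a brief blank or distractor (here, an AR image), until the observer detects the change. The loop below is a minimal, hypothetical sketch of that timing structure only; the helper function, durations, and the way detection is polled are assumptions made for illustration and do not reproduce the stimulus presentation used in the study (which additionally manipulated whether the AR distractor was shown to one eye or to both).

```python
import itertools
import time

def present(stimulus, duration_s):
    # Placeholder for drawing a stimulus with a psychophysics toolkit;
    # here it only logs what would be shown and waits for the given duration.
    print(f"showing {stimulus} for {duration_s:.2f} s")
    time.sleep(duration_s)

def flicker_trial(original, modified, distractor, max_alternations=60,
                  stim_s=0.8, distractor_s=0.1, change_detected=lambda: False):
    """One flicker trial: A, distractor, A', distractor, ... until detection.

    Returns the number of alternations needed, or None if the change was never reported.
    """
    images = itertools.islice(itertools.cycle([original, modified]), max_alternations)
    for alternation, image in enumerate(images, start=1):
        present(image, stim_s)
        present(distractor, distractor_s)    # blank / AR distractor interval
        if change_detected():                # e.g., poll a key press here
            return alternation
    return None

# Example call (file names are made up):
# flicker_trial("scene.png", "scene_modified.png", "ar_distractor.png")
```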
Frontiers in Neurorobotics | 31417391 | PMC6684762 | 10.3389/fnbot.2019.00060 | Attention Based Visual Analysis for Fast Grasp Planning With a Multi-Fingered Robotic Hand | We present an attention based visual analysis framework to compute grasp-relevant information which helps to guide grasp planning using a multi-fingered robotic hand. Our approach uses a computational visual attention model to locate regions of interest in a scene and employs a deep convolutional neural network to detect grasp type and grasp attention point for a sub-region of the object in a region of interest. We demonstrate the proposed framework with object grasping tasks, in which the information generated from the proposed framework is used as prior information to guide grasp planning. The effectiveness of the proposed approach is evaluated in both simulation experiments and real-world experiments. Experimental results show that the proposed framework can not only speed up grasp planning with more stable configurations, but also handle unknown objects. Furthermore, our framework can handle cluttered scenarios. A new Grasp Type Dataset (GTD) which includes six commonly used grasp types and covers 12 household objects is also presented. | 2. Related Work

Stable grasping is still a challenge for robotic hands, especially multi-fingered robotic hands, since it usually requires solving a complex non-convex optimization problem (Roa and Suárez, 2015; Zhang et al., 2018). Information extracted from visual analysis can be used to define heuristics or constraints for grasp planning. Previous grasp planning methods can be divided into geometric-based grasping and similarity-based grasping. In geometric-based grasping (Hsiao et al., 2010; Laga et al., 2013; Vahrenkamp et al., 2018), geometric information of the object is obtained from color or depth images, and it is used to define a set of heuristics to guide grasp planning. Hsiao et al. (2010) proposed a heuristic which maps partial shape information of objects to grasp configurations. The direct mapping from object geometry to candidate grasps is also used in Harada et al. (2008) and Vahrenkamp et al. (2018). Aleotti and Caselli (2012) proposed a 3D shape segmentation algorithm which first oversegments the target object; candidate grasps are then chosen based on the shape of the resulting segments (Laga et al., 2013). In similarity-based approaches (Dang and Allen, 2014; Herzog et al., 2014; Kopicki et al., 2016), the similarity measure is calculated between the target object and the corresponding object model from human demonstrations or simulation. The candidate grasp is then queried from datasets based on similarity measures. Herzog et al. (2014) defined an object shape template as the similarity measure. This template encodes heightmaps of the object observed from various viewpoints. The object properties can also be represented with semantic affordance maps (Dang and Allen, 2014) or probability models (Kroemer and Peters, 2014; Kopicki et al., 2016). Geometric-based approaches usually require a multiple-stage pipeline to gather handcrafted features through visual data analysis. Due to sensor noise, the performance of geometric-based grasping is often unstable. Meanwhile, similarity-based methods are limited to known objects and cannot handle unknown objects.
In contrast to previous methods, our method increases grasp stability by extracting more reliable features from visual data using deep networks; meanwhile, it is able to handle unknown objects. Many saliency approaches have been proposed in the last two decades. Traditional models are usually based on the feature integration theory (FIT) (Treisman and Gelade, 1980) to compute several handcrafted features which are fused into a saliency map, e.g., the iNVT (Itti et al., 1998; Walther and Koch, 2006) and the VOCUS system (Frintrop, 2006). Frintrop et al. (2015) proposed a simple and efficient system which computes multi-scale feature maps using Difference-of-Gaussian (DoG) filters for center-surround contrast and produces a pixel-precise saliency map. Deep learning based saliency detection mostly relies on high-level pre-trained features for object detection tasks. Those learning-based approaches require massive amounts of training data (Huang et al., 2015; Li et al., 2016; Liu and Han, 2016). Kümmerer et al. (2015) used an AlexNet (Krizhevsky et al., 2012) pretrained on Imagenet (Deng et al., 2009) for object recognition tasks. The resulting high-dimensional features are used for fixation prediction and saliency map generation. Since most of the deep-learning based approaches have a central photographer bias, which is not desired in robotic applications, we choose to use a handcrafted feature based approach which gathers local visual attributes by combining low-level visual features (Frintrop et al., 2015). | [
"22795563",
"28060704",
"28463186",
"27305676",
"13376678",
"26074671",
"20507828",
"7351125",
"17098563"
] | [
{
"pmid": "22795563",
"title": "Top-down versus bottom-up attentional control: a failed theoretical dichotomy.",
"abstract": "Prominent models of attentional control assert a dichotomy between top-down and bottom-up control, with the former determined by current selection goals and the latter determined by physical salience. This theoretical dichotomy, however, fails to explain a growing number of cases in which neither current goals nor physical salience can account for strong selection biases. For example, equally salient stimuli associated with reward can capture attention, even when this contradicts current selection goals. Thus, although 'top-down' sources of bias are sometimes defined as those that are not due to physical salience, this conception conflates distinct--and sometimes contradictory--sources of selection bias. We describe an alternative framework, in which past selection history is integrated with current goals and physical salience to shape an integrated priority map."
},
{
"pmid": "28060704",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.",
"abstract": "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet."
},
{
"pmid": "28463186",
"title": "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.",
"abstract": "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online."
},
{
"pmid": "27305676",
"title": "DeepSaliency: Multi-Task Deep Neural Network Model for Salient Object Detection.",
"abstract": "A key problem in salient object detection is how to effectively model the semantic properties of salient objects in a data-driven manner. In this paper, we propose a multi-task deep saliency model based on a fully convolutional neural network with global input (whole raw images) and global output (whole saliency maps). In principle, the proposed saliency model takes a data-driven strategy for encoding the underlying saliency prior information, and then sets up a multi-task learning scheme for exploring the intrinsic correlations between saliency detection and semantic image segmentation. Through collaborative feature learning from such two correlated tasks, the shared fully convolutional layers produce effective features for object perception. Moreover, it is capable of capturing the semantic information on salient objects across different levels using the fully convolutional layers, which investigate the feature-sharing properties of salient object detection with a great reduction of feature redundancy. Finally, we present a graph Laplacian regularized nonlinear regression model for saliency refinement. Experimental results demonstrate the effectiveness of our approach in comparison with the state-of-the-art approaches."
},
{
"pmid": "26074671",
"title": "Grasp quality measures: review and performance.",
"abstract": "The correct grasp of objects is a key aspect for the right fulfillment of a given task. Obtaining a good grasp requires algorithms to automatically determine proper contact points on the object as well as proper hand configurations, especially when dexterous manipulation is desired, and the quantification of a good grasp requires the definition of suitable grasp quality measures. This article reviews the quality measures proposed in the literature to evaluate grasp quality. The quality measures are classified into two groups according to the main aspect they evaluate: location of contact points on the object and hand configuration. The approaches that combine different measures from the two previous groups to obtain a global quality measure are also reviewed, as well as some measures related to human hand studies and grasp performance. Several examples are presented to illustrate and compare the performance of the reviewed measures."
},
{
"pmid": "20507828",
"title": "Top-down and bottom-up control of visual selection.",
"abstract": "The present paper argues for the notion that when attention is spread across the visual field in the first sweep of information through the brain visual selection is completely stimulus-driven. Only later in time, through recurrent feedback processing, volitional control based on expectancy and goal set will bias visual selection in a top-down manner. Here we review behavioral evidence as well as evidence from ERP, fMRI, TMS and single cell recording consistent with stimulus-driven selection. Alternative viewpoints that assume a large role for top-down processing are discussed. It is argued that in most cases evidence supporting top-down control on visual selection in fact demonstrates top-down control on processes occurring later in time, following initial selection. We conclude that top-down knowledge regarding non-spatial features of the objects cannot alter the initial selection priority. Only by adjusting the size of the attentional window, the initial sweep of information through the brain may be altered in a top-down way."
},
{
"pmid": "17098563",
"title": "Modeling attention to salient proto-objects.",
"abstract": "Selective visual attention is believed to be responsible for serializing visual information for recognizing one object at a time in a complex scene. But how can we attend to objects before they are recognized? In coherence theory of visual cognition, so-called proto-objects form volatile units of visual information that can be accessed by selective attention and subsequently validated as actual objects. We propose a biologically plausible model of forming and attending to proto-objects in natural scenes. We demonstrate that the suggested model can enable a model of object recognition in cortex to expand from recognizing individual objects in isolation to sequentially recognizing all objects in a more complex scene."
}
] |
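The SegNet entry above describes a decoder that reuses the encoder's max-pooling indices for non-linear upsampling and then densifies the resulting sparse maps with trainable filters. A minimal sketch of that idea, assuming PyTorch and arbitrary layer sizes (illustrative only, not code from the SegNet authors):

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Toy encoder-decoder: the decoder reuses the encoder's max-pooling
    indices for upsampling, then trainable convolutions densify the sparse
    maps, as summarized in the SegNet abstract above."""
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)  # keep argmax locations
        self.unpool = nn.MaxUnpool2d(2, stride=2)                   # reuse them for upsampling
        self.dec = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.classifier = nn.Conv2d(16, n_classes, 1)               # pixel-wise class scores

    def forward(self, x):
        feats = self.enc(x)
        pooled, idx = self.pool(feats)   # downsample, remembering where each max came from
        up = self.unpool(pooled, idx)    # sparse, non-learned upsampling via stored indices
        dense = self.dec(up)             # trainable filters turn sparse maps into dense ones
        return self.classifier(dense)

# Usage: a 3-channel 64x64 input yields per-pixel logits of the same spatial size.
logits = TinyEncoderDecoder()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])
```

The unpooling step itself has no learned parameters; only the convolutions around it are trained, which is why this form of upsampling is cheap in parameters and memory.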
Frontiers in Aging Neuroscience | 31417397 | PMC6685087 | 10.3389/fnagi.2019.00194 | Layer-Wise Relevance Propagation for Explaining Deep Neural Network Decisions in MRI-Based Alzheimer's Disease Classification | Deep neural networks have led to state-of-the-art results in many medical imaging tasks including Alzheimer's disease (AD) detection based on structural magnetic resonance imaging (MRI) data. However, the network decisions are often perceived as being highly non-transparent, making it difficult to apply these algorithms in clinical routine. In this study, we propose using layer-wise relevance propagation (LRP) to visualize convolutional neural network decisions for AD based on MRI data. Similarly to other visualization methods, LRP produces a heatmap in the input space indicating the importance/relevance of each voxel contributing to the final classification outcome. In contrast to susceptibility maps produced by guided backpropagation (“Which change in voxels would change the outcome most?”), the LRP method is able to directly highlight positive contributions to the network classification in the input space. In particular, we show that (1) the LRP method is very specific for individuals (“Why does this person have AD?”) with high inter-patient variability, (2) there is very little relevance for AD in healthy controls and (3) areas that exhibit a lot of relevance correlate well with what is known from literature. To quantify the latter, we compute size-corrected metrics of the summed relevance per brain area, e.g., relevance density or relevance gain. Although these metrics produce very individual “fingerprints” of relevance patterns for AD patients, a lot of importance is put on areas in the temporal lobe including the hippocampus. After discussing several limitations such as sensitivity toward the underlying model and computation parameters, we conclude that LRP might have a high potential to assist clinicians in explaining neural network decisions for diagnosing AD (and potentially other diseases) based on structural MRI data. | 4.3. Related Work: Visualization of deep neural networks is a fairly new research area and different attempts have been made to provide intuitive explanations for neural network decisions. However, there is not yet a state-of-the-art visualization method, as saliency maps, for example, have been shown to be misleading (Adebayo et al., 2018). In Alzheimer's research, there are only a couple of studies that looked into different visualization methods based on MRI and/or PET data. Most of these studies either visualized filters and activations of the first or last layer (Sarraf and Tofighi, 2016; Lu et al., 2018; Ding et al., 2019) or used the occlusion method to exclude some parts (e.g., with a black patch) of the input image and recalculate the classifier output (Korolev et al., 2017; Esmaeilzadeh et al., 2018; Liu et al., 2018). Based on visual impression, they found that the networks focus primarily on areas known to be involved in AD, such as hippocampus, amygdala or ventricles, but occasionally also other areas such as thalamus or parietal lobe appear. Importantly, in contrast to our study, they did not quantitatively analyze the data, e.g., with respect to brain areas contained in an atlas or underlying neurobiological markers. Additionally, they did not compare different visualization methods or look for inter-individual differences.
One study, however, used gradient-weighted classification activation mapping (grad-CAM) and compared it to sensitivity analysis for AD classification (Yang et al., 2018). They demonstrate that these different visualization methods capture different aspects of the data and show high variability depending, e.g., on the resolution of the convolutional layers. In Rieke et al. (2018), gradient-based and occlusion methods (standard patch occlusion and brain area occlusion) were qualitatively and quantitatively compared for AD classification. High regional overlaps between the methods, mostly in the inferior and middle temporal gyrus, were found, but for gradient-based methods the importance was more widely distributed. Regarding the LRP method, we are only aware of one application in the neuroimaging field: Thomas et al. (2018) introduce interpretable recurrent networks for decoding cognitive states based on functional MRI data and demonstrate that the LRP method is capable of identifying relevant brain areas for the different tasks and different levels of data granularity. | [
"26161953",
"25682754",
"29198280",
"1759558",
"22016732",
"27708329",
"19460794",
"30398430",
"11561025",
"28417965",
"20139996",
"24634656",
"26879092",
"24484275",
"18202106",
"26017442",
"30050078",
"28778026",
"29572601",
"28264071",
"29632364",
"8232972",
"21802369",
"25344382",
"22305994",
"28276464",
"23134660",
"28414186",
"27567842",
"27239505",
"25783437",
"25042445",
"23047370",
"28087243",
"23932184",
"27702899"
] | [
{
"pmid": "26161953",
"title": "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.",
"abstract": "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package."
},
{
"pmid": "25682754",
"title": "The Scalable Brain Atlas: Instant Web-Based Access to Public Brain Atlases and Related Content.",
"abstract": "The Scalable Brain Atlas (SBA) is a collection of web services that provide unified access to a large collection of brain atlas templates for different species. Its main component is an atlas viewer that displays brain atlas data as a stack of slices in which stereotaxic coordinates and brain regions can be selected. These are subsequently used to launch web queries to resources that require coordinates or region names as input. It supports plugins which run inside the viewer and respond when a new slice, coordinate or region is selected. It contains 20 atlas templates in six species, and plugins to compute coordinate transformations, display anatomical connectivity and fiducial points, and retrieve properties, descriptions, definitions and 3d reconstructions of brain regions. The ambition of SBA is to provide a unified representation of all publicly available brain atlases directly in the web browser, while remaining a responsive and light weight resource that specializes in atlas comparisons, searches, coordinate transformations and interactive displays."
},
{
"pmid": "29198280",
"title": "Alzheimer's Disease: Past, Present, and Future.",
"abstract": "Although dementia has been described in ancient texts over many centuries (e.g., \"Be kind to your father, even if his mind fail him.\" - Old Testament: Sirach 3:12), our knowledge of its underlying causes is little more than a century old. Alzheimer published his now famous case study only 110 years ago, and our modern understanding of the disease that bears his name, and its neuropsychological consequences, really only began to accelerate in the 1980s. Since then we have witnessed an explosion of basic and translational research into the causes, characterizations, and possible treatments for Alzheimer's disease (AD) and other dementias. We review this lineage of work beginning with Alzheimer's own writings and drawings, then jump to the modern era beginning in the 1970s and early 1980s and provide a sampling of neuropsychological and other contextual work from each ensuing decade. During the 1980s our field began its foundational studies of profiling the neuropsychological deficits associated with AD and its differentiation from other dementias (e.g., cortical vs. subcortical dementias). The 1990s continued these efforts and began to identify the specific cognitive mechanisms affected by various neuropathologic substrates. The 2000s ushered in a focus on the study of prodromal stages of neurodegenerative disease before the full-blown dementia syndrome (i.e., mild cognitive impairment). The current decade has seen the rise of imaging and other biomarkers to characterize preclinical disease before the development of significant cognitive decline. Finally, we suggest future directions and predictions for dementia-related research and potential therapeutic interventions. (JINS, 2017, 23, 818-831)."
},
{
"pmid": "1759558",
"title": "Neuropathological stageing of Alzheimer-related changes.",
"abstract": "Eighty-three brains obtained at autopsy from nondemented and demented individuals were examined for extracellular amyloid deposits and intraneuronal neurofibrillary changes. The distribution pattern and packing density of amyloid deposits turned out to be of limited significance for differentiation of neuropathological stages. Neurofibrillary changes occurred in the form of neuritic plaques, neurofibrillary tangles and neuropil threads. The distribution of neuritic plaques varied widely not only within architectonic units but also from one individual to another. Neurofibrillary tangles and neuropil threads, in contrast, exhibited a characteristic distribution pattern permitting the differentiation of six stages. The first two stages were characterized by an either mild or severe alteration of the transentorhinal layer Pre-alpha (transentorhinal stages I-II). The two forms of limbic stages (stages III-IV) were marked by a conspicuous affection of layer Pre-alpha in both transentorhinal region and proper entorhinal cortex. In addition, there was mild involvement of the first Ammon's horn sector. The hallmark of the two isocortical stages (stages V-VI) was the destruction of virtually all isocortical association areas. The investigation showed that recognition of the six stages required qualitative evaluation of only a few key preparations."
},
{
"pmid": "22016732",
"title": "High dimensional classification of structural MRI Alzheimer's disease data based on large scale regularization.",
"abstract": "In this work we use a large scale regularization approach based on penalized logistic regression to automatically classify structural MRI images (sMRI) according to cognitive status. Its performance is illustrated using sMRI data from the Alzheimer Disease Neuroimaging Initiative (ADNI) clinical database. We downloaded sMRI data from 98 subjects (49 cognitive normal and 49 patients) matched by age and sex from the ADNI website. Images were segmented and normalized using SPM8 and ANTS software packages. Classification was performed using GLMNET library implementation of penalized logistic regression based on coordinate-wise descent optimization techniques. To avoid optimistic estimates classification accuracy, sensitivity, and specificity were determined based on a combination of three-way split of the data with nested 10-fold cross-validations. One of the main features of this approach is that classification is performed based on large scale regularization. The methodology presented here was highly accurate, sensitive, and specific when automatically classifying sMRI images of cognitive normal subjects and Alzheimer disease (AD) patients. Higher levels of accuracy, sensitivity, and specificity were achieved for gray matter (GM) volume maps (85.7, 82.9, and 90%, respectively) compared to white matter volume maps (81.1, 80.6, and 82.5%, respectively). We found that GM and white matter tissues carry useful information for discriminating patients from cognitive normal subjects using sMRI brain data. Although we have demonstrated the efficacy of this voxel-wise classification method in discriminating cognitive normal subjects from AD patients, in principle it could be applied to any clinical population."
},
{
"pmid": "19460794",
"title": "Automated MRI measures identify individuals with mild cognitive impairment and Alzheimer's disease.",
"abstract": "Mild cognitive impairment can represent a transitional state between normal ageing and Alzheimer's disease. Non-invasive diagnostic methods are needed to identify mild cognitive impairment individuals for early therapeutic interventions. Our objective was to determine whether automated magnetic resonance imaging-based measures could identify mild cognitive impairment individuals with a high degree of accuracy. Baseline volumetric T1-weighted magnetic resonance imaging scans of 313 individuals from two independent cohorts were examined using automated software tools to identify the volume and mean thickness of 34 neuroanatomic regions. The first cohort included 49 older controls and 48 individuals with mild cognitive impairment, while the second cohort included 94 older controls and 57 mild cognitive impairment individuals. Sixty-five patients with probable Alzheimer's disease were also included for comparison. For the discrimination of mild cognitive impairment, entorhinal cortex thickness, hippocampal volume and supramarginal gyrus thickness demonstrated an area under the curve of 0.91 (specificity 94%, sensitivity 74%, positive likelihood ratio 12.12, negative likelihood ratio 0.29) for the first cohort and an area under the curve of 0.95 (specificity 91%, sensitivity 90%, positive likelihood ratio 10.0, negative likelihood ratio 0.11) for the second cohort. For the discrimination of Alzheimer's disease, these three measures demonstrated an area under the curve of 1.0. The three magnetic resonance imaging measures demonstrated significant correlations with clinical and neuropsychological assessments as well as with cerebrospinal fluid levels of tau, hyperphosphorylated tau and abeta 42 proteins. These results demonstrate that automated magnetic resonance imaging measures can serve as an in vivo surrogate for disease severity, underlying neuropathology and as a non-invasive diagnostic method for mild cognitive impairment and Alzheimer's disease."
},
{
"pmid": "30398430",
"title": "A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain.",
"abstract": "Purpose To develop and validate a deep learning algorithm that predicts the final diagnosis of Alzheimer disease (AD), mild cognitive impairment, or neither at fluorine 18 (18F) fluorodeoxyglucose (FDG) PET of the brain and compare its performance to that of radiologic readers. Materials and Methods Prospective 18F-FDG PET brain images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (2109 imaging studies from 2005 to 2017, 1002 patients) and retrospective independent test set (40 imaging studies from 2006 to 2016, 40 patients) were collected. Final clinical diagnosis at follow-up was recorded. Convolutional neural network of InceptionV3 architecture was trained on 90% of ADNI data set and tested on the remaining 10%, as well as the independent test set, with performance compared to radiologic readers. Model was analyzed with sensitivity, specificity, receiver operating characteristic (ROC), saliency map, and t-distributed stochastic neighbor embedding. Results The algorithm achieved area under the ROC curve of 0.98 (95% confidence interval: 0.94, 1.00) when evaluated on predicting the final clinical diagnosis of AD in the independent test set (82% specificity at 100% sensitivity), an average of 75.8 months prior to the final diagnosis, which in ROC space outperformed reader performance (57% [four of seven] sensitivity, 91% [30 of 33] specificity; P < .05). Saliency map demonstrated attention to known areas of interest but with focus on the entire brain. Conclusion By using fluorine 18 fluorodeoxyglucose PET of the brain, a deep learning algorithm developed for early prediction of Alzheimer disease achieved 82% specificity at 100% sensitivity, an average of 75.8 months prior to the final diagnosis. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Larvie in this issue."
},
{
"pmid": "11561025",
"title": "Magnetic resonance imaging of the entorhinal cortex and hippocampus in mild cognitive impairment and Alzheimer's disease.",
"abstract": "OBJECTIVES\nTo explore volume changes of the entorhinal cortex (ERC) and hippocampus in mild cognitive impairment (MCI) and Alzheimer's disease (AD) compared with normal cognition (NC); to determine the powers of the ERC and the hippocampus for discrimination between these groups.\n\n\nMETHODS\nThis study included 40 subjects with NC, 36 patients with MCI, and 29 patients with AD. Volumes of the ERC and hippocampus were manually measured based on coronal T1 weighted MR images. Global cerebral changes were assessed using semiautomatic image segmentation.\n\n\nRESULTS\nBoth ERC and hippocampal volumes were reduced in MCI (ERC 13%, hippocampus 11%, p<0.05) and AD (ERC 39%, hippocampus 27%, p<0.01) compared with NC. Furthermore, AD showed greater volume losses in the ERC than in the hippocampus (p<0.01). In addition, AD and MCI also had cortical grey matter loss (p< 0.01) and ventricular enlargement (p<0.01) when compared with NC. There was a significant correlation between ERC and hippocampal volumes in MCI and AD (both p<0.001), but not in NC. Using ERC and hippocampus together improved discrimination between AD and CN but did not improve discrimination between MCI and NC. The ERC was better than the hippocampus for distinguishing MCI from AD. In addition, loss of cortical grey matter significantly contributed to the hippocampus for discriminating MCI and AD from NC.\n\n\nCONCLUSIONS\nVolume reductions in the ERC and hippocampus may be early signs of AD pathology that can be measured using MRI."
},
{
"pmid": "28417965",
"title": "Distinct subtypes of Alzheimer's disease based on patterns of brain atrophy: longitudinal trajectories and clinical applications.",
"abstract": "Atrophy patterns on MRI can reliably predict three neuropathological subtypes of Alzheimer's disease (AD): typical, limbic-predominant, or hippocampal-sparing. A method to enable their investigation in the clinical routine is still lacking. We aimed to (1) validate the combined use of visual rating scales for identification of AD subtypes; (2) characterise these subtypes at baseline and over two years; and (3) investigate how atrophy patterns and non-memory cognitive domains contribute to memory impairment. AD patients were classified as either typical AD (n = 100), limbic-predominant (n = 33), or hippocampal-sparing (n = 35) by using the Scheltens' scale for medial temporal lobe atrophy (MTA), the Koedam's scale for posterior atrophy (PA), and the Pasquier's global cortical atrophy scale for frontal atrophy (GCA-F). A fourth group with no atrophy was also identified (n = 30). 230 healthy controls were also included. There was great overlap among subtypes in demographic, clinical, and cognitive variables. Memory performance was more dependent on non-memory cognitive functions in hippocampal-sparing and the no atrophy group. Hippocampal-sparing and the no atrophy group showed less aggressive disease progression. Visual rating scales can be used to identify distinct AD subtypes. Recognizing AD heterogeneity is important and visual rating scales may facilitate investigation of AD heterogeneity in clinical routine."
},
{
"pmid": "20139996",
"title": "The clinical use of structural MRI in Alzheimer disease.",
"abstract": "Structural imaging based on magnetic resonance is an integral part of the clinical assessment of patients with suspected Alzheimer dementia. Prospective data on the natural history of change in structural markers from preclinical to overt stages of Alzheimer disease are radically changing how the disease is conceptualized, and will influence its future diagnosis and treatment. Atrophy of medial temporal structures is now considered to be a valid diagnostic marker at the mild cognitive impairment stage. Structural imaging is also included in diagnostic criteria for the most prevalent non-Alzheimer dementias, reflecting its value in differential diagnosis. In addition, rates of whole-brain and hippocampal atrophy are sensitive markers of neurodegeneration, and are increasingly used as outcome measures in trials of potentially disease-modifying therapies. Large multicenter studies are currently investigating the value of other imaging and nonimaging markers as adjuncts to clinical assessment in diagnosis and monitoring of progression. The utility of structural imaging and other markers will be increased by standardization of acquisition and analysis methods, and by development of robust algorithms for automated assessment."
},
{
"pmid": "24634656",
"title": "Regions of interest computed by SVM wrapped method for Alzheimer's disease examination from segmented MRI.",
"abstract": "Accurate identification of the most relevant brain regions linked to Alzheimer's disease (AD) is crucial in order to improve diagnosis techniques and to better understand this neurodegenerative process. For this purpose, statistical classification is suitable. In this work, a novel method based on support vector machine recursive feature elimination (SVM-RFE) is proposed to be applied on segmented brain MRI for detecting the most discriminant AD regions of interest (ROIs). The analyses are performed both on gray and white matter tissues, achieving up to 100% accuracy after classification and outperforming the results obtained by the standard t-test feature selection. The present method, applied on different subject sets, permits automatically determining high-resolution areas surrounding the hippocampal area without needing to divide the brain images according to any common template."
},
{
"pmid": "26879092",
"title": "Parallel Atrophy of Cortex and Basal Forebrain Cholinergic System in Mild Cognitive Impairment.",
"abstract": "The basal forebrain cholinergic system (BFCS) is the major source of acetylcholine for the cerebral cortex in humans. The aim was to analyze the pattern of BFCS and cortical atrophy in MCI patients to find evidence for a parallel atrophy along corticotopic organization of BFCS projections. BFCS volume and cortical thickness were analyzed using high-definition 3D structural magnetic resonance imaging data from 1.5-T and 3.0-T scanners of 64 MCI individuals and 62 cognitively healthy elderly controls from the European DTI study in dementia. BFCS volume reduction was correlated with thinning of cortical areas with known BFCS projections, such as Ch2 and parahippocampal gyrus in the MCI group, but not in the control group. Additionally, we found correlations between BFCS and cortex atrophy beyond the known corticotopic projections, such as between Ch4p and the cingulate gyrus. BFCS volume reduction was associated with regional thinning of cortical areas that included, but was not restricted to, the pattern of corticotopic projections of the BFCS as derived from animal studies. Our in vivo results may indicate the existence of more extended projections from the BFCS to the cerebral cortex in humans than that known from prior studies with animals."
},
{
"pmid": "24484275",
"title": "Amygdalar atrophy in early Alzheimer's disease.",
"abstract": "Current research suggests that amygdalar volumes in patients with Alzheimer's disease (AD) may be a relevant measure for its early diagnosis. However, findings are still inconclusive and controversial, partly because studies did not focus on the earliest stage of the disease. In this study, we measured amygdalar atrophy in 48 AD patients and 82 healthy controls (HC) by using a multi-atlas procedure, MAPER. Both hippocampal and amygdalar volumes, normalized by intracranial volume, were significantly reduced in AD compared with HC. The volume loss in the two structures was of similar magnitude (~24%). Amygdalar volume loss in AD predicted memory impairment after we controlled for age, gender, education, and, more important, hippocampal volume, indicating that memory decline correlates with amygdalar atrophy over and above hippocampal atrophy. Amygdalar volume may thus be as useful as hippocampal volume for the diagnosis of early AD. In addition, it could be an independent marker of cognitive decline. The role of the amygdala in AD should be reconsidered to guide further research and clinical practice."
},
{
"pmid": "18202106",
"title": "Automatic classification of MR scans in Alzheimer's disease.",
"abstract": "To be diagnostically useful, structural MRI must reliably distinguish Alzheimer's disease (AD) from normal aging in individual scans. Recent advances in statistical learning theory have led to the application of support vector machines to MRI for detection of a variety of disease states. The aims of this study were to assess how successfully support vector machines assigned individual diagnoses and to determine whether data-sets combined from multiple scanners and different centres could be used to obtain effective classification of scans. We used linear support vector machines to classify the grey matter segment of T1-weighted MR scans from pathologically proven AD patients and cognitively normal elderly individuals obtained from two centres with different scanning equipment. Because the clinical diagnosis of mild AD is difficult we also tested the ability of support vector machines to differentiate control scans from patients without post-mortem confirmation. Finally we sought to use these methods to differentiate scans between patients suffering from AD from those with frontotemporal lobar degeneration. Up to 96% of pathologically verified AD patients were correctly classified using whole brain images. Data from different centres were successfully combined achieving comparable results from the separate analyses. Importantly, data from one centre could be used to train a support vector machine to accurately differentiate AD and normal ageing scans obtained from another centre with different subjects and different scanner equipment. Patients with mild, clinically probable AD and age/sex matched controls were correctly separated in 89% of cases which is compatible with published diagnosis rates in the best clinical centres. This method correctly assigned 89% of patients with post-mortem confirmed diagnosis of either AD or frontotemporal lobar degeneration to their respective group. Our study leads to three conclusions: Firstly, support vector machines successfully separate patients with AD from healthy aging subjects. Secondly, they perform well in the differential diagnosis of two different forms of dementia. Thirdly, the method is robust and can be generalized across different centres. This suggests an important role for computer based diagnostic image analysis for clinical practice."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "30050078",
"title": "Structural brain imaging in Alzheimer's disease and mild cognitive impairment: biomarker analysis and shared morphometry database.",
"abstract": "Magnetic resonance (MR) imaging is a powerful technique for non-invasive in-vivo imaging of the human brain. We employed a recently validated method for robust cross-sectional and longitudinal segmentation of MR brain images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort. Specifically, we segmented 5074 MR brain images into 138 anatomical regions and extracted time-point specific structural volumes and volume change during follow-up intervals of 12 or 24 months. We assessed the extracted biomarkers by determining their power to predict diagnostic classification and by comparing atrophy rates to published meta-studies. The approach enables comprehensive analysis of structural changes within the whole brain. The discriminative power of individual biomarkers (volumes/atrophy rates) is on par with results published by other groups. We publish all quality-checked brain masks, structural segmentations, and extracted biomarkers along with this article. We further share the methodology for brain extraction (pincram) and segmentation (MALPEM, MALPEM4D) as open source projects with the community. The identified biomarkers hold great potential for deeper analysis, and the validated methodology can readily be applied to other imaging cohorts."
},
{
"pmid": "28778026",
"title": "A survey on deep learning in medical image analysis.",
"abstract": "Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research."
},
{
"pmid": "29572601",
"title": "Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.",
"abstract": "Accurate and early diagnosis of Alzheimer's disease (AD) plays important role for patient care and development of future treatment. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), are providing powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied on analysis of multi-modality neuroimages for quantitative evaluation and computer-aided-diagnosis (CAD) of AD. Most existing methods extract the hand-craft imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by softmax layer is cascaded to ensemble the high-level features learned from the multi-modality and generate the latent multimodal correlation features of the corresponding image patches for classification task. Finally, these learned features are combined by a fully connected layer followed by softmax layer for AD classification. The proposed method can automatically learn the generic multi-level and multimodal features from multiple imaging modalities for classification, which are robust to the scale and rotation variations to some extent. No image segmentation and rigid registration are required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects including 93 AD patients, 204 mild cognitive impairment (MCI, 76 pMCI +128 sMCI) and 100 normal controls (NC) from Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification pMCI vs. NC, demonstrating the promising classification performance."
},
{
"pmid": "28264071",
"title": "Prediction and classification of Alzheimer disease based on quantification of MRI deformation.",
"abstract": "Detecting early morphological changes in the brain and making early diagnosis are important for Alzheimer's disease (AD). High resolution magnetic resonance imaging can be used to help diagnosis and prediction of the disease. In this paper, we proposed a machine learning method to discriminate patients with AD or mild cognitive impairment (MCI) from healthy elderly and to predict the AD conversion in MCI patients by computing and analyzing the regional morphological differences of brain between groups. Distance between each pair of subjects was quantified from a symmetric diffeomorphic registration, followed by an embedding algorithm and a learning approach for classification. The proposed method obtained accuracy of 96.5% in differentiating mild AD from healthy elderly with the whole-brain gray matter or temporal lobe as region of interest (ROI), 91.74% in differentiating progressive MCI from healthy elderly and 88.99% in classifying progressive MCI versus stable MCI with amygdala or hippocampus as ROI. This deformation-based method has made full use of the pair-wise macroscopic shape difference between groups and consequently increased the power for discrimination."
},
{
"pmid": "29632364",
"title": "Multimodal and Multiscale Deep Neural Networks for the Early Diagnosis of Alzheimer's Disease using structural MR and FDG-PET images.",
"abstract": "Alzheimer's Disease (AD) is a progressive neurodegenerative disease where biomarkers for disease based on pathophysiology may be able to provide objective measures for disease diagnosis and staging. Neuroimaging scans acquired from MRI and metabolism images obtained by FDG-PET provide in-vivo measurements of structure and function (glucose metabolism) in a living brain. It is hypothesized that combining multiple different image modalities providing complementary information could help improve early diagnosis of AD. In this paper, we propose a novel deep-learning-based framework to discriminate individuals with AD utilizing a multimodal and multiscale deep neural network. Our method delivers 82.4% accuracy in identifying the individuals with mild cognitive impairment (MCI) who will convert to AD at 3 years prior to conversion (86.4% combined accuracy for conversion within 1-3 years), a 94.23% sensitivity in classifying individuals with clinical diagnosis of probable AD, and a 86.3% specificity in classifying non-demented controls improving upon results in published literature."
},
{
"pmid": "21802369",
"title": "Neuropathologically defined subtypes of Alzheimer's disease with distinct clinical characteristics: a retrospective study.",
"abstract": "BACKGROUND\nNeurofibrillary pathology has a stereotypical progression in Alzheimer's disease (AD) that is encapsulated in the Braak staging scheme; however, some AD cases are atypical and do not fit into this scheme. We aimed to compare clinical and neuropathological features between typical and atypical AD cases.\n\n\nMETHODS\nAD cases with a Braak neurofibrillary tangle stage of more than IV were identified from a brain bank database. By use of thioflavin-S fluorescence microscopy, we assessed the density and the distribution of neurofibrillary tangles in three cortical regions and two hippocampal sectors. These data were used to construct an algorithm to classify AD cases into typical, hippocampal sparing, or limbic predominant. Classified cases were then compared for clinical, demographic, pathological, and genetic characteristics. An independent cohort of AD cases was assessed to validate findings from the initial cohort.\n\n\nFINDINGS\n889 cases of AD, 398 men and 491 women with age at death of 37-103 years, were classified with the algorithm as hippocampal sparing (97 cases [11%]), typical (665 [75%]), or limbic predominant (127 [14%]). By comparison with typical AD, neurofibrillary tangle counts per 0.125 mm(2) in hippocampal sparing cases were higher in cortical areas (median 13, IQR 11-16) and lower in the hippocampus (7.5, 5.2-9.5), whereas counts in limbic-predominant cases were lower in cortical areas (4.3, 3.0-5.7) and higher in the hippocampus (27, 22-35). Hippocampal sparing cases had less hippocampal atrophy than did typical and limbic-predominant cases. Patients with hippocampal sparing AD were younger at death (mean 72 years [SD 10]) and a higher proportion of them were men (61 [63%]), whereas those with limbic-predominant AD were older (mean 86 years [SD 6]) and a higher proportion of them were women (87 [69%]). Microtubule-associated protein tau (MAPT) H1H1 genotype was more common in limbic-predominant AD (54 [70%]) than in hippocampal sparing AD (24 [46%]; p=0.011), but did not differ significantly between limbic-predominant and typical AD (204 [59%]; p=0.11). Apolipoprotein E (APOE) ɛ4 allele status differed between AD subtypes only when data were stratified by age at onset. Clinical presentation, age at onset, disease duration, and rate of cognitive decline differed between the AD subtypes. These findings were confirmed in a validation cohort of 113 patients with AD.\n\n\nINTERPRETATION\nThese data support the hypothesis that AD has distinct clinicopathological subtypes. Hippocampal sparing and limbic-predominant AD subtypes might account for about 25% of cases, and hence should be considered when designing clinical, genetic, biomarker, and treatment studies in patients with AD.\n\n\nFUNDING\nUS National Institutes of Health via Mayo Alzheimer's Disease Research Center, Mayo Clinic Study on Aging, Florida Alzheimer's Disease Research Center, and Einstein Aging Study; and State of Florida Alzheimer's Disease Initiative."
},
{
"pmid": "25344382",
"title": "Anatomical heterogeneity of Alzheimer disease: based on cortical thickness on MRIs.",
"abstract": "OBJECTIVE\nBecause the signs associated with dementia due to Alzheimer disease (AD) can be heterogeneous, the goal of this study was to use 3-dimensional MRI to examine the various patterns of cortical atrophy that can be associated with dementia of AD type, and to investigate whether AD dementia can be categorized into anatomical subtypes.\n\n\nMETHODS\nHigh-resolution T1-weighted volumetric MRIs were taken of 152 patients in their earlier stages of AD dementia. The images were processed to measure cortical thickness, and hierarchical agglomerative cluster analysis was performed using Ward's clustering linkage. The identified clusters of patients were compared with an age- and sex-matched control group using a general linear model.\n\n\nRESULTS\nThere were several distinct patterns of cortical atrophy and the number of patterns varied according to the level of cluster analyses. At the 3-cluster level, patients were divided into (1) bilateral medial temporal-dominant atrophy subtype (n = 52, ∼ 34.2%), (2) parietal-dominant subtype (n = 28, ∼ 18.4%) in which the bilateral parietal lobes, the precuneus, along with bilateral dorsolateral frontal lobes, were atrophic, and (3) diffuse atrophy subtype (n = 72, ∼ 47.4%) in which nearly all association cortices revealed atrophy. These 3 subtypes also differed in their demographic and clinical features.\n\n\nCONCLUSIONS\nThis cluster analysis of cortical thickness of the entire brain showed that AD dementia in the earlier stages can be categorized into various anatomical subtypes, with distinct clinical features."
},
{
"pmid": "22305994",
"title": "Using Support Vector Machine to identify imaging biomarkers of neurological and psychiatric disease: a critical review.",
"abstract": "Standard univariate analysis of neuroimaging data has revealed a host of neuroanatomical and functional differences between healthy individuals and patients suffering a wide range of neurological and psychiatric disorders. Significant only at group level however these findings have had limited clinical translation, and recent attention has turned toward alternative forms of analysis, including Support-Vector-Machine (SVM). A type of machine learning, SVM allows categorisation of an individual's previously unseen data into a predefined group using a classification algorithm, developed on a training data set. In recent years, SVM has been successfully applied in the context of disease diagnosis, transition prediction and treatment prognosis, using both structural and functional neuroimaging data. Here we provide a brief overview of the method and review those studies that applied it to the investigation of Alzheimer's disease, schizophrenia, major depression, bipolar disorder, presymptomatic Huntington's disease, Parkinson's disease and autistic spectrum disorder. We conclude by discussing the main theoretical and practical challenges associated with the implementation of this method into the clinic and possible future directions."
},
{
"pmid": "28276464",
"title": "Robust Identification of Alzheimer's Disease subtypes based on cortical atrophy patterns.",
"abstract": "Accumulating evidence suggests that Alzheimer's disease (AD) is heterogenous and can be classified into several subtypes. Here, we propose a robust subtyping method for AD based on cortical atrophy patterns and graph theory. We calculated similarities between subjects in their atrophy patterns throughout the whole brain, and clustered subjects with similar atrophy patterns using the Louvain method for modular organization extraction. We applied our method to AD patients recruited at Samsung Medical Center and externally validated our method by using the AD Neuroimaging Initiative (ADNI) dataset. Our method categorized very mild AD into three clinically distinct subtypes with high reproducibility (>90%); the parietal-predominant (P), medial temporal-predominant (MT), and diffuse (D) atrophy subtype. The P subtype showed the worst clinical presentation throughout the cognitive domains, while the MT and D subtypes exhibited relatively mild presentation. The MT subtype revealed more impaired language and executive function compared to the D subtype."
},
{
"pmid": "23134660",
"title": "Cortical atrophy in presymptomatic Alzheimer's disease presenilin 1 mutation carriers.",
"abstract": "BACKGROUND\nSporadic late-onset Alzheimer's disease (AD) dementia has been associated with a 'signature' of cortical atrophy in paralimbic and heteromodal association regions measured with MRI.\n\n\nOBJECTIVE\nTo investigate whether a similar pattern of cortical atrophy is present in presymptomatic presenilin 1 E280A mutation carriers an average of 6 years before clinical symptom onset.\n\n\nMETHODS\n40 cognitively normal volunteers from a Colombian population with familial AD were included; 18 were positive for the AD-associated presenilin 1 mutation (carriers, mean age=38) whereas 22 were non-carriers. T1-weighted volumetric MRI images were acquired and cortical thickness was measured. A priori regions of interest from our previous work were used to obtain thickness from AD-signature regions.\n\n\nRESULTS\nCompared to non-carriers, presymptomatic presenilin 1 mutation carriers exhibited thinner cortex within the AD-signature summary measure (p<0.008). Analyses of individual regions demonstrated thinner angular gyrus, precuneus and superior parietal lobule in carriers compared to non-carriers, with trend-level effects in the medial temporal lobe.\n\n\nCONCLUSION\nResults demonstrate that cognitively normal individuals genetically determined to develop AD have a thinner cerebral cortex than non-carriers in regions known to be affected by typical late-onset sporadic AD. These findings provide further support for the hypothesis that cortical atrophy is present in preclinical AD more than 5 years prior to symptom onset. Further research is needed to determine whether this method could be used to characterise the age-dependent trajectory of cortical atrophy in presymptomatic stages of AD."
},
{
"pmid": "28414186",
"title": "A review on neuroimaging-based classification studies and associated feature extraction methods for Alzheimer's disease and its prodromal stages.",
"abstract": "Neuroimaging has made it possible to measure pathological brain changes associated with Alzheimer's disease (AD) in vivo. Over the past decade, these measures have been increasingly integrated into imaging signatures of AD by means of classification frameworks, offering promising tools for individualized diagnosis and prognosis. We reviewed neuroimaging-based studies for AD and mild cognitive impairment classification, selected after online database searches in Google Scholar and PubMed (January, 1985-June, 2016). We categorized these studies based on the following neuroimaging modalities (and sub-categorized based on features extracted as a post-processing step from these modalities): i) structural magnetic resonance imaging [MRI] (tissue density, cortical surface, and hippocampal measurements), ii) functional MRI (functional coherence of different brain regions, and the strength of the functional connectivity), iii) diffusion tensor imaging (patterns along the white matter fibers), iv) fluorodeoxyglucose positron emission tomography (FDG-PET) (metabolic rate of cerebral glucose), and v) amyloid-PET (amyloid burden). The studies reviewed indicate that the classification frameworks formulated on the basis of these features show promise for individualized diagnosis and prediction of clinical progression. Finally, we provided a detailed account of AD classification challenges and addressed some future research directions."
},
{
"pmid": "27567842",
"title": "Combination of Structural MRI and FDG-PET of the Brain Improves Diagnostic Accuracy in Newly Manifested Cognitive Impairment in Geriatric Inpatients.",
"abstract": "BACKGROUND\nThe cause of cognitive impairment in acutely hospitalized geriatric patients is often unclear. The diagnostic process is challenging but important in order to treat potentially life-threatening etiologies or identify underlying neurodegenerative disease.\n\n\nOBJECTIVE\nTo evaluate the add-on diagnostic value of structural and metabolic neuroimaging in newly manifested cognitive impairment in elderly geriatric inpatients.\n\n\nMETHODS\nEighty-one inpatients (55 females, 81.6±5.5 y) without history of cognitive complaints prior to hospitalization were recruited in 10 acute geriatrics clinics. Primary inclusion criterion was a clinical hypothesis of Alzheimer's disease (AD), cerebrovascular disease (CVD), or mixed AD+CVD etiology (MD), which remained uncertain after standard diagnostic workup. Additional procedures performed after enrollment included detailed neuropsychological testing and structural MRI and FDG-PET of the brain. An interdisciplinary expert team established the most probable etiologic diagnosis (non-neurodegenerative, AD, CVD, or MD) integrating all available data. Automatic multimodal classification based on Random Undersampling Boosting was used for rater-independent assessment of the complementary contribution of the additional diagnostic procedures to the etiologic diagnosis.\n\n\nRESULTS\nAutomatic 4-class classification based on all diagnostic routine standard procedures combined reproduced the etiologic expert diagnosis in 31% of the patients (p = 0.100, chance level 25%). Highest accuracy by a single modality was achieved by MRI or FDG-PET (both 45%, p≤0.001). Integration of all modalities resulted in 76% accuracy (p≤0.001).\n\n\nCONCLUSION\nThese results indicate substantial improvement of diagnostic accuracy in uncertain de novo cognitive impairment in acutely hospitalized geriatric patients with the integration of structural MRI and brain FDG-PET into the diagnostic process."
},
{
"pmid": "27239505",
"title": "Multimodal prediction of conversion to Alzheimer's disease based on incomplete biomarkers.",
"abstract": "BACKGROUND\nThis study investigates the prediction of mild cognitive impairment-to-Alzheimer's disease (MCI-to-AD) conversion based on extensive multimodal data with varying degrees of missing values.\n\n\nMETHODS\nBased on Alzheimer's Disease Neuroimaging Initiative data from MCI-patients including all available modalities, we predicted the conversion to AD within 3 years. Different ways of replacing missing data in combination with different classification algorithms are compared. The performance was evaluated on features prioritized by experts and automatically selected features.\n\n\nRESULTS\nThe conversion to AD could be predicted with a maximal accuracy of 73% using support vector machines and features chosen by experts. Among data modalities, neuropsychological, magnetic resonance imaging, and positron emission tomography data were most informative. The best single feature was the functional activities questionnaire.\n\n\nCONCLUSION\nExtensive multimodal and incomplete data can be adequately handled by a combination of missing data substitution, feature selection, and classification."
},
{
"pmid": "25783437",
"title": "The identification of cognitive subtypes in Alzheimer's disease dementia using latent class analysis.",
"abstract": "OBJECTIVE\nAlzheimer's disease (AD) is a heterogeneous disorder with complex underlying neuropathology that is still not completely understood. For better understanding of this heterogeneity, we aimed to identify cognitive subtypes using latent class analysis (LCA) in a large sample of patients with AD dementia. In addition, we explored the relationship between the identified cognitive subtypes, and their demographical and neurobiological characteristics.\n\n\nMETHODS\nWe performed LCA based on neuropsychological test results of 938 consecutive probable patients with AD dementia using Mini-Mental State Examination as the covariate. Subsequently, we performed multinomial logistic regression analysis with cluster membership as dependent variable and dichotomised demographics, APOE genotype, cerebrospinal fluid biomarkers and MRI characteristics as independent variables.\n\n\nRESULTS\nLCA revealed eight clusters characterised by distinct cognitive profile and disease severity. Memory-impaired clusters-mild-memory (MILD-MEM) and moderate-memory (MOD-MEM)-included 43% of patients. Memory-spared clusters mild-visuospatial-language (MILD-VILA), mild-executive (MILD-EXE) and moderate-visuospatial (MOD-VISP) -included 29% of patients. Memory-indifferent clusters mild-diffuse (MILD-DIFF), moderate-language (MOD-LAN) and severe-diffuse (SEV-DIFF) -included 28% of patients. Cognitive clusters were associated with distinct demographical and neurobiological characteristics. In particular, the memory-spared MOD-VISP cluster was associated with younger age, APOE e4 negative genotype and prominent atrophy of the posterior cortex.\n\n\nCONCLUSIONS\nUsing LCA, we identified eight distinct cognitive subtypes in a large sample of patients with AD dementia. Cognitive clusters were associated with distinct demographical and neurobiological characteristics."
},
{
"pmid": "25042445",
"title": "Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis.",
"abstract": "For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also fusion of different modalities can further provide the complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To our best knowledge, the previous methods in the literature mostly used hand-crafted features such as cortical thickness, gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating into a long vector or transforming into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use Deep Boltzmann Machine (DBM)(2), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on ADNI dataset and compared with the state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained the maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET."
},
{
"pmid": "23047370",
"title": "Entorhinal cortex thickness predicts cognitive decline in Alzheimer's disease.",
"abstract": "Biomarkers for Alzheimer's disease (AD) based on non-invasive methods are highly desirable for diagnosis, disease progression, and monitoring therapeutics. We aimed to study the use of hippocampal volume, entorhinal cortex (ERC) thickness, and whole brain volume (WBV) as predictors of cognitive change in patients with AD. 120 AD subjects, 106 mild cognitive impairment (MCI), and 99 non demented controls (NDC) from the multi-center pan-European AddNeuroMed study underwent MRI scanning at baseline and clinical evaluations at quarterly follow-up up to 1 year. The rate of cognitive decline was estimated using cognitive outcomes, Mini-Mental State Examination (MMSE) and Alzheimer disease assessment scale-cognitive (ADAS-cog) by fitting a random intercept and slope model. AD subjects had smaller ERC thickness and hippocampal and WBV volumes compared to MCI and NDC subjects. Within the AD group, ERC > WBV was significantly associated with baseline cognition (MMSE, ADAS-cog) and disease severity (Clinical Dementia Rating). Baseline ERC thickness was associated with both longitudinal MMSE and ADAS-cog score changes and WBV with ADAS-cog decline. These data indicate that AD subjects with thinner ERC had lower baseline cognitive scores, higher disease severity, and predicted greater subsequent cognitive decline at one year follow up. ERC is a region known to be affected early in the disease. Therefore, the rate of atrophy in this structure is expected to be higher since neurodegeneration begins earlier. Focusing on structural analyses that predict decline can identify those individuals at greatest risk for future cognitive loss. This may have potential for increasing the efficacy of early intervention."
},
{
"pmid": "28087243",
"title": "Using deep learning to investigate the neuroimaging correlates of psychiatric and neurological disorders: Methods and applications.",
"abstract": "Deep learning (DL) is a family of machine learning methods that has gained considerable attention in the scientific community, breaking benchmark records in areas such as speech and visual recognition. DL differs from conventional machine learning methods by virtue of its ability to learn the optimal representation from the raw data through consecutive nonlinear transformations, achieving increasingly higher levels of abstraction and complexity. Given its ability to detect abstract and complex patterns, DL has been applied in neuroimaging studies of psychiatric and neurological disorders, which are characterised by subtle and diffuse alterations. Here we introduce the underlying concepts of DL and review studies that have used this approach to classify brain-based disorders. The results of these studies indicate that DL could be a powerful tool in the current search for biomarkers of psychiatric and neurologic disease. We conclude our review by discussing the main promises and challenges of using DL to elucidate brain-based disorders, as well as possible directions for future research."
},
{
"pmid": "23932184",
"title": "The Alzheimer's Disease Neuroimaging Initiative: a review of papers published since its inception.",
"abstract": "The Alzheimer's Disease Neuroimaging Initiative (ADNI) is an ongoing, longitudinal, multicenter study designed to develop clinical, imaging, genetic, and biochemical biomarkers for the early detection and tracking of Alzheimer's disease (AD). The study aimed to enroll 400 subjects with early mild cognitive impairment (MCI), 200 subjects with early AD, and 200 normal control subjects; $67 million funding was provided by both the public and private sectors, including the National Institute on Aging, 13 pharmaceutical companies, and 2 foundations that provided support through the Foundation for the National Institutes of Health. This article reviews all papers published since the inception of the initiative and summarizes the results as of February 2011. The major accomplishments of ADNI have been as follows: (1) the development of standardized methods for clinical tests, magnetic resonance imaging (MRI), positron emission tomography (PET), and cerebrospinal fluid (CSF) biomarkers in a multicenter setting; (2) elucidation of the patterns and rates of change of imaging and CSF biomarker measurements in control subjects, MCI patients, and AD patients. CSF biomarkers are consistent with disease trajectories predicted by β-amyloid cascade (Hardy, J Alzheimers Dis 2006;9(Suppl 3):151-3) and tau-mediated neurodegeneration hypotheses for AD, whereas brain atrophy and hypometabolism levels show predicted patterns but exhibit differing rates of change depending on region and disease severity; (3) the assessment of alternative methods of diagnostic categorization. Currently, the best classifiers combine optimum features from multiple modalities, including MRI, [(18)F]-fluorodeoxyglucose-PET, CSF biomarkers, and clinical tests; (4) the development of methods for the early detection of AD. CSF biomarkers, β-amyloid 42 and tau, as well as amyloid PET may reflect the earliest steps in AD pathology in mildly symptomatic or even nonsymptomatic subjects, and are leading candidates for the detection of AD in its preclinical stages; (5) the improvement of clinical trial efficiency through the identification of subjects most likely to undergo imminent future clinical decline and the use of more sensitive outcome measures to reduce sample sizes. Baseline cognitive and/or MRI measures generally predicted future decline better than other modalities, whereas MRI measures of change were shown to be the most efficient outcome measures; (6) the confirmation of the AD risk loci CLU, CR1, and PICALM and the identification of novel candidate risk loci; (7) worldwide impact through the establishment of ADNI-like programs in Europe, Asia, and Australia; (8) understanding the biology and pathobiology of normal aging, MCI, and AD through integration of ADNI biomarker data with clinical data from ADNI to stimulate research that will resolve controversies about competing hypotheses on the etiopathogenesis of AD, thereby advancing efforts to find disease-modifying drugs for AD; and (9) the establishment of infrastructure to allow sharing of all raw and processed data without embargo to interested scientific investigators throughout the world. The ADNI study was extended by a 2-year Grand Opportunities grant in 2009 and a renewal of ADNI (ADNI-2) in October 2010 through to 2016, with enrollment of an additional 550 participants."
},
{
"pmid": "27702899",
"title": "Bayesian model reveals latent atrophy factors with dissociable cognitive trajectories in Alzheimer's disease.",
"abstract": "We used a data-driven Bayesian model to automatically identify distinct latent factors of overlapping atrophy patterns from voxelwise structural MRIs of late-onset Alzheimer's disease (AD) dementia patients. Our approach estimated the extent to which multiple distinct atrophy patterns were expressed within each participant rather than assuming that each participant expressed a single atrophy factor. The model revealed a temporal atrophy factor (medial temporal cortex, hippocampus, and amygdala), a subcortical atrophy factor (striatum, thalamus, and cerebellum), and a cortical atrophy factor (frontal, parietal, lateral temporal, and lateral occipital cortices). To explore the influence of each factor in early AD, atrophy factor compositions were inferred in beta-amyloid-positive (Aβ+) mild cognitively impaired (MCI) and cognitively normal (CN) participants. All three factors were associated with memory decline across the entire clinical spectrum, whereas the cortical factor was associated with executive function decline in Aβ+ MCI participants and AD dementia patients. Direct comparison between factors revealed that the temporal factor showed the strongest association with memory, whereas the cortical factor showed the strongest association with executive function. The subcortical factor was associated with the slowest decline for both memory and executive function compared with temporal and cortical factors. These results suggest that distinct patterns of atrophy influence decline across different cognitive domains. Quantification of this heterogeneity may enable the computation of individual-level predictions relevant for disease monitoring and customized therapies. Factor compositions of participants and code used in this article are publicly available for future research."
}
] |
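One of the reference abstracts above (pmid 25042445) describes fusing MRI- and PET-derived features with a multimodal Deep Boltzmann Machine. Purely as a hedged illustration, and not the cited authors' method, the sketch below shows a much simpler feature-concatenation baseline for combining two modalities in one classifier; every array, dimension, and label in it is a synthetic assumption.

```python
# Hedged sketch only: naive feature-concatenation fusion of two neuroimaging
# modalities, used here as a simple stand-in for the multimodal DBM approach
# described in the cited abstract. All data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_subjects = 120
mri = rng.normal(size=(n_subjects, 60))   # hypothetical MRI patch features
pet = rng.normal(size=(n_subjects, 60))   # hypothetical PET patch features
y = rng.integers(0, 2, size=n_subjects)   # hypothetical AD (1) vs. control (0) labels

X = np.hstack([mri, pet])                 # feature-level fusion by concatenation
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```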
JMIR mHealth and uHealth | 31342903 | PMC6685126 | 10.2196/13209 | Identifying Behavioral Phenotypes of Loneliness and Social Isolation with Passive Sensing: Statistical Analysis, Data Mining and Machine Learning of Smartphone and Fitbit Data | BackgroundFeelings of loneliness are associated with poor physical and mental health. Detection of loneliness through passive sensing on personal devices can lead to the development of interventions aimed at decreasing rates of loneliness.ObjectiveThe aim of this study was to explore the potential of using passive sensing to infer levels of loneliness and to identify the corresponding behavioral patterns.MethodsData were collected from smartphones and Fitbits (Flex 2) of 160 college students over a semester. The participants completed the University of California, Los Angeles (UCLA) loneliness questionnaire at the beginning and end of the semester. For a classification purpose, the scores were categorized into high (questionnaire score>40) and low (≤40) levels of loneliness. Daily features were extracted from both devices to capture activity and mobility, communication and phone usage, and sleep behaviors. The features were then averaged to generate semester-level features. We used 3 analytic methods: (1) statistical analysis to provide an overview of loneliness in college students, (2) data mining using the Apriori algorithm to extract behavior patterns associated with loneliness, and (3) machine learning classification to infer the level of loneliness and the change in levels of loneliness using an ensemble of gradient boosting and logistic regression algorithms with feature selection in a leave-one-student-out cross-validation manner.ResultsThe average loneliness score from the presurveys and postsurveys was above 43 (presurvey SD 9.4 and postsurvey SD 10.4), and the majority of participants fell into the high loneliness category (scores above 40) with 63.8% (102/160) in the presurvey and 58.8% (94/160) in the postsurvey. Scores greater than 1 standard deviation above the mean were observed in 12.5% (20/160) of the participants in both pre- and postsurvey scores. The majority of scores, however, fell between 1 standard deviation below and above the mean (pre=66.9% [107/160] and post=73.1% [117/160]).Our machine learning pipeline achieved an accuracy of 80.2% in detecting the binary level of loneliness and an 88.4% accuracy in detecting change in the loneliness level. The mining of associations between classifier-selected behavioral features and loneliness indicated that compared with students with low loneliness, students with high levels of loneliness were spending less time outside of campus during evening hours on weekends and spending less time in places for social events in the evening on weekdays (support=17% and confidence=92%). The analysis also indicated that more activity and less sedentary behavior, especially in the evening, was associated with a decrease in levels of loneliness from the beginning of the semester to the end of it (support=31% and confidence=92%).ConclusionsPassive sensing has the potential for detecting loneliness in college students and identifying the associated behavioral patterns. These findings highlight intervention opportunities through mobile technology to reduce the impact of loneliness on individuals’ health and well-being. | Related WorkPulekar et al [11] studied the first question in a small study with 9 college students over 2 weeks. 
Data logs of social interactions, communication, and smartphone activity were analyzed to detect loneliness and its relationship with personality traits. The study reports 90% accuracy in classifying loneliness using the smartphone features that were mostly correlated with the loneliness score. However, the small sample size, the short duration of the data collection phase, and missing details in the machine learning approach, especially the classification evaluation, make the results difficult to generalize and build on. Sanchez et al [12] used machine learning to infer the level of loneliness in 12 older adults who used a mobile app for one week. Call logs and global positioning system (GPS) coordinates were collected from the phones. A total of 4 models for family loneliness, spousal loneliness, social loneliness, and existential crisis were built with a reported accuracy of 91.6%, 83.3%, 66.6%, and 83.3%, respectively. However, similar to the results of the study by Pulekar et al, these results may fail to generalize because of the small sample and short duration of data collection.A few studies have explored the second question using correlation analysis to understand relationships between single behavioral signals, such as level of physical activity, mobility, social interactions, and loneliness [13-15]. Wang et al [14] analyzed smartphone data collected from 40 students over a spring semester and found negative correlations between loneliness and activity duration for day and evening times, traveled distance, and indoor mobility during the day. A related study from the same group found statistically significant correlations (P<.01) between kinesthetic activities and change in loneliness but no relationship between loneliness and sleep duration, geospatial activity, or speech duration [13]. Gao et al [15] found that people with higher levels of loneliness made or received fewer phone calls and used certain types of apps, such as health and fitness, social media, and Web browsing, more frequently than those with low levels of loneliness. Our data mining approach, in addition to providing similar behavioral features to those reported by Wang et al [14], presents an innovative method for extracting the combined behavioral patterns in our participant population. For example, we can observe that compared with students with a low level of loneliness, students with a high level of loneliness unlock their phones in different time segments during weekends, spend less time off-campus during evening hours on weekends, and socialize less during evening hours on weekdays. To our knowledge, this study introduces, for the first time, an approach toward extracting combined behavioral patterns through data mining and their associations with a mental health outcome, such as loneliness, from passive sensing data. | [
"24067110",
"20716644",
"10414684",
"3399889",
"20668659",
"25910392",
"25844912",
"27478700",
"30093371",
"8576833",
"26180009"
] | [
{
"pmid": "24067110",
"title": "Evolutionary mechanisms for loneliness.",
"abstract": "Robert Weiss (1973) conceptualised loneliness as perceived social isolation, which he described as a gnawing, chronic disease without redeeming features. On the scale of everyday life, it is understandable how something as personally aversive as loneliness could be regarded as a blight on human existence. However, evolutionary time and evolutionary forces operate at such a different scale of organisation than we experience in everyday life that personal experience is not sufficient to understand the role of loneliness in human existence. Research over the past decade suggests a very different view of loneliness than suggested by personal experience, one in which loneliness serves a variety of adaptive functions in specific habitats. We review evidence on the heritability of loneliness and outline an evolutionary theory of loneliness, with an emphasis on its potential adaptive value in an evolutionary timescale."
},
{
"pmid": "20716644",
"title": "A meta-analysis of interventions to reduce loneliness.",
"abstract": "Social and demographic trends are placing an increasing number of adults at risk for loneliness, an established risk factor for physical and mental illness. The growing costs of loneliness have led to a number of loneliness reduction interventions. Qualitative reviews have identified four primary intervention strategies: (a) improving social skills, (b) enhancing social support, (c) increasing opportunities for social contact, and (d) addressing maladaptive social cognition. An integrative meta-analysis of loneliness reduction interventions was conducted to quantify the effects of each strategy and to examine the potential role of moderator variables. Results revealed that single-group pre-post and nonrandomized comparison studies yielded larger mean effect sizes relative to randomized comparison studies. Among studies that used the latter design, the most successful interventions addressed maladaptive social cognition. This is consistent with current theories regarding loneliness and its etiology. Theoretical and methodological issues associated with designing new loneliness reduction interventions are discussed."
},
{
"pmid": "10414684",
"title": "The effects of sense of belonging, social support, conflict, and loneliness on depression.",
"abstract": "BACKGROUND\nA number of interpersonal phenomena have been linked to depression, including sense of belonging, social support, conflict, and loneliness.\n\n\nOBJECTIVES\nTo examine the effects of the interpersonal phenomena of sense of belonging, social support, loneliness, and conflict on depression, and to describe the predictive value of sense of belonging for depression in the context of other interpersonal phenomenon.\n\n\nMETHOD\nA sample of clients with major depressive disorder and students in a midwestern community college participated in the study by completing questionnaires.\n\n\nRESULTS\nPath analysis showed significant direct paths as postulated, with 64% of the variance of depression explained by the variables in the model. Social support had only an indirect effect on depression, and this finding supported the buffer theory of social support. Sense of belonging was a better predictor of depression.\n\n\nCONCLUSIONS\nThe study findings emphasize the importance of relationship-oriented experiences as part of assessment and intervention strategies for individuals with depression."
},
{
"pmid": "3399889",
"title": "Social relationships and health.",
"abstract": "Recent scientific work has established both a theoretical basis and strong empirical evidence for a causal impact of social relationships on health. Prospective studies, which control for baseline health status, consistently show increased risk of death among persons with a low quantity, and sometimes low quality, of social relationships. Experimental and quasi-experimental studies of humans and animals also suggest that social isolation is a major risk factor for mortality from widely varying causes. The mechanisms through which social relationships affect health and the factors that promote or inhibit the development and maintenance of social relationships remain to be explored."
},
{
"pmid": "20668659",
"title": "Social relationships and mortality risk: a meta-analytic review.",
"abstract": "BACKGROUND\nThe quality and quantity of individuals' social relationships has been linked not only to mental health but also to both morbidity and mortality.\n\n\nOBJECTIVES\nThis meta-analytic review was conducted to determine the extent to which social relationships influence risk for mortality, which aspects of social relationships are most highly predictive, and which factors may moderate the risk.\n\n\nDATA EXTRACTION\nData were extracted on several participant characteristics, including cause of mortality, initial health status, and pre-existing health conditions, as well as on study characteristics, including length of follow-up and type of assessment of social relationships.\n\n\nRESULTS\nAcross 148 studies (308,849 participants), the random effects weighted average effect size was OR = 1.50 (95% CI 1.42 to 1.59), indicating a 50% increased likelihood of survival for participants with stronger social relationships. This finding remained consistent across age, sex, initial health status, cause of death, and follow-up period. Significant differences were found across the type of social measurement evaluated (p<0.001); the association was strongest for complex measures of social integration (OR = 1.91; 95% CI 1.63 to 2.23) and lowest for binary indicators of residential status (living alone versus with others) (OR = 1.19; 95% CI 0.99 to 1.44).\n\n\nCONCLUSIONS\nThe influence of social relationships on risk for mortality is comparable with well-established risk factors for mortality. Please see later in the article for the Editors' Summary."
},
{
"pmid": "25910392",
"title": "Loneliness and social isolation as risk factors for mortality: a meta-analytic review.",
"abstract": "Actual and perceived social isolation are both associated with increased risk for early mortality. In this meta-analytic review, our objective is to establish the overall and relative magnitude of social isolation and loneliness and to examine possible moderators. We conducted a literature search of studies (January 1980 to February 2014) using MEDLINE, CINAHL, PsycINFO, Social Work Abstracts, and Google Scholar. The included studies provided quantitative data on mortality as affected by loneliness, social isolation, or living alone. Across studies in which several possible confounds were statistically controlled for, the weighted average effect sizes were as follows: social isolation odds ratio (OR) = 1.29, loneliness OR = 1.26, and living alone OR = 1.32, corresponding to an average of 29%, 26%, and 32% increased likelihood of mortality, respectively. We found no differences between measures of objective and subjective social isolation. Results remain consistent across gender, length of follow-up, and world region, but initial health status has an influence on the findings. Results also differ across participant age, with social deficits being more predictive of death in samples with an average age younger than 65 years. Overall, the influence of both objective and subjective social isolation on risk for mortality is comparable with well-established risk factors for mortality."
},
{
"pmid": "25844912",
"title": "Next-generation psychiatric assessment: Using smartphone sensors to monitor behavior and mental health.",
"abstract": "OBJECTIVE\nOptimal mental health care is dependent upon sensitive and early detection of mental health problems. We have introduced a state-of-the-art method for the current study for remote behavioral monitoring that transports assessment out of the clinic and into the environments in which individuals negotiate their daily lives. The objective of this study was to examine whether the information captured with multimodal smartphone sensors can serve as behavioral markers for one's mental health. We hypothesized that (a) unobtrusively collected smartphone sensor data would be associated with individuals' daily levels of stress, and (b) sensor data would be associated with changes in depression, stress, and subjective loneliness over time.\n\n\nMETHOD\nA total of 47 young adults (age range: 19-30 years) were recruited for the study. Individuals were enrolled as a single cohort and participated in the study over a 10-week period. Participants were provided with smartphones embedded with a range of sensors and software that enabled continuous tracking of their geospatial activity (using the Global Positioning System and wireless fidelity), kinesthetic activity (using multiaxial accelerometers), sleep duration (modeled using device-usage data, accelerometer inferences, ambient sound features, and ambient light levels), and time spent proximal to human speech (i.e., speech duration using microphone and speech detection algorithms). Participants completed daily ratings of stress, as well as pre- and postmeasures of depression (Patient Health Questionnaire-9; Spitzer, Kroenke, & Williams, 1999), stress (Perceived Stress Scale; Cohen et al., 1983), and loneliness (Revised UCLA Loneliness Scale; Russell, Peplau, & Cutrona, 1980).\n\n\nRESULTS\nMixed-effects linear modeling showed that sensor-derived geospatial activity (p < .05), sleep duration (p < .05), and variability in geospatial activity (p < .05), were associated with daily stress levels. Penalized functional regression showed associations between changes in depression and sensor-derived speech duration (p < .05), geospatial activity (p < .05), and sleep duration (p < .05). Changes in loneliness were associated with sensor-derived kinesthetic activity (p < .01).\n\n\nCONCLUSIONS AND IMPLICATIONS FOR PRACTICE\nSmartphones can be harnessed as instruments for unobtrusive monitoring of several behavioral indicators of mental health. Creative leveraging of smartphone sensing could provide novel opportunities for close-to-invisible psychiatric assessment at a scale and efficiency that far exceeds what is currently feasible with existing assessment technologies."
},
{
"pmid": "27478700",
"title": "How smartphone usage correlates with social anxiety and loneliness.",
"abstract": "INTRODUCTION\nEarly detection of social anxiety and loneliness might be useful to prevent substantial impairment in personal relationships. Understanding the way people use smartphones can be beneficial for implementing an early detection of social anxiety and loneliness. This paper examines different types of smartphone usage and their relationships with people with different individual levels of social anxiety or loneliness.\n\n\nMETHODS\nA total of 127 Android smartphone volunteers participated in this study, all of which have agreed to install an application (MobileSens) on their smartphones, which can record user's smartphone usage behaviors and upload the data into the server. They were instructed to complete an online survey, including the Interaction Anxiousness Scale (IAS) and the University of California Los Angeles Loneliness Scale (UCLA-LS). We then separated participants into three groups (high, middle and low) based on their scores of IAS and UCLA-LS, respectively. Finally, we acquired digital records of smartphone usage from MobileSens and examined the differences in 105 types of smartphone usage behaviors between high-score and low-score group of IAS/UCLA-LS.\n\n\nRESULTS\nIndividuals with different scores on social anxiety or loneliness might use smartphones in different ways. For social anxiety, compared with users in low-score group, users in high-score group had less number of phone calls (incoming and outgoing) (Mann-Whitney U = 282.50∼409.00, p < 0.05), sent and received less number of text messages in the afternoon (Mann-Whitney U = 391.50∼411.50, p < 0.05), used health & fitness apps more frequently (Mann-Whitney U = 493.00, p < 0.05) and used camera apps less frequently (Mann-Whitney U = 472.00, p < 0.05). For loneliness, users in low-score group, users in high-score group had less number of phone calls (incoming and outgoing) (Mann-Whitney U = 305.00∼407.50, p < 0.05) and used following apps more frequently: health & fitness (Mann-Whitney U = 510.00, p < 0.05), system (Mann-Whitney U = 314.00, p < 0.01), phone beautify (Mann-Whitney U = 385.00, p < 0.05), web browser (Mann-Whitney U = 416.00, p < 0.05) and social media (RenRen) (Mann-Whitney >U = 388.50, p < 0.01).\n\n\nDISCUSSION\nThe results show that individuals with social anxiety or loneliness receive less incoming calls and use healthy applications more frequently, but they do not show differences in outgoing-call-related features. Individuals with higher levels of social anxiety also receive less SMSs and use camera apps less frequently, while lonely individuals tend to use system, beautify, browser and social media (RenRen) apps more frequently.\n\n\nCONCLUSION\nThis paper finds that there exists certain correlation among smartphone usage and social anxiety and loneliness. The result may be useful to improve social interaction for those who lack social interaction in daily lives and may be insightful for recognizing individual levels of social anxiety and loneliness through smartphone usage behaviors."
},
{
"pmid": "30093371",
"title": "Accuracy of Fitbit Devices: Systematic Review and Narrative Syntheses of Quantitative Data.",
"abstract": "BACKGROUND\nAlthough designed as a consumer product to help motivate individuals to be physically active, Fitbit activity trackers are becoming increasingly popular as measurement tools in physical activity and health promotion research and are also commonly used to inform health care decisions.\n\n\nOBJECTIVE\nThe objective of this review was to systematically evaluate and report measurement accuracy for Fitbit activity trackers in controlled and free-living settings.\n\n\nMETHODS\nWe conducted electronic searches using PubMed, EMBASE, CINAHL, and SPORTDiscus databases with a supplementary Google Scholar search. We considered original research published in English comparing Fitbit versus a reference- or research-standard criterion in healthy adults and those living with any health condition or disability. We assessed risk of bias using a modification of the Consensus-Based Standards for the Selection of Health Status Measurement Instruments. We explored measurement accuracy for steps, energy expenditure, sleep, time in activity, and distance using group percentage differences as the common rubric for error comparisons. We conducted descriptive analyses for frequency of accuracy comparisons within a ±3% error in controlled and ±10% error in free-living settings and assessed for potential bias of over- or underestimation. We secondarily explored how variations in body placement, ambulation speed, or type of activity influenced accuracy.\n\n\nRESULTS\nWe included 67 studies. Consistent evidence indicated that Fitbit devices were likely to meet acceptable accuracy for step count approximately half the time, with a tendency to underestimate steps in controlled testing and overestimate steps in free-living settings. Findings also suggested a greater tendency to provide accurate measures for steps during normal or self-paced walking with torso placement, during jogging with wrist placement, and during slow or very slow walking with ankle placement in adults with no mobility limitations. Consistent evidence indicated that Fitbit devices were unlikely to provide accurate measures for energy expenditure in any testing condition. Evidence from a few studies also suggested that, compared with research-grade accelerometers, Fitbit devices may provide similar measures for time in bed and time sleeping, while likely markedly overestimating time spent in higher-intensity activities and underestimating distance during faster-paced ambulation. However, further accuracy studies are warranted. Our point estimations for mean or median percentage error gave equal weighting to all accuracy comparisons, possibly misrepresenting the true point estimate for measurement bias for some of the testing conditions we examined.\n\n\nCONCLUSIONS\nOther than for measures of steps in adults with no limitations in mobility, discretion should be used when considering the use of Fitbit devices as an outcome measurement tool in research or to inform health care decisions, as there are seemingly a limited number of situations where the device is likely to provide accurate measurement."
},
{
"pmid": "8576833",
"title": "UCLA Loneliness Scale (Version 3): reliability, validity, and factor structure.",
"abstract": "In this article I evaluated the psychometric properties of the UCLA Loneliness Scale (Version 3). Using data from prior studies of college students, nurses, teachers, and the elderly, analyses of the reliability, validity, and factor structure of this new version of the UCLA Loneliness Scale were conducted. Results indicated that the measure was highly reliable, both in terms of internal consistency (coefficient alpha ranging from .89 to .94) and test-retest reliability over a 1-year period (r = .73). Convergent validity for the scale was indicated by significant correlations with other measures of loneliness. Construct validity was supported by significant relations with measures of the adequacy of the individual's interpersonal relationships, and by correlations between loneliness and measures of health and well-being. Confirmatory factor analyses indicated that a model incorporating a global bipolar loneliness factor along with two method factor reflecting direction of item wording provided a very good fit to the data across samples. Implications of these results for future measurement research on loneliness are discussed."
},
{
"pmid": "26180009",
"title": "Mobile Phone Sensor Correlates of Depressive Symptom Severity in Daily-Life Behavior: An Exploratory Study.",
"abstract": "BACKGROUND\nDepression is a common, burdensome, often recurring mental health disorder that frequently goes undetected and untreated. Mobile phones are ubiquitous and have an increasingly large complement of sensors that can potentially be useful in monitoring behavioral patterns that might be indicative of depressive symptoms.\n\n\nOBJECTIVE\nThe objective of this study was to explore the detection of daily-life behavioral markers using mobile phone global positioning systems (GPS) and usage sensors, and their use in identifying depressive symptom severity.\n\n\nMETHODS\nA total of 40 adult participants were recruited from the general community to carry a mobile phone with a sensor data acquisition app (Purple Robot) for 2 weeks. Of these participants, 28 had sufficient sensor data received to conduct analysis. At the beginning of the 2-week period, participants completed a self-reported depression survey (PHQ-9). Behavioral features were developed and extracted from GPS location and phone usage data.\n\n\nRESULTS\nA number of features from GPS data were related to depressive symptom severity, including circadian movement (regularity in 24-hour rhythm; r=-.63, P=.005), normalized entropy (mobility between favorite locations; r=-.58, P=.012), and location variance (GPS mobility independent of location; r=-.58, P=.012). Phone usage features, usage duration, and usage frequency were also correlated (r=.54, P=.011, and r=.52, P=.015, respectively). Using the normalized entropy feature and a classifier that distinguished participants with depressive symptoms (PHQ-9 score ≥5) from those without (PHQ-9 score <5), we achieved an accuracy of 86.5%. Furthermore, a regression model that used the same feature to estimate the participants' PHQ-9 scores obtained an average error of 23.5%.\n\n\nCONCLUSIONS\nFeatures extracted from mobile phone sensor data, including GPS and phone usage, provided behavioral markers that were strongly related to depressive symptom severity. While these findings must be replicated in a larger study among participants with confirmed clinical symptoms, they suggest that phone sensors offer numerous clinical opportunities, including continuous monitoring of at-risk populations with little patient burden and interventions that can provide just-in-time outreach."
}
] |
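The row above reports an ensemble of gradient boosting and logistic regression evaluated with leave-one-student-out cross-validation. As a hedged sketch of that evaluation scheme only, not the authors' actual pipeline (which also included feature selection), the snippet below shows one way to set it up with scikit-learn; the feature count, the soft-voting combination, and all data are assumptions.

```python
# Hedged sketch only: leave-one-student-out evaluation of a gradient boosting +
# logistic regression ensemble, loosely mirroring the pipeline summarized in
# the row above. Feature selection is omitted and all data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_students = 160
X = rng.normal(size=(n_students, 12))     # semester-averaged behavioral features
y = rng.integers(0, 2, size=n_students)   # 1 = high loneliness (UCLA score > 40)
groups = np.arange(n_students)            # one group per student -> LOSO folds

ensemble = VotingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ],
    voting="soft",  # average the two models' predicted probabilities
)

scores = cross_val_score(ensemble, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Leave-one-student-out accuracy: {scores.mean():.3f}")
```

Soft voting is only one plausible way to combine the two models; that ensembling detail, like the rest of this snippet, is an assumption rather than a description of the published pipeline.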
PLoS Neglected Tropical Diseases | 31356617 | PMC6687207 | 10.1371/journal.pntd.0007555 | Large scale detailed mapping of dengue vector breeding sites using street view images | Targeted environmental and ecosystem management remain crucial in control of dengue. However, providing detailed environmental information on a large scale to effectively target dengue control efforts remains a challenge. An important piece of such information is the extent of the presence of potential dengue vector breeding sites, which consist primarily of open containers such as ceramic jars, buckets, old tires, and flowerpots. In this paper we present the design and implementation of a pipeline to detect outdoor open containers which constitute potential dengue vector breeding sites from geotagged images and to create highly detailed container density maps at unprecedented scale. We implement the approach using Google Street View images which have the advantage of broad coverage and of often being two to three years old which allows correlation analyses of container counts against historical data from manual surveys. Containers comprising eight of the most common breeding sites are detected in the images using convolutional neural network transfer learning. Over a test set of images the object recognition algorithm has an accuracy of 0.91 in terms of F-score. Container density counts are generated and displayed on a decision support dashboard. Analyses of the approach are carried out over three provinces in Thailand. The container counts obtained agree well with container counts from available manual surveys. Multi-variate linear regression relating densities of the eight container types to larval survey data shows good prediction of larval index values with an R-squared of 0.674. To delineate conditions under which the container density counts are indicative of larval counts, a number of factors affecting correlation with larval survey data are analyzed. We conclude that creation of container density maps from geotagged images is a promising approach to providing detailed risk maps at large scale. | Related workIn their review of dengue risk mapping modeling tools, Louis et al. [25] showed that social predictors such as education level, occupational status, and income are often used as proxies to assess local environmental conditions and hygiene, which are normally difficult to assess directly. Housing conditions are often used as a proxy to assess type and number of mosquito breeding sites. Lack of access to running water has also been found to be a risk factor for dengue since residents in such areas tend to store water in ground-level containers [26–27]. Chang et al. [28] used satellite imagery from Google Earth to create a base map to which they added information about larval infestation, locations of tire dumps, cemeteries, large areas of standing water, and locations of homes of dengue cases, all of which were collected manually. They found the resulting system allowed public health workers to prioritize control strategies and target interventions to highest risk areas.A number of researchers have developed applications for reporting or detecting mosquito breeding sites, as well as other information related to dengue outbreaks. Agrawal et al. [29] use a support vector machine and scale-invariant feature transform (SIFT) generated features to classify individual images as being breeding sites or not. Their approach relies on users to take photos of individual sites. 
On a test set of 78 images they achieved a binary classification accuracy of 82%. Mehra et al. [30] present a technique for classifying images into those containing puddles or not. They evaluate their technique on images taken with mobile phones, a hand-held thermal imaging camera, and retrieved using Google image search. Using an ensemble of naive Bayes classifiers and boosting they achieve a binary classification accuracy of 90% on images that have both RGB and thermal information. Quadri et al. [31] present TargetZika, a smartphone application for citizens to report breeding sites using photos and descriptions. They provide no automated classification of the photos but rather rely on users to label them from a menu. They use the data to produce risk maps but do not validate them. Mosquito Alert [32] is a similar smartphone application that allows users to report breeding sites and mosquitos with photos and descriptions. It uses crowdsourcing to identify photos. Reports are displayed on a map on the Mosquito Alert website. All of these previous approaches either require manual effort to first locate possible breeding sites in images or require users or the crowd to manually identify them. In contrast, the approach presented in this paper performs both object localization and classification and can be used on a wide variety of geotagged images taken from a horizontal perspective.Some researchers have manually extracted features from GSV data for environmental monitoring purposes. Rundle et al. [33] manually extracted features from street view data to audit neighborhood environments and compared the results to field audits. They found a high level of concordance for features that are not temporally variable. Rousselet et al. [34] manually extracted species occurrence data for the pine processionary moth from GSV images and compared the results to field data. The two were found to be highly similar.Runge et al. [35] made use of the scene recognition convolutional neural net of Zhou et al. [36] to label GSV images and assembled them into maps to find scenic routes for autonomous vehicle navigation. Although their application differs from ours, their pipeline and the structure of their feature maps are similar to those in this study. Since we are interested in obtaining counts of numbers of breeding sites in a given region, in this study we make use of object detection networks. Recently, region proposal methods have yielded the highest performance in object detection [37]. Region proposal methods employ a mechanism that first iteratively segments the image and groups the adjacent segments based on similarity to hypothesize regions that may contain objects of interest and then use CNNs to identify objects in those regions. Girshick [38] introduced Fast Region-based Convolutional Neural Networks (Fast R-CNN) which reduced the running time of the detection network, making the region proposal computation the bottleneck. Recently, Ren et al. [39] introduced Faster R-CNN, which greatly improves the computational efficiency. By sharing convolutional features between the region proposal and detection networks, they reduce the computational cost of region proposal to near zero and achieve a frame rate of 5 frames per second on a GPU. Because of its accuracy and computational efficiency, Faster R-CNN is the technique used in the current study. | [
"23563266",
"15741559",
"3068349",
"25375766",
"16704841",
"5316746",
"14695086",
"15218911",
"9546405",
"22816001",
"27223693",
"20817262",
"28968420",
"28369149",
"25487167",
"9715943",
"28199323",
"24810901",
"25487167",
"21906782",
"21918642",
"19627614",
"21146773",
"24130675",
"26959679",
"27295650",
"19624476",
"18047191",
"16045462",
"20636303",
"15218910",
"11580037",
"20096802",
"11250812",
"8146129",
"20428384",
"24522133",
"16739405",
"15825756",
"17019767"
] | [
{
"pmid": "23563266",
"title": "The global distribution and burden of dengue.",
"abstract": "Dengue is a systemic viral infection transmitted between humans by Aedes mosquitoes. For some patients, dengue is a life-threatening illness. There are currently no licensed vaccines or specific therapeutics, and substantial vector control efforts have not stopped its rapid emergence and global spread. The contemporary worldwide distribution of the risk of dengue virus infection and its public health burden are poorly known. Here we undertake an exhaustive assembly of known records of dengue occurrence worldwide, and use a formal modelling framework to map the global distribution of dengue risk. We then pair the resulting risk map with detailed longitudinal information from dengue cohort studies and population surfaces to infer the public health burden of dengue in 2010. We predict dengue to be ubiquitous throughout the tropics, with local spatial variations in risk influenced strongly by rainfall, temperature and the degree of urbanization. Using cartographic approaches, we estimate there to be 390 million (95% credible interval 284-528) dengue infections per year, of which 96 million (67-136) manifest apparently (any level of disease severity). This infection total is more than three times the dengue burden estimate of the World Health Organization. Stratification of our estimates by country allows comparison with national dengue reporting, after taking into account the probability of an apparent infection being formally reported. The most notable differences are discussed. These new risk maps and infection estimates provide novel insights into the global, regional and national public health burden imposed by dengue. We anticipate that they will provide a starting point for a wider discussion about the global impact of this disease and will help to guide improvements in disease control strategies using vaccine, drug and vector control methods, and in their economic evaluation."
},
{
"pmid": "15741559",
"title": "Dispersal of the dengue vector Aedes aegypti within and between rural communities.",
"abstract": "Knowledge of mosquito dispersal is critical for vector-borne disease control and prevention strategies and for understanding population structure and pathogen dissemination. We determined Aedes aegypti flight range and dispersal patterns from 21 mark-release-recapture experiments conducted over 11 years (1991-2002) in Puerto Rico and Thailand. Dispersal was compared by release location, sex, age, season, and village. For all experiments, the majority of mosquitoes were collected from their release house or adjacent house. Inter-village movement was detected rarely, with a few mosquitoes moving a maximum of 512 meters from one Thai village to the next. Average dispersal distances were similar for males and females and females released indoors versus outdoors. The movement of Ae. aegypti was not influenced by season or age, but differed by village. Results demonstrate that adult Ae. aegypti disperse relatively short distances, suggesting that people rather than mosquitoes are the primary mode of dengue virus dissemination within and among communities."
},
{
"pmid": "3068349",
"title": "The biology of Aedes albopictus.",
"abstract": "The biology of Aedes albopictus is reviewed, with emphasis on studies of ecology and behavior. The following topics are discussed: distribution and taxonomy, genetics, medical importance, habitat, egg biology, larval biology, adult biology, competitive interactions, comparative studies with Aedes aegypti, population dynamics, photoperiodism, and surveillance and control."
},
{
"pmid": "25375766",
"title": "Epidemiological trends of dengue disease in Thailand (2000-2011): a systematic literature review.",
"abstract": "UNLABELLED\nA literature survey and analysis was conducted to describe the epidemiology of dengue disease in Thailand reported between 2000 and 2011. The literature search identified 610 relevant sources, 40 of which fulfilled the inclusion criteria defined in the review protocol. Peaks in the number of cases occurred during the review period in 2001, 2002, 2008 and 2010. A shift in age group predominance towards older ages continued through the review period. Disease incidence and deaths remained highest in children aged ≤ 15 years and case fatality rates were highest in young children. Heterogeneous geographical patterns were observed with higher incidence rates reported in the Southern region and serotype distribution varied in time and place. Gaps identified in epidemiological knowledge regarding dengue disease in Thailand provide several avenues for future research, in particular studies of seroprevalence.\n\n\nPROTOCOL REGISTRATION\nPROSPERO CRD42012002170."
},
{
"pmid": "16704841",
"title": "Aedes aegypti larval indices and risk for dengue epidemics.",
"abstract": "We assessed in a case-control study the test-validity of Aedes larval indices for the 2000 Havana outbreak. \"Cases\" were blocks where a dengue fever patient lived during the outbreak. \"Controls\" were randomly sampled blocks. Before, during, and after the epidemic, we calculated Breteau index (BI) and house index at the area, neighborhood, and block level. We constructed receiver operating characteristic (ROC) curves to determine their performance as predictors of dengue transmission. We observed a pronounced effect of the level of measurement. The BI(max) (maximum block BI in a radius of 100 m) at 2-month intervals had an area under the ROC curve of 71%. At a cutoff of 4.0, it significantly (odds ratio 6.00, p<0.05) predicted transmission with 78% sensitivity and 63% specificity. Analysis of BI at the local level, with human-defined boundaries, could be introduced in control programs to identify neighborhoods at high risk for dengue transmission."
},
{
"pmid": "5316746",
"title": "Aedes aegypti (L.) and Aedes albopictus (Skuse) in Singapore City. 2. Larval habitats.",
"abstract": "Detailed information on the breeding habitats of Ae. aegypti and Ae. albopictus is necessary when planning programmes for their control. The larval habitats of the two species in 10 city areas were counted and classified according to type, frequency of occurrence, location, and function. Of all the breeding habitats recorded 95% were domestic containers. The most common Ae. aegypti breeding habitats were ant traps, earthenware jars, bowls, tanks, tin cans, and drums, ant traps being the most common indoors and earthenware jars the most common out doors. Breeding habitats for Ae. albopictus were commonly found in earthen ware jars, tin cans, ant traps, rubber tires, bowls, and drums; ant traps were the most common indoor habitat and tin cans were most common outdoors.The majority of Ae. aegypti breeding habitats were found indoors, while only half of all the Ae. albopictus breeding habitats were indoors. The indoor and outdoor distribution of breeding habitats of both species was not related to the type of housing in the area.The distribution of the type of breeding habitats, however, was related to the type of housing in the area. Ant traps were common to all areas, but water-storage containers and unused containers were common in slum-house and shop-house areas. Flats, however, had more containers used for keeping plants and flowers.The most common breeding habitats of Ae. aegypti and Ae. albopictus are discussed in relation to the habits of the people. It is concluded that control of the two species will depend largely on a change in such habits, either through public health education or by some form of law enforcement."
},
{
"pmid": "14695086",
"title": "Characteristics of the spatial pattern of the dengue vector, Aedes aegypti, in Iquitos, Peru.",
"abstract": "We determine the spatial pattern of Aedes aegypti and the containers in which they develop in two neighborhoods of the Amazonian city of Iquitos, Peru. Four variables were examined: adult Ae. aegypti, pupae, containers positive for larvae or pupae, and all water-holding containers. Adults clustered strongly within houses and weakly to a distance of 30 meters beyond the household; clustering was not detected beyond 10 meters for positive containers or pupae. Over short periods of time restricted flight range and frequent blood-feeding behavior of Ae. aegypti appear to be underlying factors in the clustering patterns of human dengue infections. Permanent, consistently infested containers (key premises) were not major producers of Ae. aegypti, indicating that larvaciding strategies by themselves may be less effective than reduction of mosquito development sites by source reduction and education campaigns. We conclude that entomologic risk of human dengue infection should be assessed at the household level at frequent time intervals."
},
{
"pmid": "15218911",
"title": "Longitudinal studies of Aedes aegypti (Diptera: Culicidae) in Thailand and Puerto Rico: blood feeding frequency.",
"abstract": "We used a histologic technique to study multiple blood feeding in a single gonotrophic cycle by engorged Aedes aegypti (L.) that were collected weekly for 2 yr from houses in a rural village in Thailand (n = 1,891) and a residential section of San Juan, Puerto Rico (n = 1,675). Overall, mosquitoes from Thailand contained significantly more multiple meals (n = 1,300, 42% double meals, 5% triple meals) than mosquitoes collected in Puerto Rico (n = 1,156, 32% double meals, 2% triple meals). The portion of specimens for which frequency of feeding could not be determined was 31% at both sites. We estimated that on average Ae. aegypti take 0.76 and 0.63 human blood meals per day in Thailand and Puerto Rico, respectively. However, frequency of multiple feeding varied among houses and, in Puerto Rico, the neighborhoods from which mosquitoes were collected. In Thailand 65% of the mosquitoes fed twice on the same day, whereas in Puerto Rico 57% took multiple meals separated by > or = 1 d. At both sites, the majority of engorged specimens were collected inside houses (Thailand 86%, Puerto Rico 95%). The number of blood meals detected was independent of where mosquitoes were collected (inside versus outside of the house) at both sites and the time of day collections were made in Puerto Rico. Feeding rates were slightly higher for mosquitoes collected in the afternoon in Thailand. Temperatures were significantly higher and mosquitoes significantly smaller in Thailand than in Puerto Rico. At both sites female size was negatively associated with temperature. Rates of multiple feeding were associated positively with temperature and negatively with mosquito size in Thailand, but not in Puerto Rico. Multiple feeding during a single gonotrophic cycle is a regular part of Ae. aegypti biology, can vary geographically and under different climate conditions, and may be associated with variation in patterns of dengue virus transmission."
},
{
"pmid": "9546405",
"title": "Exploratory space-time analysis of reported dengue cases during an outbreak in Florida, Puerto Rico, 1991-1992.",
"abstract": "The spatial and temporal distributions of dengue cases reported during a 1991-1992 outbreak in Florida, Puerto Rico (population = 8,689), were studied by using a Geographic Information System. A total of 377 dengue cases were identified from a laboratory-based dengue surveillance system and georeferenced by their residential addresses on digital zoning and U.S. Geological Survey topographic maps. Weekly case maps were generated for the period between June and December 1991, when 94.2% of the dengue cases were reported. The temporal evolution of the epidemic was rapid, affecting a wide geographic area within seven weeks of the first reported cases of the season. Dengue cases were reported in 217 houses; of these 56 (25.8%) had between two and six reported cases. K-function analysis was used to characterize the spatial clustering patterns for all reported dengue cases (laboratory-positive and indeterminate) and laboratory-positive cases alone, while the Barton and David and Knox tests were used to characterize spatio-temporal attributes of dengue cases reported during the 1991-1992 outbreak. For both sets of data significant case clustering was identified within individual households over short periods of time (three days or less), but in general, the cases had spatial pattern characteristics much like the population pattern as a whole. The rapid temporal and spatial progress of the disease within the community suggests that control measures should be applied to the entire municipality, rather than to the areas immediately surrounding houses of reported cases. The potential for incorporating Geographic Information System technologies into a dengue surveillance system and the limitations of using surveillance data for spatial studies are discussed."
},
{
"pmid": "22816001",
"title": "Fine scale spatiotemporal clustering of dengue virus transmission in children and Aedes aegypti in rural Thai villages.",
"abstract": "BACKGROUND\nBased on spatiotemporal clustering of human dengue virus (DENV) infections, transmission is thought to occur at fine spatiotemporal scales by horizontal transfer of virus between humans and mosquito vectors. To define the dimensions of local transmission and quantify the factors that support it, we examined relationships between infected humans and Aedes aegypti in Thai villages.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nGeographic cluster investigations of 100-meter radius were conducted around DENV-positive and DENV-negative febrile \"index\" cases (positive and negative clusters, respectively) from a longitudinal cohort study in rural Thailand. Child contacts and Ae. aegypti from cluster houses were assessed for DENV infection. Spatiotemporal, demographic, and entomological parameters were evaluated. In positive clusters, the DENV infection rate among child contacts was 35.3% in index houses, 29.9% in houses within 20 meters, and decreased with distance from the index house to 6.2% in houses 80-100 meters away (p<0.001). Significantly more Ae. aegypti were DENV-infectious (i.e., DENV-positive in head/thorax) in positive clusters (23/1755; 1.3%) than negative clusters (1/1548; 0.1%). In positive clusters, 8.2% of mosquitoes were DENV-infectious in index houses, 4.2% in other houses with DENV-infected children, and 0.4% in houses without infected children (p<0.001). The DENV infection rate in contacts was 47.4% in houses with infectious mosquitoes, 28.7% in other houses in the same cluster, and 10.8% in positive clusters without infectious mosquitoes (p<0.001). Ae. aegypti pupae and adult females were more numerous only in houses containing infectious mosquitoes.\n\n\nCONCLUSIONS/SIGNIFICANCE\nHuman and mosquito infections are positively associated at the level of individual houses and neighboring residences. Certain houses with high transmission risk contribute disproportionately to DENV spread to neighboring houses. Small groups of houses with elevated transmission risk are consistent with over-dispersion of transmission (i.e., at a given point in time, people/mosquitoes from a small portion of houses are responsible for the majority of transmission)."
},
{
"pmid": "27223693",
"title": "Temporal Dynamics and Spatial Patterns of Aedes aegypti Breeding Sites, in the Context of a Dengue Control Program in Tartagal (Salta Province, Argentina).",
"abstract": "BACKGROUND\nSince 2009, Fundación Mundo Sano has implemented an Aedes aegypti Surveillance and Control Program in Tartagal city (Salta Province, Argentina). The purpose of this study was to analyze temporal dynamics of Ae. aegypti breeding sites spatial distribution, during five years of samplings, and the effect of control actions over vector population dynamics.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nSeasonal entomological (larval) samplings were conducted in 17,815 fixed sites in Tartagal urban area between 2009 and 2014. Based on information of breeding sites abundance, from satellite remote sensing data (RS), and by the use of Geographic Information Systems (GIS), spatial analysis (hotspots and cluster analysis) and predictive model (MaxEnt) were performed. Spatial analysis showed a distribution pattern with the highest breeding densities registered in city outskirts. The model indicated that 75% of Ae. aegypti distribution is explained by 3 variables: bare soil coverage percentage (44.9%), urbanization coverage percentage(13.5%) and water distribution (11.6%).\n\n\nCONCLUSIONS/SIGNIFICANCE\nThis results have called attention to the way entomological field data and information from geospatial origin (RS/GIS) are used to infer scenarios which could then be applied in epidemiological surveillance programs and in the determination of dengue control strategies. Predictive maps development constructed with Ae. aegypti systematic spatiotemporal data, in Tartagal city, would allow public health workers to identify and target high-risk areas with appropriate and timely control measures. These tools could help decision-makers to improve health system responses and preventive measures related to vector control."
},
{
"pmid": "20817262",
"title": "Monthly district level risk of dengue occurrences in Sakon Nakhon Province, Thailand.",
"abstract": "The paper deals with the incidence of the Dengue Virus Infection (DVI) in the 18 districts of Sakon Nakhon Province, Thailand, from January 2005 to December 2007. Using a statistical and autoregressive analysis to smooth incidence data, we have constructed yearly and monthly district level maps of the DVI distribution. It is found that the DVI incidence is very correlated with weather conditions and higher occurrences are observed in the three most populated districts Wanon Niwat, Sawang Daen Din and Mueang Sakon Nakhon, and the virus transmission period spans from mid-summer to mid-rainy seasons (from April to August). Employing a Generalized Linear Model (GLM), we found that the DVI incidences were related with current meteorological (monthly minimum temperature, past 2-month cumulated rainfall) and socio-economical (population of 0-4years old, per capita number of public small water wells, and proportion of villages with primary schools) covariates. And using the GLM under the climate change conditions (A1B scenario of IPCC), we found that the higher risk of DVI spreads from the three most populated districts to less populated ones, and the period of virus transmission increases from 5 to 9months to include part of winter, summer and rainy seasons (from March to November) during which 6%, 61% and 33% of districts will be at low, medium and high risk of DVI occurrences, respectively."
},
{
"pmid": "28968420",
"title": "Socio-demographic, ecological factors and dengue infection trends in Australia.",
"abstract": "Dengue has been a major public health concern in Australia. This study has explored the spatio-temporal trends of dengue and potential socio- demographic and ecological determinants in Australia. Data on dengue cases, socio-demographic, climatic and land use types for the period January 1999 to December 2010 were collected from Australian National Notifiable Diseases Surveillance System, Australian Bureau of Statistics, Australian Bureau of Meteorology, and Australian Bureau of Agricultural and Resource Economics and Sciences, respectively. Descriptive and linear regression analyses were performed to observe the spatio-temporal trends of dengue, socio-demographic and ecological factors in Australia. A total of 5,853 dengue cases (both local and overseas acquired) were recorded across Australia between January 1999 and December 2010. Most the cases (53.0%) were reported from Queensland, followed by New South Wales (16.5%). Dengue outbreak was highest (54.2%) during 2008-2010. A highest percentage of overseas arrivals (29.9%), households having rainwater tanks (33.9%), Indigenous population (27.2%), separate houses (26.5%), terrace house types (26.9%) and economically advantage people (42.8%) were also observed during 2008-2010. Regression analyses demonstrate that there was an increasing trend of dengue incidence, potential socio-ecological factors such as overseas arrivals, number of households having rainwater tanks, housing types and land use types (e.g. intensive uses and production from dryland agriculture). Spatial variation of socio-demographic factors was also observed in this study. In near future, significant increase of temperature was also projected across Australia. The projected increased temperature as well as increased socio-ecological trend may pose a future threat to the local transmission of dengue in other parts of Australia if Aedes mosquitoes are being established. Therefore, upgraded mosquito and disease surveillance at different ports should be in place to reduce the chance of mosquitoes and dengue cases being imported into all over Australia."
},
{
"pmid": "28369149",
"title": "Socioeconomic and environmental determinants of dengue transmission in an urban setting: An ecological study in Nouméa, New Caledonia.",
"abstract": "BACKGROUND\nDengue is a mosquito-borne virus that causes extensive morbidity and economic loss in many tropical and subtropical regions of the world. Often present in cities, dengue virus is rapidly spreading due to urbanization, climate change and increased human movements. Dengue cases are often heterogeneously distributed throughout cities, suggesting that small-scale determinants influence dengue urban transmission. A better understanding of these determinants is crucial to efficiently target prevention measures such as vector control and education. The aim of this study was to determine which socioeconomic and environmental determinants were associated with dengue incidence in an urban setting in the Pacific.\n\n\nMETHODOLOGY\nAn ecological study was performed using data summarized by neighborhood (i.e. the neighborhood is the unit of analysis) from two dengue epidemics (2008-2009 and 2012-2013) in the city of Nouméa, the capital of New Caledonia. Spatial patterns and hotspots of dengue transmission were assessed using global and local Moran's I statistics. Multivariable negative binomial regression models were used to investigate the association between dengue incidence and various socioeconomic and environmental factors throughout the city.\n\n\nPRINCIPAL FINDINGS\nThe 2008-2009 epidemic was spatially structured, with clusters of high and low incidence neighborhoods. In 2012-2013, dengue incidence rates were more homogeneous throughout the city. In all models tested, higher dengue incidence rates were consistently associated with lower socioeconomic status (higher unemployment, lower revenue or higher percentage of population born in the Pacific, which are interrelated). A higher percentage of apartments was associated with lower dengue incidence rates during both epidemics in all models but one. A link between vegetation coverage and dengue incidence rates was also detected, but the link varied depending on the model used.\n\n\nCONCLUSIONS\nThis study demonstrates a robust spatial association between dengue incidence rates and socioeconomic status across the different neighborhoods of the city of Nouméa. Our findings provide useful information to guide policy and help target dengue prevention efforts where they are needed most."
},
{
"pmid": "25487167",
"title": "Modeling tools for dengue risk mapping - a systematic review.",
"abstract": "INTRODUCTION\nThe global spread and the increased frequency and magnitude of epidemic dengue in the last 50 years underscore the urgent need for effective tools for surveillance, prevention, and control. This review aims at providing a systematic overview of what predictors are critical and which spatial and spatio-temporal modeling approaches are useful in generating risk maps for dengue.\n\n\nMETHODS\nA systematic search was undertaken, using the PubMed, Web of Science, WHOLIS, Centers for Disease Control and Prevention (CDC) and OvidSP databases for published citations, without language or time restrictions. A manual search of the titles and abstracts was carried out using predefined criteria, notably the inclusion of dengue cases. Data were extracted for pre-identified variables, including the type of predictors and the type of modeling approach used for risk mapping.\n\n\nRESULTS\nA wide variety of both predictors and modeling approaches was used to create dengue risk maps. No specific patterns could be identified in the combination of predictors or models across studies. The most important and commonly used predictors for the category of demographic and socio-economic variables were age, gender, education, housing conditions and level of income. Among environmental variables, precipitation and air temperature were often significant predictors. Remote sensing provided a source of varied land cover data that could act as a proxy for other predictor categories. Descriptive maps showing dengue case hotspots were useful for identifying high-risk areas. Predictive maps based on more complex methodology facilitated advanced data analysis and visualization, but their applicability in public health contexts remains to be established.\n\n\nCONCLUSIONS\nThe majority of available dengue risk maps was descriptive and based on retrospective data. Availability of resources, feasibility of acquisition, quality of data, alongside available technical expertise, determines the accuracy of dengue risk maps and their applicability to the field of public health. A large number of unknowns, including effective entomological predictors, genetic diversity of circulating viruses, population serological profile, and human mobility, continue to pose challenges and to limit the ability to produce accurate and effective risk maps, and fail to support the development of early warning systems."
},
{
"pmid": "9715943",
"title": "Domestic Aedes aegypti breeding site surveillance: limitations of remote sensing as a predictive surveillance tool.",
"abstract": "This project tested aerial photography as a surveillance tool in identifying residential premises at high risk of Aedes aegypti breeding by extending the use of a recently developed, ground-based, rapid assessment technique, the modified Premise Condition Index (PCI2). During 1995, we inspected 360 premises in Townsville, Australia for Ae. aegypti breeding, and PCI2 scores were recorded. The PCI2 values were also estimated from 1:3,000 color and infrared aerial photograph interpretation for the same premises. We found that shade levels can be accurately identified from both color and infrared images, and the PCI2 can be accurately identified from infrared photographs. Yard conditions, however, cannot be accurately identified from either aerial photograph type. The airborne PCI2 did not significantly correlate with breeding measures, and logistic regression further demonstrated that neither aerial photograph type allows the accurate prediction of Ae. aegypti breeding risk. Therefore, the ability of low-level aerial photography to enhance Ae. aegypti breeding site surveillance is at present limited, with ground surveillance remaining our most reliable tool for identifying the probability of Ae. aegypti breeding in the residential environment."
},
{
"pmid": "28199323",
"title": "A Large Scale Biorational Approach Using Bacillus thuringiensis israeliensis (Strain AM65-52) for Managing Aedes aegypti Populations to Prevent Dengue, Chikungunya and Zika Transmission.",
"abstract": "BACKGROUND\nAedes aegypti is a container-inhabiting mosquito and a vector of dengue, chikungunya, and Zika viruses. In 2009 several cases of autochthonous dengue transmission were reported in Key West, Florida, USA prompting a comprehensive response to control A. aegypti. In Key West, larvae of this mosquito develop in containers around human habitations which can be numerous and labor intensive to find and treat. Aerial applications of larvicide covering large areas in a short time can be an efficient and economical method to control A. aegypti. Bacillus thuringiensis israelensis (Bti) is a bacterial larvicide which is highly target specific and appropriate for wide area spraying over urban areas, but to date, there are no studies that evaluate aerial spraying of Bti to control container mosquitoes like A. aegypti.\n\n\nMETHODOLOGY\nThis paper examines the effectiveness of aerial larvicide applications using VectoBac® WG, a commercially available Bti formulation, for A. aegypti control in an urban setting in the USA. Droplet characteristics and spray drop deposition were evaluated in Key West, Florida, USA. The mortality of A. aegypti in containers placed under canopy in an urban environment was also evaluated. Efficacy of multiple larvicide applications on adult female A. aegypti population reduction was compared between an untreated control and treatment site.\n\n\nCONCLUSIONS\nDroplet characteristics showed that small droplets can penetrate through dense canopy to reach small containers. VectoBac WG droplets reached small containers under heavy canopy in sufficient amounts to cause > 55% mortality on all application days and >90% mortality on 3 of 5 application days while controls had <5% mortality. Aerial applications of VectoBac WG caused significant decrease in adult female populations throughout the summer and during the 38th week (last application) the difference in adult female numbers between untreated and treated sites was >50%. Aerial larvicide applications using VectoBac WG can cover wide areas in a short period of time and can be effective in controlling A. aegypti and reducing A. aegypti-borne transmission in urban areas similar to Key West, Florida, USA."
},
{
"pmid": "24810901",
"title": "Assessing the relationship between vector indices and dengue transmission: a systematic review of the evidence.",
"abstract": "BACKGROUND\nDespite doubts about methods used and the association between vector density and dengue transmission, routine sampling of mosquito vector populations is common in dengue-endemic countries worldwide. This study examined the evidence from published studies for the existence of any quantitative relationship between vector indices and dengue cases.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nFrom a total of 1205 papers identified in database searches following Cochrane and PRISMA Group guidelines, 18 were included for review. Eligibility criteria included 3-month study duration and dengue case confirmation by WHO case definition and/or serology. A range of designs were seen, particularly in spatial sampling and analyses, and all but 3 were classed as weak study designs. Eleven of eighteen studies generated Stegomyia indices from combined larval and pupal data. Adult vector data were reported in only three studies. Of thirteen studies that investigated associations between vector indices and dengue cases, 4 reported positive correlations, 4 found no correlation and 5 reported ambiguous or inconclusive associations. Six out of 7 studies that measured Breteau Indices reported dengue transmission at levels below the currently accepted threshold of 5.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThere was little evidence of quantifiable associations between vector indices and dengue transmission that could reliably be used for outbreak prediction. This review highlighted the need for standardized sampling protocols that adequately consider dengue spatial heterogeneity. Recommendations for more appropriately designed studies include: standardized study design to elucidate the relationship between vector abundance and dengue transmission; adult mosquito sampling should be routine; single values of Breteau or other indices are not reliable universal dengue transmission thresholds; better knowledge of vector ecology is required."
},
{
"pmid": "21906782",
"title": "Modeling dengue fever risk based on socioeconomic parameters, nationality and age groups: GIS and remote sensing based case study.",
"abstract": "Dengue fever (DF) and its impacts are growing environmental, economic, and health concerns in Saudi Arabia. In this study, we have attempted to model areas with humans at risk of dengue fever prevalence, depending on the spatial relationship between dengue fever cases and different socioeconomic parameters. We have developed new methods to verify the quality of neighborhoods from high resolution satellite images based on several factors such as density of houses in each neighborhood in each district, width of streets, and roof area of houses. In the absence of detailed neighborhood quality information being available for each district, we felt this factor would best approximate the reality on the ground at local scales. Socioeconomic parameters, such as population numbers, population density, and neighborhood quality were analyzed using Geographically Weighted Regression (GWR) to create a prediction model identifying levels of risk of dengue and to describe the association between DF cases and the related socio-economic factors. Descriptive analysis was used to characterize dengue fever victims among Saudis and non-Saudis in various age groups. The results show that there was a strong positive association between dengue fever cases and socioeconomic factors (R²=0.80). The prevalence among Saudis was higher compared to non-Saudis in 2006 and 2007, while the prevalence among non-Saudis was higher in 2008, 2009 and 2010. For age groups, DF was more prevalent in adults between the ages of 16 and 60, accounting for approximately 74% of all reported cases in 2006, 67% in 2007, 81% in 2008, 87% in 2009, and 81% in 2010."
},
{
"pmid": "21918642",
"title": "Population density, water supply, and the risk of dengue fever in Vietnam: cohort study and spatial analysis.",
"abstract": "BACKGROUND\nAedes aegypti, the major vector of dengue viruses, often breeds in water storage containers used by households without tap water supply, and occurs in high numbers even in dense urban areas. We analysed the interaction between human population density and lack of tap water as a cause of dengue fever outbreaks with the aim of identifying geographic areas at highest risk.\n\n\nMETHODS AND FINDINGS\nWe conducted an individual-level cohort study in a population of 75,000 geo-referenced households in Vietnam over the course of two epidemics, on the basis of dengue hospital admissions (n = 3,013). We applied space-time scan statistics and mathematical models to confirm the findings. We identified a surprisingly narrow range of critical human population densities between around 3,000 to 7,000 people/km² prone to dengue outbreaks. In the study area, this population density was typical of villages and some peri-urban areas. Scan statistics showed that areas with a high population density or adequate water supply did not experience severe outbreaks. The risk of dengue was higher in rural than in urban areas, largely explained by lack of piped water supply, and in human population densities more often falling within the critical range. Mathematical modeling suggests that simple assumptions regarding area-level vector/host ratios may explain the occurrence of outbreaks.\n\n\nCONCLUSIONS\nRural areas may contribute at least as much to the dissemination of dengue fever as cities. Improving water supply and vector control in areas with a human population density critical for dengue transmission could increase the efficiency of control efforts. Please see later in the article for the Editors' Summary."
},
{
"pmid": "19627614",
"title": "Combining Google Earth and GIS mapping technologies in a dengue surveillance system for developing countries.",
"abstract": "BACKGROUND\nDengue fever is a mosquito-borne illness that places significant burden on tropical developing countries with unplanned urbanization. A surveillance system using Google Earth and GIS mapping technologies was developed in Nicaragua as a management tool.\n\n\nMETHODS AND RESULTS\nSatellite imagery of the town of Bluefields, Nicaragua captured from Google Earth was used to create a base-map in ArcGIS 9. Indices of larval infestation, locations of tire dumps, cemeteries, large areas of standing water, etc. that may act as larval development sites, and locations of the homes of dengue cases collected during routine epidemiologic surveying were overlaid onto this map. Visual imagery of the location of dengue cases, larval infestation, and locations of potential larval development sites were used by dengue control specialists to prioritize specific neighborhoods for targeted control interventions.\n\n\nCONCLUSION\nThis dengue surveillance program allows public health workers in resource-limited settings to accurately identify areas with high indices of mosquito infestation and interpret the spatial relationship of these areas with potential larval development sites such as garbage piles and large pools of standing water. As a result, it is possible to prioritize control strategies and to target interventions to highest risk areas in order to eliminate the likely origin of the mosquito vector. This program is well-suited for resource-limited settings since it utilizes readily available technologies that do not rely on Internet access for daily use and can easily be implemented in many developing countries for very little cost."
},
{
"pmid": "21146773",
"title": "Using Google Street View to audit neighborhood environments.",
"abstract": "BACKGROUND\nResearch indicates that neighborhood environment characteristics such as physical disorder influence health and health behavior. In-person audit of neighborhood environments is costly and time-consuming. Google Street View may allow auditing of neighborhood environments more easily and at lower cost, but little is known about the feasibility of such data collection.\n\n\nPURPOSE\nTo assess the feasibility of using Google Street View to audit neighborhood environments.\n\n\nMETHODS\nThis study compared neighborhood measurements coded in 2008 using Street View with neighborhood audit data collected in 2007. The sample included 37 block faces in high-walkability neighborhoods in New York City. Field audit and Street View data were collected for 143 items associated with seven neighborhood environment constructions: aesthetics, physical disorder, pedestrian safety, motorized traffic and parking, infrastructure for active travel, sidewalk amenities, and social and commercial activity. To measure concordance between field audit and Street View data, percentage agreement was used for categoric measures and Spearman rank-order correlations were used for continuous measures.\n\n\nRESULTS\nThe analyses, conducted in 2009, found high levels of concordance (≥80% agreement or ≥0.60 Spearman rank-order correlation) for 54.3% of the items. Measures of pedestrian safety, motorized traffic and parking, and infrastructure for active travel had relatively high levels of concordance, whereas measures of physical disorder had low levels. Features that are small or that typically exhibit temporal variability had lower levels of concordance.\n\n\nCONCLUSIONS\nThis exploratory study indicates that Google Street View can be used to audit neighborhood environments."
},
{
"pmid": "24130675",
"title": "Assessing species distribution using Google Street View: a pilot study with the Pine Processionary Moth.",
"abstract": "Mapping species spatial distribution using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to which extent large-scale databases such as Google maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available from Google Street View. We designed a standardized procedure allowing evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that Google database might provide useful occurrence data for mapping the distribution of species which presence can be visually evaluated such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors such as the coverage of Google Street View network with regards to sampling grid size and the spatial distribution of host trees with regards to road network may also be determinant."
},
{
"pmid": "26959679",
"title": "What Makes for Effective Detection Proposals?",
"abstract": "Current top performing object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity and widespread use of detection proposals, it is unclear which trade-offs are made when using them during object detection. We provide an in-depth analysis of twelve proposal methods along with four baselines regarding proposal repeatability, ground truth annotation recall on PASCAL, ImageNet, and MS COCO, and their impact on DPM, R-CNN, and Fast R-CNN detection performance. Our analysis shows that for object detection improving proposal localisation accuracy is as important as improving recall. We introduce a novel metric, the average recall (AR), which rewards both high recall and good localisation and correlates surprisingly well with detection performance. Our findings show common strengths and weaknesses of existing methods, and provide insights and metrics for selecting and tuning proposal methods."
},
{
"pmid": "27295650",
"title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.",
"abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
},
{
"pmid": "19624476",
"title": "Reducing costs and operational constraints of dengue vector control by targeting productive breeding places: a multi-country non-inferiority cluster randomized trial.",
"abstract": "OBJECTIVES\nTo test the non-inferiority hypothesis that a vector control approach targeting only the most productive water container types gives the same or greater reduction of the vector population as a non-targeted approach in different ecological settings and to analyse whether the targeted intervention is less costly.\n\n\nMETHODS\nCluster randomized trial in eight study sites (Venezuela, Mexico, Peru, Kenya, Thailand, Myanmar, Vietnam, Philippines), with each study area divided into 18-20 clusters (sectors or neighbourhoods) of approximately 50-100 households each. Using a baseline pupal-demographic survey, the most productive container types were identified which produced >or=55% of all Ae. aegypti pupae. Clusters were then paired based on similar pupae per person indices. One cluster from each pair was randomly allocated to receive the targeted vector control intervention; the other received the 'blanket' (non-targeted) intervention attempting to reach all water holding containers.\n\n\nRESULTS\nThe pupal-demographic baseline survey showed a large variation of productive container types across all study sites. In four sites the vector control interventions in both study arms were insecticidal and in the other four sites, non-insecticidal (environmental management and/or biological control methods). Both approaches were associated with a reduction of outcome indicators in the targeted and non-targeted intervention arm of the six study sites where the follow up study was conducted (PPI, Pupae per Person Index and BI, Breteau Index). Targeted interventions were as effective as non-targeted ones in terms of PPI. The direct costs per house reached were lower in targeted intervention clusters than in non-targeted intervention clusters with only one exception, where the targeted intervention was delivered through staff-intensive social mobilization.\n\n\nCONCLUSIONS\nTargeting only the most productive water container types (roughly half of all water holding container types) was as effective in lowering entomological indices as targeting all water holding containers at lower implementation costs. Further research is required to establish the most efficacious method or combination of methods for targeted dengue vector interventions."
},
{
"pmid": "18047191",
"title": "Standardizing container classification for immature Aedes aegypti surveillance in Kamphaeng Phet, Thailand.",
"abstract": "For the development of community-based vector control programs for dengue prevention, one of the key components is to formulate an adequate classification scheme for the different containers in which immature Aedes mosquitoes develop. Such a standardized scheme would permit more efficient targeting of efforts and resources in the most productive way possible. Based on field data from Kamphaeng Phet, Thailand, we developed a classification method that consists of the shape (S), use (U), and material (M) of the container (SUM-method). We determined that by targeting the four container classes that held the most Ae. aegypti pupae, adult mosquito production could theoretically be reduced by 70%. The classification method may be equally suitable for similar studies elsewhere in the world. Main advantages of the classification scheme are that categorization of containers does not need to be done a priori, that there is no \"miscellaneous\" class, and that different immature control strategies can be easily and prospectively tested with a local database. We expect that the classification strategy will 1) facilitate comparison of results among different ecological and geographic settings and 2) simplify communication among vector control personnel and affected communities."
},
{
"pmid": "16045462",
"title": "Effectiveness of dengue control practices in household water containers in Northeast Thailand.",
"abstract": "OBJECTIVE\nTo investigate the influence of larval control methods (using temephos, keeping fish and covering containers with lids), water use and weekly cleaning of containers on the presence of Aedes aegypti larvae in water-storage containers in rural and urban households in Khon Kaen province.\n\n\nMETHOD\nCross-sectional questionnaire survey and larval survey covered 966 households and 5821 containers were inspected.\n\n\nRESULT\nIn rural and urban areas larval control was patchy and often ineffective. Consequently, the mosquito indices exceed the target indices for dengue control with the Breteau Indices of 201 and 113, and Container Indices of 25 and 28 in rural and urban areas, respectively. The containers most frequently infested with larvae were rectangular cement containers storing water for bathing (rural: 37.2%; urban: 35%) and flushing the toilets (rural: 35.7%; urban: 34.3%). Keeping fish [adjusted odds ratio (AOR): 0.08-0.16] was the most effective methods of control. Correctly covering containers with lids was similarly effective (AOR: 0.10-0.25) when used on jars for storing drinking water. However, frequent use of containers reduced the effectiveness of lids. Temephos was effective only in dragon jars in urban areas (AOR: 0.46) where a standard package of temephos were available. Weekly cleaning of containers was an effective method for larval control in most types of containers. A combination of control methods increased effectiveness.\n\n\nCONCLUSION\nThis study highlights the complex interaction of household water use and larval control practices as well as the importance of determining the most effective control measures compatible with water practices for implementing control promotion."
},
{
"pmid": "20636303",
"title": "The development of predictive tools for pre-emptive dengue vector control: a study of Aedes aegypti abundance and meteorological variables in North Queensland, Australia.",
"abstract": "SUMMARY OBJECTIVES\nTo describe the meteorological influences on adult dengue vector abundance in Australia for the development of predictive models to trigger pre-emptive control operation.\n\n\nMETHODS\nMultiple linear regression analyses were performed using meteorological data and female Aedes aegypti collection data from BG-Sentinel Mosquito traps placed at 11 monitoring sites in Cairns, north Queensland.\n\n\nRESULTS\nConsiderable regression coefficients (R(2) = 0.64 and 0.61) for longer- and shorter-term factor models respectively were derived. Longer-term factors significantly associated with abundance of adult vectors were mean minimum temperature (lagged 6 month) and mean daily temperature (lagged 4 month), explaining the predictable increase in abundance during the wet season. Factors explaining fluctuation in abundance in the shorter term were mean relative humidity over the previous 2 weeks and current daily average temperature. Rainfall variables were not found to be strong predictors of A. aegypti abundance in either longer- or shorter-term models.\n\n\nCONCLUSIONS\nThe implications of these findings for the development of useful predictive models for vector abundance risks are discussed. Such models can be used to guide the application of pre-emptive dengue vector control, and thereby enhance disease management."
},
{
"pmid": "15218910",
"title": "Longitudinal studies of Aedes aegypti (Diptera: Culicidae) in Thailand and Puerto Rico: population dynamics.",
"abstract": "Aspiration collections of adult Aedes aegypti (L.) were made weekly from inside and outside of houses for 3 yr in a rural Thai village (n = 9,637 females and n = 11,988 males) and for 2 yr in a residential section of San Juan, Puerto Rico (n = 5,941 females and n = 6,739 males). In Thailand, temperature and rainfall fell into distinct seasonal categories, but only temperature was correlated with fluctuations in female abundance. Average weekly temperature 6 wk before mosquitoes were collected and minimum weekly temperature during the week of collection provided the highest correlations with female abundance. Accounting for annual variation significantly improved Thai models of temperature and mosquito abundance. In Puerto Rico, temperature, but not rainfall, could be categorized into seasonal patterns. Neither was correlated with changes in female abundance. At both sites the vast majority of females were collected inside houses and most contained a blood meal. Most teneral females were collected outside. Wing length--an indicator of female size--and parity, egg development or engorgement status were not correlated, indicating that feeding success and survival were not influenced by female size. At both sites, females fed almost exclusively on human hosts (> or = 96%), a pattern that did not change seasonally. In Puerto Rico more nonhuman blood meals were detected in mosquitoes collected outside than inside houses; no such difference was detected in Thailand. Gut contents of dissected females indicated that females in the Thai population had a younger age distribution and fed more frequently on blood than did Ae. aegypti in Puerto Rico. Our results indicated that aspects of this species' biology can vary significantly from one location to another and 1 yr to the next."
},
{
"pmid": "11580037",
"title": "Precipitation and temperature effects on populations of Aedes albopictus (Diptera: Culicidae): implications for range expansion.",
"abstract": "We investigated how temperature and precipitation regime encountered over the life cycle of Aedes albopictus (Skuse) affects populations. Caged populations of A. albopictus were maintained at 22, 26, and 30 degrees C. Cages were equipped with containers that served as sites for oviposition and larval development. All cages were assigned to one of three simulated precipitation regimes: (1) low fluctuation regime - water within the containers was allowed to evaporate to 90% of its maximum before being refilled, (2) high fluctuation regime - water was allowed to evaporate to 25% of its maximum before being refilled, and (3) drying regime - water was allowed to evaporate to complete container dryness before being refilled. Greater temperature and the absence of drying resulted in greater production of adults. Greater temperature in combination with drying were detrimental to adult production. These precipitation effects on adult production were absent at 22 degrees C. Greater temperatures and drying treatments yielded higher and lower eclosion rates, respectively and, both yielded greater mortality. Development time and size of adults decreased with increased temperatures, and drying produced larger adults. Greater temperatures resulted in greater egg mortality. These results suggest that populations occurring in warmer regions are likely to produce more adults as long as containers do not dry completely. Populations in cooler regions are likely to produce fewer adults with the variability of precipitation contributing less to variation in adult production. Predicted climate change in North America is likely to extend the northern distribution of A. albopictus and to limit further its establishment in arid regions."
},
{
"pmid": "20096802",
"title": "The dengue vector Aedes aegypti: what comes next.",
"abstract": "Aedes aegypti is the urban vector of dengue viruses worldwide. While climate influences the geographical distribution of this mosquito species, other factors also determine the suitability of the physical environment. Importantly, the close association of A. aegypti with humans and the domestic environment allows this species to persist in regions that may otherwise be unsuitable based on climatic factors alone. We highlight the need to incorporate the impact of the urban environment in attempts to model the potential distribution of A. aegypti and we briefly discuss the potential for future technology to aid management and control of this widespread vector species."
},
{
"pmid": "11250812",
"title": "Climate change and mosquito-borne disease.",
"abstract": "Global atmospheric temperatures are presently in a warming phase that began 250--300 years ago. Speculations on the potential impact of continued warming on human health often focus on mosquito-borne diseases. Elementary models suggest that higher global temperatures will enhance their transmission rates and extend their geographic ranges. However, the histories of three such diseases--malaria, yellow fever, and dengue--reveal that climate has rarely been the principal determinant of their prevalence or range; human activities and their impact on local ecology have generally been much more significant. It is therefore inappropriate to use climate-based models to predict future prevalence."
},
{
"pmid": "8146129",
"title": "Dengue: the risk to developed and developing countries.",
"abstract": "Dengue viruses are members of the Flaviviridae, transmitted principally in a cycle involving humans and mosquito vectors. In the last 20 years the incidence of dengue fever epidemics has increased and hyperendemic transmission has been established over a geographically expanding area. A severe form, dengue hemorrhagic fever (DHF), is an immunopathologic disease occurring in persons who experience sequential dengue infections. The risk of sequential infections, and consequently the incidence of DHF, has risen dramatically, first in Asia and now in the Americas. At the root of the emergence of dengue as a major health problem are changes in human demography and behavior, leading to unchecked populations of and increased exposure to the principal domestic mosquito vector, Aedes aegypti. Virus-specified factors also influence the epidemiology of dengue. Speculations on future events in the epidemiology, evolution, and biological expression of dengue are presented."
},
{
"pmid": "20428384",
"title": "Eco-bio-social determinants of dengue vector breeding: a multicountry study in urban and periurban Asia.",
"abstract": "OBJECTIVE\nTo study dengue vector breeding patterns under a variety of conditions in public and private spaces; to explore the ecological, biological and social (eco-bio-social) factors involved in vector breeding and viral transmission, and to define the main implications for vector control.\n\n\nMETHODS\nIn each of six Asian cities or periurban areas, a team randomly selected urban clusters for conducting standardized household surveys, neighbourhood background surveys and entomological surveys. They collected information on vector breeding sites, people's knowledge, attitudes and practices surrounding dengue, and the characteristics of the study areas. All premises were inspected; larval indices were used to quantify vector breeding sites, and pupal counts were used to identify productive water container types and as a proxy measure for adult vector abundance.\n\n\nFINDINGS\nThe most productive vector breeding sites were outdoor water containers, particularly if uncovered, beneath shrubbery and unused for at least one week. Peridomestic and intradomestic areas were much more important for pupal production than commercial and public spaces other than schools and religious facilities. A complex but non-significant association was found between water supply and pupal counts, and lack of waste disposal services was associated with higher vector abundance in only one site. Greater knowledge about dengue and its transmission was associated with lower mosquito breeding and production. Vector control measures (mainly larviciding in one site) substantially reduced larval and pupal counts and \"pushed\" mosquito breeding to alternative containers.\n\n\nCONCLUSION\nVector breeding and the production of adult Aedes aegypti are influenced by a complex interplay of factors. Thus, to achieve effective vector management, a public health response beyond routine larviciding or focal spraying is essential."
},
{
"pmid": "24522133",
"title": "Weather factors influencing the occurrence of dengue fever in Nakhon Si Thammarat, Thailand.",
"abstract": "This study explored the impact of weather variability on the transmission of dengue fever in Nakhon Si Thammarat, Thailand. Data on monthly-notified cases of dengue fever, over the period of January 1981 - June 2012 were collected from the Bureau of Epidemiology, Department of Disease Control, Ministry of Public Health. Weather data over the same period were obtained from the Thai Meteorological Department. Spearman correlation analysis and time-series adjusted Poisson regression analysis were performed to quantify the relationship between weather and the number of dengue cases. The results showed that maximum and minimum temperatures at a lag of zero months, the amount of rainfall, and relative humidity at a lag of two months were significant predictors of dengue incidence in Nakhon Si Thammarat. The time series Poisson regression model demonstrated goodness-of-fit with a correlation between observed and predicted number of dengue incidence rate of 91.82%. This model could be used to optimise dengue prevention by predicting trends in dengue incidence. Accurate predictions, for even a few months, provide an invaluable opportunity to mount a vector control intervention or to prepare for hospital demand in the community."
},
{
"pmid": "16739405",
"title": "Ecological factors influencing Aedes aegypti (Diptera: Culicidae) productivity in artificial containers in Salinas, Puerto Rico.",
"abstract": "We investigated the effects of environmental factors and immature density on the productivity of Aedes aegypti (L.) and explored the hypothesis that immature populations were under nutritional stress. In total, 1,367 containers with water in 624 premises were studied in Salinas, southern Puerto Rico (May-July 2004). We counted 3,632 pupae, and most female pupae (70%) were in five of 18 types of containers. These containers were unattended and influenced by local yards' environmental conditions. Pupal productivity was significantly associated with the number of trees per premise, water volume, and lower water temperatures. Larval and pupal abundance were larger in containers with leaf litter or algae. Pupal productivity and biomass of emerging females varied in containers with litter of different tree species. We found a significant and positive association between numbers of larvae and pupae of Ae. aegypti and a negative relationship between larval density and mass of emerging females. From multivariate analyses, we interpreted that 1) food limitation or competition existed in a number of containers; and 2) to a lesser extent, there was lack of negative larval density effects in containers with a larger water volume and lower temperature, where emerging females were not under nutritional stress. Corroborating evidence for food limitation or intraspecific competition effects came from our observations that females emerging in the field had an average body mass comparable with those females produced in the laboratory with the lowest feeding regime. Ae. aegypti larvae in Salinas are most likely influenced by resource limitation or competition and by rainfall in unmanaged containers in the absence of aquatic predators. Source reduction and improved yard management targeting unattended containers would eliminate most Ae. aegypti productivity and removal or control of shaded, larger containers would eliminate the production of the largest emerging mosquito females in the study area."
},
{
"pmid": "15825756",
"title": "Investigation of relationships between Aedes aegypti egg, larvae, pupae, and adult density indices where their main breeding sites were located indoors.",
"abstract": "Aedes aegypti (L.) density indices obtained in a dengue fever (DF) endemic area were compared. One hundred and twenty premises, in an urban area of Colombia where dengue type-1 and type-2 virus cocirculated, were randomly selected and sampled for 7 months. The geometric mean monthly numbers (density index, DI) of Ae. aegypti eggs (ODI), 4th instar larvae (LDI), pupae (PDI), and adults (ADI) were calculated based on the use of ovitraps, nets, and manual aspirators, respectively. A negative temporal correlation was observed between the LDI and the ODI (r = -0.83, df = 5, and P < 0.01). Positive temporal correlations were only observed between the LDI and the PDI (r = 0.90, df = 5, and P < 00.5) and the Breteau and House indices (r = 0.86, df = 5, and P < 0.01). No other correlations were found between these indices and any of the other density indices or the incidence of suspected DF cases in residents, the temperature, the rainfall, or seasonal fluctuations. Our results were, therefore, probably due to the most productive Ae. aegypti breeding sites (large water containers) being located indoors within this study area. The number of adult female Ae. aegypti/person (n = 0.5) and pupae/person (n = 11) in our study area were lower and dramatically higher than the transmission thresholds previously reported for adult and pupae, respectively. Because there were confirmed DF cases during the study period, the transmission threshold based on the Ae. aegypti pupae was clearly more reliable. We found that the mean ovitrap premise index (OPI) was 98.2% during this study and that the mean larval (L-4th instars) premise index (LPI) was 59.2%, and therefore we suggest that the OPI and LPI would be more sensitive methods to gauge the effectiveness of A. aegypti control programs."
},
{
"pmid": "17019767",
"title": "Different spatial distribution of Aedes aegypti and Aedes albopictus along an urban-rural gradient and the relating environmental factors examined in three villages in northern Thailand.",
"abstract": "A larval survey of dengue vectors was conducted from July to November 1966 and from May to November 1997 in Chiangmai Province, Thailand. Three villages in urban, transition, and rural areas were selected for the survey to clarify the spatial distribution of Ae. aegypti and Ae. albopictus along an urban-rural ecological gradient. The average number of Ae. aegypti larvae in larvitraps was higher in the urban area than in the rural area, as we expected, whereas the opposite was found for Ae. albopictus, rural area > urban area. A house survey of larvae-inhabiting containers showed significant differences in the number and composition of these containers among the study areas. Significant differences were also found in the average distance between houses, average tree height, and average percentage of vegetation cover for each house. The seasonal pattern of rainfall recorded in each study area did not show great differences among the study areas. The response of Ae. aegypti and Ae. albopictus to the urban-rural gradient is discussed in relation to the possibility of applying geographic information system techniques to plan the control strategy and surveillance of dengue vectors."
}
] |
Frontiers in Aging Neuroscience | 31427959 | PMC6688130 | 10.3389/fnagi.2019.00205 | Predicting MCI Status From Multimodal Language Data Using Cascaded Classifiers | Recent work has indicated the potential utility of automated language analysis for the detection of mild cognitive impairment (MCI). Most studies combining language processing and machine learning for the prediction of MCI focus on a single language task; here, we consider a cascaded approach to combine data from multiple language tasks. A cohort of 26 MCI participants and 29 healthy controls completed three language tasks: picture description, reading silently, and reading aloud. Information from each task is captured through different modes (audio, text, eye-tracking, and comprehension questions). Features are extracted from each mode, and used to train a series of cascaded classifiers which output predictions at the level of features, modes, tasks, and finally at the overall session level. The best classification result is achieved through combining the data at the task level (AUC = 0.88, accuracy = 0.83). This outperforms a classifier trained on neuropsychological test scores (AUC = 0.75, accuracy = 0.65) as well as the “early fusion” approach to multimodal classification (AUC = 0.79, accuracy = 0.70). By combining the predictions from the multimodal language classifier and the neuropsychological classifier, this result can be further improved to AUC = 0.90 and accuracy = 0.84. In a correlation analysis, language classifier predictions are found to be moderately correlated (ρ = 0.42) with participant scores on the Rey Auditory Verbal Learning Test (RAVLT). The cascaded approach for multimodal classification improves both system performance and interpretability. This modular architecture can be easily generalized to incorporate different types of classifiers as well as other heterogeneous sources of data (imaging, metabolic, etc.). | 2. Related Work
The discovery of non-invasive biomarkers to detect early stages of cognitive decline in Alzheimer's disease (AD) and related dementias is a significant challenge, and conventional neuropsychological tests may not be sensitive to some of the earliest changes (Drummond et al., 2015; Beltrami et al., 2018). One potential alternative to conventional cognitive testing is the analysis of naturalistic language use, which can be less stressful (König et al., 2015), more easily repeatable (Forbes-McKay et al., 2013), and a better predictor of actual functional ability (Sajjadi et al., 2012). We briefly review the relevant findings with respect to language production and reception in MCI and early-stage AD.
2.1. Narrative Speech Production in MCI
Spontaneous, connected speech may be affected in the earliest stages of cognitive decline, as speech production involves the coordination of multiple cognitive domains, including semantic memory, working memory, attention, and executive processes (Mueller et al., 2018), activating numerous areas on both sides of the brain (Silbert et al., 2014). We summarize the previous work examining language and speech in MCI, as well as any reported correlations with cognitive test scores.
The sensitivity of narrative speech analysis to MCI may depend to some extent on the nature of the production task, as different tasks impose different sets of constraints (Boschi et al., 2017). Picture description tasks are the most relevant to our protocol. Cuetos et al. (2007) used the Cookie Theft picture description task from the Boston Diagnostic Aphasia Examination (BDAE) (Goodglass et al., 1983) to elicit speech samples from asymptomatic, middle-aged participants with and without the E280A mutation (which inevitably leads to AD). They found a significant reduction in information content in the carrier group. Ahmed et al. (2013) also analyzed Cookie Theft picture descriptions, and reported deficits in various aspects of connected speech in 15 MCI participants who later went on to develop AD. Impairments were observed in speech production and fluency, as well as syntactic complexity and semantic content. Mueller et al. (2017) analyzed Cookie Theft speech samples from 264 English-speaking participants at two time points, and found that individuals with early MCI (n = 64) declined faster than healthy controls on measures of semantic content and speech fluency. Measures of lexical diversity and syntactic complexity did not differ significantly between the groups. Drummond et al. (2015) reported an increased production of repetitions and irrelevant details from MCI participants (n = 22) on a task that involved constructing a story from a series of images. However, other work has found no significant differences between MCI and control participants on either verbal (Bschor et al., 2001) or written (Tsantali et al., 2013) Cookie Theft narratives. Forbes-McKay and Venneri (2005) found that while picture description tasks in general can be used to discriminate pathological decline, highly complex images are more sensitive to the earliest stages of decline.
Other work has specifically examined the acoustic properties of speech in MCI and dementia. Temporal and prosodic changes in connected speech are well-documented in AD, including decreased articulation rate and speech tempo, as well as increased hesitation ratio (Hoffmann et al., 2010), reduced verbal rate and phonation rate, and increased pause rate (Lee et al., 2011), and increased number of pauses outside syntactic boundaries (Gayraud et al., 2011). Spectrographic properties, such as number of periods of voice, number of voice breaks, shimmer, and noise-to-harmonics ratio have also been shown to exhibit changes in AD (Meilán et al., 2014). There is evidence that these changes might begin very early in the disease progression, including in the prodromal or MCI stages (Tóth et al., 2015; Alhanai et al., 2017; König et al., 2018b).
Some correlations between characteristics of narrative speech and neuropsychological test scores have been reported for AD (Ash et al., 2007; Kavé and Goral, 2016; Kavé and Dassa, 2018), however, fewer studies have examined possible correlations in the MCI stage. Tsantali et al. (2013) found a significant correlation between performance on an oral picture description task and the Mini-Mental State Examination (MMSE) (Folstein et al., 1975), in a population of 119 Greek participants with amnestic MCI, mild AD, and no impairment. However, MMSE was more highly correlated with other language tasks, including reading, writing, sentence repetition, and verbal fluency.
Mueller et al. (2017) also examined the correlations between measures of narrative speech and standardized neuropsychological test scores, and found only weak correlations: e.g., the correlations between the semantic factor and the Boston Naming Test (Kaplan et al., 2001) and animal fluency task (Schiller, 1947) were positive but not statistically significant.
As the authors point out, this may be due to the fact that characteristics of “empty” spontaneous speech, such as an increased production of pronouns, could reflect working memory problems rather than purely semantic impairments (Almor et al., 1999).
To summarize, while the findings with respect to narrative speech production in MCI are somewhat mixed, on our Cookie Theft picture description task we expect the MCI group to show a reduction in semantic content and reduced speech fluency, including a slower rate of speech and increased pausing. Performance on the picture description task may be correlated with scores on the Boston Naming Test and MMSE score.
2.2. Reading in MCI
Reading ability can be assessed in a variety of ways; for example, through reading comprehension, analysis of speech characteristics while reading aloud, and the recording of eye-movements. We summarize the results with respect to MCI along each of these dimensions.
Segkouli et al. (2016) found that when MCI participants were given a paragraph to read and associated questions to answer, they had fewer correct responses and longer time to complete the task, relative to healthy controls. Tsantali et al. (2013) found a strong correlation between MMSE score and the ability to read and comprehend phrases and paragraphs, in participants with amnestic MCI and mild AD. When comparing to healthy controls, they found that reading comprehension was one of the earliest language abilities to be affected in MCI. In a related task, Hudon et al. (2006) examined 14 AD, 14 MCI, and 22 control participants on a text memory task, and found that both MCI and AD participants were impaired on the recollection of detail information and recalling the general meaning of the text. Chapman et al. (2002) reported a similar result, and a comparison with the control and AD groups suggested that detail-level processing is affected earlier in the disease progression. Results such as this are generally attributed to impairments in episodic memory and a declining ability to encode new information, which can be evident from the early stages of cognitive decline (Belleville et al., 2008).
Further evidence for this hypothesis is given by Schmitter-Edgecombe and Creamer (2010), who employed a “think-aloud” protocol to examine the reading strategies of 23 MCI participants and 23 controls during a text comprehension task. This methodology revealed that MCI participants made proportionally fewer explanatory inferences, which link knowledge from earlier in the text with the current sentence to form causal connections and promote comprehension and understanding. The authors suggest that this could indicate difficulties accessing and applying narrative information stored in episodic memory. They administered a series of true-false questions after each text, and found that MCI participants tended to answer fewer questions correctly, and that comprehension accuracy in the MCI group was correlated with the Rey Auditory Verbal Learning Test (RAVLT) of word learning, as well as RAVLT immediate and delayed recall.
Recent work has used eye-tracking technology to examine reading processes in greater detail. For example, Fernández et al. (2013) recorded the eye movements of 20 people with mild AD and 20 matched controls while they read sentences. They found that the AD patients had an increased number of fixations, regressions, and skipped words, and a decreased number of words with only one fixation, relative to controls. In related work, Fernández et al. (2014) found that participants with mild AD showed an increase in gaze duration. Lueck et al. (2000) similarly reported more irregular eye movements while reading in their mild-moderate AD group (n = 14), as well as increased regressions and longer fixation times. Biondi et al. (2017) used eye-tracking data in a deep learning model to distinguish between AD patients and control participants while reading sentences and proverbs. Previous work from our group examined a similar set of features, extending this finding to MCI (Fraser et al., 2017). More generally, Beltrán et al. (2018) propose that the analysis of eye movements (in reading, as well as other paradigms) could support the early diagnosis of AD, and Pereira et al. (2014) suggest that eye movements may be able to predict the conversion from MCI to AD, as eye-movements can be sensitive to subtle changes in memory, visual, and executive processes.
When texts are read aloud, the speech can also be analyzed from an acoustic perspective, in a similar manner to spontaneous speech. De Looze et al. (2018) found that participants with MCI and mild AD generally read slower, with shorter speech chunks relative to controls, and a greater number of pauses and dysfluencies. Segkouli et al. (2016) also observed a reduction in speech rate, reporting a significant positive correlation between the time taken to complete the paragraph reading comprehension task and the time taken to complete a variety of neuropsychological tests.
Other work has reported increased difficulty in reading words with irregular grapheme-to-phoneme correspondence (i.e., surface dyslexia) in AD (Patterson et al., 1994), although this finding is not universal (Lambon Ralph et al., 1995). A longitudinal study of AD participants concluded that these kinds of surface reading impairments are only significantly correlated with disease severity at the later stages of the disease (Fromm et al., 1991).
In our study, then, we expect that MCI participants will not have difficulty producing the words associated with the texts, but they may read slower and produce more pauses and dysfluencies. MCI participants are expected to answer fewer comprehension questions correctly, as declines in working and episodic memory affect their ability to integrate and retain information from the texts. Their eye movements may show similarities to those of mild AD patients, with an increase in fixations, regressions, and skipped words, and longer gaze duration, although possibly to a lesser extent than has been reported in AD.
2.3. Multimodal Machine Learning for MCI Detection
The essential challenge of multimodal learning is to combine information from different sources (i.e., modalities, or modes) to improve performance on some final task, where those information sources may be complementary, redundant, or even contradictory. Traditionally, approaches to multimodal learning have been broadly separated into the two categories of early (or feature-level) fusion and late (or decision-level) fusion, although hybrid approaches also exist (Baltrusaitis et al., 2018). In early fusion, features extracted from different modes are concatenated into a single feature vector, and used to train a classifier. One advantage to this approach is that, depending on the classifier, it can be possible to model relationships between features from different modes. In late fusion, a separate classifier is trained for each mode, and the predictions are then combined, often through a process of voting.
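To make the two strategies concrete, the following minimal sketch contrasts early and late fusion for a hypothetical two-mode (speech and text) feature set; the arrays, the choice of logistic regression, and the probability-averaging vote are illustrative assumptions rather than the configuration used in the paper.

```python
# Minimal sketch contrasting early and late fusion for a two-mode setup.
# Feature matrices, labels, and the classifier choice are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 55                                   # e.g., 26 MCI + 29 controls
X_speech = rng.normal(size=(n, 20))      # acoustic features (placeholder)
X_text = rng.normal(size=(n, 30))        # linguistic features (placeholder)
y = rng.integers(0, 2, size=n)           # 1 = MCI, 0 = control (placeholder)

# Early fusion: concatenate all features and train one classifier.
X_early = np.hstack([X_speech, X_text])
early_clf = LogisticRegression(max_iter=1000).fit(X_early, y)
early_pred = early_clf.predict(X_early)

# Late fusion: one classifier per mode, then combine the predictions
# (here by averaging predicted probabilities and thresholding at 0.5).
speech_clf = LogisticRegression(max_iter=1000).fit(X_speech, y)
text_clf = LogisticRegression(max_iter=1000).fit(X_text, y)
avg_prob = (speech_clf.predict_proba(X_speech)[:, 1]
            + text_clf.predict_proba(X_text)[:, 1]) / 2
late_pred = (avg_prob >= 0.5).astype(int)
```

In practice, both strategies would of course be fitted and evaluated with cross-validation or held-out data rather than on the training set itself; the sketch only illustrates where the combination happens in each case.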
One advantage of the late fusion approach is that it avoids the high-dimensional feature space resulting from early fusion, which can make it more appropriate for smaller data sets. Late fusion also offers more flexibility, e.g., the ability to use different classification models for each mode (Wu et al., 1999).
Multimodal learning has been applied to a variety of natural language processing (NLP) tasks, including audio-visual speech recognition (Potamianos et al., 2003), emotion and affect recognition (Schuller et al., 2011; Valstar et al., 2013; D'Mello and Kory, 2015), multimedia information retrieval (Atrey et al., 2010), and many others. With respect to dementia detection, multimodal approaches have been most effective in the medical imaging domain, where such methodologies have been used to combine information from various brain imaging technologies (Suk et al., 2014; Thung et al., 2017). For example, work from Beltrachini et al. (2015) and De Marco et al. (2017) has shown that the detection of MCI can be improved when combining features from MRI images with cognitive test scores in a multimodal machine learning classifier, compared to learning from either data source individually.
However, previous NLP work on detecting MCI and dementia has typically focused on language production elicited by a single task, such as picture description (Fraser et al., 2016; Yancheva and Rudzicz, 2016), story recall (Roark et al., 2011; Lehr et al., 2013; Prud'hommeaux and Roark, 2015), conversation (Thomas et al., 2005; Asgari et al., 2017), or tests of semantic verbal fluency (Pakhomov and Hemmy, 2014; Linz et al., 2017). In cases where more than one speech elicitation task has been considered, the approach has typically been to simply concatenate the features in an early fusion paradigm.
For example, Toth et al. (2018) consider three different speech tasks, concatenating speech-based features extracted from each task for a best MCI-vs.-control classification accuracy of 0.75. They do not report the results for each task individually, so it is not possible to say whether one task is more discriminative than the others. In a similar vein, König et al. (2018b) combine features from eight language tasks into a single classifier, and distinguish between MCI and subjective cognitive impairment with an accuracy of 0.86, but include only a qualitative discussion of the relative contributions of each of the tasks to the final prediction. Gosztolya et al. (2019) use a late fusion approach to combine linguistic and acoustic features for MCI detection; however, the data from their three tasks was again merged into a single feature set for each mode, obscuring any differences in predictive power between the tasks.
2.4. Hypotheses
Previous work has found that speech, language, eye-movements, and comprehension/recall can all exhibit changes in the early stages of cognitive decline. Furthermore, tasks assessing these abilities have been successfully used to detect MCI using machine learning. However, to our knowledge there has been no previous work combining information from all these various sources, and the few studies in the field which have explored multimodal classification have primarily focused on a single approach to fusing the data sources. Additionally, there has been no prior work attempting to link the predictions generated by a machine learning classifier to standardized neuropsychological testing.
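One simple way such a link could be examined is a rank correlation between per-participant classifier outputs and scores on a standardized test; the sketch below illustrates the idea with SciPy, using toy values and hypothetical variable names rather than any data from the study.

```python
# Hypothetical illustration: correlating per-participant classifier outputs
# with a standardized neuropsychological score (e.g., RAVLT). The numbers
# below are placeholders for illustration only, not data from the study.
import numpy as np
from scipy.stats import spearmanr

# Probability of MCI assigned to each participant by some classifier.
pred_prob_mci = np.array([0.81, 0.32, 0.65, 0.12, 0.90, 0.47, 0.71, 0.25])

# Corresponding test scores for the same participants.
ravlt_score = np.array([28, 51, 35, 60, 22, 44, 31, 55])

rho, p_value = spearmanr(pred_prob_mci, ravlt_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

Spearman's rank correlation is a natural choice for this kind of check because it assumes only a monotonic relationship and is robust to non-normally distributed scores.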
Thus, the two questions that we seek to answer in the current study are:
Can we improve the accuracy of detection of MCI by combining information from different modes and tasks, and at what level of analysis is the information best integrated? Our hypothesis is that combining all the available information will lead to better performance than using any single mode or task.
Do the predictions made by the machine learning classifier correlate with participant scores on standard tests of language and other cognitive abilities? Our hypotheses are: (a) Neuropsychological tests which are timed will be correlated with predictions based on the speech mode, which also encodes timing information; (b) Neuropsychological tests in the language domain will be correlated with predictions based on the language mode; and (c) Predictions which combine information from all modes and tasks will be correlated with MMSE, which also involves many cognitive domains. Since there is no previous work correlating eye-movements while reading with cognitive test scores, we do not generate a specific hypothesis for this mode, although we do include it in the analysis. | [
"24142144",
"10210631",
"29067328",
"9447440",
"29994351",
"22112550",
"18394487",
"26238814",
"30483116",
"29887912",
"28321196",
"11768376",
"12218649",
"19683225",
"8722896",
"17445292",
"19363178",
"29623841",
"28891818",
"26074814",
"24282223",
"25080188",
"1202204",
"25287871",
"16193251",
"26484921",
"1743032",
"16631882",
"21080826",
"28591751",
"12674822",
"20380247",
"16938019",
"27171756",
"29886493",
"28847279",
"27239498",
"10780625",
"24481220",
"28436388",
"9447441",
"28174533",
"29669461",
"23845236",
"25190209",
"25031536",
"10190820",
"28386518",
"18072981",
"10675135",
"22199464",
"1447438",
"18905647",
"20438657",
"25267658",
"25042445",
"18569251",
"29165085",
"22661485",
"23628238",
"8970012",
"26174331"
] | [
{
"pmid": "24142144",
"title": "Connected speech as a marker of disease progression in autopsy-proven Alzheimer's disease.",
"abstract": "Although an insidious history of episodic memory difficulty is a typical presenting symptom of Alzheimer's disease, detailed neuropsychological profiling frequently demonstrates deficits in other cognitive domains, including language. Previous studies from our group have shown that language changes may be reflected in connected speech production in the earliest stages of typical Alzheimer's disease. The aim of the present study was to identify features of connected speech that could be used to examine longitudinal profiles of impairment in Alzheimer's disease. Samples of connected speech were obtained from 15 former participants in a longitudinal cohort study of ageing and dementia, in whom Alzheimer's disease was diagnosed during life and confirmed at post-mortem. All patients met clinical and neuropsychological criteria for mild cognitive impairment between 6 and 18 months before converting to a status of probable Alzheimer's disease. In a subset of these patients neuropsychological data were available, both at the point of conversion to Alzheimer's disease, and after disease severity had progressed from the mild to moderate stage. Connected speech samples from these patients were examined at later disease stages. Spoken language samples were obtained using the Cookie Theft picture description task. Samples were analysed using measures of syntactic complexity, lexical content, speech production, fluency and semantic content. Individual case analysis revealed that subtle changes in language were evident during the prodromal stages of Alzheimer's disease, with two-thirds of patients with mild cognitive impairment showing significant but heterogeneous changes in connected speech. However, impairments at the mild cognitive impairment stage did not necessarily entail deficits at mild or moderate stages of disease, suggesting non-language influences on some aspects of performance. Subsequent examination of these measures revealed significant linear trends over the three stages of disease in syntactic complexity, semantic and lexical content. The findings suggest, first, that there is a progressive disruption in language integrity, detectable from the prodromal stage in a subset of patients with Alzheimer's disease, and secondly that measures of semantic and lexical content and syntactic complexity best capture the global progression of linguistic impairment through the successive clinical stages of disease. The identification of disease-specific language impairment in prodromal Alzheimer's disease could enhance clinicians' ability to distinguish probable Alzheimer's disease from changes attributable to ageing, while longitudinal assessment could provide a simple approach to disease monitoring in therapeutic trials."
},
{
"pmid": "10210631",
"title": "Why do Alzheimer patients have difficulty with pronouns? Working memory, semantics, and reference in comprehension and production in Alzheimer's disease.",
"abstract": "Three experiments investigated the extent to which semantic and working-memory deficits contribute to Alzheimer patients' impairments in producing and comprehending referring expressions. In Experiment 1, the spontaneous speech of 11 patients with Alzheimer's disease (AD) contained a greater ratio of pronouns to full noun phrases than did the spontaneous speech produced by 9 healthy controls. Experiments 2 and 3 used a cross-modal naming methodology to compare reference comprehension in another group of 10 patients and 10 age-matched controls. In Experiment 2, patients were less sensitive than healthy controls to the grammatical information necessary for processing pronouns. In Experiment 3, patients were better able to remember referent information in short paragraphs when reference was maintained with full noun phrases rather than pronouns, but healthy controls showed the reverse pattern. Performance in all three experiments was linked to working memory performance but not to word finding difficulty. We discuss these findings in terms of a theory of reference processing, the Informational Load Hypothesis, which views referential impairments in AD as the consequence of normal discourse processing in the context of a working memory impairment."
},
{
"pmid": "29067328",
"title": "Predicting mild cognitive impairment from spontaneous spoken utterances.",
"abstract": "INTRODUCTION\nTrials in Alzheimer's disease are increasingly focusing on prevention in asymptomatic individuals. We hypothesized that indicators of mild cognitive impairment (MCI) may be present in the content of spoken language in older adults and be useful in distinguishing those with MCI from those who are cognitively intact. To test this hypothesis, we performed linguistic analyses of spoken words in participants with MCI and those with intact cognition participating in a clinical trial.\n\n\nMETHODS\nData came from a randomized controlled behavioral clinical trial to examine the effect of unstructured conversation on cognitive function among older adults with either normal cognition or MCI (ClinicalTrials.gov: NCT01571427). Unstructured conversations (but with standardized preselected topics across subjects) were recorded between interviewers and interviewees during the intervention sessions of the trial from 14 MCI and 27 cognitively intact participants. From the transcription of interviewees recordings, we grouped spoken words using Linguistic Inquiry and Word Count (LIWC), a structured table of words, which categorizes 2500 words into 68 different word subcategories such as positive and negative words, fillers, and physical states. The number of words in each LIWC word subcategory constructed a vector of 68 dimensions representing the linguistic features of each subject. We used support vector machine and random forest classifiers to distinguish MCI from cognitively intact participants.\n\n\nRESULTS\nMCI participants were distinguished from those with intact cognition using linguistic features obtained by LIWC with 84% classification accuracy which is well above chance 60%.\n\n\nDISCUSSION\nLinguistic analyses of spoken language may be a powerful tool in distinguishing MCI subjects from those with intact cognition. Further studies to assess whether spoken language derived measures could detect changes in cognitive functions in clinical trials are warrented."
},
{
"pmid": "9447440",
"title": "The GDS/FAST staging system.",
"abstract": "Staging methodologies are an essential tool in the assessment of disease severity in progressive dementing illness. Several different instruments have been developed for this purpose. One of the most widely used methodologies is the Global Deterioration Scale/Functional Assessment Staging (GDS/FAST) system. This system has been studied extensively and proven to be reliable and valid for staging dementia in Alzheimer's disease (AD) in diverse settings. One of the major advantages of this system is that it spans, demarcates, and describes the entire course of normal aging and progressive AD until the final substages of the disease process. Other advantages include: (a) greatly enhanced ability to track the longitudinal course of AD, (b) improved clinicopathologic observations of AD interrelationships, and (c) enhanced diagnostic, differential diagnostic, and prognostic information. This article presents a brief overview of the GDS/FAST staging system."
},
{
"pmid": "29994351",
"title": "Multimodal Machine Learning: A Survey and Taxonomy.",
"abstract": "Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research."
},
{
"pmid": "22112550",
"title": "Extent and neural basis of semantic memory impairment in mild cognitive impairment.",
"abstract": "An increasing number of studies indicate that semantic memory is impaired in mild cognitive impairment (MCI). However, the extent and the neural basis of this impairment remain unknown. The aim of the present study was: 1) to evaluate whether all or only a subset of semantic domains are impaired in MCI patients; and 2) to assess the neural substrate of the semantic impairment in MCI patients using voxel-based analysis of MR grey matter density and SPECT perfusion. 29 predominantly amnestic MCI patients and 29 matched control subjects participated in this study. All subjects underwent a full neuropsychological assessment, along with a battery of five tests evaluating different domains of semantic memory. A semantic memory composite Z-score was established on the basis of this battery and was correlated with MRI grey matter density and SPECT perfusion measures. MCI patients were found to have significantly impaired performance across all semantic tasks, in addition to their anterograde memory deficit. Moreover, no temporal gradient was found for famous faces or famous public events and knowledge for the most remote decades was also impaired. Neuroimaging analyses revealed correlations between semantic knowledge and perirhinal/entorhinal areas as well as the anterior hippocampus. Therefore, the deficits in the realm of semantic memory in patients with MCI is more widespread than previously thought and related to dysfunction of brain areas beyond the limbic-diencephalic system involved in episodic memory. The severity of the semantic impairment may indicate a decline of semantic memory that began many years before the patients first consulted."
},
{
"pmid": "18394487",
"title": "Characterizing the memory changes in persons with mild cognitive impairment.",
"abstract": "Persons with mild cognitive impairment (MCI) do not meet criteria for Alzheimer's disease (AD) but are at high risk for developing the disease. Presence of a memory deficit is a key component in the characterization of MCI. This chapter presents empirical studies that attempt to describe and understand the nature of the memory deficit in MCI with a focus on episodic memory and working memory. Cross-sectional studies report prominent deficits of episodic memory characterized by impaired encoding of the contextual information that makes up complex events. This results in reduced free and cued recall, impaired recognition, and impaired associative learning. Although semantic encoding is found to be impaired in conditions that rely on explicit and intentional retrieval, preserved semantic processing is found with automatic conditions of testing. Studies indicate the presence of a partial deficit of working memory with the ability to divide attention being most severely impaired. However, there appears to be heterogeneity as to the extent of the working memory impairment. The presence of vascular anomalies on MRI, as well as being in a more advanced stage in the continuum from MCI to AD, are associated with more severe and more pervasive working memory deficits. Finally, longitudinal studies indicate that the combination of episodic and working memory deficits represents a strong predictor of progression from MCI to AD."
},
{
"pmid": "26238814",
"title": "Integration of Cognitive Tests and Resting State fMRI for the Individual Identification of Mild Cognitive Impairment.",
"abstract": "BACKGROUND\nResting-state functional magnetic resonance imaging (RS-fMRI) appears as a promising imaging technique to identify early biomarkers of Alzheimer type neurodegeneration, which can be more sensitive to detect the earliest stages of this disease than structural alterations. Recent findings have highlighted interesting patterns of alteration in resting-state activity at the mild cognitive impairment (MCI) prodromal stage of Alzheimer's disease. However, it has not been established whether RS-fMRI alterations may be of any diagnostic use at the individual patient level and whether parameters derived from RS-fMRI images add any quantitative predictive/classificatory value to standard cognitive tests (CTs).\n\n\nMETHODS\nWe computed a set of 444 features based on RS-fMRI and used 21 variables obtained from a neuropsychological assessment battery of tests in 29 MCI patients and 21 healthy controls. We used these indices to evaluate their impact on MCI/healthy control classification using machine learning algorithms and a 10-fold cross validation analysis.\n\n\nRESULTS\nA classification accuracy (sensitivity/ specificity/area under curve/positive predictive value/negative predictive value) of 0.9559 (0.9620/0.9470/ 0.9517/0.9720/0.9628) was achieved when using both sets of indices. There was a statistically significant improvement over the use of CTs only, highlighting the superior classificatory role of RS-fMRI.\n\n\nCONCLUSIONS\nRS-fMRI provides complementary information to CTs for MCI-patient/healthy control individual classification."
},
{
"pmid": "30483116",
"title": "Speech Analysis by Natural Language Processing Techniques: A Possible Tool for Very Early Detection of Cognitive Decline?",
"abstract": "Background: The discovery of early, non-invasive biomarkers for the identification of \"preclinical\" or \"pre-symptomatic\" Alzheimer's disease and other dementias is a key issue in the field, especially for research purposes, the design of preventive clinical trials, and drafting population-based health care policies. Complex behaviors are natural candidates for this. In particular, recent studies have suggested that speech alterations might be one of the earliest signs of cognitive decline, frequently noticeable years before other cognitive deficits become apparent. Traditional neuropsychological language tests provide ambiguous results in this context. In contrast, the analysis of spoken language productions by Natural Language Processing (NLP) techniques can pinpoint language modifications in potential patients. This interdisciplinary study aimed at using NLP to identify early linguistic signs of cognitive decline in a population of elderly individuals. Methods: We enrolled 96 participants (age range 50-75): 48 healthy controls (CG) and 48 cognitively impaired participants: 16 participants with single domain amnestic Mild Cognitive Impairment (aMCI), 16 with multiple domain MCI (mdMCI) and 16 with early Dementia (eD). Each subject underwent a brief neuropsychological screening composed by MMSE, MoCA, GPCog, CDT, and verbal fluency (phonemic and semantic). The spontaneous speech during three tasks (describing a complex picture, a typical working day and recalling a last remembered dream) was then recorded, transcribed and annotated at various linguistic levels. A multidimensional parameter computation was performed by a quantitative analysis of spoken texts, computing rhythmic, acoustic, lexical, morpho-syntactic, and syntactic features. Results: Neuropsychological tests showed significant differences between controls and mdMCI, and between controls and eD participants; GPCog, MoCA, PF, and SF also discriminated between controls and aMCI. In the linguistic experiments, a number of features regarding lexical, acoustic and syntactic aspects were significant in differentiating between mdMCI, eD, and CG (non-parametric statistical analysis). Some features, mainly in the acoustic domain also discriminated between CG and aMCI. Conclusions: Linguistic features of spontaneous speech transcribed and analyzed by NLP techniques show significant differences between controls and pathological states (not only eD but also MCI) and seems to be a promising approach for the identification of preclinical stages of dementia. Long duration follow-up studies are needed to confirm this assumption."
},
{
"pmid": "29887912",
"title": "Computational Techniques for Eye Movements Analysis towards Supporting Early Diagnosis of Alzheimer's Disease: A Review.",
"abstract": "An opportune early diagnosis of Alzheimer's disease (AD) would help to overcome symptoms and improve the quality of life for AD patients. Research studies have identified early manifestations of AD that occur years before the diagnosis. For instance, eye movements of people with AD in different tasks differ from eye movements of control subjects. In this review, we present a summary and evolution of research approaches that use eye tracking technology and computational analysis to measure and compare eye movements under different tasks and experiments. Furthermore, this review is targeted to the feasibility of pioneer work on developing computational tools and techniques to analyze eye movements under naturalistic scenarios. We describe the progress in technology that can enhance the analysis of eye movements everywhere while subjects perform their daily activities and give future research directions to develop tools to support early AD diagnosis through analysis of eye movements."
},
{
"pmid": "28321196",
"title": "Connected Speech in Neurodegenerative Language Disorders: A Review.",
"abstract": "Language assessment has a crucial role in the clinical diagnosis of several neurodegenerative diseases. The analysis of extended speech production is a precious source of information encompassing the phonetic, phonological, lexico-semantic, morpho-syntactic, and pragmatic levels of language organization. The knowledge about the distinctive linguistic variables identifying language deficits associated to different neurodegenerative diseases has progressively improved in the last years. However, the heterogeneity of such variables and of the way they are measured and classified limits any generalization and makes the comparison among studies difficult. Here we present an exhaustive review of the studies focusing on the linguistic variables derived from the analysis of connected speech samples, with the aim of characterizing the language disorders of the most prevalent neurodegenerative diseases, including primary progressive aphasia, Alzheimer's disease, movement disorders, and amyotrophic lateral sclerosis. A total of 61 studies have been included, considering only those reporting group analysis and comparisons with a group of healthy persons. This review first analyzes the differences in the tasks used to elicit connected speech, namely picture description, story narration, and interview, considering the possible different contributions to the assessment of different linguistic domains. This is followed by an analysis of the terminologies and of the methods of measurements of the variables, indicating the need for harmonization and standardization. The final section reviews the linguistic domains affected by each different neurodegenerative disease, indicating the variables most consistently impaired at each level and suggesting the key variables helping in the differential diagnosis among diseases. While a large amount of valuable information is already available, the review highlights the need of further work, including the development of automated methods, to take advantage of the richness of connected speech analysis for both research and clinical purposes."
},
{
"pmid": "11768376",
"title": "Spontaneous speech of patients with dementia of the Alzheimer type and mild cognitive impairment.",
"abstract": "This article discusses the potential of three assessments of language function in the diagnosis of Alzheimer-type dementia (DAT). A total of 115 patients (mean age 65.9 years) attending a memory clinic were assessed using three language tests: a picture description task (Boston Cookie-Theft picture), the Boston Naming Test, and a semantic and phonemic word fluency measure. Results of these assessments were compared with those of clinical diagnosis including the Global Deterioration Scale (GDS). The patients were classified by ICD-10 diagnosis and GDS stage as without cognitive impairment (n = 40), mild cognitive impairment (n = 34), mild DAT (n = 21), and moderate to severe DAT (n = 20). Hypotheses were (a) that the complex task of a picture description could more readily identify language disturbances than specific language tests and that (b) examination of spontaneous speech could help to identify patients with even mild forms of DAT. In the picture description task, all diagnostic groups produced an equal number of words. However, patients with mild or moderate to severe DAT described significantly fewer objects and persons, actions, features, and localizations than patients without or with mild cognitive impairment. Persons with mild cognitive impairment had results similar to those without cognitive impairment. The Boston Naming Test and both fluency measures were superior to the picture description task in differentiating the diagnostic groups. In sum, both hypotheses had to be rejected. Our results confirm that DAT patients have distinct semantic speech disturbances whereas they are not impaired in the amount of produced speech."
},
{
"pmid": "12218649",
"title": "Discourse changes in early Alzheimer disease, mild cognitive impairment, and normal aging.",
"abstract": "The purpose of this study was to determine the sensitivity of discourse gist measures to the early cognitive-linguistic changes in Alzheimer disease (AD) and in the preclinical stages. Differences in discourse abilities were examined in 25 cognitively normal adults, 24 adults with mild probable AD, and 20 adults with mild cognitive impairment (MCI) at gist and detail levels of discourse processing. The authors found that gist and detail levels of discourse processing were significantly impaired in persons with AD and MCI as compared with normal control subjects. Gist-level discourse processing abilities showed minimal overlap between cognitively normal control subjects and those with mild AD. Moreover, the majority of the persons with MCI performed in the range of AD on gist measures. These findings indicate that discourse gist measures hold promise as a diagnostic complement to enhance early detection of AD. Further studies are needed to determine how early the discourse gist deficits arise in AD."
},
{
"pmid": "19683225",
"title": "Functional neuroanatomy of the encoding and retrieval processes of verbal episodic memory in MCI.",
"abstract": "INTRODUCTION\nThe goal of this study was to explore the association between disease severity and performance on brain activation associated with episodic memory encoding and retrieval in persons with mild cognitive impairment (MCI).\n\n\nMETHOD\nThis was achieved by scanning 12 MCI persons and 10 age- and education-matched healthy controls while encoding words and while retrieving them in a recognition test.\n\n\nRESULTS\nBehaviorally, there was no significant group difference on recognition performance. However, MCI and healthy controls showed different patterns of cerebral activation during encoding. While most of these differences demonstrated reduced activation in the MCI group, there were areas of increased activation in the left ventrolateral prefrontal cortex. Reduced activation was found in brain areas known to be either structurally compromised or hypometabolic in Alzheimer's disease (AD). In contrast, very few group differences were associated with retrieval. Correlation analyses indicated that increased disease severity, as measured with the Mattis Dementia Rating Scale, was associated with smaller activation of the right middle and superior temporal gyri. In contrast, recognition success in MCI persons was associated with larger activation of the left ventrolateral prefrontal cortex during the encoding phase.\n\n\nCONCLUSION\nOverall, our results indicate that most of the memory-related cerebral network changes in MCI persons occur during the encoding phase. They also suggest that a prefrontal compensatory mechanism could occur in parallel with the disease-associated reduction of cerebral activation in temporal areas."
},
{
"pmid": "8722896",
"title": "Comparative study of oral and written picture description in patients with Alzheimer's disease.",
"abstract": "Oral and written picture descriptions were compared in 22 patients with Alzheimer's disease (AD) and 24 healthy elderly subjects. AD patients had a significant reduction of all word categories, which, similarly to controls, was more pronounced in written than in oral texts. They also reported fewer information units than controls, but without task difference. At the syntactic level, written descriptions of AD subjects were characterized by a diminution of subordinate clauses and a reduction of functors. More grammatical errors were present in written descriptions by AD and control subjects. AD and control groups produced an equivalent number of semantic errors in both tasks. However, in oral description, AD patients had more word-finding difficulties. In sum, AD descriptions were always shorter and less informative than control texts. Additionally, written descriptions of AD patients appeared shorter and more syntactically simplified than, but as informative as oral descriptions. Whereas no phonemic paraphasias were observed in either group, AD patients produced many more graphemic paragraphias than controls produced. Furthermore, written descriptions had more irrelevant semantic intrusions. Thus, as compared to oral descriptions, written texts appeared to be a more reliable test of semantic and linguistics difficulties in AD."
},
{
"pmid": "17445292",
"title": "Linguistic changes in verbal expression: a preclinical marker of Alzheimer's disease.",
"abstract": "Despite the many studies examining linguistic deterioration in Alzheimer's disease (AD), very little is known about changes in verbal expression during the preclinical phase of this disease. The objective of this study was to determine whether changes in verbal expression occur in the preclinical phase of AD. The sample consisted of 40 healthy Spanish speakers from Antioquia, Colombia. A total of 19 were carriers of the E280A mutation in the Presenilin 1 gene, and 21 were noncarrier family members. The two groups were similar in age and education. All the participants were shown the Cookie Theft Picture Card from the Boston Diagnostic Aphasia Examination and were asked to describe the scene. Specific grammatical and semantic variables were evaluated. The performance of each group was compared using multivariate analyses of the variance for semantic and grammatical variables, and errors. Carriers of the mutation produced fewer semantic categories than noncarriers. In the preclinical phase of AD, changes in verbal expression are apparent and early detection of these differences may assist the early diagnosis of and intervention in this disease."
},
{
"pmid": "19363178",
"title": "Praat script to detect syllable nuclei and measure speech rate automatically.",
"abstract": "In this article, we describe a method for automatically detecting syllable nuclei in order to measure speech rate without the need for a transcription. A script written in the software program Praat (Boersma & Weenink, 2007) detects syllables in running speech. Peaks in intensity (dB) that are preceded and followed by dips in intensity are considered to be potential syllable nuclei. The script subsequently discards peaks that are not voiced. Testing the resulting syllable counts of this script on two corpora of spoken Dutch, we obtained high correlations between speech rate calculated from human syllable counts and speech rate calculated from automatically determined syllable counts. We conclude that a syllable count measured in this automatic fashion suffices to reliably assess and compare speech rates between participants and tasks."
},
{
"pmid": "29623841",
"title": "Changes in Speech Chunking in Reading Aloud is a Marker of Mild Cognitive Impairment and Mild-to-Moderate Alzheimer's Disease.",
"abstract": "BACKGROUND\nSpeech and Language Impairments, generally attributed to lexico-semantic deficits, have been documented in Mild Cognitive Impairment (MCI) and Alzheimer's disease (AD). This study investigates the temporal organisation of speech (reflective of speech production planning) in reading aloud in relation to cognitive impairment, particularly working memory and attention deficits in MCI and AD. The discriminative ability of temporal features extracted from a newly designed read speech task is also evaluated for the detection of MCI and AD.\n\n\nMETHOD\nSixteen patients with MCI, eighteen patients with mild-to-moderate AD and thirty-six healthy controls (HC) underwent a battery of neuropsychological tests and read a set of sentences varying in cognitive load, probed by manipulating sentence length and syntactic complexity.\n\n\nRESULTS\nOur results show that Mild-to-Moderate AD is associated with a general slowness of speech, attributed to a higher number of speech chunks, silent pauses and dysfluences, and slower speech and articulation rates. Speech chunking in the context of high cognitive-linguistic demand appears to be an informative marker of MCI, specifically related to early deficits in working memory and attention. In addition, Linear Discriminant Analysis shows the ROC AUCs (Areas Under the Receiver Operating Characteristic Curves) of identifying MCI vs. HC, MCI vs. AD and AD vs. HC using these speech characteristics are 0.75, 0.90 and 0.94 respectively.\n\n\nCONCLUSION\nThe implementation of connected speech-based technologies in clinical and community settings may provide additional information for the early detection of MCI and AD."
},
{
"pmid": "28891818",
"title": "Machine-learning Support to Individual Diagnosis of Mild Cognitive Impairment Using Multimodal MRI and Cognitive Assessments.",
"abstract": "BACKGROUND\nUnderstanding whether the cognitive profile of a patient indicates mild cognitive impairment (MCI) or performance levels within normality is often a clinical challenge. The use of resting-state functional magnetic resonance imaging (RS-fMRI) and machine learning may represent valid aids in clinical settings for the identification of MCI patients.\n\n\nMETHODS\nMachine-learning models were computed to test the classificatory accuracy of cognitive, volumetric [structural magnetic resonance imaging (sMRI)] and blood oxygen level dependent-connectivity (extracted from RS-fMRI) features, in single-modality and mixed classifiers.\n\n\nRESULTS\nThe best and most significant classifier was the RS-fMRI+Cognitive mixed classifier (94% accuracy), whereas the worst performing was the sMRI classifier (∼80%). The mixed global (sMRI+RS-fMRI+Cognitive) had a slightly lower accuracy (∼90%), although not statistically different from the mixed RS-fMRI+Cognitive classifier. The most important cognitive features were indices of declarative memory and semantic processing. The crucial volumetric feature was the hippocampus. The RS-fMRI features selected by the algorithms were heavily based on the connectivity of mediotemporal, left temporal, and other neocortical regions.\n\n\nCONCLUSION\nFeature selection was profoundly driven by statistical independence. Some features showed no between-group differences, or showed a trend in either direction. This indicates that clinically relevant brain alterations typical of MCI might be subtle and not inferable from group analysis."
},
{
"pmid": "26074814",
"title": "Deficits in narrative discourse elicited by visual stimuli are already present in patients with mild cognitive impairment.",
"abstract": "Language batteries used to assess the skills of elderly individuals, such as naming and semantic verbal fluency, present some limitations in differentiating healthy controls from patients with amnestic mild cognitive impairment (a-MCI). Deficits in narrative discourse occur early in dementia caused by Alzheimer's disease (AD), and the narrative discourse abilities of a-MCI patients are poorly documented. The present study sought to propose and evaluate parameters for investigating narrative discourse in these populations. After a pilot study of 30 healthy subjects who served as a preliminary investigation of macro- and micro-linguistic aspects, 77 individuals (patients with AD and a-MCI and a control group) were evaluated. The experimental task required the participants to narrate a story based on a sequence of actions visually presented. The Control and AD groups differed in all parameters except narrative time and the total number of words recalled. The a-MCI group displayed mild discursive difficulties that were characterized as an intermediate stage between the Control and AD groups' performances. The a-MCI and Control groups differed from the AD group with respect to global coherence, discourse type and referential cohesion. The a-MCI and AD groups were similar to one another but differed from the Control group with respect to the type of words recalled, the repetition of words in the same sentence, the narrative structure and the inclusion of irrelevant propositions in the narrative. The narrative parameter that best distinguished the three groups was the speech effectiveness index. The proposed task was able to reveal differences between healthy controls and groups with cognitive decline. According to our findings, patients with a-MCI already present narrative deficits that are characterized by mild discursive difficulties that are less severe than those found in patients with AD."
},
{
"pmid": "24282223",
"title": "Eye movement alterations during reading in patients with early Alzheimer disease.",
"abstract": "PURPOSE\nEye movements follow a reproducible pattern during normal reading. Each eye movement ends up in a fixation point, which allows the brain to process the incoming information and to program the following saccade. Alzheimer disease (AD) produces eye movement abnormalities and disturbances in reading. In this work, we investigated whether eye movement alterations during reading might be already present at very early stages of the disease.\n\n\nMETHODS\nTwenty female and male adult patients with the diagnosis of probable AD and 20 age-matched individuals with no evidence of cognitive decline participated in the study. Participants were seated in front of a 20-inch LCD monitor and single sentences were presented on it. Eye movements were recorded with an eye tracker, with a sampling rate of 1000 Hz and an eye position resolution of 20 arc seconds.\n\n\nRESULTS\nAnalysis of eye movements during reading revealed that patients with early AD decreased the amount of words with only one fixation, increased their total number of first- and second-pass fixations, the amount of saccade regressions and the number of words skipped, compared with healthy individuals (controls). They also reduced the size of outgoing saccades, simultaneously increasing fixation duration.\n\n\nCONCLUSIONS\nThe present study shows that patients with mild AD evidenced marked alterations in eye movement behavior during reading, even at early stages of the disease. Hence, evaluation of eye movement behavior during reading might provide a useful tool for a more precise early diagnosis of AD and for dynamical monitoring of the pathology."
},
{
"pmid": "25080188",
"title": "Lack of contextual-word predictability during reading in patients with mild Alzheimer disease.",
"abstract": "In the present work we analyzed the effect of contextual word predictability on the eye movement behavior of patients with mild Alzheimer disease (AD) compared to age-matched controls, by using the eyetracking technique and lineal mixed models. Twenty AD patients and 40 age-matched controls participated in the study. We first evaluated gaze duration during reading low and highly predictable sentences. AD patients showed an increase in gaze duration, compared to controls, both in sentences of low or high predictability. In controls, highly predictable sentences led to shorter gaze durations; by contrary, AD patients showed similar gaze durations in both types of sentences. Similarly, gaze duration in controls was affected by the cloze predictability of word N and N+1, whereas it was the same in AD patients. In contrast, the effects of word frequency and word length were similar in controls and AD patients. Our results imply that contextual-word predictability, whose processing is proposed to require memory retrieval, facilitated reading behavior in healthy subjects, but this facilitation was lost in early AD patients. This loss might reveal impairments in brain areas such as those corresponding to working memory, memory retrieval, and semantic memory functions that are already present at early stages of AD. In contrast, word frequency and length processing might require less complex mechanisms, which were still retained by AD patients. To the best of our knowledge, this is the first study measuring how patients with early AD process well-defined words embedded in sentences of high and low predictability. Evaluation of the resulting changes in eye movement behavior might provide a useful tool for a more precise early diagnosis of AD."
},
{
"pmid": "25287871",
"title": "Profiling spontaneous speech decline in Alzheimer's disease: a longitudinal study.",
"abstract": "OBJECTIVE\nThis study aims to document the nature and progression of spontaneous speech impairment suffered by patients with Alzheimer's disease (AD) over a 12-month period, using both cross-sectional and prospective longitudinal design.\n\n\nMETHODS\nThirty one mild-moderate AD patients and 30 controls matched for age and socio-cultural background completed a simple and complex oral description task at baseline. The AD patients then underwent follow-up assessments at 6 and 12 months.\n\n\nRESULTS\nCross-sectional comparisons indicated that mild-moderate AD patients produced more word-finding delays (WFDs) and empty and indefinite phrases, while producing fewer pictorial themes, repairing fewer errors, responding to fewer WFDs, produce shorter and less complex phrases and produce speech with less intonational contour than controls. However, the two groups could not be distinguished on the basis of phonological paraphasias. Longitudinal follow-up, however, suggested that phonological processing deteriorates over time, where the prevalence of phonological errors increased over 12 months. Discussion Consistent with findings from neuropsychological, neuropathological and neuroimaging studies, the language deterioration shown by the AD patients shows a pattern of impairment dominated by semantic errors, which is later joined by a disruption in the phonological aspects of speech."
},
{
"pmid": "16193251",
"title": "Detecting subtle spontaneous language decline in early Alzheimer's disease with a picture description task.",
"abstract": "The objective was to collect normative data for a simple and a complex version of a picture description task devised to assess spontaneous speech and writing skills in patients with Alzheimer's disease (AD), and to test whether some aspects of spontaneous language can discriminate between normal and pathological cognitive decline. Two hundred and forty English-speaking healthy volunteers were recruited to participate in this normative study. Thirty patients with a clinical diagnosis of minimal to moderate probable AD were also recruited. Age and education influenced some aspects of spontaneous oral and written language whereas sex had no influence on any of the variables assessed. A high proportion (>70%) of AD patients performed below cut-off on those scales that measured semantic processing skills. Deficits were detected even amongst those in the very early stage of the disease when the complex version of the task was used. Prospective assessment of spontaneous language skills with a picture description task is useful to detect those subtle spontaneous language impairments caused by AD even at an early stage of the disease."
},
{
"pmid": "26484921",
"title": "Linguistic Features Identify Alzheimer's Disease in Narrative Speech.",
"abstract": "BACKGROUND\nAlthough memory impairment is the main symptom of Alzheimer's disease (AD), language impairment can be an important marker. Relatively few studies of language in AD quantify the impairments in connected speech using computational techniques.\n\n\nOBJECTIVE\nWe aim to demonstrate state-of-the-art accuracy in automatically identifying Alzheimer's disease from short narrative samples elicited with a picture description task, and to uncover the salient linguistic factors with a statistical factor analysis.\n\n\nMETHODS\nData are derived from the DementiaBank corpus, from which 167 patients diagnosed with \"possible\" or \"probable\" AD provide 240 narrative samples, and 97 controls provide an additional 233. We compute a number of linguistic variables from the transcripts, and acoustic variables from the associated audio files, and use these variables to train a machine learning classifier to distinguish between participants with AD and healthy controls. To examine the degree of heterogeneity of linguistic impairments in AD, we follow an exploratory factor analysis on these measures of speech and language with an oblique promax rotation, and provide interpretation for the resulting factors.\n\n\nRESULTS\nWe obtain state-of-the-art classification accuracies of over 81% in distinguishing individuals with AD from those without based on short samples of their language on a picture description task. Four clear factors emerge: semantic impairment, acoustic abnormality, syntactic impairment, and information impairment.\n\n\nCONCLUSION\nModern machine learning and linguistic analysis will be increasingly useful in assessment and clustering of suspected AD."
},
{
"pmid": "1743032",
"title": "A longitudinal study of word-reading ability in Alzheimer's disease: evidence from the National Adult Reading Test.",
"abstract": "The purpose of this longitudinal study was to examine word-reading ability of subjects with probable Alzheimer's disease (AD), using the National Adult Reading Test (NART). In addition to the NART, a battery of neuropsychological tests was administered to 18 AD and 20 elderly control subjects at yearly intervals over 3 years. Repeated measures analysis with grouping factors showed that the controls scored better than AD subjects on the NART at each test date and the AD subjects scored significantly worse over time. NART scores were significantly correlated with dementia severity in AD subjects at final testing only, suggesting that the NART is sensitive to dementia severity only at the later stages of the disease. Associations between the NART and other cognitive measures yielded few significant results. Finally, error responses to NART words were summarized by type and percentage for each group at each test session."
},
{
"pmid": "16631882",
"title": "Mild cognitive impairment.",
"abstract": "Mild cognitive impairment is a syndrome defined as cognitive decline greater than expected for an individual's age and education level but that does not interfere notably with activities of daily life. Prevalence in population-based epidemiological studies ranges from 3% to 19% in adults older than 65 years. Some people with mild cognitive impairment seem to remain stable or return to normal over time, but more than half progress to dementia within 5 years. Mild cognitive impairment can thus be regarded as a risk state for dementia, and its identification could lead to secondary prevention by controlling risk factors such as systolic hypertension. The amnestic subtype of mild cognitive impairment has a high risk of progression to Alzheimer's disease, and it could constitute a prodromal stage of this disorder. Other definitions and subtypes of mild cognitive impairment need to be studied as potential prodromes of Alzheimer's disease and other types of dementia."
},
{
"pmid": "21080826",
"title": "Syntactic and lexical context of pauses and hesitations in the discourse of Alzheimer patients and healthy elderly subjects.",
"abstract": "Psycholinguistic studies dealing with Alzheimer's disease (AD) commonly consider verbal aspects of language. In this article, we investigated both verbal and non-verbal aspects of speech production in AD. We used pauses and hesitations as markers of planning difficulties and hypothesized that AD patients show different patterns in the process of discourse production. We compared the distribution, the duration and the frequency of speech dysfluencies in the spontaneous discourse of 20 AD patients with 20 age, gender and socio-economically matched healthy peers. We found that patients and controls differ along several lines: patients' discourse displays more frequent silent pauses, which occur more often outside syntactic boundaries and are followed by more frequent words. Overall patients show more lexical retrieval and planning difficulties, but where controls signal their planning difficulties using filled pauses, AD patients do not."
},
{
"pmid": "28591751",
"title": "Prognostic Accuracy of Mild Cognitive Impairment Subtypes at Different Cut-Off Levels.",
"abstract": "BACKGROUND/AIMS\nThe prognostic accuracy of mild cognitive impairment (MCI) in clinical settings is debated, variable across criteria, cut-offs, subtypes, and follow-up time. We aimed to estimate the prognostic accuracy of MCI and the MCI subtypes for dementia using three different cut-off levels.\n\n\nMETHODS\nMemory clinic patients were followed for 2 (n = 317, age 63.7 ± 7.8) and 4-6 (n = 168, age 62.6 ± 7.4) years. We used 2.0, 1.5, and 1.0 standard deviations (SD) below the mean of normal controls (n = 120, age 64.1 ± 6.6) to categorize MCI and the MCI subtypes. Prognostic accuracy for dementia syndrome at follow-up was estimated.\n\n\nRESULTS\nAmnestic multi-domain MCI (aMCI-md) significantly predicted dementia under all conditions, most markedly when speed/attention, language, or executive function was impaired alongside memory. For aMCI-md, sensitivity increased and specificity decreased when the cut-off was lowered from 2.0 to 1.5 and 1.0 SD. Non-subtyped MCI had a high sensitivity and a low specificity.\n\n\nCONCLUSION\nOur results suggest that aMCI-md is the only viable subtype for predicting dementia for both follow-up times. Lowering the cut-off decreases the positive predictive value and increases the negative predictive value of aMCI-md. The results are important for understanding the clinical prognostic utility of MCI, and MCI as a non-progressive disorder."
},
{
"pmid": "20380247",
"title": "Temporal parameters of spontaneous speech in Alzheimer's disease.",
"abstract": "This paper reports on four temporal parameters of spontaneous speech in three stages of Alzheimer's disease (mild, moderate, and severe) compared to age-matched normal controls. The analysis of the time course of speech has been shown to be a particularly sensitive neuropsychological method to investigate cognitive processes such as speech planning and production. The following parameters of speech were measured in Hungarian native-speakers with Alzheimer's disease and normal controls: articulation rate, speech tempo, hesitation ratio, and rate of grammatical errors. Results revealed significant differences in most of these speech parameters among the three Alzheimer's disease groups. Additionally, the clearest difference between the normal control group and the mild Alzheimer's disease group involved the hesitation ratio, which was significantly higher in the latter group. This parameter of speech may have diagnostic value for mild-stage Alzheimer's disease and therefore could be a useful aid in medical practice."
},
{
"pmid": "16938019",
"title": "Memory for gist and detail information in Alzheimer's disease and mild cognitive impairment.",
"abstract": "Two experiments examined different forms of gist and detail memory in people with Alzheimer's disease (AD) and those with amnestic mild cognitive impairment (MCI). In Experiment 1, 14 AD, 14 MCI, and 22 control participants were assessed with the Deese-Roediger-McDermott paradigm. Results indicated that false recognition of nonstudied critical lures (gist memory) was diminished in the AD compared with the MCI and control groups; the two latter cohorts performed similarly. In Experiment 2, 14 AD, 20 MCI, and 26 control participants were tested on a text memory task. Results revealed that recall of both macropropositions (gist information) and micropropositions (detail information) decreased significantly in AD and in MCI as compared with control participants. This experiment also revealed that the impairment was comparable between gist and detail memory. In summary, the results were consistent across experiments in the AD but not in the MCI participants. The discrepancy in MCI participants might be explained by differences in the degree of sensitivity of the experimental procedures and/or by the differences in the cognitive processes these procedures assessed."
},
{
"pmid": "27171756",
"title": "Word retrieval in picture descriptions produced by individuals with Alzheimer's disease.",
"abstract": "What can tests of single-word production tell us about word retrieval in connected speech? We examined this question in 20 people with Alzheimer's disease (AD) and in 20 cognitively intact individuals. All participants completed tasks of picture naming and semantic fluency and provided connected speech through picture descriptions. Picture descriptions were analyzed for total word output, percentages of content words, percentages of nouns, and percentages of pronouns out of all words, type-token ratio of all words and type-token ratio of nouns alone, mean frequency of all words and mean frequency of nouns alone, and mean word length. Individuals with AD performed worse than did cognitively intact individuals on the picture naming and semantic fluency tasks. They also produced a lower proportion of content words overall, a lower proportion of nouns, and a higher proportion of pronouns, as well as more frequent and shorter words on picture descriptions. Group differences in total word output and type-token ratios did not reach significance. Correlations between scores on tasks of single-word retrieval and measures of retrieval in picture descriptions emerged in the AD group but not in the control group. Scores on a picture naming task were associated with difficulties in word retrieval in connected speech in AD, while scores on a task of semantic verbal fluency were less useful in predicting measures of retrieval in context in this population."
},
{
"pmid": "29886493",
"title": "Fully Automatic Speech-Based Analysis of the Semantic Verbal Fluency Task.",
"abstract": "BACKGROUND\nSemantic verbal fluency (SVF) tests are routinely used in screening for mild cognitive impairment (MCI). In this task, participants name as many items as possible of a semantic category under a time constraint. Clinicians measure task performance manually by summing the number of correct words and errors. More fine-grained variables add valuable information to clinical assessment, but are time-consuming. Therefore, the aim of this study is to investigate whether automatic analysis of the SVF could provide these as accurate as manual and thus, support qualitative screening of neurocognitive impairment.\n\n\nMETHODS\nSVF data were collected from 95 older people with MCI (n = 47), Alzheimer's or related dementias (ADRD; n = 24), and healthy controls (HC; n = 24). All data were annotated manually and automatically with clusters and switches. The obtained metrics were validated using a classifier to distinguish HC, MCI, and ADRD.\n\n\nRESULTS\nAutomatically extracted clusters and switches were highly correlated (r = 0.9) with manually established values, and performed as well on the classification task separating HC from persons with ADRD (area under curve [AUC] = 0.939) and MCI (AUC = 0.758).\n\n\nCONCLUSION\nThe results show that it is possible to automate fine-grained analyses of SVF data for the assessment of cognitive decline."
},
{
"pmid": "28847279",
"title": "Use of Speech Analyses within a Mobile Application for the Assessment of Cognitive Impairment in Elderly People.",
"abstract": "BACKGROUND\nVarious types of dementia and Mild Cognitive Impairment (MCI) are manifested as irregularities in human speech and language, which have proven to be strong predictors for the disease presence and progress ion. Therefore, automatic speech analytics provided by a mobile application may be a useful tool in providing additional indicators for assessment and detection of early stage dementia and MCI.\n\n\nMETHOD\n165 participants (subjects with subjective cognitive impairment (SCI), MCI patients, Alzheimer's disease (AD) and mixed dementia (MD) patients) were recorded with a mobile application while performing several short vocal cognitive tasks during a regular consultation. These tasks included verbal fluency, picture description, counting down and a free speech task. The voice recordings were processed in two steps: in the first step, vocal markers were extracted using speech signal processing techniques; in the second, the vocal markers were tested to assess their 'power' to distinguish between SCI, MCI, AD and MD. The second step included training automatic classifiers for detecting MCI and AD, based on machine learning methods, and testing the detection accuracy.\n\n\nRESULTS\nThe fluency and free speech tasks obtain the highest accuracy rates of classifying AD vs. MD vs. MCI vs. SCI. Using the data, we demonstrated classification accuracy as follows: SCI vs. AD = 92% accuracy; SCI vs. MD = 92% accuracy; SCI vs. MCI = 86% accuracy and MCI vs. AD = 86%.\n\n\nCONCLUSIONS\nOur results indicate the potential value of vocal analytics and the use of a mobile application for accurate automatic differentiation between SCI, MCI and AD. This tool can provide the clinician with meaningful information for assessment and monitoring of people with MCI and AD based on a non-invasive, simple and low-cost method."
},
{
"pmid": "27239498",
"title": "Automatic speech analysis for the assessment of patients with predementia and Alzheimer's disease.",
"abstract": "BACKGROUND\nTo evaluate the interest of using automatic speech analyses for the assessment of mild cognitive impairment (MCI) and early-stage Alzheimer's disease (AD).\n\n\nMETHODS\nHealthy elderly control (HC) subjects and patients with MCI or AD were recorded while performing several short cognitive vocal tasks. The voice recordings were processed, and the first vocal markers were extracted using speech signal processing techniques. Second, the vocal markers were tested to assess their \"power\" to distinguish among HC, MCI, and AD. The second step included training automatic classifiers for detecting MCI and AD, using machine learning methods and testing the detection accuracy.\n\n\nRESULTS\nThe classification accuracy of automatic audio analyses were as follows: between HCs and those with MCI, 79% ± 5%; between HCs and those with AD, 87% ± 3%; and between those with MCI and those with AD, 80% ± 5%, demonstrating its assessment utility.\n\n\nCONCLUSION\nAutomatic speech analyses could be an additional objective assessment tool for elderly with cognitive decline."
},
{
"pmid": "10780625",
"title": "Eye movement abnormalities during reading in patients with Alzheimer disease.",
"abstract": "OBJECTIVE\nThis goal of this study was to evaluate reading ability by assessing eye movements during reading among patients with Alzheimer disease (AD) compared with normal elderly controls.\n\n\nBACKGROUND\nReading is disturbed in patients with AD. These patients may have changes in reading ability early in the course of their disease before clinical alexia or abnormalities are apparent on standard reading tasks.\n\n\nMETHOD\nReading competence was evaluated by recording eye movements during reading in 14 patients with mild to moderate clinically probable AD and 14 age- and education-matched controls.\n\n\nRESULTS\nAll patients with AD could recognize letters and words and could understand written material of similar difficulty. Despite successful reading comprehension among the patients with AD, their oculographs showed slowed reading and irregular eye movements. Compared with controls, the patients with AD did not differ in saccadic duration; however, they had significantly longer fixation times, more forward saccades per line of text, and more saccadic regressions. In addition, increased reading difficulty significantly correlated with a scale of dementia severity in the patients with AD.\n\n\nCONCLUSIONS\nThis pattern of eye movements corresponds to increased text difficulty and probably represents difficulty with lexical-semantic access in AD. These results suggest that disordered eye movements can signal difficulties in reading ability in AD even before complaints of reading difficulty or abnormalities on reading tests and may be a means of identifying linguistic impairment early in this disorder."
},
{
"pmid": "24481220",
"title": "Speech in Alzheimer's disease: can temporal and acoustic parameters discriminate dementia?",
"abstract": "AIMS\nThe study explores how speech measures may be linked to language profiles in participants with Alzheimer's disease (AD) and how these profiles could distinguish AD from changes associated with normal aging.\n\n\nMETHODS\nWe analysed simple sentences spoken by older adults with and without AD. Spectrographic analysis of temporal and acoustic characteristics was carried out using the Praat software.\n\n\nRESULTS\nWe found that measures of speech, such as variations in the percentage of voice breaks, number of periods of voice, number of voice breaks, shimmer (amplitude perturbation quotient), and noise-to-harmonics ratio, characterise people with AD with an accuracy of 84.8%.\n\n\nDISCUSSION\nThese measures offer a sensitive method of assessing spontaneous speech output in AD, and they discriminate well between people with AD and healthy older adults. This method of evaluation is a promising tool for AD diagnosis and prognosis, and it could be used as a dependent measure in clinical trials."
},
{
"pmid": "28436388",
"title": "Toward the Automation of Diagnostic Conversation Analysis in Patients with Memory Complaints.",
"abstract": "BACKGROUND\nThe early diagnosis of dementia is of great clinical and social importance. A recent study using the qualitative methodology of conversation analysis (CA) demonstrated that language and communication problems are evident during interactions between patients and neurologists, and that interactional observations can be used to differentiate between cognitive difficulties due to neurodegenerative disorders (ND) or functional memory disorders (FMD).\n\n\nOBJECTIVE\nThis study explores whether the differential diagnostic analysis of doctor-patient interactions in a memory clinic can be automated.\n\n\nMETHODS\nVerbatim transcripts of conversations between neurologists and patients initially presenting with memory problems to a specialist clinic were produced manually (15 with FMD, and 15 with ND). A range of automatically detectable features focusing on acoustic, lexical, semantic, and visual information contained in the transcripts were defined aiming to replicate the diagnostic qualitative observations. The features were used to train a set of five machine learning classifiers to distinguish between ND and FMD.\n\n\nRESULTS\nThe mean rate of correct classification between ND and FMD was 93% ranging from 97% by the Perceptron classifier to 90% by the Random Forest classifier.Using only the ten best features, the mean correct classification score increased to 95%.\n\n\nCONCLUSION\nThis pilot study provides proof-of-principle that a machine learning approach to analyzing transcripts of interactions between neurologists and patients describing memory problems can distinguish people with neurodegenerative dementia from people with FMD."
},
{
"pmid": "9447441",
"title": "Clinical dementia rating: a reliable and valid diagnostic and staging measure for dementia of the Alzheimer type.",
"abstract": "Global staging measures for dementia of the Alzheimer type (DAT) assess the influence of cognitive loss on the ability to conduct everyday activities and represent the \"ultimate test\" of efficacy for antidementia drug trials. They provide information about clinically meaningful function and behavior and are less affected by the \"floor\" and \"ceiling\" effects commonly associated with psychometric tests. The Washington University Clinical Dementia Rating (CDR) is a global scale developed to clinically denote the presence of DAT and stage its severity. The clinical protocol incorporates semistructured interviews with the patient and informant to obtain information necessary to rate the subject's cognitive performance in six domains: memory, orientation, judgment and problem solving, community affairs, home and hobbies, and personal care. The CDR has been standardized for multicenter use, including the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) and the Alzheimer's Disease Cooperative Study, and interrater reliability has been established. Criterion validity for both the global CDR and scores on individual domains has been demonstrated, and the CDR also has been validated neuropathologically, particularly for the presence or absence of dementia. Standardized training protocols are available. Although not well suited as a brief screening tool for population surveys of dementia because the protocol depends on sufficient time to conduct interviews, the CDR has become widely accepted in the clinical setting as a reliable and valid global assessment measure for DAT."
},
{
"pmid": "28174533",
"title": "Reward Dependent Invigoration Relates to Theta Oscillations and Is Predicted by Dopaminergic Midbrain Integrity in Healthy Elderly.",
"abstract": "Motivation can have invigorating effects on behavior via dopaminergic neuromodulation. While this relationship has mainly been established in theoretical models and studies in younger subjects, the impact of structural declines of the dopaminergic system during healthy aging remains unclear. To investigate this issue, we used electroencephalography (EEG) in healthy young and elderly humans in a reward-learning paradigm. Specifically, scene images were initially encoded by combining them with cues predicting monetary reward (high vs. low reward). Subsequently, recognition memory for the scenes was tested. As a main finding, we can show that response times (RTs) during encoding were faster for high reward predicting images in the young but not elderly participants. This pattern was resembled in power changes in the theta-band (4-7 Hz). Importantly, analyses of structural MRI data revealed that individual reward-related differences in the elderlies' response time could be predicted by the structural integrity of the dopaminergic substantia nigra (SN; as measured by magnetization transfer (MT)). These findings suggest a close relationship between reward-based invigoration, theta oscillations and age-dependent changes of the dopaminergic system."
},
{
"pmid": "29669461",
"title": "Connected speech and language in mild cognitive impairment and Alzheimer's disease: A review of picture description tasks.",
"abstract": "INTRODUCTION\nThe neuropsychological profile of people with mild cognitive impairment (MCI) and Alzheimer's disease (AD) dementia includes a history of decline in memory and other cognitive domains, including language. While language impairments have been well described in AD dementia, language features of MCI are less well understood. Connected speech and language analysis is the study of an individual's spoken discourse, usually elicited by a target stimulus, the results of which can facilitate understanding of how language deficits typical of MCI and AD dementia manifest in everyday communication. Among discourse genres, picture description is a constrained task that relies less on episodic memory and more on semantic knowledge and retrieval, within the cognitive demands of a communication context. Understanding the breadth of evidence across the continuum of cognitive decline will help to elucidate the areas of strength and need in terms of using this method as an evaluative tool for both cognitive changes and everyday functional communication.\n\n\nMETHOD\nWe performed an extensive literature search of peer-reviewed journal articles that focused on the use of picture description tasks for evaluating language in persons with MCI or AD dementia. We selected articles based on inclusion and exclusion criteria and described the measures assessed, the psychometric properties that were reported, the findings, and the limitations of the included studies.\n\n\nRESULTS\n36 studies were selected and reviewed. Across all 36 studies, there were 1, 127 patients with AD dementia and 274 with MCI or early cognitive decline. Multiple measures were examined, including those describing semantic content, syntactic complexity, speech fluency, vocal parameters, and pragmatic language. Discriminant validity widely reported and distinct differences in language were observable between adults with dementia and controls; fewer studies were able to distinguish language differences between typically aging adults and those with MCI.\n\n\nDISCUSSION\nOur review shows that picture description tasks are useful tools for detecting differences in a wide variety of language and communicative measures. Future research should expand knowledge about subtle changes to language in preclinical AD and Mild Cognitive Impairment (MCI) which may improve the utility of this method as a clinically meaningful screening tool."
},
{
"pmid": "23845236",
"title": "A computational linguistic measure of clustering behavior on semantic verbal fluency task predicts risk of future dementia in the nun study.",
"abstract": "Generative semantic verbal fluency (SVF) tests show early and disproportionate decline relative to other abilities in individuals developing Alzheimer's disease. Optimal performance on SVF tests depends on the efficiency of using clustered organization of semantically related items and the ability to switch between clusters. Traditional approaches to clustering and switching have relied on manual determination of clusters. We evaluated a novel automated computational linguistic approach for quantifying clustering behavior. Our approach is based on Latent Semantic Analysis (LSA) for computing strength of semantic relatedness between pairs of words produced in response to SVF test. The mean size of semantic clusters (MCS) and semantic chains (MChS) are calculated based on pairwise relatedness values between words. We evaluated the predictive validity of these measures on a set of 239 participants in the Nun Study, a longitudinal study of aging. All were cognitively intact at baseline assessment, measured with the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) battery, and were followed in 18-month waves for up to 20 years. The onset of either dementia or memory impairment were used as outcomes in Cox proportional hazards models adjusted for age and education and censored at follow-up waves 5 (6.3 years) and 13 (16.96 years). Higher MCS was associated with 38% reduction in dementia risk at wave 5 and 26% reduction at wave 13, but not with the onset of memory impairment. Higher [+1 standard deviation (SD)] MChS was associated with 39% dementia risk reduction at wave 5 but not wave 13, and association with memory impairment was not significant. Higher traditional SVF scores were associated with 22-29% memory impairment and 35-40% dementia risk reduction. SVF scores were not correlated with either MCS or MChS. Our study suggests that an automated approach to measuring clustering behavior can be used to estimate dementia risk in cognitively normal individuals."
},
{
"pmid": "25190209",
"title": "Risk for Mild Cognitive Impairment Is Associated With Semantic Integration Deficits in Sentence Processing and Memory.",
"abstract": "OBJECTIVES\nWe examined the degree to which online sentence processing and offline sentence memory differed among older adults who showed risk for amnestic and nonamnestic varieties of mild cognitive impairment (MCI), based on psychometric classification.\n\n\nMETHOD\nParticipants (N = 439) read a series of sentences in a self-paced word-by-word reading paradigm for subsequent recall and completed a standardized cognitive test battery. Participants were classified into 3 groups: unimpaired controls (N = 281), amnestic MCI (N = 94), or nonamnestic MCI (N = 64).\n\n\nRESULTS\nRelative to controls, both MCI groups had poorer sentence memory and showed reduced sentence wrap-up effects, indicating reduced allocation to semantic integration processes. Wrap-up effects predicted subsequent recall in the control and nonamnestic groups. The amnestic MCI group showed poorer recall than the nonamnestic MCI group, and only the amnestic MCI group showed no relationship between sentence wrap-up and recall.\n\n\nDISCUSSION\nOur findings suggest that psychometrically defined sub-types of MCI are associated with unique deficits in sentence processing and can differentiate between the engagement of attentional resources during reading and the effectiveness of engaging attentional resources in producing improved memory."
},
{
"pmid": "25031536",
"title": "Eye movement analysis and cognitive processing: detecting indicators of conversion to Alzheimer's disease.",
"abstract": "A great amount of research has been developed around the early cognitive impairments that best predict the onset of Alzheimer's disease (AD). Given that mild cognitive impairment (MCI) is no longer considered to be an intermediate state between normal aging and AD, new paths have been traced to acquire further knowledge about this condition and its subtypes, and to determine which of them have a higher risk of conversion to AD. It is now known that other deficits besides episodic and semantic memory impairments may be present in the early stages of AD, such as visuospatial and executive function deficits. Furthermore, recent investigations have proven that the hippocampus and the medial temporal lobe structures are not only involved in memory functioning, but also in visual processes. These early changes in memory, visual, and executive processes may also be detected with the study of eye movement patterns in pathological conditions like MCI and AD. In the present review, we attempt to explore the existing literature concerning these patterns of oculomotor changes and how these changes are related to the early signs of AD. In particular, we argue that deficits in visual short-term memory, specifically in iconic memory, attention processes, and inhibitory control, may be found through the analysis of eye movement patterns, and we discuss how they might help to predict the progression from MCI to AD. We add that the study of eye movement patterns in these conditions, in combination with neuroimaging techniques and appropriate neuropsychological tasks based on rigorous concepts derived from cognitive psychology, may highlight the early presence of cognitive impairments in the course of the disease."
},
{
"pmid": "10190820",
"title": "Mild cognitive impairment: clinical characterization and outcome.",
"abstract": "BACKGROUND\nSubjects with a mild cognitive impairment (MCI) have a memory impairment beyond that expected for age and education yet are not demented. These subjects are becoming the focus of many prediction studies and early intervention trials.\n\n\nOBJECTIVE\nTo characterize clinically subjects with MCI cross-sectionally and longitudinally.\n\n\nDESIGN\nA prospective, longitudinal inception cohort.\n\n\nSETTING\nGeneral community clinic.\n\n\nPARTICIPANTS\nA sample of 76 consecutively evaluated subjects with MCI were compared with 234 healthy control subjects and 106 patients with mild Alzheimer disease (AD), all from a community setting as part of the Mayo Clinic Alzheimer's Disease Center/Alzheimer's Disease Patient Registry, Rochester, Minn.\n\n\nMAIN OUTCOME MEASURES\nThe 3 groups of individuals were compared on demographic factors and measures of cognitive function including the Mini-Mental State Examination, Wechsler Adult Intelligence Scale-Revised, Wechsler Memory Scale-Revised, Dementia Rating Scale, Free and Cued Selective Reminding Test, and Auditory Verbal Learning Test. Clinical classifications of dementia and AD were determined according to the Diagnostic and Statistical Manual of Mental Disorders, Revised Third Edition and the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association criteria, respectively.\n\n\nRESULTS\nThe primary distinction between control subjects and subjects with MCI was in the area of memory, while other cognitive functions were comparable. However, when the subjects with MCI were compared with the patients with very mild AD, memory performance was similar, but patients with AD were more impaired in other cognitive domains as well. Longitudinal performance demonstrated that the subjects with MCI declined at a rate greater than that of the controls but less rapidly than the patients with mild AD.\n\n\nCONCLUSIONS\nPatients who meet the criteria for MCI can be differentiated from healthy control subjects and those with very mild AD. They appear to constitute a clinical entity that can be characterized for treatment interventions."
},
{
"pmid": "28386518",
"title": "Outcomes Assessment in Clinical Trials of Alzheimer's Disease and its Precursors: Readying for Short-term and Long-term Clinical Trial Needs.",
"abstract": "An evolving paradigm shift in the diagnostic conceptualization of Alzheimer's disease is reflected in its recently updated diagnostic criteria from the National Institute on Aging-Alzheimer's Association and the International Working Group. Additionally, it is reflected in the increased focus in this field on conducting prevention trials in addition to improving cognition and function in people with dementia. These developments are making key contributions towards defining new regulatory thinking around Alzheimer's disease treatment earlier in the disease continuum. As a result, the field as a whole is now concentrated on exploring the next-generation of cognitive and functional outcome measures that will support clinical trials focused on treating the slow slide into cognitive and functional impairment. With this backdrop, the International Society for CNS Clinical Trials and Methodology convened semi-annual working group meetings which began in spring of 2012 to address methodological issues in this area. This report presents the most critical issues around primary outcome assessments in Alzheimer's disease clinical trials, and summarizes the presentations, discussions, and recommendations of those meetings, within the context of the evolving landscape of Alzheimer's disease clinical trials."
},
{
"pmid": "22199464",
"title": "Spoken Language Derived Measures for Detecting Mild Cognitive Impairment.",
"abstract": "Spoken responses produced by subjects during neuropsychological exams can provide diagnostic markers beyond exam performance. In particular, characteristics of the spoken language itself can discriminate between subject groups. We present results on the utility of such markers in discriminating between healthy elderly subjects and subjects with mild cognitive impairment (MCI). Given the audio and transcript of a spoken narrative recall task, a range of markers are automatically derived. These markers include speech features such as pause frequency and duration, and many linguistic complexity measures. We examine measures calculated from manually annotated time alignments (of the transcript with the audio) and syntactic parse trees, as well as the same measures calculated from automatic (forced) time alignments and automatic parses. We show statistically significant differences between clinical subject groups for a number of measures. These differences are largely preserved with automation. We then present classification results, and demonstrate a statistically significant improvement in the area under the ROC curve (AUC) when using automatic spoken language derived features in addition to the neuropsychological test scores. Our results indicate that using multiple, complementary measures can aid in automatic detection of MCI."
},
{
"pmid": "1447438",
"title": "Bedside assessment of executive cognitive impairment: the executive interview.",
"abstract": "OBJECTIVE\nThis study is a pilot validation of the Executive Interview (EXIT), a novel instrument designed to assess executive cognitive function (ECF) at the bedside.\n\n\nDESIGN\nInter-rater reliability testing and validation using inter-group comparisons across levels of care and measures of cognition and behavior.\n\n\nPARTICIPANTS\nForty elderly subjects randomly selected across four levels of care.\n\n\nSETTING\nSettings ranged from independent living apartments to designated Alzheimer's Special Care units in a single 537-bed retirement community.\n\n\nMEASUREMENTS\nThe EXIT: a 10-minute, 25-item interview scored from 0-50 (higher scores = greater executive dyscontrol) was administered by a physician. Subjects were also administered the Mini-Mental State Exam (MMSE) and traditional tests of \"frontal\" executive function by a neuropsychologist, and the Nursing Home Behavior Problem Scale (NHBPS) by Licensed Vocational Nurses.\n\n\nRESULTS\nInterrater reliability was high (r = .90). EXIT scores correlated well with other measures of ECF. The interview discriminated among residents at each level of care. In contrast, the MMSE did not discriminate apartment-dwelling from residential care residents, or residential care from nursing home residents. The EXIT was highly correlated with disruptive behaviors as measured by the NHBPS (r = .79).\n\n\nCONCLUSIONS\nThese preliminary findings suggest that the EXIT is a valid and reliable instrument for the assessment of executive impairment at the bedside. It correlates well with level of care and problem behavior. It discriminates residents at earlier stages of cognitive impairment than the MMSE."
},
{
"pmid": "20438657",
"title": "Assessment of strategic processing during narrative comprehension in individuals with mild cognitive impairment.",
"abstract": "A think-aloud protocol was used to examine the strategies used by individuals with mild cognitive impairment (MCI) during text comprehension. Twenty-three participants with MCI and 23 cognitively healthy older adults (OA) read narratives, pausing to verbalize their thoughts after each sentence. The verbal protocol analysis developed by Trabasso and Magliano (1996) was then used to code participants' utterances into inferential and non-inferential statements; inferential statements were further coded to identify the memory operation used in their generation. Compared with OA controls, the MCI participants showed poorer story comprehension and produced fewer inferences. The MCI participants were also less skilled at providing explanations of story events and in using prior text information to support inference generation. Poorer text comprehension was associated with poorer verbal memory abilities and poorer use of prior text events when producing inferential statements. The results suggest that the memory difficulties of the MCI group may be an important cognitive factor interfering with their ability to integrate narrative events through the use of inferences and to form a global coherence to support text comprehension."
},
{
"pmid": "25267658",
"title": "Coupled neural systems underlie the production and comprehension of naturalistic narrative speech.",
"abstract": "Neuroimaging studies of language have typically focused on either production or comprehension of single speech utterances such as syllables, words, or sentences. In this study we used a new approach to functional MRI acquisition and analysis to characterize the neural responses during production and comprehension of complex real-life speech. First, using a time-warp based intrasubject correlation method, we identified all areas that are reliably activated in the brains of speakers telling a 15-min-long narrative. Next, we identified areas that are reliably activated in the brains of listeners as they comprehended that same narrative. This allowed us to identify networks of brain regions specific to production and comprehension, as well as those that are shared between the two processes. The results indicate that production of a real-life narrative is not localized to the left hemisphere but recruits an extensive bilateral network, which overlaps extensively with the comprehension system. Moreover, by directly comparing the neural activity time courses during production and comprehension of the same narrative we were able to identify not only the spatial overlap of activity but also areas in which the neural activity is coupled across the speaker's and listener's brains during production and comprehension of the same narrative. We demonstrate widespread bilateral coupling between production- and comprehension-related processing within both linguistic and nonlinguistic areas, exposing the surprising extent of shared processes across the two systems."
},
{
"pmid": "25042445",
"title": "Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis.",
"abstract": "For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also fusion of different modalities can further provide the complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To our best knowledge, the previous methods in the literature mostly used hand-crafted features such as cortical thickness, gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating into a long vector or transforming into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use Deep Boltzmann Machine (DBM)(2), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on ADNI dataset and compared with the state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained the maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET."
},
{
"pmid": "18569251",
"title": "Language performance in Alzheimer's disease and mild cognitive impairment: a comparative review.",
"abstract": "Mild cognitive impairment (MCI) manifests as memory impairment in the absence of dementia and progresses to Alzheimer's disease (AD) at a rate of around 15% per annum, versus 1-2% in the general population. It thus constitutes a primary target for investigation of early markers of AD. Language deficits occur early in AD, and performance on verbal tasks is an important diagnostic criterion for both AD and MCI. We review language performance in MCI, compare these findings to those seen in AD, and identify the primary issues in understanding language performance in MCI and selecting tasks with diagnostic and prognostic value."
},
{
"pmid": "29165085",
"title": "A Speech Recognition-based Solution for the Automatic Detection of Mild Cognitive Impairment from Spontaneous Speech.",
"abstract": "BACKGROUND\nEven today the reliable diagnosis of the prodromal stages of Alzheimer's disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study is to develop a sensitive neuropsychological screening method which is based on the analysis of spontaneous speech production during performing a memory task. In the future, this can form the basis of an Internet-based interactive screening software for the recognition of MCI.\n\n\nMETHODS\nParticipants were 38 healthy controls and 48 clinically diagnosed MCI patients. The provoked spontaneous speech by asking the patients to recall the content of 2 short black and white films (one direct, one delayed), and by answering one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software), and then automatically, with an automatic speech recognition (ASR) based tool. First, the extracted parameters were statistically analyzed. Then we applied machine learning algorithms to see whether the MCI and the control group can be discriminated automatically based on the acoustic features.\n\n\nRESULTS\nThe statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task, and in the number of pauses for the question-answering task. The fully automated version of the analysis process - that is, using the ASR-based features in combination with machine learning - was able to separate the two classes with an F1-score of 78.8%.\n\n\nCONCLUSION\nThe temporal analysis of spontaneous speech can be exploited in implementing a new, automatic detection-based tool for screening MCI for the community."
},
{
"pmid": "22661485",
"title": "Standardized assessment of reading performance: the New International Reading Speed Texts IReST.",
"abstract": "PURPOSE\nThere is a need for standardized texts to assess reading performance, for multiple equivalent texts for repeated measurements, and for texts equated across languages for multi-language studies. Paragraphs are preferable to single sentences for accurate speed measurement. We developed such texts previously in 6 languages. The aim of our current study was to develop texts in more languages for a wide range of countries and users, and to assess the reading speeds of normally-sighted readers.\n\n\nMETHODS\nTen texts were designed for 17 languages each by a linguist who matched content, length, difficulty, and linguistic complexity. The texts then were used to assess reading speeds of 436 normally-sighted native speakers (age 18-35 years, 25 per language, 36 in Japanese), presented at a distance of 40 cm and size 1 M, that is 10-point Times New Roman font. Reading time (aloud) was measured by stopwatch.\n\n\nRESULTS\nFor all 17 languages, average mean reading speed was 1.42 ± 0.13 texts/min (±SD), 184 ± 29 words/min, 370 ± 80 syllables/min, and 863 ± 234 characters/min. For 14 languages, mean reading time was 68 ms/character (95% confidence interval [CI] 65-71 ms). Our analysis focussed on words per minute. The variability of reading speed within subjects accounts only for an average of 11.5%, between subjects for 88.5%.\n\n\nCONCLUSIONS\nThe low within-subject variability shows the equivalence of the texts. The IReST (second edition) can now be provided in 17 languages allowing standardized assessment of reading speed, as well as comparability of results before and after interventions, and is a useful tool for multi-language studies (for further information see www.amd-read.net)."
},
{
"pmid": "23628238",
"title": "Could language deficits really differentiate Mild Cognitive Impairment (MCI) from mild Alzheimer's disease?",
"abstract": "Naming abilities seem to be affected in Alzheimer's disease (AD) patients, though MCI individuals tend to exhibit greater impairments in category fluency. In this study we: (1) detect language deficits of amnestic MCIs (aMCIs) and mild AD (mAD) participants and present their language performance (the Boston Diagnostic Aphasia Examination - BDAE scores) according to educational level, (2) study the diagnostic value of language deficits according to the cognitive state of the participants. One hundred nineteen participants, 38 normal controls (NC), 28 aMCIs and 53 mADs, were recruited randomly as outpatients of 2 clinical departments and administered clinical, neuropsychological and neuroimaging assessment. Language abilities were assessed by the adapted Greek edition of the BDAE (2nd edition). Our results indicate that verbal fluency, auditory, reading comprehension and narrative ability are the main language abilities to be affected in mADs, although they are almost intact in NCs and less vulnerable in aMCIs. Narrative ability seems to be significantly impaired in mADs but not so in aMCIs. Six language subtests of the BDAE assess safely the above deficits. This brief version of the BDAE discriminated mADs from the other 2 groups 92.5% of the time, NCs 86.8% and aMCI 67.9% of the time in order to save time and to be accurate in clinical practice."
},
{
"pmid": "8970012",
"title": "Stepwise comparative status analysis (STEP): a tool for identification of regional brain syndromes in dementia.",
"abstract": "A method for clinical examination of patients with dementia, stepwise comparative status analysis (STEP), is presented. It combines psychiatric and neurologic status examination methods to identify certain common dementia symptoms by which the patient's regional brain symptom profile can be determined. Fifty status variables (items) are estimated with respect to occurrence and severity. The analysis is performed in three steps. The scores on the 'primary' variables reflect observations of single dementia symptoms. These scores form the basis for the assessment of the 'compound' variables, which in turn form the basis for evaluation of the 'complex' variables, one of which describes the patient's regional (predominant) brain syndrome (subcortical, frontosubcortical, frontal, frontoparietal, parietal, or global). In 96 mildly and moderately demented inpatients, the global (42%) and frontosubcortical (31%) were the most common. Ninety-one percent of the patients with vascular dementia had a predominant frontal and/or subcortical symptomatology."
},
{
"pmid": "26174331",
"title": "The Gothenburg MCI study: Design and distribution of Alzheimer's disease and subcortical vascular disease diagnoses from baseline to 6-year follow-up.",
"abstract": "There is a need for increased nosological knowledge to enable rational trials in Alzheimer’s disease (AD) and related disorders. The ongoing Gothenburg mild cognitive impairment (MCI) study is an attempt to conduct longitudinal in-depth phenotyping of patients with different forms and degrees of cognitive impairment using neuropsychological, neuroimaging, and neurochemical tools. Particular attention is paid to the interplay between AD and subcortical vascular disease, the latter representing a disease entity that may cause or contribute to cognitive impairment with an effect size that may be comparable to AD. Of 664 patients enrolled between 1999 and 2013, 195 were diagnosed with subjective cognitive impairment (SCI), 274 with mild cognitive impairment (MCI), and 195 with dementia, at baseline. Of the 195 (29%) patients with dementia at baseline, 81 (42%) had AD, 27 (14%) SVD, 41 (21%) mixed type dementia (=AD + SVD = MixD), and 46 (23%) other etiologies. After 6 years, 292 SCI/MCI patients were eligible for follow-up. Of these 292, 69 (24%) had converted to dementia (29 (42%) AD, 16 (23%) SVD, 15 (22%) MixD, 9 (13%) other etiologies). The study has shown that it is possible to identify not only AD but also incipient and manifest MixD/SVD in a memory clinic setting. These conditions should be taken into account in clinical trials."
}
] |
Journal of Translational Medicine | 31395072 | PMC6688360 | 10.1186/s12967-019-2009-x | MLMDA: a machine learning approach to predict and validate MicroRNA–disease associations by integrating of heterogenous information sources | Background
Emerging evidence shows that microRNA (miRNA) plays an important role in many human complex diseases. However, because traditional in vitro experiments are inherently time-consuming and expensive, more and more attention has been paid to the development of efficient and feasible computational methods to predict potential associations between miRNAs and diseases.
Methods
In this work, we present a machine learning-based model called MLMDA for predicting associations between miRNAs and diseases. More specifically, we first use the k-mer sparse matrix to extract miRNA sequence information, and combine it with miRNA functional similarity, disease semantic similarity and Gaussian interaction profile kernel similarity information. Then, more representative features are extracted from them through a deep auto-encoder neural network (AE). Finally, a random forest classifier is used to effectively predict potential miRNA–disease associations.
Results
The experimental results show that the MLMDA model achieves promising performance under fivefold cross-validation, with an AUC value of 0.9172, which is higher than that of the alternative classifiers and feature-combination schemes examined in this paper. In addition, to further evaluate the prediction performance of the MLMDA model, case studies are carried out on three human complex diseases: Lymphoma, Lung Neoplasm, and Esophageal Neoplasms. As a result, 39, 37 and 36 of the top 40 predicted miRNAs are confirmed by other miRNA–disease association databases.
Conclusions
These prominent experimental results suggest that the MLMDA model could serve as a useful tool for guiding future experimental validation of promising miRNA biomarker candidates. The source code and datasets explored in this work are available at http://220.171.34.3:81/. | Comparison with related works
To evaluate the effectiveness of our approach, we use the HMDD dataset to compare the performance of MLMDA with seven state-of-the-art methods: BNPMDA, miRGOFS, MDHGI, DRMDA, SPM, LMTRDA and NNMDA, as shown in Table 8 [22, 33–37]. Since the version of HMDD used by these methods differs, and some methods do not report detailed evaluation indicators, here we only compare the reported AUC values to verify the effectiveness of our method. As can be seen from Table 8, the AUC of the proposed method is only 1.9% lower than that of NNMDA, which achieves the highest AUC; it is the second highest among all methods and 1.35% higher than the average AUC. This is because sequence information can describe miRNAs more comprehensively and deeply, and can therefore serve as an excellent source of knowledge for predicting potential miRNA–disease associations.
Table 8 The comparison results of the MLMDA model and related works
Method    AUC (%)
BNPMDA    89.80
miRGOFS   87.70
MDHGI     87.94
DRMDA     91.56
SPM       91.40
LMTRDA    90.54
NNMDA     93.60
MLMDA     91.72 | [
"15372042",
"14744438",
"25815108",
"22094949",
"27165343",
"29467480",
"20528768",
"24502829",
"15944708",
"17060945",
"18923704",
"17353930",
"19204784",
"17344234",
"30881376",
"29858068",
"30917115",
"30349558",
"20522252",
"24273243",
"23950912",
"25618864",
"27533456",
"28421868",
"29701758",
"30598077",
"29490018",
"30142158",
"31077936",
"18927107",
"14567057",
"25135367",
"15761078",
"17990321",
"10801023",
"21547903",
"24194601",
"20439255",
"12835272",
"18957447",
"19649320",
"21893517",
"28113829"
] | [
{
"pmid": "15372042",
"title": "The functions of animal microRNAs.",
"abstract": "MicroRNAs (miRNAs) are small RNAs that regulate the expression of complementary messenger RNAs. Hundreds of miRNA genes have been found in diverse animals, and many of these are phylogenetically conserved. With miRNA roles identified in developmental timing, cell death, cell proliferation, haematopoiesis and patterning of the nervous system, evidence is mounting that animal miRNAs are more numerous, and their regulatory impact more pervasive, than was previously suspected."
},
{
"pmid": "14744438",
"title": "MicroRNAs: genomics, biogenesis, mechanism, and function.",
"abstract": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes."
},
{
"pmid": "25815108",
"title": "Role of circulating miRNAs as biomarkers in idiopathic pulmonary arterial hypertension: possible relevance of miR-23a.",
"abstract": "Idiopathic pulmonary hypertension (IPAH) is a rare disease characterized by a progressive increase in pulmonary vascular resistance leading to heart failure. MicroRNAs (miRNAs) are small noncoding RNAs that control the expression of genes, including some involved in the progression of IPAH, as studied in animals and lung tissue. These molecules circulate freely in the blood and their expression is associated with the progression of different vascular pathologies. Here, we studied the expression profile of circulating miRNAs in 12 well-characterized IPAH patients using microarrays. We found significant changes in 61 miRNAs, of which the expression of miR23a was correlated with the patients' pulmonary function. We also studied the expression profile of circulating messenger RNA (mRNAs) and found that miR23a controlled 17% of the significantly changed mRNA, including PGC1α, which was recently associated with the progression of IPAH. Finally we found that silencing of miR23a resulted in an increase of the expression of PGC1α, as well as in its well-known regulated genes CYC, SOD, NRF2, and HO1. The results point to the utility of circulating miRNA expression as a biomarker of disease progression."
},
{
"pmid": "22094949",
"title": "Non-coding RNAs in human disease.",
"abstract": "The relevance of the non-coding genome to human disease has mainly been studied in the context of the widespread disruption of microRNA (miRNA) expression and function that is seen in human cancer. However, we are only beginning to understand the nature and extent of the involvement of non-coding RNAs (ncRNAs) in disease. Other ncRNAs, such as PIWI-interacting RNAs (piRNAs), small nucleolar RNAs (snoRNAs), transcribed ultraconserved regions (T-UCRs) and large intergenic non-coding RNAs (lincRNAs) are emerging as key elements of cellular homeostasis. Along with microRNAs, dysregulation of these ncRNAs is being found to have relevance not only to tumorigenesis, but also to neurological, cardiovascular, developmental and other diseases. There is great interest in therapeutic strategies to counteract these perturbations of ncRNAs."
},
{
"pmid": "27165343",
"title": "E2 regulates MMP-13 via targeting miR-140 in IL-1β-induced extracellular matrix degradation in human chondrocytes.",
"abstract": "BACKGROUND\nEstrogen deficiency is closely related to the development of menopausal arthritis. Estrogen replacement therapy (ERT) shows a protective effect against the osteoarthritis. However, the underlying mechanism of this protective effect is unknown. This study aimed to determine the role of miR-140 in the estrogen-dependent regulation of MMP-13 in human chondrocytes.\n\n\nMETHODS\nPrimary human articular chondrocytes were obtained from female OA patients undergoing knee replacement surgery. Normal articular chondrocytes were isolated from the knee joints of female donors after trauma and treated with interleukin-1 beta (IL-1β). Gene expression levels of miR-140, MMP-13, and ADAMTS-5 were detected by quantitative real-time PCR (qRT-PCR). miR-140 levels were upregulated or downregulated by transfecting cells with a miRNA mimic and inhibitor, respectively, prior to treatment with IL-1β. MMP-13 expression was then evaluated by Western blotting and immunofluorescence. Luciferase reporter assays were performed to verify the interaction between miR-140 and ER.\n\n\nRESULTS\n17-β-estradiol (E2) suppressed MMP-13 expression in human articular chondrocytes. miR-140 expression was upregulated after estrogen treatment. Knockdown of miR-140 expression abolished the inhibitory effect of estrogen on MMP-13. In addition, the estrogen/ER/miR-140 pathway showed an inhibitory effect on IL-1β-induced cartilage matrix degradation.\n\n\nCONCLUSIONS\nThis study suggests that estrogen acts via ER and miR-140 to inhibit the catabolic activity of proteases within the chondrocyte extracellular matrix. These findings provide new insight into the mechanism of menopausal arthritis and indicate that the ER/miR-140 signaling pathway may be a potential target for therapeutic interventions for menopausal arthritis."
},
{
"pmid": "29467480",
"title": "Adenoid cystic carcinomas of the salivary gland, lacrimal gland, and breast are morphologically and genetically similar but have distinct microRNA expression profiles.",
"abstract": "Adenoid cystic carcinoma is among the most frequent malignancies in the salivary and lacrimal glands and has a grave prognosis characterized by frequent local recurrences, distant metastases, and tumor-related mortality. Conversely, adenoid cystic carcinoma of the breast is a rare type of triple-negative (estrogen and progesterone receptor, HER2) and basal-like carcinoma, which in contrast to other triple-negative and basal-like breast carcinomas has a very favorable prognosis. Irrespective of site, adenoid cystic carcinoma is characterized by gene fusions involving MYB, MYBL1, and NFIB, and the reason for the different clinical outcomes is unknown. In order to identify the molecular mechanisms underlying the discrepancy in clinical outcome, we characterized the phenotypic profiles, pattern of gene rearrangements, and global microRNA expression profiles of 64 salivary gland, 9 lacrimal gland, and 11 breast adenoid cystic carcinomas. All breast and lacrimal gland adenoid cystic carcinomas had triple-negative and basal-like phenotypes, while salivary gland tumors were indeterminate in 13% of cases. Aberrations in MYB and/or NFIB were found in the majority of cases in all three locations, whereas MYBL1 involvement was restricted to tumors in the salivary gland. Global microRNA expression profiling separated salivary and lacrimal gland adenoid cystic carcinoma from their respective normal glands but could not distinguish normal breast adenoid cystic carcinoma from normal breast tissue. Hierarchical clustering separated adenoid cystic carcinomas of salivary gland origin from those of the breast and placed lacrimal gland carcinomas in between these. Functional annotation of the microRNAs differentially expressed between salivary gland and breast adenoid cystic carcinoma showed these as regulating genes involved in metabolism, signal transduction, and genes involved in other cancers. In conclusion, microRNA dysregulation is the first class of molecules separating adenoid cystic carcinoma according to the site of origin. This highlights a novel venue for exploring the biology of adenoid cystic carcinoma."
},
{
"pmid": "20528768",
"title": "Gene expression profiling in whole blood of patients with coronary artery disease.",
"abstract": "Owing to the dynamic nature of the transcriptome, gene expression profiling is a promising tool for discovery of disease-related genes and biological pathways. In the present study, we examined gene expression in whole blood of 12 patients with CAD (coronary artery disease) and 12 healthy control subjects. Furthermore, ten patients with CAD underwent whole-blood gene expression analysis before and after the completion of a cardiac rehabilitation programme following surgical coronary revascularization. mRNA and miRNA (microRNA) were isolated for expression profiling. Gene expression analysis identified 365 differentially expressed genes in patients with CAD compared with healthy controls (175 up- and 190 down-regulated in CAD), and 645 in CAD rehabilitation patients (196 up- and 449 down-regulated post-rehabilitation). Biological pathway analysis identified a number of canonical pathways, including oxidative phosphorylation and mitochondrial function, as being significantly and consistently modulated across the groups. Analysis of miRNA expression revealed a number of differentially expressed miRNAs, including hsa-miR-140-3p (control compared with CAD, P=0.017), hsa-miR-182 (control compared with CAD, P=0.093), hsa-miR-92a and hsa-miR-92b (post- compared with pre-exercise, P<0.01). Global analysis of predicted miRNA targets found significantly reduced expression of genes with target regions compared with those without: hsa-miR-140-3p (P=0.002), hsa-miR-182 (P=0.001), hsa-miR-92a and hsa-miR-92b (P=2.2x10-16). In conclusion, using whole blood as a 'surrogate tissue' in patients with CAD, we have identified differentially expressed miRNAs, differentially regulated genes and modulated pathways which warrant further investigation in the setting of cardiovascular function. This approach may represent a novel non-invasive strategy to unravel potentially modifiable pathways and possible therapeutic targets in cardiovascular disease."
},
{
"pmid": "24502829",
"title": "Has-mir-146a rs2910164 polymorphism and risk of immune thrombocytopenia.",
"abstract": "The purpose of this study was to determine the association of single nucleotide polymorphisms (SNP) of the has-mir-146a (miR-146a) genes with the risk for immune thrombocytopenia (ITP). The genotyping of miR-146a rs2910164 polymorphism was detected by polymerase chain reaction-restriction fragment length polymorphism. In the patients with ITP, the frequencies of GG, GC and CC genotypes and G and C alleles were 12.5%, 47.9%, 39.6%, 36.4% and 63.6%, respectively. There was no significant difference in genotype and alleles distribution between the ITP patient and the controls (p = 0.77 and 0.51, respectively). No significant differences were found between the two groups when stratified by the age and disease course including acute adult, chronic adult, acute childhood and chronic childhood. In conclusion, there was no association between the SNP of miR-146a and the susceptibility to ITP in a Chinese population."
},
{
"pmid": "15944708",
"title": "MicroRNA expression profiles classify human cancers.",
"abstract": "Recent work has revealed the existence of a class of small non-coding RNA species, known as microRNAs (miRNAs), which have critical functions across various biological processes. Here we use a new, bead-based flow cytometric miRNA expression profiling method to present a systematic expression analysis of 217 mammalian miRNAs from 334 samples, including multiple human cancers. The miRNA profiles are surprisingly informative, reflecting the developmental lineage and differentiation state of the tumours. We observe a general downregulation of miRNAs in tumours compared with normal tissues. Furthermore, we were able to successfully classify poorly differentiated tumours using miRNA expression profiles, whereas messenger RNA profiles were highly inaccurate when applied to the same samples. These findings highlight the potential of miRNA profiling in cancer diagnosis."
},
{
"pmid": "17060945",
"title": "MicroRNA signatures in human cancers.",
"abstract": "MicroRNA (miRNA) alterations are involved in the initiation and progression of human cancer. The causes of the widespread differential expression of miRNA genes in malignant compared with normal cells can be explained by the location of these genes in cancer-associated genomic regions, by epigenetic mechanisms and by alterations in the miRNA processing machinery. MiRNA-expression profiling of human tumours has identified signatures associated with diagnosis, staging, progression, prognosis and response to treatment. In addition, profiling has been exploited to identify miRNA genes that might represent downstream targets of activated oncogenic pathways, or that target protein-coding genes involved in cancer."
},
{
"pmid": "18923704",
"title": "An analysis of human microRNA and disease associations.",
"abstract": "It has been reported that increasingly microRNAs are associated with diseases. However, the patterns among the microRNA-disease associations remain largely unclear. In this study, in order to dissect the patterns of microRNA-disease associations, we performed a comprehensive analysis to the human microRNA-disease association data, which is manually collected from publications. We built a human microRNA associated disease network. Interestingly, microRNAs tend to show similar or different dysfunctional evidences for the similar or different disease clusters, respectively. A negative correlation between the tissue-specificity of a microRNA and the number of diseases it associated was uncovered. Furthermore, we observed an association between microRNA conservation and disease. Finally, we uncovered that microRNAs associated with the same disease tend to emerge as predefined microRNA groups. These findings can not only provide help in understanding the associations between microRNAs and human diseases but also suggest a new way to identify novel disease-associated microRNAs."
},
{
"pmid": "17353930",
"title": "Network-based prediction of protein function.",
"abstract": "Functional annotation of proteins is a fundamental problem in the post-genomic era. The recent availability of protein interaction networks for many model species has spurred on the development of computational methods for interpreting such data in order to elucidate protein function. In this review, we describe the current computational approaches for the task, including direct methods, which propagate functional information through the network, and module-assisted methods, which infer functional modules within the network and use those for the annotation task. Although a broad variety of interesting approaches has been developed, further progress in the field will depend on systematic evaluation of the methods and their dissemination in the biological community."
},
{
"pmid": "19204784",
"title": "Cepred: predicting the co-expression patterns of the human intronic microRNAs with their host genes.",
"abstract": "Identifying the tissues in which a microRNA is expressed could enhance the understanding of the functions, the biological processes, and the diseases associated with that microRNA. However, the mechanisms of microRNA biogenesis and expression remain largely unclear and the identification of the tissues in which a microRNA is expressed is limited. Here, we present a machine learning based approach to predict whether an intronic microRNA show high co-expression with its host gene, by doing so, we could infer the tissues in which a microRNA is high expressed through the expression profile of its host gene. Our approach is able to achieve an accuracy of 79% in the leave-one-out cross validation and 95% on an independent testing dataset. We further estimated our method through comparing the predicted tissue specific microRNAs and the tissue specific microRNAs identified by biological experiments. This study presented a valuable tool to predict the co-expression patterns between human intronic microRNAs and their host genes, which would also help to understand the microRNA expression and regulation mechanisms. Finally, this framework can be easily extended to other species."
},
{
"pmid": "17344234",
"title": "A new method to measure the semantic similarity of GO terms.",
"abstract": "MOTIVATION\nAlthough controlled biochemical or biological vocabularies, such as Gene Ontology (GO) (http://www.geneontology.org), address the need for consistent descriptions of genes in different data sources, there is still no effective method to determine the functional similarities of genes based on gene annotation information from heterogeneous data sources.\n\n\nRESULTS\nTo address this critical need, we proposed a novel method to encode a GO term's semantics (biological meanings) into a numeric value by aggregating the semantic contributions of their ancestor terms (including this specific term) in the GO graph and, in turn, designed an algorithm to measure the semantic similarity of GO terms. Based on the semantic similarities of GO terms used for gene annotation, we designed a new algorithm to measure the functional similarity of genes. The results of using our algorithm to measure the functional similarities of genes in pathways retrieved from the saccharomyces genome database (SGD), and the outcomes of clustering these genes based on the similarity values obtained by our algorithm are shown to be consistent with human perspectives. Furthermore, we developed a set of online tools for gene similarity measurement and knowledge discovery.\n\n\nAVAILABILITY\nThe online tools are available at: http://bioinformatics.clemson.edu/G-SESAME.\n\n\nSUPPLEMENTARY INFORMATION\nhttp://bioinformatics.clemson.edu/Publication/Supplement/gsp.htm."
},
{
"pmid": "30881376",
"title": "An Improved Deep Forest Model for Predicting Self-Interacting Proteins From Protein Sequence Using Wavelet Transformation.",
"abstract": "Self-interacting proteins (SIPs), whose more than two identities can interact with each other, play significant roles in the understanding of cellular process and cell functions. Although a number of experimental methods have been designed to detect the SIPs, they remain to be extremely time-consuming, expensive, and challenging even nowadays. Therefore, there is an urgent need to develop the computational methods for predicting SIPs. In this study, we propose a deep forest based predictor for accurate prediction of SIPs using protein sequence information. More specifically, a novel feature representation method, which integrate position-specific scoring matrix (PSSM) with wavelet transform, is introduced. To evaluate the performance of the proposed method, cross-validation tests are performed on two widely used benchmark datasets. The experimental results show that the proposed model achieved high accuracies of 95.43 and 93.65% on human and yeast datasets, respectively. The AUC value for evaluating the performance of the proposed method was also reported. The AUC value for yeast and human datasets are 0.9203 and 0.9586, respectively. To further show the advantage of the proposed method, it is compared with several existing methods. The results demonstrate that the proposed model is better than other SIPs prediction methods. This work can offer an effective architecture to biologists in detecting new SIPs."
},
{
"pmid": "29858068",
"title": "A Deep Learning Framework for Robust and Accurate Prediction of ncRNA-Protein Interactions Using Evolutionary Information.",
"abstract": "The interactions between non-coding RNAs (ncRNAs) and proteins play an important role in many biological processes, and their biological functions are primarily achieved by binding with a variety of proteins. High-throughput biological techniques are used to identify protein molecules bound with specific ncRNA, but they are usually expensive and time consuming. Deep learning provides a powerful solution to computationally predict RNA-protein interactions. In this work, we propose the RPI-SAN model by using the deep-learning stacked auto-encoder network to mine the hidden high-level features from RNA and protein sequences and feed them into a random forest (RF) model to predict ncRNA binding proteins. Stacked assembling is further used to improve the accuracy of the proposed method. Four benchmark datasets, including RPI2241, RPI488, RPI1807, and NPInter v2.0, were employed for the unbiased evaluation of five established prediction tools: RPI-Pred, IPMiner, RPISeq-RF, lncPro, and RPI-SAN. The experimental results show that our RPI-SAN model achieves much better performance than other methods, with accuracies of 90.77%, 89.7%, 96.1%, and 99.33%, respectively. It is anticipated that RPI-SAN can be used as an effective computational tool for future biomedical researches and can accurately predict the potential ncRNA-protein interacted pairs, which provides reliable guidance for biological research."
},
{
"pmid": "30917115",
"title": "LMTRDA: Using logistic model tree to predict MiRNA-disease associations by fusing multi-source information of sequences and similarities.",
"abstract": "Emerging evidence has shown microRNAs (miRNAs) play an important role in human disease research. Identifying potential association among them is significant for the development of pathology, diagnose and therapy. However, only a tiny portion of all miRNA-disease pairs in the current datasets are experimentally validated. This prompts the development of high-precision computational methods to predict real interaction pairs. In this paper, we propose a new model of Logistic Model Tree for predicting miRNA-Disease Association (LMTRDA) by fusing multi-source information including miRNA sequences, miRNA functional similarity, disease semantic similarity, and known miRNA-disease associations. In particular, we introduce miRNA sequence information and extract its features using natural language processing technique for the first time in the miRNA-disease prediction model. In the cross-validation experiment, LMTRDA obtained 90.51% prediction accuracy with 92.55% sensitivity at the AUC of 90.54% on the HMDD V3.0 dataset. To further evaluate the performance of LMTRDA, we compared it with different classifier and feature descriptor models. In addition, we also validate the predictive ability of LMTRDA in human diseases including Breast Neoplasms, Breast Neoplasms and Lymphoma. As a result, 28, 27 and 26 out of the top 30 miRNAs associated with these diseases were verified by experiments in different kinds of case studies. These experimental results demonstrate that LMTRDA is a reliable model for predicting the association among miRNAs and diseases."
},
{
"pmid": "30349558",
"title": "Accurate Prediction of ncRNA-Protein Interactions From the Integration of Sequence and Evolutionary Information.",
"abstract": "Non-coding RNA (ncRNA) plays a crucial role in numerous biological processes including gene expression and post-transcriptional gene regulation. The biological function of ncRNA is mostly realized by binding with related proteins. Therefore, an accurate understanding of interactions between ncRNA and protein has a significant impact on current biological research. The major challenge at this stage is the waste of a great deal of redundant time and resource consumed on classification in traditional interaction pattern prediction methods. Fortunately, an efficient classifier named LightGBM can solve this difficulty of long time consumption. In this study, we employed LightGBM as the integrated classifier and proposed a novel computational model for predicting ncRNA and protein interactions. More specifically, the pseudo-Zernike Moments and singular value decomposition algorithm are employed to extract the discriminative features from protein and ncRNA sequences. On four widely used datasets RPI369, RPI488, RPI1807, and RPI2241, we evaluated the performance of LGBM and obtained an superior performance with AUC of 0.799, 0.914, 0.989, and 0.762, respectively. The experimental results of 10-fold cross-validation shown that the proposed method performs much better than existing methods in predicting ncRNA-protein interaction patterns, which could be used as a useful tool in proteomics research."
},
{
"pmid": "20522252",
"title": "Prioritization of disease microRNAs through a human phenome-microRNAome network.",
"abstract": "BACKGROUND\nThe identification of disease-related microRNAs is vital for understanding the pathogenesis of diseases at the molecular level, and is critical for designing specific molecular tools for diagnosis, treatment and prevention. Experimental identification of disease-related microRNAs poses considerable difficulties. Computational analysis of microRNA-disease associations is an important complementary means for prioritizing microRNAs for further experimental examination.\n\n\nRESULTS\nHerein, we devised a computational model to infer potential microRNA-disease associations by prioritizing the entire human microRNAome for diseases of interest. We tested the model on 270 known experimentally verified microRNA-disease associations and achieved an area under the ROC curve of 75.80%. Moreover, we demonstrated that the model is applicable to diseases with which no known microRNAs are associated. The microRNAome-wide prioritization of microRNAs for 1,599 disease phenotypes is publicly released to facilitate future identification of disease-related microRNAs.\n\n\nCONCLUSIONS\nWe presented a network-based approach that can infer potential microRNA-disease associations and drive testable hypotheses for the experimental efforts to identify the roles of microRNAs in human diseases."
},
{
"pmid": "24273243",
"title": "Protein-driven inference of miRNA-disease associations.",
"abstract": "MOTIVATION\nMicroRNAs (miRNAs) are a highly abundant class of non-coding RNA genes involved in cellular regulation and thus also diseases. Despite miRNAs being important disease factors, miRNA-disease associations remain low in number and of variable reliability. Furthermore, existing databases and prediction methods do not explicitly facilitate forming hypotheses about the possible molecular causes of the association, thereby making the path to experimental follow-up longer.\n\n\nRESULTS\nHere we present miRPD in which miRNA-Protein-Disease associations are explicitly inferred. Besides linking miRNAs to diseases, it directly suggests the underlying proteins involved, which can be used to form hypotheses that can be experimentally tested. The inference of miRNAs and diseases is made by coupling known and predicted miRNA-protein associations with protein-disease associations text mined from the literature. We present scoring schemes that allow us to rank miRNA-disease associations inferred from both curated and predicted miRNA targets by reliability and thereby to create high- and medium-confidence sets of associations. Analyzing these, we find statistically significant enrichment for proteins involved in pathways related to cancer and type I diabetes mellitus, suggesting either a literature bias or a genuine biological trend. We show by example how the associations can be used to extract proteins for disease hypothesis.\n\n\nAVAILABILITY AND IMPLEMENTATION\nAll datasets, software and a searchable Web site are available at http://mirpd.jensenlab.org."
},
{
"pmid": "23950912",
"title": "Prediction of microRNAs associated with human diseases based on weighted k most similar neighbors.",
"abstract": "BACKGROUND\nThe identification of human disease-related microRNAs (disease miRNAs) is important for further investigating their involvement in the pathogenesis of diseases. More experimentally validated miRNA-disease associations have been accumulated recently. On the basis of these associations, it is essential to predict disease miRNAs for various human diseases. It is useful in providing reliable disease miRNA candidates for subsequent experimental studies.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIt is known that miRNAs with similar functions are often associated with similar diseases and vice versa. Therefore, the functional similarity of two miRNAs has been successfully estimated by measuring the semantic similarity of their associated diseases. To effectively predict disease miRNAs, we calculated the functional similarity by incorporating the information content of disease terms and phenotype similarity between diseases. Furthermore, the members of miRNA family or cluster are assigned higher weight since they are more probably associated with similar diseases. A new prediction method, HDMP, based on weighted k most similar neighbors is presented for predicting disease miRNAs. Experiments validated that HDMP achieved significantly higher prediction performance than existing methods. In addition, the case studies examining prostatic neoplasms, breast neoplasms, and lung neoplasms, showed that HDMP can uncover potential disease miRNA candidates.\n\n\nCONCLUSIONS\nThe superior performance of HDMP can be attributed to the accurate measurement of miRNA functional similarity, the weight assignment based on miRNA family or cluster, and the effective prediction based on weighted k most similar neighbors. The online prediction and analysis tool is freely available at http://nclab.hit.edu.cn/hdmpred."
},
{
"pmid": "25618864",
"title": "Prediction of potential disease-associated microRNAs based on random walk.",
"abstract": "MOTIVATION\nIdentifying microRNAs associated with diseases (disease miRNAs) is helpful for exploring the pathogenesis of diseases. Because miRNAs fulfill function via the regulation of their target genes and because the current number of experimentally validated targets is insufficient, some existing methods have inferred potential disease miRNAs based on the predicted targets. It is difficult for these methods to achieve excellent performance due to the high false-positive and false-negative rates for the target prediction results. Alternatively, several methods have constructed a network composed of miRNAs based on their associated diseases and have exploited the information within the network to predict the disease miRNAs. However, these methods have failed to take into account the prior information regarding the network nodes and the respective local topological structures of the different categories of nodes. Therefore, it is essential to develop a method that exploits the more useful information to predict reliable disease miRNA candidates.\n\n\nRESULTS\nmiRNAs with similar functions are normally associated with similar diseases and vice versa. Therefore, the functional similarity between a pair of miRNAs is calculated based on their associated diseases to construct a miRNA network. We present a new prediction method based on random walk on the network. For the diseases with some known related miRNAs, the network nodes are divided into labeled nodes and unlabeled nodes, and the transition matrices are established for the two categories of nodes. Furthermore, different categories of nodes have different transition weights. In this way, the prior information of nodes can be completely exploited. Simultaneously, the various ranges of topologies around the different categories of nodes are integrated. In addition, how far the walker can go away from the labeled nodes is controlled by restarting the walking. This is helpful for relieving the negative effect of noisy data. For the diseases without any known related miRNAs, we extend the walking on a miRNA-disease bilayer network. During the prediction process, the similarity between diseases, the similarity between miRNAs, the known miRNA-disease associations and the topology information of the bilayer network are exploited. Moreover, the importance of information from different layers of network is considered. Our method achieves superior performance for 18 human diseases with AUC values ranging from 0.786 to 0.945. Moreover, case studies on breast neoplasms, lung neoplasms, prostatic neoplasms and 32 diseases further confirm the ability of our method to discover potential disease miRNAs.\n\n\nAVAILABILITY AND IMPLEMENTATION\nA web service for the prediction and analysis of disease miRNAs is available at http://bioinfolab.stx.hk/midp/."
},
{
"pmid": "27533456",
"title": "HGIMDA: Heterogeneous graph inference for miRNA-disease association prediction.",
"abstract": "Recently, microRNAs (miRNAs) have drawn more and more attentions because accumulating experimental studies have indicated miRNA could play critical roles in multiple biological processes as well as the development and progression of human complex diseases. Using the huge number of known heterogeneous biological datasets to predict potential associations between miRNAs and diseases is an important topic in the field of biology, medicine, and bioinformatics. In this study, considering the limitations in the previous computational methods, we developed the computational model of Heterogeneous Graph Inference for MiRNA-Disease Association prediction (HGIMDA) to uncover potential miRNA-disease associations by integrating miRNA functional similarity, disease semantic similarity, Gaussian interaction profile kernel similarity, and experimentally verified miRNA-disease associations into a heterogeneous graph. HGIMDA obtained AUCs of 0.8781 and 0.8077 based on global and local leave-one-out cross validation, respectively. Furthermore, HGIMDA was applied to three important human cancers for performance evaluation. As a result, 90% (Colon Neoplasms), 88% (Esophageal Neoplasms) and 88% (Kidney Neoplasms) of top 50 predicted miRNAs are confirmed by recent experiment reports. Furthermore, HGIMDA could be effectively applied to new diseases and new miRNAs without any known associations, which overcome the important limitations of many previous computational models."
},
{
"pmid": "28421868",
"title": "RKNNMDA: Ranking-based KNN for MiRNA-Disease Association prediction.",
"abstract": "Cumulative verified experimental studies have demonstrated that microRNAs (miRNAs) could be closely related with the development and progression of human complex diseases. Based on the assumption that functional similar miRNAs may have a strong correlation with phenotypically similar diseases and vice versa, researchers developed various effective computational models which combine heterogeneous biologic data sets including disease similarity network, miRNA similarity network, and known disease-miRNA association network to identify potential relationships between miRNAs and diseases in biomedical research. Considering the limitations in previous computational study, we introduced a novel computational method of Ranking-based KNN for miRNA-Disease Association prediction (RKNNMDA) to predict potential related miRNAs for diseases, and our method obtained an AUC of 0.8221 based on leave-one-out cross validation. In addition, RKNNMDA was applied to 3 kinds of important human cancers for further performance evaluation. The results showed that 96%, 80% and 94% of predicted top 50 potential related miRNAs for Colon Neoplasms, Esophageal Neoplasms, and Prostate Neoplasms have been confirmed by experimental literatures, respectively. Moreover, RKNNMDA could be used to predict potential miRNAs for diseases without any known miRNAs, and it is anticipated that RKNNMDA would be of great use for novel miRNA-disease association identification."
},
{
"pmid": "29701758",
"title": "BNPMDA: Bipartite Network Projection for MiRNA-Disease Association prediction.",
"abstract": "Motivation\nA large number of resources have been devoted to exploring the associations between microRNAs (miRNAs) and diseases in the recent years. However, the experimental methods are expensive and time-consuming. Therefore, the computational methods to predict potential miRNA-disease associations have been paid increasing attention.\n\n\nResults\nIn this paper, we proposed a novel computational model of Bipartite Network Projection for MiRNA-Disease Association prediction (BNPMDA) based on the known miRNA-disease associations, integrated miRNA similarity and integrated disease similarity. We firstly described the preference degree of a miRNA for its related disease and the preference degree of a disease for its related miRNA with the bias ratings. We constructed bias ratings for miRNAs and diseases by using agglomerative hierarchical clustering according to the three types of networks. Then, we implemented the bipartite network recommendation algorithm to predict the potential miRNA-disease associations by assigning transfer weights to resource allocation links between miRNAs and diseases based on the bias ratings. BNPMDA had been shown to improve the prediction accuracy in comparison with previous models according to the area under the receiver operating characteristics (ROC) curve (AUC) results of three typical cross validations. As a result, the AUCs of Global LOOCV, Local LOOCV and 5-fold cross validation obtained by implementing BNPMDA were 0.9028, 0.8380 and 0.8980 ± 0.0013, respectively. We further implemented two types of case studies on several important human complex diseases to confirm the effectiveness of BNPMDA. In conclusion, BNPMDA could effectively predict the potential miRNA-disease associations at a high accuracy level.\n\n\nAvailability and implementation\nBNPMDA is available via http://www.escience.cn/system/file?fileId=99559.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "30598077",
"title": "Constructing a database for the relations between CNV and human genetic diseases via systematic text mining.",
"abstract": "BACKGROUND\nThe detection and interpretation of CNVs are of clinical importance in genetic testing. Several databases and web services are already being used by clinical geneticists to interpret the medical relevance of identified CNVs in patients. However, geneticists or physicians would like to obtain the original literature context for more detailed information, especially for rare CNVs that were not included in databases.\n\n\nRESULTS\nThe resulting CNVdigest database includes 440,485 sentences for CNV-disease relationship. A total number of 1582 CNVs and 2425 diseases are involved. Sentences describing CNV-disease correlations are indexed in CNVdigest, with CNV mentions and disease mentions annotated.\n\n\nCONCLUSIONS\nIn this paper, we use a systematic text mining method to construct a database for the relationship between CNVs and diseases. Based on that, we also developed a concise front-end to facilitate the analysis of CNV/disease association, providing a user-friendly web interface for convenient queries. The resulting system is publically available at http://cnv.gtxlab.com /."
},
{
"pmid": "29490018",
"title": "Prediction of potential disease-associated microRNAs using structural perturbation method.",
"abstract": "Motivation\nThe identification of disease-related microRNAs (miRNAs) is an essential but challenging task in bioinformatics research. Similarity-based link prediction methods are often used to predict potential associations between miRNAs and diseases. In these methods, all unobserved associations are ranked by their similarity scores. Higher score indicates higher probability of existence. However, most previous studies mainly focus on designing advanced methods to improve the prediction accuracy while neglect to investigate the link predictability of the networks that present the miRNAs and diseases associations. In this work, we construct a bilayer network by integrating the miRNA-disease network, the miRNA similarity network and the disease similarity network. We use structural consistency as an indicator to estimate the link predictability of the related networks. On the basis of the indicator, a derivative algorithm, called structural perturbation method (SPM), is applied to predict potential associations between miRNAs and diseases.\n\n\nResults\nThe link predictability of bilayer network is higher than that of miRNA-disease network, indicating that the prediction of potential miRNAs-diseases associations on bilayer network can achieve higher accuracy than based merely on the miRNA-disease network. A comparison between the SPM and other algorithms reveals the reliable performance of SPM which performed well in a 5-fold cross-validation. We test fifteen networks. The AUC values of SPM are higher than some well-known methods, indicating that SPM could serve as a useful computational method for improving the identification accuracy of miRNA‒disease associations. Moreover, in a case study on breast neoplasm, 80% of the top-20 predicted miRNAs have been manually confirmed by previous experimental studies.\n\n\nAvailability and implementation\nhttps://github.com/lecea/SPM-code.git.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "30142158",
"title": "MDHGI: Matrix Decomposition and Heterogeneous Graph Inference for miRNA-disease association prediction.",
"abstract": "Recently, a growing number of biological research and scientific experiments have demonstrated that microRNA (miRNA) affects the development of human complex diseases. Discovering miRNA-disease associations plays an increasingly vital role in devising diagnostic and therapeutic tools for diseases. However, since uncovering associations via experimental methods is expensive and time-consuming, novel and effective computational methods for association prediction are in demand. In this study, we developed a computational model of Matrix Decomposition and Heterogeneous Graph Inference for miRNA-disease association prediction (MDHGI) to discover new miRNA-disease associations by integrating the predicted association probability obtained from matrix decomposition through sparse learning method, the miRNA functional similarity, the disease semantic similarity, and the Gaussian interaction profile kernel similarity for diseases and miRNAs into a heterogeneous network. Compared with previous computational models based on heterogeneous networks, our model took full advantage of matrix decomposition before the construction of heterogeneous network, thereby improving the prediction accuracy. MDHGI obtained AUCs of 0.8945 and 0.8240 in the global and the local leave-one-out cross validation, respectively. Moreover, the AUC of 0.8794+/-0.0021 in 5-fold cross validation confirmed its stability of predictive performance. In addition, to further evaluate the model's accuracy, we applied MDHGI to four important human cancers in three different kinds of case studies. In the first type, 98% (Esophageal Neoplasms) and 98% (Lymphoma) of top 50 predicted miRNAs have been confirmed by at least one of the two databases (dbDEMC and miR2Disease) or at least one experimental literature in PubMed. In the second type of case study, what made a difference was that we removed all known associations between the miRNAs and Lung Neoplasms before implementing MDHGI on Lung Neoplasms. As a result, 100% (Lung Neoplasms) of top 50 related miRNAs have been indexed by at least one of the three databases (dbDEMC, miR2Disease and HMDD V2.0) or at least one experimental literature in PubMed. Furthermore, we also tested our prediction method on the HMDD V1.0 database to prove the applicability of MDHGI to different datasets. The results showed that 50 out of top 50 miRNAs related with the breast neoplasms were validated by at least one of the three databases (HMDD V2.0, dbDEMC, and miR2Disease) or at least one experimental literature."
},
{
"pmid": "31077936",
"title": "Prediction of Potential Disease-Associated MicroRNAs by Using Neural Networks.",
"abstract": "Identifying disease-related microRNAs (miRNAs) is an essential but challenging task in bioinformatics research. Much effort has been devoted to discovering the underlying associations between miRNAs and diseases. However, most studies mainly focus on designing advanced methods to improve prediction accuracy while neglecting to investigate the link predictability of the relationships between miRNAs and diseases. In this work, we construct a heterogeneous network by integrating neighborhood information in the neural network to predict potential associations between miRNAs and diseases, which also consider the imbalance of datasets. We also employ a new computational method called a neural network model for miRNA-disease association prediction (NNMDA). This model predicts miRNA-disease associations by integrating multiple biological data resources. Comparison of our work with other algorithms reveals the reliable performance of NNMDA. Its average AUC score was 0.937 over 15 diseases in a 5-fold cross-validation and AUC of 0.8439 based on leave-one-out cross-validation. The results indicate that NNMDA could be used in evaluating the accuracy of miRNA-disease associations. Moreover, NNMDA was applied to two common human diseases in two types of case studies. In the first type, 26 out of the top 30 predicted miRNAs of lung neoplasms were confirmed by the experiments. In the second type of case study for new diseases without any known miRNAs related to it, we selected breast neoplasms as the test example by hiding the association information between the miRNAs and this disease. The results verified 50 out of the top 50 predicted breast-neoplasm-related miRNAs."
},
{
"pmid": "18927107",
"title": "miR2Disease: a manually curated database for microRNA deregulation in human disease.",
"abstract": "'miR2Disease', a manually curated database, aims at providing a comprehensive resource of microRNA deregulation in various human diseases. The current version of miR2Disease documents 1939 curated relationships between 299 human microRNAs and 94 human diseases by reviewing more than 600 published papers. Around one-seventh of the microRNA-disease relationships represent the pathogenic roles of deregulated microRNA in human disease. Each entry in the miR2Disease contains detailed information on a microRNA-disease relationship, including a microRNA ID, the disease name, a brief description of the microRNA-disease relationship, an expression pattern of the microRNA, the detection method for microRNA expression, experimentally verified target gene(s) of the microRNA and a literature reference. miR2Disease provides a user-friendly interface for a convenient retrieval of each entry by microRNA ID, disease name, or target gene. In addition, miR2Disease offers a submission page that allows researchers to submit established microRNA-disease relationships that are not documented. Once approved by the submission review committee, the submitted records will be included in the database. miR2Disease is freely available at http://www.miR2Disease.org."
},
{
"pmid": "14567057",
"title": "Lymphomas.",
"abstract": "Hodgkin's and non-Hodgkin's lymphomas are an important part of the differential diagnosis of head and neck tumors. Their diagnosis begins with a complete history and physical examination and is confirmed with an appropriately obtained and prepared pathologic specimen. Prognosis and therapy of the lymphomas vary depending on stage and the characteristics of each particular subtype of lymphoma. Low-grade lymphomas and chronic lymphocytic leukemia are characterized by long survival times and are most often treated with palliative intent. More aggressive high-grade lymphomas are treated for cure. Although chemotherapy and radiotherapy remain the mainstays of treatment, immunotherapy demonstrates increasing promise."
},
{
"pmid": "25135367",
"title": "Precision therapy for lymphoma--current state and future directions.",
"abstract": "Modern advances in genomics and cancer biology have produced an unprecedented body of knowledge regarding the molecular pathogenesis of lymphoma. The diverse histological subtypes of lymphoma are molecularly heterogeneous, and most likely arise from distinct oncogenic mechanisms. In parallel to these advances in lymphoma biology, several new classes of molecularly targeted agents have been developed with varying degrees of efficacy across the different types of lymphoma. In general, the development of new drugs for treating lymphoma has been mostly empiric, with a limited knowledge of the molecular target, its involvement in the disease, and the effect of the drug on the target. Thus, the variability observed in clinical responses likely results from underlying molecular heterogeneity. In the era of personalized medicine, the challenge for the treatment of patients with lymphoma will involve correctly matching a molecularly targeted therapy to the unique genetic and molecular composition of each individual lymphoma. In this Review, we discuss current and emerging biomarkers that can guide treatment decisions for patients with lymphoma, and explore the potential challenges and strategies for making biomarker-driven personalized medicine a reality in the cure and management of this disease."
},
{
"pmid": "15761078",
"title": "Global cancer statistics, 2002.",
"abstract": "Estimates of the worldwide incidence, mortality and prevalence of 26 cancers in the year 2002 are now available in the GLOBOCAN series of the International Agency for Research on Cancer. The results are presented here in summary form, including the geographic variation between 20 large \"areas\" of the world. Overall, there were 10.9 million new cases, 6.7 million deaths, and 24.6 million persons alive with cancer (within three years of diagnosis). The most commonly diagnosed cancers are lung (1.35 million), breast (1.15 million), and colorectal (1 million); the most common causes of cancer death are lung cancer (1.18 million deaths), stomach cancer (700,000 deaths), and liver cancer (598,000 deaths). The most prevalent cancer in the world is breast cancer (4.4 million survivors up to 5 years following diagnosis). There are striking variations in the risk of different cancers by geographic area. Most of the international variation is due to exposure to known or suspected risk factors related to lifestyle or environment, and provides a clear challenge to prevention."
},
{
"pmid": "17990321",
"title": "Trends in oesophageal cancer incidence and mortality in Europe.",
"abstract": "To monitor recent trends in mortality from oesophageal cancer in 33 European countries, we analyzed the data provided by the World Health Organization over the last 2 decades, using also joinpoint regression. For selected European cancer registration areas, we also analyzed incidence rates for different histological types. For men in the European Union (EU), age-standardized (world population) mortality rates were stable around 6/100,000 between the early 1980s and the early 1990 s, and slightly declined in the last decade (5.4/100,000 in the early 2000s, annual percent change, APC = -1.1%). In several western European countries, male rates have started to level off or decline during the last decade (APC = -3.4% in France, and -3.0% in Italy). Also in Spain and the UK, which showed upward trends in the 1990 s, the rates tended to level off in most recent years. A levelling of rates was observed only more recently in countries of central and eastern Europe, which had had substantial rises up to the late 1990 s. Oesophageal cancer mortality rates remained comparatively low in European women, and overall EU female rates were stable around 1.1-1.2/100,000 over the last 2 decades (APC = -0.1%). In northern Europe a clear upward trend was observed in the incidence of oesophageal adenocarcinoma, and in Denmark and Scotland incidence of adenocarcinoma in men is now higher than that of squamous-cell carcinoma. Squamous-cell carcinoma remained the prevalent histological type in southern Europe. Changes in smoking habits and alcohol drinking for men, and perhaps nutrition, diet and physical activity for both sexes, can partly or largely explain these trends."
},
{
"pmid": "10801023",
"title": "Esophageal cancer: results of an American College of Surgeons Patient Care Evaluation Study.",
"abstract": "BACKGROUND\nThe last two decades have seen changes in the prevalence, histologic type, and management algorithms for patients with esophageal cancer. The purpose of this study was to evaluate the presentation, stage distribution, and treatment of patients with esophageal cancer using the National Cancer Database of the American College of Surgeons.\n\n\nSTUDY DESIGN\nConsecutively accessed patients (n = 5,044) with esophageal cancer from 828 hospitals during 1994 were evaluated in 1997 for case mix, diagnostic tests, and treatment modalities.\n\n\nRESULTS\nThe mean age of patients was 67.3 years with a male to female ratio of 3:1; non-Hispanic Caucasians made up most patients. Only 16.6% reported no tobacco use. Dysphagia (74%), weight loss (57.3%), gastrointestinal reflux (20.5%), odynophagia (16.6%), and dyspnea (12.1%) were the most common symptoms. Approximately 50% of patients had the tumor in the lower third of the esophagus. Of all patients, 51.6% had squamous cell histology and 41.9% had adenocarcinoma. Barrett's esophagus occurred in 777 patients, or 39% of those with adenocarcinoma. Of those patients that underwent surgery initially, pathology revealed stage I (13.3%), II (34.7%), III (35.7%), and IV (12.3%) disease. For patients with various stages of squamous cell cancer, radiation therapy plus chemotherapy were the most common treatment modalities (39.5%) compared with surgery plus adjuvant therapy (13.2%). For patients with adenocarcinoma, surgery plus adjuvant therapy were the most common treatment methods. Disease-specific overall survival at 1 year was 43%, ranging from 70% to 18% from stages I to IV.\n\n\nCONCLUSIONS\nCancer of the esophagus shows an increasing occurrence of adenocarcinoma in the lower third of the esophagus and is frequently associated with Barrett's esophagus. Choice of treatment was influenced by tumor histology and tumor site. Multimodality (neoadjuvant) therapy was the most common treatment method for patients with esophageal adenocarcinoma. The use of multimodality treatment did not appear to increase postoperative morbidity."
},
{
"pmid": "21547903",
"title": "CpG island methylation status of miRNAs in esophageal squamous cell carcinoma.",
"abstract": "Previous studies on esophageal squamous cell carcinoma (ESCC) indicated that it contains much dysregulation of microRNAs (miRNAs). DNA hypermethylation in the miRNA 5' regulatory region is a mechanism that can account for the downregulation of miRNA in tumors (Esteller, N Engl J Med 2008;358:1148-59). Among those dysregulated miRNAs, miR-203, miR-34b/c, miR-424 and miR-129-2 are embedded in CpG islands, as is the promoter of miR-34a. We investigated their methylation status in ESCC by bisulfite sequencing PCR (BSP) and methylation specific PCR (MSP). The methylation frequency of miR-203 and miR-424 is the same in carcinoma and in the corresponding non-tumor tissues. The methylation ratio of miR-34a, miR-34b/c and miR-129-2 is 66.7% (36/54), 40.7% (22/54) and 96.3% (52/54), respectively in ESCC, which are significantly higher than that in the corresponding non-tumor tissues(p < 0.01). Quantitative RT-PCR analysis in clinical samples suggested that CpG island methylation is significantly correlated with their low expression in ESCC, 5-aza-2'-deoxycytidine (DAC) treatment partly recovered their expression in EC9706 cell line. We conclude that CpG island methylation of miR-34a, miR-34b/c and miR-129-2 are frequent events and important mechanism for their low expression in ESCC. DNA methylation changes have been reported to occur early in carcinogenesis and are potentially good early indicators of carcinoma (Laird, Nat Rev Cancer 2003;3:253-66). The high methylation ratio of miR-129-2 indicated its potential as a methylation biomarker in early diagnosis of ESCC."
},
{
"pmid": "24194601",
"title": "HMDD v2.0: a database for experimentally supported human microRNA and disease associations.",
"abstract": "The Human microRNA Disease Database (HMDD; available via the Web site at http://cmbi.bjmu.edu.cn/hmdd and http://202.38.126.151/hmdd/tools/hmdd2.html) is a collection of experimentally supported human microRNA (miRNA) and disease associations. Here, we describe the HMDD v2.0 update that presented several novel options for users to facilitate exploration of the data in the database. In the updated database, miRNA-disease association data were annotated in more details. For example, miRNA-disease association data from genetics, epigenetics, circulating miRNAs and miRNA-target interactions were integrated into the database. In addition, HMDD v2.0 presented more data that were generated based on concepts derived from the miRNA-disease association data, including disease spectrum width of miRNAs and miRNA spectrum width of human diseases. Moreover, we provided users a link to download all the data in the HMDD v2.0 and a link to submit novel data into the database. Meanwhile, we also maintained the old version of HMDD. By keeping data sets up-to-date, HMDD should continue to serve as a valuable resource for investigating the roles of miRNAs in human disease."
},
{
"pmid": "20439255",
"title": "Inferring the human microRNA functional similarity and functional network based on microRNA-associated diseases.",
"abstract": "MOTIVATION\nIt is popular to explore meaningful molecular targets and infer new functions of genes through gene functional similarity measuring and gene functional network construction. However, little work is available in this field for microRNA (miRNA) genes due to limited miRNA functional annotations. With the rapid accumulation of miRNAs, it is increasingly needed to uncover their functional relationships in a systems level.\n\n\nRESULTS\nIt is known that genes with similar functions are often associated with similar diseases, and the relationship of different diseases can be represented by a structure of directed acyclic graph (DAG). This is also true for miRNA genes. Therefore, it is feasible to infer miRNA functional similarity by measuring the similarity of their associated disease DAG. Based on the above observations and the rapidly accumulated human miRNA-disease association data, we presented a method to infer the pairwise functional similarity and functional network for human miRNAs based on the structures of their disease relationships. Comparisons showed that the calculated miRNA functional similarity is well associated with prior knowledge of miRNA functional relationship. More importantly, this method can also be used to predict novel miRNA biomarkers and to infer novel potential functions or associated diseases for miRNAs. In addition, this method can be easily extended to other species when sufficient miRNA-associated disease data are available for specific species.\n\n\nAVAILABILITY\nThe online tool is available at http://cmbi.bjmu.edu.cn/misim\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "12835272",
"title": "Investigating semantic similarity measures across the Gene Ontology: the relationship between sequence and annotation.",
"abstract": "MOTIVATION\nMany bioinformatics data resources not only hold data in the form of sequences, but also as annotation. In the majority of cases, annotation is written as scientific natural language: this is suitable for humans, but not particularly useful for machine processing. Ontologies offer a mechanism by which knowledge can be represented in a form capable of such processing. In this paper we investigate the use of ontological annotation to measure the similarities in knowledge content or 'semantic similarity' between entries in a data resource. These allow a bioinformatician to perform a similarity measure over annotation in an analogous manner to those performed over sequences. A measure of semantic similarity for the knowledge component of bioinformatics resources should afford a biologist a new tool in their repertoire of analyses.\n\n\nRESULTS\nWe present the results from experiments that investigate the validity of using semantic similarity by comparison with sequence similarity. We show a simple extension that enables a semantic search of the knowledge held within sequence databases.\n\n\nAVAILABILITY\nSoftware available from http://www.russet.org.uk."
},
{
"pmid": "18957447",
"title": "The database of experimentally supported targets: a functional update of TarBase.",
"abstract": "TarBase5.0 is a database which houses a manually curated collection of experimentally supported microRNA (miRNA) targets in several animal species of central scientific interest, plants and viruses. MiRNAs are small non-coding RNA molecules that exhibit an inhibitory effect on gene expression, interfering with the stability and translational efficiency of the targeted mature messenger RNAs. Even though several computational programs exist to predict miRNA targets, there is a need for a comprehensive collection and description of miRNA targets with experimental support. Here we introduce a substantially extended version of this resource. The current version includes more than 1300 experimentally supported targets. Each target site is described by the miRNA that binds it, the gene in which it occurs, the nature of the experiments that were conducted to test it, the sufficiency of the site to induce translational repression and/or cleavage, and the paper from which all these data were extracted. Additionally, the database is functionally linked to several other relevant and useful databases such as Ensembl, Hugo, UCSC and SwissProt. The TarBase5.0 database can be queried or downloaded from http://microrna.gr/tarbase."
},
{
"pmid": "19649320",
"title": "Semantic similarity in biomedical ontologies.",
"abstract": "In recent years, ontologies have become a mainstream topic in biomedical research. When biological entities are described using a common schema, such as an ontology, they can be compared by means of their annotations. This type of comparison is called semantic similarity, since it assesses the degree of relatedness between two entities by the similarity in meaning of their annotations. The application of semantic similarity to biomedical ontologies is recent; nevertheless, several studies have been published in the last few years describing and evaluating diverse approaches. Semantic similarity has become a valuable tool for validating the results drawn from biomedical studies such as gene clustering, gene expression data analysis, prediction and validation of molecular interactions, and disease gene prioritization. We review semantic similarity measures applied to biomedical ontologies and propose their classification according to the strategies they employ: node-based versus edge-based and pairwise versus groupwise. We also present comparative assessment studies and discuss the implications of their results. We survey the existing implementations of semantic similarity measures, and we describe examples of applications to biomedical research. This will clarify how biomedical researchers can benefit from semantic similarity measures and help them choose the approach most suitable for their studies.Biomedical ontologies are evolving toward increased coverage, formality, and integration, and their use for annotation is increasingly becoming a focus of both effort by biomedical experts and application of automated annotation procedures to create corpora of higher quality and completeness than are currently available. Given that semantic similarity measures are directly dependent on these evolutions, we can expect to see them gaining more relevance and even becoming as essential as sequence similarity is today in biomedical research."
},
{
"pmid": "21893517",
"title": "Gaussian interaction profile kernels for predicting drug-target interaction.",
"abstract": "MOTIVATION\nThe in silico prediction of potential interactions between drugs and target proteins is of core importance for the identification of new drugs or novel targets for existing drugs. However, only a tiny portion of all drug-target pairs in current datasets are experimentally validated interactions. This motivates the need for developing computational methods that predict true interaction pairs with high accuracy.\n\n\nRESULTS\nWe show that a simple machine learning method that uses the drug-target network as the only source of information is capable of predicting true interaction pairs with high accuracy. Specifically, we introduce interaction profiles of drugs (and of targets) in a network, which are binary vectors specifying the presence or absence of interaction with every target (drug) in that network. We define a kernel on these profiles, called the Gaussian Interaction Profile (GIP) kernel, and use a simple classifier, (kernel) Regularized Least Squares (RLS), for prediction drug-target interactions. We test comparatively the effectiveness of RLS with the GIP kernel on four drug-target interaction networks used in previous studies. The proposed algorithm achieves area under the precision-recall curve (AUPR) up to 92.7, significantly improving over results of state-of-the-art methods. Moreover, we show that using also kernels based on chemical and genomic information further increases accuracy, with a neat improvement on small datasets. These results substantiate the relevance of the network topology (in the form of interaction profiles) as source of information for predicting drug-target interactions.\n\n\nAVAILABILITY\nSoftware and Supplementary Material are available at http://cs.ru.nl/~tvanlaarhoven/drugtarget2011/.\n\n\nCONTACT\[email protected]; [email protected].\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "28113829",
"title": "Highly Efficient Framework for Predicting Interactions Between Proteins.",
"abstract": "Protein-protein interactions (PPIs) play a central role in many biological processes. Although a large amount of human PPI data has been generated by high-throughput experimental techniques, they are very limited compared to the estimated 130 000 protein interactions in humans. Hence, automatic methods for human PPI-detection are highly desired. This work proposes a novel framework, i.e., Low-rank approximation-kernel Extreme Learning Machine (LELM), for detecting human PPI from a protein's primary sequences automatically. It has three main steps: 1) mapping each protein sequence into a matrix built on all kinds of adjacent amino acids; 2) applying the low-rank approximation model to the obtained matrix to solve its lowest rank representation, which reflects its true subspace structures; and 3) utilizing a powerful kernel extreme learning machine to predict the probability for PPI based on this lowest rank representation. Experimental results on a large-scale human PPI dataset demonstrate that the proposed LELM has significant advantages in accuracy and efficiency over the state-of-art approaches. Hence, this work establishes a new and effective way for the automatic detection of PPI."
}
] |
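The Gaussian Interaction Profile (GIP) kernel described in the reference abstract for PMID 21893517 above can be illustrated compactly. The NumPy sketch below is only a schematic reading of that recipe, not the authors' implementation: the bandwidth normalisation by the mean squared profile norm and the regularisation constant are common choices and should be treated as assumptions.

```python
import numpy as np

def gip_kernel(profiles, bandwidth_scale=1.0):
    """RBF kernel over binary interaction profiles (one profile per row).

    The bandwidth is normalised by the mean squared profile norm, a common
    choice for the Gaussian Interaction Profile kernel."""
    sq = (profiles ** 2).sum(axis=1).astype(float)
    d2 = sq[:, None] + sq[None, :] - 2.0 * profiles @ profiles.T
    gamma = bandwidth_scale / max(float(sq.mean()), 1e-12)
    return np.exp(-gamma * np.clip(d2, 0.0, None))

def rls_scores(K, Y, reg=1.0):
    """Kernel regularised least squares: scores = K (K + reg*I)^-1 Y."""
    n = K.shape[0]
    return K @ np.linalg.solve(K + reg * np.eye(n), Y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = (rng.random((30, 12)) < 0.15).astype(float)  # toy drug x target matrix
    K_drug = gip_kernel(Y)                           # drug-drug similarity
    scores = rls_scores(K_drug, Y)                   # predicted association scores
    print(scores.shape)                              # (30, 12)
```

In the cited paper the analogous target-side kernel is computed on the columns of Y and the two score matrices are combined; only the one-sided computation is shown here for brevity.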
Frontiers in Psychology | 31428026 | PMC6688584 | 10.3389/fpsyg.2019.01777 | Modeling the Quality of Player Passing Decisions in Australian Rules Football Relative to Risk, Reward, and Commitment | The value of player decisions has typically been measured by changes in possession expectations, rather than relative to the value of a player’s alternative options. This study presents a mathematical approach to the measurement of passing decisions of Australian Rules footballers that considers the risk and reward of passing options. A new method for quantifying a player’s spatial influence is demonstrated through a process called commitment modeling, in which the bounds and density of a player’s motion model are fit on empirical commitment to contests, producing a continuous representation of a team’s spatial ownership. This process involves combining the probability density functions of contests that a player committed to, and those they did not. Spatiotemporal player tracking data was collected for AFL matches played at a single stadium in the 2017 and 2018 seasons. It was discovered that the probability of a player committing to a contest decreases as a function of their velocity and of the ball’s time-to-point. Furthermore, the peak density of player commitment probabilities is at a greater distance in front of a player the faster they are moving, while their ability to participate in contests requiring re-orientation diminishes at higher velocities. Analysis of passing decisions revealed that, for passes resulting in a mark, opposition pressure is bimodal, with peaks at spatial dominance equivalent to no pressure and to a one-on-one contest. Density of passing distance peaks at 17.3 m, marginally longer than the minimum distance of a legal mark (15 m). Conversely, the model presented in this study identifies long-range options as have higher associated decision-making values, however a lack of passes in these ranges may be indicative of differing tactical behavior or a difficulty in identifying long-range options. | Related WorkMotion ModelsThere exist many methods for representing a player’s spatial occupancy. One common approach, particularly in football, is that of Voronoi tessellations which bound a player’s owned space as the space in which they could occupy before any other player. Simple applications of this approach do not consider player orientation, velocity, or individual physical capabilities (e.g., Fonseca et al., 2012). Taki and Hasegawa (2000) produced variations incorporating a player’s orientation, velocity, but assumed consistent acceleration. Fujimura and Sugihara (2005) proposed an alternative motion equation, adding a resistive force that decreases velocity. This approach involved a generalized formula that more realistically represented a player’s inability to cover negative space if moving at speed. Gudmundsson and Wolle (2014) individualized these models, fitting a player’s dominant region from observed tracking data.Underlying these models is an assumption that spatial ownership is binary. That is, each location on the field is owned completely by a single player, determined by the time it would take them to reach said location, henceforth referred to as their time-to-point. Through observations of contests, we propose that ownership of space is continuous. For a given location, if the time-to-point of the ball is greater than the time-to-point of at least two players, then no single player owns the space completely. 
This distinction is important if we wish to quantify spatial occupancy (and its creation) relative to the ball, given its time-to-point, as we need to account for changes in field formations that could occur between possessions. Recent papers have addressed this. The density of playing groups was explored with Gaussian mixture models in Spencer et al. (2017). Spencer et al. (2018) produced a smoothed representation of a team's control using non-probabilistic player motion models fit on observed tracking data. While a team's ownership was expressed on a continuous scale, the use of motion models with discrete bounds may result in unrealistic estimations of a player's influence (Brefeld et al., 2018). Fernandez and Bornn (2018) measured a player's influence area using bivariate normal distributions that considered a player's location, velocity, and distance to the ball. The result is a smoothed surface of control in which a team's influence over a region is continuous; however, the size of a player's influence lies within a selected range rather than being learnt from observed movements. Recently, Brefeld et al. (2018) fit player motion models on the distribution of observed player movements, utilizing these probabilistic models to produce more realistic Voronoi-like regions of control. In the interest of computing time, two-dimensional models were produced for different speed and time bands; hence the resultant models are not continuous in all dimensions. Given its contested and dynamic nature, a continuous representation of space control is preferable (e.g., Fernandez and Bornn, 2018; Spencer et al., 2018). Furthermore, a player logically exhibits greater control over space to which they are closer; hence we develop probabilistic motion models in this paper. When probabilistic models are fit on the entirety of a player's movements (as in Brefeld et al., 2018), we find that the probability of player reorientation is underestimated. In decision-making modeling, our interest is in measuring the contest for space that would occur if the ball were kicked to that space. Hence, to represent this realistically, it is important to fit the distribution of player movements observed under similar circumstances. We model a player's behavior when in proximity to contests. We achieve this via a procedure we call commitment modeling, in which we fit the distribution of player commitment to contests in four dimensions (velocity, time, and x- and y-field position). The result is a realistic representation of player behaviors when presented with the opportunity to participate in a contest (an illustrative sketch of such a continuous control surface follows this record). | [
"18341133",
"23139744",
"22770973",
"28141823",
"24357947",
"28941634",
"26176890",
"30574842"
] | [
{
"pmid": "18341133",
"title": "Biomechanical considerations of distance kicking in Australian Rules football.",
"abstract": "Kicking for distance in Australian Rules football is an important skill. Here, I examine technical aspects that contribute to achieving maximal kick distance. Twenty-eight elite players kicked for distance while being videoed at 500 Hz. Two-dimensional digitized data of nine body landmarks and the football were used to calculate kinematic parameters from kicking foot toe-off to the instant before ball contact. Longer kick distances were associated with greater foot speeds and shank angular velocities at ball contact, larger last step lengths, and greater distances from the ground when ball contact occurred. Foot speed, shank angular velocity, and ball position relative to the support foot at ball contact were included in the best regression predicting distance. A continuum of technique was evident among the kickers. At one end, kickers displayed relatively larger knee angular velocities and smaller thigh angular velocities at ball contact. At the other end, kickers produced relatively larger thigh angular velocities and smaller knee angular velocities at ball contact. To increase kicking distance, increasing foot speed and shank angular velocity at ball contact, increasing the last step length, and optimizing ball position relative to the ground and support foot are recommended."
},
{
"pmid": "23139744",
"title": "Basketball teams as strategic networks.",
"abstract": "We asked how team dynamics can be captured in relation to function by considering games in the first round of the NBA 2010 play-offs as networks. Defining players as nodes and ball movements as links, we analyzed the network properties of degree centrality, clustering, entropy and flow centrality across teams and positions, to characterize the game from a network perspective and to determine whether we can assess differences in team offensive strategy by their network properties. The compiled network structure across teams reflected a fundamental attribute of basketball strategy. They primarily showed a centralized ball distribution pattern with the point guard in a leadership role. However, individual play-off teams showed variation in their relative involvement of other players/positions in ball distribution, reflected quantitatively by differences in clustering and degree centrality. We also characterized two potential alternate offensive strategies by associated variation in network structure: (1) whether teams consistently moved the ball towards their shooting specialists, measured as \"uphill/downhill\" flux, and (2) whether they distributed the ball in a way that reduced predictability, measured as team entropy. These network metrics quantified different aspects of team strategy, with no single metric wholly predictive of success. However, in the context of the 2010 play-offs, the values of clustering (connectedness across players) and network entropy (unpredictability of ball movement) had the most consistent association with team advancement. Our analyses demonstrate the utility of network approaches in quantifying team strategy and show that testable hypotheses can be evaluated using this approach. These analyses also highlight the richness of basketball networks as a dataset for exploring the relationships between network structure and dynamics with team organization and effectiveness."
},
{
"pmid": "22770973",
"title": "Spatial dynamics of team sports exposed by Voronoi diagrams.",
"abstract": "Team sports represent complex systems: players interact continuously during a game, and exhibit intricate patterns of interaction, which can be identified and investigated at both individual and collective levels. We used Voronoi diagrams to identify and investigate the spatial dynamics of players' behavior in Futsal. Using this tool, we examined 19 plays of a sub-phase of a Futsal game played in a reduced area (20 m(2)) from which we extracted the trajectories of all players. Results obtained from a comparative analysis of player's Voronoi area (dominant region) and nearest teammate distance revealed different patterns of interaction between attackers and defenders, both at the level of individual players and teams. We found that, compared to defenders, larger dominant regions were associated with attackers. Furthermore, these regions were more variable in size among players from the same team but, at the player level, the attackers' dominant regions were more regular than those associated with each of the defenders. These findings support a formal description of the dynamic spatial interaction of the players, at least during the particular sub-phase of Futsal investigated. The adopted approach may be extended to other team behaviors where the actions taken at any instant in time by each of the involved agents are associated with the space they occupy at that particular time."
},
{
"pmid": "28141823",
"title": "Exploring Team Passing Networks and Player Movement Dynamics in Youth Association Football.",
"abstract": "Understanding how youth football players base their game interactions may constitute a solid criterion for fine-tuning the training process and, ultimately, to achieve better individual and team performances during competition. The present study aims to explore how passing networks and positioning variables can be linked to the match outcome in youth elite association football. The participants included 44 male elite players from under-15 and under-17 age groups. A passing network approach within positioning-derived variables was computed to identify the contributions of individual players for the overall team behaviour outcome during a simulated match. Results suggested that lower team passing dependency for a given player (expressed by lower betweenness network centrality scores) and high intra-team well-connected passing relations (expressed by higher closeness network centrality scores) were related to better outcomes. The correlation between the dyads' positioning regularity and the passing density showed a most likely higher correlation in under-15 (moderate effect), indicating a possible more dependence of the ball position rather than in the under-17 teams (small/unclear effects). Overall, this study emphasizes the potential of coupling notational analyses with spatial-temporal relations to produce a more functional and holistic understanding of teams' sports performance. Also, the social network analysis allowed to reveal novel key determinants of collective performance."
},
{
"pmid": "24357947",
"title": "Possession Versus Position: Strategic Evaluation in AFL.",
"abstract": "In sports like Australian Rules football and soccer, teams must battle to achieve possession of the ball in sufficient space to make optimal use of it. Ultimately the teams need to score, and to do that the ball must be brought into the area in front of goal - the place where the defence usually concentrates on shutting down space and opportunity time. Coaches would like to quantify the trade-offs between contested play in good positions and uncontested play in less promising positions, in order to inform their decision-making about where to put their players, and when to gamble on sending the ball to a contest rather than simply maintain possession. To evaluate football strategies, Champion Data has collected the on-ground locations of all 350,000 possessions and stoppages in the past two seasons of AFL (2004, 2005). By following each chain of play through to the next score, we can now reliably estimate the scoreboard \"equity \"of possessing the ball at any location, and measure the effect of having sufficient time to dispose of it effectively. As expected, winning the ball under physical pressure (through a \"hard ball get\") is far more difficult to convert into a score than winning it via a mark. We also analyse some equity gradients to show how getting the ball 20 metres closer to goal is much more important in certain areas of the ground than in others. We conclude by looking at the choices faced by players in possession wanting to maximise their likelihood of success. Key PointsEquity analysis provides a way of estimating the net value of actions on the sporting field.Combined with spatial data analysis, the relative merits of gaining position or maintaining possession can be judged.The advantage of having time and space to use the ball is measured in terms of scoreboard value, and is found to vary with field position.Each sport has identifiable areas of the field with high equity gradients, meaning that it is most important to gain territory there."
},
{
"pmid": "28941634",
"title": "Applying graphs and complex networks to football metric interpretation.",
"abstract": "This work presents a methodology for analysing the interactions between players in a football team, from the point of view of graph theory and complex networks. We model the complex network of passing interactions between players of a same team in 32 official matches of the Liga de Fútbol Profesional (Spain), using a passing/reception graph. This methodology allows us to understand the play structure of the team, by analysing the offensive phases of game-play. We utilise two different strategies for characterising the contribution of the players to the team: the clustering coefficient, and centrality metrics (closeness and betweenness). We show the application of this methodology by analyzing the performance of a professional Spanish team according to these metrics and the distribution of passing/reception in the field. Keeping in mind the dynamic nature of collective sports, in the future we will incorporate metrics which allows us to analyse the performance of the team also according to the circumstances of game-play and to different contextual variables such as, the utilisation of the field space, the time, and the ball, according to specific tactical situations."
},
{
"pmid": "26176890",
"title": "Explaining match outcome in elite Australian Rules football using team performance indicators.",
"abstract": "The relationships between team performance indicators and match outcome have been examined in many team sports, however are limited in Australian Rules football. Using data from the 2013 and 2014 Australian Football League (AFL) regular seasons, this study assessed the ability of commonly reported discrete team performance indicators presented in their relative form (standardised against their opposition for a given match) to explain match outcome (Win/Loss). Logistic regression and decision tree (chi-squared automatic interaction detection (CHAID)) analyses both revealed relative differences between opposing teams for \"kicks\" and \"goal conversion\" as the most influential in explaining match outcome, with two models achieving 88.3% and 89.8% classification accuracies, respectively. Models incorporating a smaller performance indicator set displayed a slightly reduced ability to explain match outcome (81.0% and 81.5% for logistic regression and CHAID, respectively). However, both were fit to 2014 data with reduced error in comparison to the full models. Despite performance similarities across the two analysis approaches, the CHAID model revealed multiple winning performance indicator profiles, thereby increasing its comparative feasibility for use in the field. Coaches and analysts may find these results useful in informing strategy and game plan development in Australian Rules football, with the development of team-specific models recommended in future."
},
{
"pmid": "30574842",
"title": "A rule induction framework for the determination of representative learning design in skilled performance.",
"abstract": "Representative learning design provides a framework for the extent to which practice simulates key elements of a performance setting. Improving both the measurement and analysis of representative learning design would allow for the refinement of sports training environments that seek to replicate competition conditions and provide additional context to the evaluation of athlete performance. Using rule induction, this study aimed to develop working models for the determination of high frequency, representative events in Australian Rules football kicking. A sample of 9005 kicks from the 2015 Australian Football League season were categorised and analysed according to the following constraints: type of pressure, kick distance, possession source, time in possession, velocity and kick target. The Apriori algorithm was used to develop two models. The first consisted of 10 rules containing the most commonly occurring constraint sets occurring during the kick in AF, with support values ranging from 0.15 to 0.22. None of the rules contained more than three constraints and confidence values ranged from 0.63 to 0.84. The second model considered ineffective and effective kick outcomes and displayed 70% classification accuracy. This research provides a measurement approach to determine the degree of representativeness of sports practice and is directly applicable to various team sports."
}
] |
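The related-work passage of this record argues for a continuous, probabilistic notion of spatial ownership rather than hard Voronoi boundaries. As a minimal sketch of that idea, the code below assigns each player an isotropic Gaussian influence centred slightly ahead of them in their direction of travel and maps the net team influence to a control surface in [0, 1]; the lead time, the speed-dependent spread and the logistic mapping are illustrative assumptions, not the commitment models fitted in the article.

```python
import numpy as np

def player_influence(grid_xy, pos, vel, base_sigma=4.0, lead=0.5):
    """Gaussian influence of one player over a grid of field locations.

    The centre is shifted `lead` seconds ahead along the player's velocity,
    and the spread grows mildly with speed - a crude stand-in for a fitted
    motion/commitment model."""
    centre = pos + lead * vel
    sigma = base_sigma * (1.0 + 0.1 * np.linalg.norm(vel))
    d2 = ((grid_xy - centre) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / sigma ** 2)

def team_control(grid_xy, team_a, team_b, **kwargs):
    """Continuous control surface in [0, 1]; 1 means fully owned by team A."""
    infl_a = sum(player_influence(grid_xy, p, v, **kwargs) for p, v in team_a)
    infl_b = sum(player_influence(grid_xy, p, v, **kwargs) for p, v in team_b)
    return 1.0 / (1.0 + np.exp(-(infl_a - infl_b)))

if __name__ == "__main__":
    xs, ys = np.meshgrid(np.linspace(0.0, 160.0, 161), np.linspace(0.0, 130.0, 131))
    grid = np.stack([xs, ys], axis=-1)                         # (131, 161, 2) field grid
    team_a = [(np.array([60.0, 70.0]), np.array([3.0, 0.0]))]  # (position, velocity)
    team_b = [(np.array([80.0, 65.0]), np.array([-2.0, 1.0]))]
    control = team_control(grid, team_a, team_b)
    print(control.shape, round(float(control.min()), 3), round(float(control.max()), 3))
```

Replacing the Gaussian with a density fitted on observed contest commitments, as described above, would keep the same interface while making the ownership surface data-driven.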
Frontiers in Computational Neuroscience | 31456678 | PMC6700294 | 10.3389/fncom.2019.00056 | Automatic Brain Tumor Segmentation Based on Cascaded Convolutional Neural Networks With Uncertainty Estimation | Automatic segmentation of brain tumors from medical images is important for clinical assessment and treatment planning of brain tumors. Recent years have seen an increasing use of convolutional neural networks (CNNs) for this task, but most of them use either 2D networks with relatively low memory requirement while ignoring 3D context, or 3D networks exploiting 3D features while with large memory consumption. In addition, existing methods rarely provide uncertainty information associated with the segmentation result. We propose a cascade of CNNs to segment brain tumors with hierarchical subregions from multi-modal Magnetic Resonance images (MRI), and introduce a 2.5D network that is a trade-off between memory consumption, model complexity and receptive field. In addition, we employ test-time augmentation to achieve improved segmentation accuracy, which also provides voxel-wise and structure-wise uncertainty information of the segmentation result. Experiments with BraTS 2017 dataset showed that our cascaded framework with 2.5D CNNs was one of the top performing methods (second-rank) for the BraTS challenge. We also validated our method with BraTS 2018 dataset and found that test-time augmentation improves brain tumor segmentation accuracy and that the resulting uncertainty information can indicate potential mis-segmentations and help to improve segmentation accuracy. | 2. Related Works2.1. Brain Tumor Segmentation From MRIExisting brain tumor segmentation methods include generative and discriminative approaches. By incorporating domain-specific prior knowledge, generative approaches usually have good generalization to unseen images, as they directly model probabilistic distributions of anatomical structures and textural appearances of healthy tissues and the tumor (Menze et al., 2010). However, it is challenging to precisely model probabilistic distributions of brain tumors. In contrast, discriminative approaches extract features from images and associate the features with the tissue classes using discriminative classifiers. They often require a supervised learning set-up where images and voxel-wise class labels are needed for training. Classical methods of this category include decision trees (Zikic et al., 2012) and support vector machines (Lee et al., 2005).Recently, CNNs as a type of discriminative approach have achieved promising results on multi-modal brain tumor segmentation tasks. Havaei et al. (2016) combined local and global 2D features extracted by a CNN for brain tumor segmentation. Although it outperformed the conventional discriminative methods, the 2D CNN only uses 2D features without considering the volumetric context. To incorporate 3D features, applying the 2D networks in axial, sagittal and coronal views and fusing their results has been proposed (McKinley et al., 2016; Li and Shen, 2017; Hu et al., 2018). However, the features employed by such a method are from cross-planes rather than entire 3D space.DeepMedic (Kamnitsas et al., 2017b) used a 3D CNN to exploit multi-scale volumetric features and further encoded spatial information with a fully connected Conditional Random Field (CRF). It achieved better segmentation performance than using 2D CNNs but has a relatively low inference efficiency due to the multi-scale image patch-based analysis. Isensee et al. 
(2018) applied 3D U-Net to brain tumor segmentation with a carefully designed training process. Myronenko (2018) used an encoder-decoder architecture for 3D brain tumor segmentation in which the network contained an additional variational auto-encoder branch that reconstructs the input image for regularization. To obtain robust brain tumor segmentation results, Kamnitsas et al. (2017a) proposed an ensemble of multiple CNNs including 3D Fully Convolutional Networks (FCN) (Long et al., 2015), DeepMedic (Kamnitsas et al., 2017b), and 3D U-Net (Ronneberger et al., 2015; Abdulkadir et al., 2016). The ensemble model is relatively robust to the choice of hyper-parameters of each individual CNN and reduces the risk of overfitting. However, it is computationally intensive to run a set of models for both training and inference (Malmi et al., 2015; Pereira et al., 2017; Xu et al., 2018). 2.2. Uncertainty Estimation for CNNs. Uncertainty information can come from either the CNN models or the input images. For model-based (epistemic) uncertainty, exact Bayesian modeling is mathematically grounded but often computationally expensive and hard to implement. Alternatively, Gal and Ghahramani (2016) cast test-time dropout as a Bayesian approximation to estimate a CNN's model uncertainty. Zhu and Zabaras (2018) estimated the uncertainty of a CNN's parameters using approximate Bayesian inference via stochastic variational gradient descent. Other approximation methods include Monte Carlo batch normalization (Teye et al., 2018), Markov chain Monte Carlo (Neal, 2012) and variational Bayesian methods (Louizos and Welling, 2016). Lakshminarayanan et al. (2017) proposed a simple and scalable method using ensembles of models for uncertainty estimation. For test image-based (aleatoric) uncertainty, Ayhan and Berens (2018) found that test-time augmentation was an effective and efficient method for exploring the locality of a test sample in aleatoric uncertainty estimation, but its application to medical image segmentation has not been investigated. Kendall and Gal (2017) proposed a unified Bayesian framework that combines aleatoric and epistemic uncertainty estimations for deep learning models. In the context of brain tumor segmentation, Eaton-Rosen et al. (2018) and Jungo et al. (2018) used test-time dropout to estimate the uncertainty. Wang et al. (2019a) analyzed a combination of epistemic and aleatoric uncertainties for whole tumor segmentation, but the uncertainty information of other structures (tumor core and enhancing tumor core) was not investigated. (An illustrative test-time augmentation sketch follows this record.) | [
"28872634",
"29544777",
"27310171",
"27865153",
"27157931",
"25494501",
"29969407",
"25461336",
"29993532"
] | [
{
"pmid": "28872634",
"title": "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features.",
"abstract": "Gliomas belong to a group of central nervous system tumors, and consist of various sub-regions. Gold standard labeling of these sub-regions in radiographic imaging is essential for both clinical and computational studies, including radiomic and radiogenomic analyses. Towards this end, we release segmentation labels and radiomic features for all pre-operative multimodal magnetic resonance imaging (MRI) (n=243) of the multi-institutional glioma collections of The Cancer Genome Atlas (TCGA), publicly available in The Cancer Imaging Archive (TCIA). Pre-operative scans were identified in both glioblastoma (TCGA-GBM, n=135) and low-grade-glioma (TCGA-LGG, n=108) collections via radiological assessment. The glioma sub-region labels were produced by an automated state-of-the-art method and manually revised by an expert board-certified neuroradiologist. An extensive panel of radiomic features was extracted based on the manually-revised labels. This set of labels and features should enable i) direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as ii) performance evaluation of computer-aided segmentation methods, and comparison to our state-of-the-art method."
},
{
"pmid": "29544777",
"title": "NiftyNet: a deep-learning platform for medical imaging.",
"abstract": "BACKGROUND AND OBJECTIVES\nMedical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon.\n\n\nMETHODS\nThe NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncracies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default.\n\n\nRESULTS\nWe present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.\n\n\nCONCLUSIONS\nThe NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications."
},
{
"pmid": "27310171",
"title": "Brain tumor segmentation with Deep Neural Networks.",
"abstract": "In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster."
},
{
"pmid": "27865153",
"title": "Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation.",
"abstract": "We propose a dual pathway, 11-layers deep, three-dimensional Convolutional Neural Network for the challenging task of brain lesion segmentation. The devised architecture is the result of an in-depth analysis of the limitations of current networks proposed for similar applications. To overcome the computational burden of processing 3D medical scans, we have devised an efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data. Further, we analyze the development of deeper, thus more discriminative 3D CNNs. In order to incorporate both local and larger contextual information, we employ a dual pathway architecture that processes the input images at multiple scales simultaneously. For post-processing of the network's soft segmentation, we use a 3D fully connected Conditional Random Field which effectively removes false positives. Our pipeline is extensively evaluated on three challenging tasks of lesion segmentation in multi-channel MRI patient data with traumatic brain injuries, brain tumours, and ischemic stroke. We improve on the state-of-the-art for all three applications, with top ranking performance on the public benchmarks BRATS 2015 and ISLES 2015. Our method is computationally efficient, which allows its adoption in a variety of research and clinical settings. The source code of our implementation is made publicly available."
},
{
"pmid": "27157931",
"title": "The 2016 World Health Organization Classification of Tumors of the Central Nervous System: a summary.",
"abstract": "The 2016 World Health Organization Classification of Tumors of the Central Nervous System is both a conceptual and practical advance over its 2007 predecessor. For the first time, the WHO classification of CNS tumors uses molecular parameters in addition to histology to define many tumor entities, thus formulating a concept for how CNS tumor diagnoses should be structured in the molecular era. As such, the 2016 CNS WHO presents major restructuring of the diffuse gliomas, medulloblastomas and other embryonal tumors, and incorporates new entities that are defined by both histology and molecular features, including glioblastoma, IDH-wildtype and glioblastoma, IDH-mutant; diffuse midline glioma, H3 K27M-mutant; RELA fusion-positive ependymoma; medulloblastoma, WNT-activated and medulloblastoma, SHH-activated; and embryonal tumour with multilayered rosettes, C19MC-altered. The 2016 edition has added newly recognized neoplasms, and has deleted some entities, variants and patterns that no longer have diagnostic and/or biological relevance. Other notable changes include the addition of brain invasion as a criterion for atypical meningioma and the introduction of a soft tissue-type grading system for the now combined entity of solitary fibrous tumor / hemangiopericytoma-a departure from the manner by which other CNS tumors are graded. Overall, it is hoped that the 2016 CNS WHO will facilitate clinical, experimental and epidemiological studies that will lead to improvements in the lives of patients with brain tumors."
},
{
"pmid": "25494501",
"title": "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).",
"abstract": "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource."
},
{
"pmid": "29969407",
"title": "Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning.",
"abstract": "Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes (a.k.a. zero-shot learning). To address these problems, we propose a novel deep learning-based interactive segmentation framework by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine tuning. We applied this framework to two applications: 2-D segmentation of multiple organs from fetal magnetic resonance (MR) slices, where only two types of these organs were annotated for training and 3-D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training. Experimental results show that: 1) our model is more robust to segment previously unseen objects than state-of-the-art CNNs; 2) image-specific fine tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods."
},
{
"pmid": "25461336",
"title": "A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning.",
"abstract": "Shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but it limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing the efficiency when dealing with large-scale training data, we investigate effective and scalable shape prior modeling method that is more applicable in clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solution is fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had a high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly when repository's capacity and vertex number rose to a large degree. When repository's capacity was 10,000, with 2000 vertices on each shape, homotopy method cost merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement was 94.31 ± 3.04%, 1.12 ± 0.69 mm and 3.65 ± 1.40 mm respectively."
},
{
"pmid": "29993532",
"title": "DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation.",
"abstract": "Accurate medical image segmentation is essential for diagnosis, surgical planning and many other applications. Convolutional Neural Networks (CNNs) have become the state-of-the-art automatic segmentation methods. However, fully automatic results may still need to be refined to become accurate and robust enough for clinical use. We propose a deep learning-based interactive segmentation method to improve the results obtained by an automatic CNN and to reduce user interactions during refinement for higher accuracy. We use one CNN to obtain an initial automatic segmentation, on which user interactions are added to indicate mis-segmentations. Another CNN takes as input the user interactions with the initial segmentation and gives a refined result. We propose to combine user interactions with CNNs through geodesic distance transforms, and propose a resolution-preserving network that gives a better dense prediction. In addition, we integrate user interactions as hard constraints into a back-propagatable Conditional Random Field. We validated the proposed framework in the context of 2D placenta segmentation from fetal MRI and 3D brain tumor segmentation from FLAIR images. Experimental results show our method achieves a large improvement from automatic CNNs, and obtains comparable and even higher accuracy with fewer user interventions and less time compared with traditional interactive methods."
}
] |
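The uncertainty discussion in the related-work text of this record centres on test-time augmentation as a way to expose aleatoric uncertainty. The sketch below shows the general pattern for a segmentation model, assuming a `predict_probs(volume)` callable that returns per-class softmax probabilities; the flip-based transforms and the entropy-based uncertainty map are generic choices, not the exact augmentation family used in the article.

```python
import numpy as np

def tta_predict(predict_probs, volume, flip_axes=((), (0,), (1,), (2,))):
    """Average class probabilities over flip-based test-time augmentations
    and return an entropy map as a voxel-wise uncertainty estimate.

    `predict_probs` is assumed to map a 3D volume (D, H, W) to class
    probabilities of shape (C, D, H, W)."""
    accumulated = None
    for axes in flip_axes:
        aug = np.flip(volume, axis=axes) if axes else volume
        probs = predict_probs(aug)
        if axes:  # undo the spatial flip on the prediction (class axis is 0)
            probs = np.flip(probs, axis=tuple(a + 1 for a in axes))
        accumulated = probs if accumulated is None else accumulated + probs
    mean_probs = accumulated / len(flip_axes)
    entropy = -(mean_probs * np.log(mean_probs + 1e-8)).sum(axis=0)
    segmentation = mean_probs.argmax(axis=0)
    return segmentation, entropy

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def dummy_predict(vol):                      # stand-in for a trained CNN
        fg = 1.0 / (1.0 + np.exp(-(vol - 0.5)))  # "probability" of foreground
        return np.stack([1.0 - fg, fg])

    volume = rng.random((8, 16, 16))
    seg, unc = tta_predict(dummy_predict, volume)
    print(seg.shape, unc.shape)                  # (8, 16, 16) (8, 16, 16)
```

Averaging or thresholding the entropy map within each predicted structure would give a structure-wise summary in the same spirit, though the exact aggregation used in the article is not reproduced here.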
Frontiers in Neurorobotics | 31456682 | PMC6700334 | 10.3389/fnbot.2019.00064 | Series Elastic Behavior of Biarticular Muscle-Tendon Structure in a Robotic Leg | We investigate the role of lower leg muscle-tendon structures in providing serial elastic behavior to the hip actuator. We present a leg design with physical elastic elements in leg angle and virtual leg axis direction, and its impact onto energy efficient legged locomotion. By testing and comparing two robotic lower leg spring configurations, we can provide potential explanations of the functionality of similar animal leg morphologies with lower leg muscle-tendon network structures. We investigate the effects of leg angle compliance during locomotion. In a proof of concept, we show that a leg with a gastrocnemius inspired elasticity possesses elastic components that deflect in leg angle directions. The leg design with elastic elements in leg angle direction can store hip actuator energy in the series elastic element. We then show the leg's advantages in mechanical design in a vertical drop experiment. In the drop experiments the biarticular leg requires 46% less power. During drop loading, the leg adapts its posture and stores the energy in its springs. The increased energy storing capacity in leg angle direction reduces energy requirements and cost of transport by 31% during dynamic hopping to a cost of transport of 1.2 at 0.9 kg body weight. The biarticular robot leg design has major advantages, especially compared to more traditional robot designs. Despite its high degree of under-actuation, it is easy to converge into and maintain dynamic hopping locomotion. The presented control is based on a simple-to-implement, feed-forward pattern generator. The biarticular legs lightweight design can be rapidly assembled and is largely made from elements created by rapid prototyping. At the same time it is robust, and passively withstands drops from 200% body height. The biarticular leg shows, to the best of the authors' knowledge, the lowest achieved relative cost of transport documented for all dynamically hopping and running robots of 64% of a comparable natural runner's COT. | 1.1. Related WorkThe functional morphology of multiple degrees of compliance in multi-segmented legs in animals and robotics has not been understood yet by either biologists nor roboticists. While two-segmented legs with one degree of compliance have been studied thoroughly (Raibert et al., 1984; Hutter et al., 2012; Semini et al., 2015; Park et al., 2017), the placement and interplay between multiple compliant elements is still an unsolved research topic.Because of observations in biological examples, implementations of multi-segmented legs with several compliant elements have been tested in robotic hardware as well as in simulations to understand their behavior. Spröwitz et al. (2013, 2018) implemented a leg with a biarticular spring to investigate self stabilizing behavior on a quadruped during dynamic locomotion. They showed, that a simple sensorless central pattern generator with a position controller can allow dynamic feed forward locomotion. Iida et al. (2007) investigated the possibility to create both walking and running gaits in a humanoid biped with biarticular springs as well as the ability to create more human-like gaits. Sato et al. (2015) implemented a robot with only one biarticular spring but no intrinsic compliant knee. 
There, the biarticular spring provided elastic behavior to the leg for jumping and landing motions. An aspect that has not yet been in the research focus is the interplay between an intrinsically compliant knee and a biarticular spring in a multi-segmented leg. No systematic and comparative research exists so far that compares multiple compliant elements in highly under-actuated segmented legs, specifically for the combination of leg-angle and virtual-leg-axis compliance. As energy fluctuates in both directions in animal legs (Alexander, 1984), one can expect that compliant passive mechanisms evolved to benefit from these resources, i.e., energetically. We focus our research on the influence of torque on a series elastic biarticular spring and on the increase in energy efficiency that the additionally stored energy provides. In this paper we present a leg design with compliance in the virtual leg axis direction as well as in the leg angle direction. We show that the element in the leg angle direction charges under torque influence, providing series elastic behavior for the hip. We show how the implementation of this element can drastically increase the amount of elastic energy stored in the leg. | [
"12485689",
"834252",
"29200662",
"2625422",
"19036337",
"17015312",
"26792339",
"11357549",
"24639645",
"1137237",
"15198698"
] | [
{
"pmid": "12485689",
"title": "Tendon elasticity and muscle function.",
"abstract": "Vertebrate animals exploit the elastic properties of their tendons in several different ways. Firstly, metabolic energy can be saved in locomotion if tendons stretch and then recoil, storing and returning elastic strain energy, as the animal loses and regains kinetic energy. Leg tendons save energy in this way when birds and mammals run, and an aponeurosis in the back is also important in galloping mammals. Tendons may have similar energy-saving roles in other modes of locomotion, for example in cetacean swimming. Secondly, tendons can recoil elastically much faster than muscles can shorten, enabling animals to jump further than they otherwise could. Thirdly, tendon elasticity affects the control of muscles, enhancing force control at the expense of position control."
},
{
"pmid": "29200662",
"title": "Gearing effects of the patella (knee extensor muscle sesamoid) of the helmeted guineafowl during terrestrial locomotion.",
"abstract": "Human patellae (kneecaps) are thought to act as gears, altering the mechanical advantage of knee extensor muscles during running. Similar sesamoids have evolved in the knee extensor tendon independently in birds, but it is unknown if these also affect the mechanical advantage of knee extensors. Here, we examine the mechanics of the patellofemoral joint in the helmeted guineafowl Numida meleagris using a method based on muscle and tendon moment arms taken about the patella's rotation centre around the distal femur. Moment arms were estimated from a computer model representing hindlimb anatomy, using hip, knee and patellar kinematics acquired via marker-based biplanar fluoroscopy from a subject running at 1.6 ms-1 on a treadmill. Our results support the inference that the patella of Numida does alter knee extensor leverage during running, but with a mechanical advantage generally greater than that seen in humans, implying relatively greater extension force but relatively lesser extension velocity."
},
{
"pmid": "2625422",
"title": "The spring-mass model for running and hopping.",
"abstract": "A simple spring-mass model consisting of a massless spring attached to a point mass describes the interdependency of mechanical parameters characterizing running and hopping of humans as a function of speed. The bouncing mechanism itself results in a confinement of the free parameter space where solutions can be found. In particular, bouncing frequency and vertical displacement are closely related. Only a few parameters, such as the vector of the specific landing velocity and the specific leg length, are sufficient to determine the point of operation of the system. There are more physiological constraints than independent parameters. As constraints limit the parameter space where hopping is possible, they must be tuned to each other in order to allow for hopping at all. Within the range of physiologically possible hopping frequencies, a human hopper selects a frequency where the largest amount of energy can be delivered and still be stored elastically. During running and hopping animals use flat angles of the landing velocity resulting in maximum contact length. In this situation ground reaction force is proportional to specific contact time and total displacement is proportional to the square of the step duration. Contact time and hopping frequency are not simply determined by the natural frequency of the spring-mass system, but are influenced largely by the vector of the landing velocity. Differences in the aerial phase or in the angle of the landing velocity result in the different kinematic and dynamic patterns observed during running and hopping. Despite these differences, the model predicts the mass specific energy fluctuations of the center of mass per distance to be similar for runners and hoppers and similar to empirical data obtained for animals of various size."
},
{
"pmid": "19036337",
"title": "Biomechanics: running over uneven terrain is a no-brainer.",
"abstract": "When runners encounter a sudden bump in the road, they rapidly adjust leg mechanics to keep from falling. New evidence suggests that they may be able to do this without help from the brain."
},
{
"pmid": "17015312",
"title": "Compliant leg behaviour explains basic dynamics of walking and running.",
"abstract": "The basic mechanics of human locomotion are associated with vaulting over stiff legs in walking and rebounding on compliant legs in running. However, while rebounding legs well explain the stance dynamics of running, stiff legs cannot reproduce that of walking. With a simple bipedal spring-mass model, we show that not stiff but compliant legs are essential to obtain the basic walking mechanics; incorporating the double support as an essential part of the walking motion, the model reproduces the characteristic stance dynamics that result in the observed small vertical oscillation of the body and the observed out-of-phase changes in forward kinetic and gravitational potential energies. Exploring the parameter space of this model, we further show that it not only combines the basic dynamics of walking and running in one mechanical system, but also reveals these gaits to be just two out of the many solutions to legged locomotion offered by compliant leg behaviour and accessed by energy or speed."
},
{
"pmid": "26792339",
"title": "Contribution of elastic tissues to the mechanics and energetics of muscle function during movement.",
"abstract": "Muscle force production occurs within an environment of tissues that exhibit spring-like behavior, and this elasticity is a critical determinant of muscle performance during locomotion. Muscle force and power output both depend on the speed of contraction, as described by the isotonic force-velocity curve. By influencing the speed of contractile elements, elastic structures can have a profound effect on muscle force, power and work. In very rapid movements, elastic mechanisms can amplify muscle power by storing the work of muscle contraction slowly and releasing it rapidly. When energy must be dissipated rapidly, such as in landing from a jump, energy stored rapidly in elastic elements can be released more slowly to stretch muscle contractile elements, reducing the power input to muscle and possibly protecting it from damage. Elastic mechanisms identified so far rely primarily on in-series tendons, but many structures within muscles exhibit spring-like properties. Actomyosin cross-bridges, actin and myosin filaments, titin, and the connective tissue scaffolding of the extracellular matrix all have the potential to store and recover elastic energy during muscle contraction. The potential contribution of these elements can be assessed from their stiffness and estimates of the strain they undergo during muscle function. Such calculations provide boundaries for the possible roles these springs might play in locomotion, and may help to direct future studies of the uses of elastic elements in muscle."
},
{
"pmid": "11357549",
"title": "Stable operation of an elastic three-segment leg.",
"abstract": "Quasi-elastic operation of joints in multi-segmented systems as they occur in the legs of humans, animals, and robots requires a careful tuning of leg properties and geometry if catastrophic counteracting operation of the joints is to be avoided. A simple three-segment model has been used to investigate the segmental organization of the leg during repulsive tasks like human running and jumping. The effective operation of the muscles crossing the knee and ankle joints is described in terms of rotational springs. The following issues were addressed in this study: (1) how can the joint torques be controlled to result in a spring-like leg operation? (2) how can rotational stiffnesses be adjusted to leg-segment geometry? and (3) to what extend can unequal segment lengths and orientations be advantageous? It was found that: (1) the three-segment leg tends to become unstable at a certain amount of bending expressed by a counterrotation of the joints; (2) homogeneous bending requires adaptation of the rotational stiffnesses to the outer segment lengths; (3) nonlinear joint torque-displacement behaviour extends the range of stable leg bending and may result in an almost constant leg stiffness; (4) biarticular structures (like human gastrocnemius muscle) and geometrical constraints (like heel strike) support homogeneous bending in both joints; (5) unequal segment lengths enable homogeneous bending if asymmetric nominal angles meet the asymmetry in leg geometry; and (6) a short foot supports the elastic control of almost stretched knee positions. Furthermore, general leg design strategies for animals and robots are discussed with respect to the range of safe leg operation."
},
{
"pmid": "24639645",
"title": "Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs.",
"abstract": "In this work we research the role of body dynamics in the complexity of kinematic patterns in a quadruped robot with compliant legs. Two gait patterns, lateral sequence walk and trot, along with leg length control patterns of different complexity were implemented in a modular, feed-forward locomotion controller. The controller was tested on a small, quadruped robot with compliant, segmented leg design, and led to self-stable and self-stabilizing robot locomotion. In-air stepping and on-ground locomotion leg kinematics were recorded, and the number and shapes of motion primitives accounting for 95% of the variance of kinematic leg data were extracted. This revealed that kinematic patterns resulting from feed-forward control had a lower complexity (in-air stepping, 2-3 primitives) than kinematic patterns from on-ground locomotion (νm4 primitives), although both experiments applied identical motor patterns. The complexity of on-ground kinematic patterns had increased, through ground contact and mechanical entrainment. The complexity of observed kinematic on-ground data matches those reported from level-ground locomotion data of legged animals. Results indicate that a very low complexity of modular, rhythmic, feed-forward motor control is sufficient for level-ground locomotion in combination with passive compliant legged hardware."
},
{
"pmid": "15198698",
"title": "Biomimetic robotics should be based on functional morphology.",
"abstract": "Due to technological improvements made during the last decade, bipedal robots today present a surprisingly high level of humanoid skill. Autonomy, with respect to the processing of information, is realized to a relatively high degree. What is mainly lacking in robotics, moving from purely anthropomorphic robots to 'anthropofunctional' machines, is energetic autonomy. In a previously published analysis, we showed that closer attention to the functional morphology of human walking could give robotic engineers the experiences of an at least 6 Myr beta test period on minimization of power requirements for biped locomotion. From our point of view, there are two main features that facilitate sustained walking in modern humans. The first main feature is the existence of 'energetically optimal velocities' provided by the systematic use of various resonance mechanisms: (a). suspended pendula (involving arms as well as legs in the swing phase of the gait cycle) and matching of the pendular length of the upper and lower limbs; (b). inverted pendula (involving the legs in the stance phase), driven by torsional springs around the ankle joints; and (c). torsional springs in the trunk. The second main feature is compensation for undesirable torques induced by the inertial properties of the swinging extremities: (a). mass distribution in the trunk characterized by maximized mass moments of inertia; (b). lever arms of joint forces at the hip and shoulder, which are inversely proportional to their amplitude; and (c). twisting of the trunk, especially torsion. Our qualitative conclusions are three-fold. (1). Human walking is an interplay between masses, gravity and elasticity, which is modulated by musculature. Rigid body mechanics is insufficient to describe human walking. Thus anthropomorphic robots completely following the rules of rigid body mechanics cannot be functionally humanoid. (2). Humans are vertebrates. Thus, anthropomorphic robots that do not use the trunk for purposes of motion are not truly humanoid. (3). The occurrence of a waist, especially characteristic of humans, implies the existence of rotations between the upper trunk (head, neck, pectoral girdle and thorax) and the lower trunk (pelvic girdle) via an elastic joint (spine, paravertebral and abdominal musculature). A torsional twist around longitudinal axes seems to be the most important."
}
] |
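Editorial note (not part of the indexed records): the spring-mass abstracts above (PMID 2625422 and PMID 17015312) describe running and hopping as a point mass bouncing on a massless linear leg spring. The short Python sketch below integrates a single stance phase of such a planar spring-mass ("SLIP") runner to make that description concrete; the mass, stiffness, rest leg length, landing angle and landing velocity are arbitrary assumptions chosen only for illustration, not values taken from the cited studies.

# Minimal planar spring-mass ("SLIP") stance-phase sketch.
# Body = point mass; leg = massless linear spring pinned at the foot (origin).
# All parameter values are illustrative assumptions, not data from the cited papers.
import math

m, k, L0, g = 80.0, 20000.0, 1.0, 9.81               # mass [kg], stiffness [N/m], rest leg length [m], gravity [m/s^2]
alpha = 0.3                                           # assumed landing angle of the leg from vertical [rad]
x, y = -L0 * math.sin(alpha), L0 * math.cos(alpha)    # mass position relative to the foot at touchdown
vx, vy = 5.0, -0.5                                    # assumed forward and downward landing velocity [m/s]

dt, t = 1e-4, 0.0
while True:
    L = math.hypot(x, y)                              # current leg length
    F = k * (L0 - L)                                  # spring force magnitude (positive while compressed)
    ax = F * (x / L) / m                              # spring force acts along the leg, pushing the mass away from the foot
    ay = F * (y / L) / m - g
    vx, vy = vx + ax * dt, vy + ay * dt               # semi-implicit Euler step
    x, y = x + vx * dt, y + vy * dt
    t += dt
    if (L >= L0 and vy > 0) or y <= 0.0 or t > 1.0:   # take-off, collapse, or safety timeout
        break

print(f"stance ends after {t:.3f} s with velocity ({vx:.2f}, {vy:.2f}) m/s")

With softer (more compliant) springs and matched landing angles, the same equations also reproduce the walking-like double-humped ground reaction forces discussed in PMID 17015312; only the parameter choices differ.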
BMC Medical Informatics and Decision Making | 31438942 | PMC6704521 | 10.1186/s12911-019-0885-x | Health timeline: an insight-based study of a timeline visualization of clinical data | Background: The increasing complexity and volume of clinical data pose a challenge in the decision-making process. Data visualizations can assist in this process by speeding up the time required to analyze and understand clinical data. Even though empirical experiments show that visualizations facilitate clinical data understanding, a consistent method to assess their effectiveness is still missing. Methods: The insight-based methodology determines the quality of insights a user acquires from the visualization. Insights receive a value from one to five points based on domain-specific criteria. Five professional psychiatrists took part in the study using real de-identified clinical data spanning 4 years of medical history. Results: A total of 50 assessments were transcribed and analyzed. Comparing a total of 558 insights using Health Timeline and 576 without, the mean value using the Timeline (1.7) was higher than without (1.26; p<0.01); similarly, the cumulative value with the Timeline (11.87) was higher than without (10.96; p<0.01). The average time required to formulate the first insight with the Timeline was higher (13.16 s) than without (7 s; p<0.01). Seven insights achieved the highest possible value using Health Timeline while none were obtained without it. Conclusions: The Health Timeline effectively improved understanding of clinical data and helped participants recognize complex patterns from the data. By applying the insight-based methodology, the effectiveness of the Health Timeline was quantified, documented and demonstrated. As an outcome of this exercise, we propose the use of such methodologies to measure the effectiveness of visualizations that assist the clinical decision-making process. Electronic supplementary material: The online version of this article (10.1186/s12911-019-0885-x) contains supplementary material, which is available to authorized users. | Related work: The findings of Lesselroth and Pieczkiewicz [5], as well as those of Rind and colleagues [6], are further discussed. The study reported in this article relates to other computerized solutions. The key distinction is the assessment methodology. Scope: The focus of this study is on time-based visualizations of clinical data (EHR) and the assessment methodology used to validate them. Time-based visualizations are graphical representations of data collected over time. The research literature has a large number of data visualization techniques that vary in their strategies. However, we refined our search to include only visualization tools based on time and the longitudinal nature of the data. The search was narrowed down further to only include those techniques that were used in the context of clinical data. We were particularly interested in the assessment methodology used to evaluate these visualizations. Review of similar solutions in the healthcare context: LifeLines is a computerized tool that displays clinical data [13] using dots positioned along horizontal lines [14]. A study showed that participants responded 50% faster to a "post-experimental memory test" (p<0.004) [15]. LifeLines was extended in a second version with support for aggregation of temporal events [16]. The focus was on emphasizing prevalence and temporal order of the clinical data.
A study revealed that the clinicians were able to confirm hypotheses on the hospital length of stay of patients. LifeLines [13] and LifeLines2 [16] are Java software applications, thus they must be installed in a Java-capable device. These tools provide data filters to narrow down the data exploration and enable the user to focus on certain aspects of the timeline. TimeLine is a visualization software that displays EHRs chronologically [17]. The data is grouped by categories and displayed along a visual timeline. No assessment of the software was reported in the article. Timeline [17] is the closest application to Health Timeline. It features web support, EHR interfacing and a timeline representation of data with a focus on oncology. Timeline also has a large number of features such as causal models, imaging files, data search, disease progression visualization and data category toggling. It is probable that using Timeline involves a learning curve as it offers several features that would require the user to become acquainted with. LifeFlow is a visualization tool that summarizes data in sequences using temporal spacing of events [18]. One physician took part in a briefing interview about the visualization of patient transportation data. EventFlow is a drug prescription pattern visualization [19]. A study on the use of asthma drugs was conducted to identify patterns that complied with regulations. LifeFlow and EventFlow are also software applications that require installation. The data are not visualized in a timeline but instead the representation is chronologically ordered as a series of events and outcomes. These visualizations are optimized for understanding the causes and outcomes of patient admissions to hospital. Assessment methods: Bertini and colleagues [20] made a strong case for the objective assessment of visualization tools. A literature review on assessment methods for information visualization reports a number of practical cases and proposes a classification of these methods. The Visual Data Analysis and Reasoning (VDAR) classification group is relevant to our study because it emphasizes the decision-making process, knowledge discovery and visual data analysis. We found that no assessments of this kind have been conducted using clinical data and medical experts. Summary: Time-based visualizations have been found helpful in several use cases. However, without a systematic assessment method, it is difficult to demonstrate how they improve the understanding of the data. To provide a contribution towards the good practice of assessment methods for clinical data visualizations, we conducted and documented the assessment of the Health Timeline using the insight-based methodology. This methodology has been previously used in bioinformatics [8, 21–23] and well-being data analysis [9, 24]. | [
"16711210",
"17073373",
"19834171",
"17674629",
"16138554",
"17993707",
"22144529",
"23782289",
"21350275"
] | [
{
"pmid": "17073373",
"title": "An insight-based longitudinal study of visual analytics.",
"abstract": "Visualization tools are typically evaluated in controlled studies that observe the short-term usage of these tools by participants on preselected data sets and benchmark tasks. Though such studies provide useful suggestions, they miss the long-term usage of the tools. A longitudinal study of a bioinformatics data set analysis is reported here. The main focus of this work is to capture the entire analysis process that an analyst goes through from a raw data set to the insights sought from the data. The study provides interesting observations about the use of visual representations and interaction mechanisms provided by the tools, and also about the process of insight generation in general. This deepens our understanding of visual analytics, guides visualization developers in creating more effective visualization tools in terms of user requirements, and guides evaluators in designing future studies that are more representative of insights sought by users from their data sets."
},
{
"pmid": "19834171",
"title": "Temporal summaries: supporting temporal categorical searching, aggregation and comparison.",
"abstract": "When analyzing thousands of event histories, analysts often want to see the events as an aggregate to detect insights and generate new hypotheses about the data. An analysis tool must emphasize both the prevalence and the temporal ordering of these events. Additionally, the analysis tool must also support flexible comparisons to allow analysts to gather visual evidence. In a previous work, we introduced align, rank, and filter (ARF) to accentuate temporal ordering. In this paper, we present temporal summaries, an interactive visualization technique that highlights the prevalence of event occurrences. Temporal summaries dynamically aggregate events in multiple granularities (year, month, week, day, hour, etc.) for the purpose of spotting trends over time and comparing several groups of records. They provide affordances for analysts to perform temporal range filters. We demonstrate the applicability of this approach in two extensive case studies with analysts who applied temporal summaries to search, filter, and look for patterns in electronic health records and academic records."
},
{
"pmid": "17674629",
"title": "TimeLine: visualizing integrated patient records.",
"abstract": "An increasing amount of data is now accrued in medical information systems; however, the organization of this data is still primarily driven by data source, and does not support the cognitive processes of physicians. As such, new methods to visualize patient medical records are becoming imperative in order to assist physicians with clinical tasks and medical decision-making. The TimeLine system is a problem-centric temporal visualization for medical data: information contained with medical records is reorganized around medical disease entities and conditions. Automatic construction of the TimeLine display from existing clinical repositories occurs in three steps: 1) data access, which uses an eXtensible Markup Language (XML) data representation to handle distributed, heterogeneous medical databases; 2) data mapping and reorganization, reformulating data into hierarchical, problemcentric views; and 3) data visualization, which renders the display to a target presentation platform. Leveraging past work, we describe the latter two components of the TimeLine system in this paper, and the issues surrounding the creation of medical problems lists and temporal visualization of medical data. A driving factor in the development of TimeLine was creating a foundation upon which new data types and the visualization metaphors could be readily incorporated."
},
{
"pmid": "16138554",
"title": "An insight-based methodology for evaluating bioinformatics visualizations.",
"abstract": "High-throughput experiments, such as gene expression microarrays in the life sciences, result in very large data sets. In response, a wide variety of visualization tools have been created to facilitate data analysis. A primary purpose of these tools is to provide biologically relevant insight into the data. Typically, visualizations are evaluated in controlled studies that measure user performance on predetermined tasks or using heuristics and expert reviews. To evaluate and rank bioinformatics visualizations based on real-world data analysis scenarios, we developed a more relevant evaluation method that focuses on data insight. This paper presents several characteristics of insight that enabled us to recognize and quantify it in open-ended user tests. Using these characteristics, we evaluated five microarray visualization tools on the amount and types of insight they provide and the time it takes to acquire it. The results of the study guide biologists in selecting a visualization tool based on the type of their microarray data, visualization designers on the key role of user interaction techniques, and evaluators on a new approach for evaluating the effectiveness of visualizations for providing insight. Though we used the method to analyze bioinformatics visualizations, it can be applied to other domains."
},
{
"pmid": "17993707",
"title": "Promoting insight-based evaluation of visualizations: from contest to benchmark repository.",
"abstract": "Information Visualization (InfoVis) is now an accepted and growing field but questions remain about the best uses for and the maturity of novel visualizations. Usability studies and controlled experiments are helpful but generalization is difficult. We believe that the systematic development of benchmarks will facilitate the comparison of techniques and help identify their strengths under different conditions. We were involved in the organization and management of three information visualization contests for the 2003, 2004 and 2005 IEEE InfoVis Symposia, which requested teams to report on insights gained while exploring data. We give a summary of the state of the art of evaluation in information visualization, describe the three contests, summarize their results, discuss outcomes and lessons learned, and conjecture the future of visualization contests. All materials produced by the contests are archived in the InfoVis Benchmark Repository."
},
{
"pmid": "22144529",
"title": "Empirical Studies in Information Visualization: Seven Scenarios.",
"abstract": "We take a new, scenario-based look at evaluation in information visualization. Our seven scenarios, evaluating visual data analysis and reasoning, evaluating user performance, evaluating user experience, evaluating environments and work practices, evaluating communication through visualization, evaluating visualization algorithms, and evaluating collaborative data analysis were derived through an extensive literature review of over 800 visualization publications. These scenarios distinguish different study goals and types of research questions and are illustrated through example studies. Through this broad survey and the distillation of these scenarios, we make two contributions. One, we encapsulate the current practices in the information visualization research community and, two, we provide a different approach to reaching decisions about what might be the most effective evaluation of a given information visualization. Scenarios can be used to choose appropriate research questions and goals and the provided examples can be consulted for guidance on how to design one's own study."
}
] |
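Editorial note (not part of the indexed records): the Health Timeline record above scores each insight from one to five points and then compares mean insight values between the with-timeline and without-timeline conditions (1.7 vs. 1.26, p<0.01). The Python sketch below reproduces only that analysis pattern on made-up numbers; the per-assessment values are synthetic stand-ins, and the paired t-test is one plausible choice of test, not a claim about the statistics actually used in the study.

# Synthetic illustration of the insight-value comparison; all numbers are invented.
from statistics import mean
from scipy.stats import ttest_rel   # paired t-test on per-assessment mean insight values

# One entry per assessment: mean insight value (1-5 scale) with and without the timeline.
with_timeline    = [1.9, 1.6, 1.8, 1.5, 1.7, 1.8, 1.6, 1.9, 1.7, 1.5]   # assumed values
without_timeline = [1.3, 1.2, 1.4, 1.1, 1.3, 1.2, 1.3, 1.4, 1.2, 1.2]   # assumed values

res = ttest_rel(with_timeline, without_timeline)
print(f"mean with timeline:    {mean(with_timeline):.2f}")
print(f"mean without timeline: {mean(without_timeline):.2f}")
print(f"paired t-test: t = {res.statistic:.2f}, p = {res.pvalue:.4f}")

In the study itself the comparison spanned hundreds of individual insights across fifty assessments; the point here is only the shape of the analysis, not its numbers.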
PLoS Computational Biology | 31430280 | PMC6716678 | 10.1371/journal.pcbi.1006604 | Learning to synchronize: How biological agents can couple neural task modules for dealing with the stability-plasticity dilemma | We provide a novel computational framework on how biological and artificial agents can learn to flexibly couple and decouple neural task modules for cognitive processing. In this way, they can address the stability-plasticity dilemma. For this purpose, we combine two prominent computational neuroscience principles, namely Binding by Synchrony and Reinforcement Learning. The model learns to synchronize task-relevant modules, while also learning to desynchronize currently task-irrelevant modules. As a result, old (but currently task-irrelevant) information is protected from overwriting (stability) while new information can be learned quickly in currently task-relevant modules (plasticity). We combine learning to synchronize with task modules that learn via one of several classical learning algorithms (Rescorla-Wagner, backpropagation, Boltzmann machines). The resulting combined model is tested on a reversal learning paradigm where it must learn to switch between three different task rules. We demonstrate that our combined model has significant computational advantages over the original network without synchrony, in terms of both stability and plasticity. Importantly, the resulting models’ processing dynamics are also consistent with empirical data and provide empirically testable hypotheses for future MEG/EEG studies. | Related work: The current work relies heavily on previous modeling work of cognitive control processes. For instance, in the current model the LFC functions as a holder of task sets which bias lower-level processing pathways [29], [67]. It does this in cooperation with the MFC. Here, the aMFC determines when to switch between lower-level task modules. Additionally, the amount of control/effort that is exerted in the model is determined by the RL processes in the aMFC [44–46]. More specifically, negative prediction errors will determine the amount of control that is needed by strongly increasing the pMFC signal [42]. This is consistent with earlier work proposing a key role of MFC in effort allocation [44], [45], [68]. In the current model, the MFC, together with the LFC, functions as a hierarchically higher network that uses RL to estimate its own task-solving proficiency. Based on its estimate of the value of a module, and the reward that accumulates across trials, it evaluates whether the current task strategy is suited for the current environment. Based on this evaluation, it will decide to stay with the current strategy or switch to another. More specifically, the value learned by the RL unit acts as a measure of confidence that the model has in its own accuracy. The model uses this measure of confidence to adjust future behavior, a process that has been labeled as meta-cognition [69], [70]. This is in line with previous modeling work that described the prefrontal cortex as a reinforcement meta-learner [43], [46–48]. One problem we addressed in this work was the stability-plasticity dilemma. As we described before, previous work on this dilemma can broadly be divided into two classes of solutions. The first class is based on mixing old and new information [2–5]. The second class is based on protection of old information. Our solution also exploited the principle of protection.
Future work must develop biologically plausible implementations of the mixing principle too, and investigate to what extent mixing and protection scale up to larger problems. | [
"20857486",
"7624455",
"12475710",
"15318331",
"21939679",
"15721245",
"28292907",
"26447583",
"16150631",
"2922407",
"17569862",
"21693490",
"9377276",
"21886616",
"11488380",
"28253078",
"16397487",
"2551392",
"20194767",
"24835663",
"19969093",
"18380674",
"17257860",
"26100868",
"22717205",
"12374324",
"26378874",
"25437491",
"25653603",
"24239852",
"19524531",
"9038284",
"10846167",
"16022602",
"11283309",
"15488417",
"9989408",
"17532060",
"17548233",
"23522038",
"24672013",
"15944135",
"23177956",
"22426255",
"25460074",
"2200075",
"23889930",
"22134477",
"15134842",
"26231622"
] | [
{
"pmid": "20857486",
"title": "How hippocampus and cortex contribute to recognition memory: revisiting the complementary learning systems model.",
"abstract": "We describe how the Complementary Learning Systems neural network model of recognition memory (Norman and O'Reilly (2003) Psychol Rev 104:611-646) can shed light on current debates regarding hippocampal and cortical contributions to recognition memory. We review simulation results illustrating three critical differences in how (according to the model) hippocampus and cortex contribute to recognition memory, all of which derive from the hippocampus' use of pattern separated representations. Pattern separation makes the hippocampus especially well-suited for discriminating between studied items and related lures; it makes the hippocampus especially poorly suited for computing global match; and it imbues the hippocampal ROC curve with a Y-intercept > 0. We also describe a key boundary condition on these differences: When the average level of similarity between items in an experiment is very high, hippocampal pattern separation can fail, at which point the hippocampal model will start to behave like the cortical model. We describe the implications of these simulation results for extant debates over how to describe hippocampal versus cortical contributions and how to measure these contributions."
},
{
"pmid": "7624455",
"title": "Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory.",
"abstract": "Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes. Models that learn via changes to connections help explain this organization. These models discover the structure in ensembles of items if learning of each item is gradual and interleaved with learning about other items. This suggests that the neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them into structured neocortical memory systems."
},
{
"pmid": "12475710",
"title": "Hippocampal and neocortical contributions to memory: advances in the complementary learning systems framework.",
"abstract": "The complementary learning systems framework provides a simple set of principles, derived from converging biological, psychological and computational constraints, for understanding the differential contributions of the neocortex and hippocampus to learning and memory. The central principles are that the neocortex has a low learning rate and uses overlapping distributed representations to extract the general statistical structure of the environment, whereas the hippocampus learns rapidly using separated representations to encode the details of specific events while minimizing interference. In recent years, we have instantiated these principles in working computational models, and have used these models to address human and animal learning and memory findings, across a wide range of domains and paradigms. Here, we review a few representative applications of our models, focusing on two domains: recognition memory and animal learning in the fear-conditioning paradigm. In both domains, the models have generated novel predictions that have been tested and confirmed."
},
{
"pmid": "15318331",
"title": "Mode shifting between storage and recall based on novelty detection in oscillating hippocampal circuits.",
"abstract": "It has been suggested that hippocampal mode shifting between a storage and a retrieval state might be under the control of acetylcholine (ACh) levels, as set by an autoregulatory hippocampo-septo-hippocampal loop. The present study investigates how such a mechanism might operate in a large-scale connectionist model of this circuitry that takes into account the major hippocampal subdivisions, oscillatory population dynamics and the time scale on which ACh exerts its effects in the hippocampus. The model assumes that hippocampal mode shifting is regulated by a novelty signal generated in the hippocampus. The simulations suggest that this signal originates in the dentate. Novel patterns presented to this structure lead to brief periods of depressed firing in the hippocampal circuitry. During these periods, an inhibitory influence of the hippocampus on the septum is lifted, leading to increased firing of cholinergic neurons. The resulting increase in ACh release in the hippocampus produces network dynamics that favor learning over retrieval. Resumption of activity in the hippocampus leads to the reinstatement of inhibition. Despite theta-locked rhythmic firing of ACh neurons in the septum, ACh modulation in the model fluctuates smoothly on a time scale of seconds. It is shown that this is compatible with the time scale on which memory processes take place. A number of strong predictions regarding memory function are derived from the model."
},
{
"pmid": "21939679",
"title": "Relearning in semantic dementia reflects contributions from both medial temporal lobe episodic and degraded neocortical semantic systems: evidence in support of the complementary learning systems theory.",
"abstract": "When relearning words, patients with semantic dementia (SD) exhibit a characteristic rigidity, including a failure to generalise names to untrained exemplars of trained concepts. This has been attributed to an over-reliance on the medial temporal region which captures information in sparse, non-overlapping and therefore rigid representations. The current study extends previous investigations of SD relearning by re-examining the additional contribution made by the degraded cortical semantic system. The standard relearning protocol was modified by careful selection of foils to show that people with semantic dementia were sometimes able to extend their learning appropriately but that this correct generalisation was minimal (i.e. the patients under-generalised their learning). The revised assessment procedure highlighted the fact that, after relearning, the participants also incorrectly over-generalised the learned label to closely related concepts. It is unlikely that these behaviours would occur if the participants had only formed sparse hippocampal representations. These novel data build on the notion that people with semantic dementia engage both the degraded cortical semantic (neocortex) and the episodic (medial temporal) systems to learn. Because of neocortical damage to the anterior temporal lobes, relearning is disordered with a characteristic pattern of under- and over-generalisation."
},
{
"pmid": "15721245",
"title": "Cascade models of synaptically stored memories.",
"abstract": "Storing memories of ongoing, everyday experiences requires a high degree of plasticity, but retaining these memories demands protection against changes induced by further activity and experience. Models in which memories are stored through switch-like transitions in synaptic efficacy are good at storing but bad at retaining memories if these transitions are likely, and they are poor at storage but good at retention if they are unlikely. We construct and study a model in which each synapse has a cascade of states with different levels of plasticity, connected by metaplastic transitions. This cascade model combines high levels of memory storage with long retention times and significantly outperforms alternative models. As a result, we suggest that memory storage requires synapses with multiple states exhibiting dynamics over a wide range of timescales, and we suggest experimental tests of this hypothesis."
},
{
"pmid": "28292907",
"title": "Overcoming catastrophic forgetting in neural networks.",
"abstract": "The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially."
},
{
"pmid": "26447583",
"title": "Rhythms for Cognition: Communication through Coherence.",
"abstract": "I propose that synchronization affects communication between neuronal groups. Gamma-band (30-90 Hz) synchronization modulates excitation rapidly enough that it escapes the following inhibition and activates postsynaptic neurons effectively. Synchronization also ensures that a presynaptic activation pattern arrives at postsynaptic neurons in a temporally coordinated manner. At a postsynaptic neuron, multiple presynaptic groups converge, e.g., representing different stimuli. If a stimulus is selected by attention, its neuronal representation shows stronger and higher-frequency gamma-band synchronization. Thereby, the attended stimulus representation selectively entrains postsynaptic neurons. The entrainment creates sequences of short excitation and longer inhibition that are coordinated between pre- and postsynaptic groups to transmit the attended representation and shut out competing inputs. The predominantly bottom-up-directed gamma-band influences are controlled by predominantly top-down-directed alpha-beta-band (8-20 Hz) influences. Attention itself samples stimuli at a 7-8 Hz theta rhythm. Thus, several rhythms and their interplay render neuronal communication effective, precise, and selective."
},
{
"pmid": "16150631",
"title": "A mechanism for cognitive dynamics: neuronal communication through neuronal coherence.",
"abstract": "At any one moment, many neuronal groups in our brain are active. Microelectrode recordings have characterized the activation of single neurons and fMRI has unveiled brain-wide activation patterns. Now it is time to understand how the many active neuronal groups interact with each other and how their communication is flexibly modulated to bring about our cognitive dynamics. I hypothesize that neuronal communication is mechanistically subserved by neuronal coherence. Activated neuronal groups oscillate and thereby undergo rhythmic excitability fluctuations that produce temporal windows for communication. Only coherently oscillating neuronal groups can interact effectively, because their communication windows for input and for output are open at the same times. Thus, a flexible pattern of coherence defines a flexible communication structure, which subserves our cognitive flexibility."
},
{
"pmid": "2922407",
"title": "Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex.",
"abstract": "In areas 17 and 18 of the cat visual cortex the firing probability of neurons, in response to the presentation of optimally aligned light bars within their receptive field, oscillates with a peak frequency near 40 Hz. The neuronal firing pattern is tightly correlated with the phase and amplitude of an oscillatory local field potential recorded through the same electrode. The amplitude of the local field-potential oscillations are maximal in response to stimuli that match the orientation and direction preference of the local cluster of neurons. Single and multiunit recordings from the dorsal lateral geniculate nucleus of the thalamus showed no evidence of oscillations of the neuronal firing probability in the range of 20-70 Hz. The results demonstrate that local neuronal populations in the visual cortex engage in stimulus-specific synchronous oscillations resulting from an intracortical mechanism. The oscillatory responses may provide a general mechanism by which activity patterns in spatially separate regions of the cortex are temporally coordinated."
},
{
"pmid": "17569862",
"title": "Modulation of neuronal interactions through neuronal synchronization.",
"abstract": "Brain processing depends on the interactions between neuronal groups. Those interactions are governed by the pattern of anatomical connections and by yet unknown mechanisms that modulate the effective strength of a given connection. We found that the mutual influence among neuronal groups depends on the phase relation between rhythmic activities within the groups. Phase relations supporting interactions between the groups preceded those interactions by a few milliseconds, consistent with a mechanistic role. These effects were specific in time, frequency, and space, and we therefore propose that the pattern of synchronization flexibly determines the pattern of neuronal interactions."
},
{
"pmid": "21693490",
"title": "Mechanisms of hierarchical reinforcement learning in corticostriatal circuits 1: computational analysis.",
"abstract": "Growing evidence suggests that the prefrontal cortex (PFC) is organized hierarchically, with more anterior regions having increasingly abstract representations. How does this organization support hierarchical cognitive control and the rapid discovery of abstract action rules? We present computational models at different levels of description. A neural circuit model simulates interacting corticostriatal circuits organized hierarchically. In each circuit, the basal ganglia gate frontal actions, with some striatal units gating the inputs to PFC and others gating the outputs to influence response selection. Learning at all of these levels is accomplished via dopaminergic reward prediction error signals in each corticostriatal circuit. This functionality allows the system to exhibit conditional if-then hypothesis testing and to learn rapidly in environments with hierarchical structure. We also develop a hybrid Bayesian-reinforcement learning mixture of experts (MoE) model, which can estimate the most likely hypothesis state of individual participants based on their observed sequence of choices and rewards. This model yields accurate probabilistic estimates about which hypotheses are attended by manipulating attentional states in the generative neural model and recovering them with the MoE model. This 2-pronged modeling approach leads to multiple quantitative predictions that are tested with functional magnetic resonance imaging in the companion paper."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "21886616",
"title": "Value and prediction error in medial frontal cortex: integrating the single-unit and systems levels of analysis.",
"abstract": "The role of the anterior cingulate cortex (ACC) in cognition has been extensively investigated with several techniques, including single-unit recordings in rodents and monkeys and EEG and fMRI in humans. This has generated a rich set of data and points of view. Important theoretical functions proposed for ACC are value estimation, error detection, error-likelihood estimation, conflict monitoring, and estimation of reward volatility. A unified view is lacking at this time, however. Here we propose that online value estimation could be the key function underlying these diverse data. This is instantiated in the reward value and prediction model (RVPM). The model contains units coding for the value of cues (stimuli or actions) and units coding for the differences between such values and the actual reward (prediction errors). We exposed the model to typical experimental paradigms from single-unit, EEG, and fMRI research to compare its overall behavior with the data from these studies. The model reproduced the ACC behavior of previous single-unit, EEG, and fMRI studies on reward processing, error processing, conflict monitoring, error-likelihood estimation, and volatility estimation, unifying the interpretations of the role performed by the ACC in some aspects of cognition."
},
{
"pmid": "11488380",
"title": "Conflict monitoring and cognitive control.",
"abstract": "A neglected question regarding cognitive control is how control processes might detect situations calling for their involvement. The authors propose here that the demand for control may be evaluated in part by monitoring for conflicts in information processing. This hypothesis is supported by data concerning the anterior cingulate cortex, a brain area involved in cognitive control, which also appears to respond to the occurrence of conflict. The present article reports two computational modeling studies, serving to articulate the conflict monitoring hypothesis and examine its implications. The first study tests the sufficiency of the hypothesis to account for brain activation data, applying a measure of conflict to existing models of tasks shown to engage the anterior cingulate. The second study implements a feedback loop connecting conflict monitoring to cognitive control, using this to simulate a number of important behavioral phenomena."
},
{
"pmid": "28253078",
"title": "Binding by Random Bursts: A Computational Model of Cognitive Control.",
"abstract": "A neural synchrony model of cognitive control is proposed. It construes cognitive control as a higher-level action to synchronize lower-level brain areas. Here, a controller prefrontal area (medial frontal cortex) can synchronize two cortical processing areas. The synchrony is achieved by a random theta frequency-locked neural burst sent to both areas. The choice of areas that receive this burst is determined by lateral frontal cortex. As a result of this synchrony, communication between the two areas becomes more efficient. The model is tested on the classical Stroop cognitive control task, and its operation is explored in several simulations. Both reactive and proactive controls are implemented via theta power modulation. Increasing theta power improves behavioral performance; furthermore, via theta-gamma phase-amplitude coupling, theta also increases gamma frequency power and synchrony in posterior processing areas. Thus, the model solves a central computational problem for cognitive control (how to allow rapid communication between arbitrary brain areas), while making rich contact with behavioral and neurophysiological data."
},
{
"pmid": "2551392",
"title": "Modeling the olfactory bulb and its neural oscillatory processings.",
"abstract": "The olfactory bulb of mammals aids in the discrimination of odors. A mathematical model based on the bulbar anatomy and electrophysiology is described. Simulations of the highly non-linear model produce a 35-60 Hz modulated activity which is coherent across the bulb. The decision states (for the odor information) in this system can be thought of as stable cycles, rather than point stable states typical of simpler neuro-computing models. Analysis shows that a group of coupled non-linear oscillators are responsible for the oscillatory activities. The output oscillation pattern of the bulb is determined by the odor input. The model provides a framework in which to understand the transform between odor input and the bulbar output to olfactory cortex. There is significant correspondence between the model behavior and observed electrophysiology."
},
{
"pmid": "20194767",
"title": "Theta-activity in anterior cingulate cortex predicts task rules and their adjustments following errors.",
"abstract": "Accomplishing even simple tasks depend on neuronal circuits to configure how incoming sensory stimuli map onto responses. Controlling these stimulus-response (SR) mapping rules relies on a cognitive control network comprising the anterior cingulate cortex (ACC). Single neurons within the ACC convey information about currently relevant SR mapping rules and signal unexpected action outcomes, which can be used to optimize behavioral choices. However, its functional significance and the mechanistic means of interaction with other nodes of the cognitive control network remain elusive and poorly understood. Here, we report that core aspects of cognitive control are encoded by rhythmic theta-band activity within neuronal circuits in the ACC. Throughout task performance, theta-activity predicted which of two SR mapping rules will be established before processing visual target information. Task-selective theta-activity emerged particularly early during those trials, which required the adjustment of SR rules following an erroneous rule representation in the preceding trial. These findings demonstrate a functional correlation of cognitive control processes and oscillatory theta-band activity in macaque ACC. Moreover, we report that spike output of a subset of cells in ACC is synchronized to predictive theta-activity, suggesting that the theta-cycle could serve as a temporal reference for coordinating local task selective computations across a larger network of frontal areas and the hippocampus to optimize and adjust the processing routes of sensory and motor circuits to achieve efficient sensory-motor control."
},
{
"pmid": "24835663",
"title": "Frontal theta as a mechanism for cognitive control.",
"abstract": "Recent advancements in cognitive neuroscience have afforded a description of neural responses in terms of latent algorithmic operations. However, the adoption of this approach to human scalp electroencephalography (EEG) has been more limited, despite the ability of this methodology to quantify canonical neuronal processes. Here, we provide evidence that theta band activities over the midfrontal cortex appear to reflect a common computation used for realizing the need for cognitive control. Moreover, by virtue of inherent properties of field oscillations, these theta band processes may be used to communicate this need and subsequently implement such control across disparate brain regions. Thus, frontal theta is a compelling candidate mechanism by which emergent processes, such as 'cognitive control', may be biophysically realized."
},
{
"pmid": "19969093",
"title": "Frontal theta links prediction errors to behavioral adaptation in reinforcement learning.",
"abstract": "Investigations into action monitoring have consistently detailed a frontocentral voltage deflection in the event-related potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the feedback-related negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single-trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single-trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Mediofrontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single-trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations, with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice."
},
{
"pmid": "18380674",
"title": "Learning-related changes in reward expectancy are reflected in the feedback-related negativity.",
"abstract": "The feedback-related negativity (FRN) has been hypothesized to be linked to reward-based learning. While many studies have shown that the FRN only occurs in response to unexpected negative outcomes, the relationship between the magnitude of negative prediction errors and FRN amplitude remains a matter of debate. The present study aimed to elucidate this relationship with a new behavioural procedure that allowed subjects to predict precise reward probabilities by learning an explicit rule. Insight into the rule did not only influence subjects' choice behaviour, but also outcome-related event-related potentials. After subjects had learned the rule, the FRN amplitude difference between non-reward and reward mirrored the magnitude of the negative prediction error, i.e. it was larger for less likely negative outcomes. Source analysis linked this effect to the anterior cingulate cortex. P300 amplitude was also modulated by outcome valence and expectancy. It was larger for positive and unexpected outcomes. It remains to be clarified, however, whether the P300 reflects a positive prediction error."
},
{
"pmid": "17257860",
"title": "Reward expectation modulates feedback-related negativity and EEG spectra.",
"abstract": "The ability to evaluate outcomes of previous decisions is critical to adaptive decision-making. The feedback-related negativity (FRN) is an event-related potential (ERP) modulation that distinguishes losses from wins, but little is known about the effects of outcome probability on these ERP responses. Further, little is known about the frequency characteristics of feedback processing, for example, event-related oscillations and phase synchronizations. Here, we report an EEG experiment designed to address these issues. Subjects engaged in a probabilistic reinforcement learning task in which we manipulated, across blocks, the probability of winning and losing to each of two possible decision options. Behaviorally, all subjects quickly adapted their decision-making to maximize rewards. ERP analyses revealed that the probability of reward modulated neural responses to wins, but not to losses. This was seen both across blocks as well as within blocks, as learning progressed. Frequency decomposition via complex wavelets revealed that EEG responses to losses, compared to wins, were associated with enhanced power and phase coherence in the theta frequency band. As in the ERP analyses, power and phase coherence values following wins but not losses were modulated by reward probability. Some findings between ERP and frequency analyses diverged, suggesting that these analytic approaches provide complementary insights into neural processing. These findings suggest that the neural mechanisms of feedback processing may differ between wins and losses."
},
{
"pmid": "26100868",
"title": "Theta-gamma coordination between anterior cingulate and prefrontal cortex indexes correct attention shifts.",
"abstract": "Anterior cingulate and lateral prefrontal cortex (ACC/PFC) are believed to coordinate activity to flexibly prioritize the processing of goal-relevant over irrelevant information. This between-area coordination may be realized by common low-frequency excitability changes synchronizing segregated high-frequency activations. We tested this coordination hypothesis by recording in macaque ACC/PFC during the covert utilization of attention cues. We found robust increases of 5-10 Hz (theta) to 35-55 Hz (gamma) phase-amplitude correlation between ACC and PFC during successful attention shifts but not before errors. Cortical sites providing theta phases (i) showed a prominent cue-induced phase reset, (ii) were more likely in ACC than PFC, and (iii) hosted neurons with burst firing events that synchronized to distant gamma activity. These findings suggest that interareal theta-gamma correlations could follow mechanistically from a cue-triggered reactivation of rule memory that synchronizes theta across ACC/PFC."
},
{
"pmid": "22717205",
"title": "Value and prediction error estimation account for volatility effects in ACC: a model-based fMRI study.",
"abstract": "In order to choose the best action for maximizing fitness, mammals can estimate the reward expectations (value) linked to available actions based on past environmental outcomes. Value updates are performed by comparing the current value with the actual environmental outcomes (prediction error). The anterior cingulate cortex (ACC) has been shown to be critically involved in the computation of value and its variability across time (volatility). Previously, we proposed a new neural model of the ACC based on single-unit ACC neurophysiology, the Reward Value and Prediction Model (RVPM). Here, using the RVPM in computer simulations and in a model-based fMRI study, we found that highly uncertain but non-volatile environments activate ACC more than volatile environments, demonstrating that value estimation by means of prediction error computation can account for the effect of volatility in ACC. These findings suggest that ACC response to volatility can be parsimoniously explained by basic ACC reward processing."
},
{
"pmid": "12374324",
"title": "The neural basis of human error processing: reinforcement learning, dopamine, and the error-related negativity.",
"abstract": "The authors present a unified account of 2 neural systems concerned with the development and expression of adaptive behaviors: a mesencephalic dopamine system for reinforcement learning and a \"generic\" error-processing system associated with the anterior cingulate cortex. The existence of the error-processing system has been inferred from the error-related negativity (ERN), a component of the event-related brain potential elicited when human participants commit errors in reaction-time tasks. The authors propose that the ERN is generated when a negative reinforcement learning signal is conveyed to the anterior cingulate cortex via the mesencephalic dopamine system and that this signal is used by the anterior cingulate cortex to modify performance on the task at hand. They provide support for this proposal using both computational modeling and psychophysiological experimentation."
},
{
"pmid": "26378874",
"title": "Hierarchical Error Representation: A Computational Model of Anterior Cingulate and Dorsolateral Prefrontal Cortex.",
"abstract": "Anterior cingulate and dorsolateral prefrontal cortex (ACC and dlPFC, respectively) are core components of the cognitive control network. Activation of these regions is routinely observed in tasks that involve monitoring the external environment and maintaining information in order to generate appropriate responses. Despite the ubiquity of studies reporting coactivation of these two regions, a consensus on how they interact to support cognitive control has yet to emerge. In this letter, we present a new hypothesis and computational model of ACC and dlPFC. The error representation hypothesis states that multidimensional error signals generated by ACC in response to surprising outcomes are used to train representations of expected error in dlPFC, which are then associated with relevant task stimuli. Error representations maintained in dlPFC are in turn used to modulate predictive activity in ACC in order to generate better estimates of the likely outcomes of actions. We formalize the error representation hypothesis in a new computational model based on our previous model of ACC. The hierarchical error representation (HER) model of ACC/dlPFC suggests a mechanism by which hierarchically organized layers within ACC and dlPFC interact in order to solve sophisticated cognitive tasks. In a series of simulations, we demonstrate the ability of the HER model to autonomously learn to perform structured tasks in a manner comparable to human performance, and we show that the HER model outperforms current deep learning networks by an order of magnitude."
},
{
"pmid": "25437491",
"title": "Hierarchical control over effortful behavior by rodent medial frontal cortex: A computational model.",
"abstract": "The anterior cingulate cortex (ACC) has been the focus of intense research interest in recent years. Although separate theories relate ACC function variously to conflict monitoring, reward processing, action selection, decision making, and more, damage to the ACC mostly spares performance on tasks that exercise these functions, indicating that they are not in fact unique to the ACC. Further, most theories do not address the most salient consequence of ACC damage: impoverished action generation in the presence of normal motor ability. In this study we develop a computational model of the rodent medial prefrontal cortex that accounts for the behavioral sequelae of ACC damage, unifies many of the cognitive functions attributed to it, and provides a solution to an outstanding question in cognitive control research-how the control system determines and motivates what tasks to perform. The theory derives from recent developments in the formal study of hierarchical control and learning that highlight computational efficiencies afforded when collections of actions are represented based on their conjoint goals. According to this position, the ACC utilizes reward information to select tasks that are then accomplished through top-down control over action selection by the striatum. Computational simulations capture animal lesion data that implicate the medial prefrontal cortex in regulating physical and cognitive effort. Overall, this theory provides a unifying theoretical framework for understanding the ACC in terms of the pivotal role it plays in the hierarchical organization of effortful behavior."
},
{
"pmid": "24239852",
"title": "From conflict management to reward-based decision making: actors and critics in primate medial frontal cortex.",
"abstract": "The role of the medial prefrontal cortex (mPFC) and especially the anterior cingulate cortex has been the subject of intense debate for the last decade. A number of theories have been proposed to account for its function. Broadly speaking, some emphasize cognitive control, whereas others emphasize value processing; specific theories concern reward processing, conflict detection, error monitoring, and volatility detection, among others. Here we survey and evaluate them relative to experimental results from neurophysiological, anatomical, and cognitive studies. We argue for a new conceptualization of mPFC, arising from recent computational modeling work. Based on reinforcement learning theory, these new models propose that mPFC is an Actor-Critic system. This system is aimed to predict future events including rewards, to evaluate errors in those predictions, and finally, to implement optimal skeletal-motor and visceromotor commands to obtain reward. This framework provides a comprehensive account of mPFC function, accounting for and predicting empirical results across different levels of analysis, including monkey neurophysiology, human ERP, human neuroimaging, and human behavior."
},
{
"pmid": "19524531",
"title": "How green is the grass on the other side? Frontopolar cortex and the evidence in favor of alternative courses of action.",
"abstract": "Behavioral flexibility is the hallmark of goal-directed behavior. Whereas a great deal is known about the neural substrates of behavioral adjustment when it is explicitly cued by features of the external environment, little is known about how we adapt our behavior when such changes are made on the basis of uncertain evidence. Using a Bayesian reinforcement-learning model and fMRI, we show that frontopolar cortex (FPC) tracks the relative advantage in favor of switching to a foregone alternative when choices are made voluntarily. Changes in FPC functional connectivity occur when subjects finally decide to switch to the alternative behavior. Moreover, interindividual variation in the FPC signal predicts interindividual differences in effectively adapting behavior. By contrast, ventromedial prefrontal cortex (vmPFC) encodes the relative value of the current decision. Collectively, these findings reveal complementary prefrontal computations essential for promoting short- and long-term behavioral flexibility."
},
{
"pmid": "9038284",
"title": "A parametric study of prefrontal cortex involvement in human working memory.",
"abstract": "Although recent neuroimaging studies suggest that prefrontal cortex (PFC) is involved in working memory (WM), the relationship between PFC activity and memory load has not yet been well-described in humans. Here we use functional magnetic resonance imaging (fMRI) to probe PFC activity during a sequential letter task in which memory load was varied in an incremental fashion. In all nine subjects studied, dorsolateral and left inferior regions of PFC were identified that exhibited a linear relationship between activity and WM load. Furthermore, these same regions were independently identified through direct correlations of the fMRI signal with a behavioral measure that indexes WM function during task performance. A second experiment, using whole-brain imaging techniques, both replicated these findings and identified additional brain regions showing a linear relationship with load, suggesting a distributed circuit that participates with PFC in subserving WM. Taken together, these results provide a \"dose-response curve\" describing the involvement of both PFC and related brain regions in WM function, and highlight the benefits of using graded, parametric designs in neuroimaging research."
},
{
"pmid": "10846167",
"title": "Dissociating the role of the dorsolateral prefrontal and anterior cingulate cortex in cognitive control.",
"abstract": "Theories of the regulation of cognition suggest a system with two necessary components: one to implement control and another to monitor performance and signal when adjustments in control are needed. Event-related functional magnetic resonance imaging and a task-switching version of the Stroop task were used to examine whether these components of cognitive control have distinct neural bases in the human brain. A double dissociation was found. During task preparation, the left dorsolateral prefrontal cortex (Brodmann's area 9) was more active for color naming than for word reading, consistent with a role in the implementation of control. In contrast, the anterior cingulate cortex (Brodmann's areas 24 and 32) was more active when responding to incongruent stimuli, consistent with a role in performance monitoring."
},
{
"pmid": "16022602",
"title": "An integrative theory of locus coeruleus-norepinephrine function: adaptive gain and optimal performance.",
"abstract": "Historically, the locus coeruleus-norepinephrine (LC-NE) system has been implicated in arousal, but recent findings suggest that this system plays a more complex and specific role in the control of behavior than investigators previously thought. We review neurophysiological and modeling studies in monkey that support a new theory of LC-NE function. LC neurons exhibit two modes of activity, phasic and tonic. Phasic LC activation is driven by the outcome of task-related decision processes and is proposed to facilitate ensuing behaviors and to help optimize task performance (exploitation). When utility in the task wanes, LC neurons exhibit a tonic activity mode, associated with disengagement from the current task and a search for alternative behaviors (exploration). Monkey LC receives prominent, direct inputs from the anterior cingulate (ACC) and orbitofrontal cortices (OFC), both of which are thought to monitor task-related utility. We propose that these frontal areas produce the above patterns of LC activity to optimize utility on both short and long timescales."
},
{
"pmid": "11283309",
"title": "An integrative theory of prefrontal cortex function.",
"abstract": "The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals. Its neural basis, however, has remained a mystery. Here, we propose that cognitive control stems from the active maintenance of patterns of activity in the prefrontal cortex that represent goals and the means to achieve them. They provide bias signals to other brain structures whose net effect is to guide the flow of activity along neural pathways that establish the proper mappings between inputs, internal states, and outputs needed to perform a given task. We review neurophysiological, neurobiological, neuroimaging, and computational studies that support this theory and discuss its implications as well as further issues to be addressed"
},
{
"pmid": "15488417",
"title": "Cooperation of the anterior cingulate cortex and dorsolateral prefrontal cortex for attention shifting.",
"abstract": "Attention shifting in the working memory system plays an important role in goal-oriented behavior, such as reading, reasoning, and driving, because it involves several cognitive processes. This study identified brain activity leading to individual differences in attention shifting for dual-task performance by using the group comparison approach. A large-scale pilot study was initially conducted to select suitable good and poor performers. The fMRI experiment consisted of a dual-task condition and two single-task conditions. Under the dual-task condition, participants verified the status of letters while concurrently retaining arrow orientations. The behavioral results indicated that accuracy in arrow recognition was better in the good performers than in the poor performers under the dual-task condition but not under the single-task condition. Dual-task performance showed a positive correlation with mean signal change in the right anterior cingulate cortex (ACC) and right dorsolateral prefrontal cortex (DLPFC). Structural equation modeling indicated that effective connectivity between the right ACC and right DLPFC was present in the good performers but not in the poor performers, although activations of the task-dependent posterior regions were modulated by the right ACC and right DLPFC. We conclude that individual differences in attention shifting heavily depend on the functional efficiency of the cingulo-prefrontal network."
},
{
"pmid": "9989408",
"title": "Perception's shadow: long-distance synchronization of human brain activity.",
"abstract": "Transient periods of synchronization of oscillating neuronal discharges in the frequency range 30-80 Hz (gamma oscillations) have been proposed to act as an integrative mechanism that may bring a widely distributed set of neurons together into a coherent ensemble that underlies a cognitive act. Results of several experiments in animals provide support for this idea. In humans, gamma oscillations have been described both on the scalp (measured by electroencephalography and magnetoencephalography) and in intracortical recordings, but no direct participation of synchrony in a cognitive task has been demonstrated so far. Here we record electrical brain activity from subjects who are viewing ambiguous visual stimuli (perceived either as faces or as meaningless shapes). We show for the first time, to our knowledge, that only face perception induces a long-distance pattern of synchronization, corresponding to the moment of perception itself and to the ensuing motor response. A period of strong desynchronization marks the transition between the moment of perception and the motor response. We suggest that this desynchronization reflects a process of active uncoupling of the underlying neural ensembles that is necessary to proceed from one cognitive state to another."
},
{
"pmid": "17532060",
"title": "Pathological synchronization in Parkinson's disease: networks, models and treatments.",
"abstract": "Parkinson's disease is a common and disabling disorder of movement owing to dopaminergic denervation of the striatum. However, it is still unclear how this denervation perverts normal functioning to cause slowing of voluntary movements. Recent work using tissue slice preparations, animal models and in humans with Parkinson's disease has demonstrated abnormally synchronized oscillatory activity at multiple levels of the basal ganglia-cortical loop. This excessive synchronization correlates with motor deficit, and its suppression by dopaminergic therapies, ablative surgery or deep-brain stimulation might provide the basic mechanism whereby diverse therapeutic strategies ameliorate motor impairment in patients with Parkinson's disease. This review is part of the INMED/TINS special issue, Physiogenic and pathogenic oscillations: the beauty and the beast, based on presentations at the annual INMED/TINS symposium (http://inmednet.com/)."
},
{
"pmid": "17548233",
"title": "Cross-frequency coupling between neuronal oscillations.",
"abstract": "Electrophysiological recordings in animals, including humans, are modulated by oscillatory activities in several frequency bands. Little is known about how oscillations in various frequency bands interact. Recent findings from the human neocortex show that the power of fast gamma oscillations (30-150Hz) is modulated by the phase of slower theta oscillations (5-8Hz). Given that this coupling reflects a specific interplay between large ensembles of neurons, it is likely to have profound implications for neuronal processing."
},
{
"pmid": "23522038",
"title": "The θ-γ neural code.",
"abstract": "Theta and gamma frequency oscillations occur in the same brain regions and interact with each other, a process called cross-frequency coupling. Here, we review evidence for the following hypothesis: that the dual oscillations form a code for representing multiple items in an ordered way. This form of coding has been most clearly demonstrated in the hippocampus, where different spatial information is represented in different gamma subcycles of a theta cycle. Other experiments have tested the functional importance of oscillations and their coupling. These involve correlation of oscillatory properties with memory states, correlation with memory performance, and effects of disrupting oscillations on memory. Recent work suggests that this coding scheme coordinates communication between brain regions and is involved in sensory as well as memory processes."
},
{
"pmid": "24672013",
"title": "Human EEG uncovers latent generalizable rule structure during learning.",
"abstract": "Human cognition is flexible and adaptive, affording the ability to detect and leverage complex structure inherent in the environment and generalize this structure to novel situations. Behavioral studies show that humans impute structure into simple learning problems, even when this tendency affords no behavioral advantage. Here we used electroencephalography to investigate the neural dynamics indicative of such incidental latent structure. Event-related potentials over lateral prefrontal cortex, typically observed for instructed task rules, were stratified according to individual participants' constructed rule sets. Moreover, this individualized latent rule structure could be independently decoded from multielectrode pattern classification. Both neural markers were predictive of participants' ability to subsequently generalize rule structure to new contexts. These EEG dynamics reveal that the human brain spontaneously constructs hierarchically structured representations during learning of simple task rules."
},
{
"pmid": "15944135",
"title": "Uncertainty, neuromodulation, and attention.",
"abstract": "Uncertainty in various forms plagues our interactions with the environment. In a Bayesian statistical framework, optimal inference and prediction, based on unreliable observations in changing contexts, require the representation and manipulation of different forms of uncertainty. We propose that the neuromodulators acetylcholine and norepinephrine play a major role in the brain's implementation of these uncertainty computations. Acetylcholine signals expected uncertainty, coming from known unreliability of predictive cues within a context. Norepinephrine signals unexpected uncertainty, as when unsignaled context switches produce strongly unexpected observations. These uncertainty signals interact to enable optimal inference and learning in noisy and changeable environments. This formulation is consistent with a wealth of physiological, pharmacological, and behavioral data implicating acetylcholine and norepinephrine in specific aspects of a range of cognitive processes. Moreover, the model suggests a class of attentional cueing tasks that involve both neuromodulators and shows how their interactions may be part-antagonistic, part-synergistic."
},
{
"pmid": "23177956",
"title": "Canonical microcircuits for predictive coding.",
"abstract": "This Perspective considers the influential notion of a canonical (cortical) microcircuit in light of recent theories about neuronal processing. Specifically, we conciliate quantitative studies of microcircuitry and the functional logic of neuronal computations. We revisit the established idea that message passing among hierarchical cortical areas implements a form of Bayesian inference-paying careful attention to the implications for intrinsic connections among neuronal populations. By deriving canonical forms for these computations, one can associate specific neuronal populations with specific computational roles. This analysis discloses a remarkable correspondence between the microcircuitry of the cortical column and the connectivity implied by predictive coding. Furthermore, it provides some intuitive insights into the functional asymmetries between feedforward and feedback connections and the characteristic frequencies over which they operate."
},
{
"pmid": "22426255",
"title": "Cortical oscillations and speech processing: emerging computational principles and operations.",
"abstract": "Neuronal oscillations are ubiquitous in the brain and may contribute to cognition in several ways: for example, by segregating information and organizing spike timing. Recent data show that delta, theta and gamma oscillations are specifically engaged by the multi-timescale, quasi-rhythmic properties of speech and can track its dynamics. We argue that they are foundational in speech and language processing, 'packaging' incoming information into units of the appropriate temporal granularity. Such stimulus-brain alignment arguably results from auditory and motor tuning throughout the evolution of speech and language and constitutes a natural model system allowing auditory research to make a unique contribution to the issue of how neural oscillatory activity affects human cognition."
},
{
"pmid": "25460074",
"title": "Communication through coherence with inter-areal delays.",
"abstract": "The communication-through-coherence (CTC) hypothesis proposes that anatomical connections are dynamically rendered effective or ineffective through the presence or absence of rhythmic synchronization, in particular in the gamma and beta bands. The original CTC statement proposed that uni-directional communication is due to rhythmic entrainment with an inter-areal delay and a resulting non-zero phase relation, whereas bi-directional communication is due to zero-phase synchronization. Recent studies found that inter-areal gamma-band synchronization entails a non-zero phase lag. We therefore modify the CTC hypothesis and propose that bi-directional cortical communication is realized separately for the two directions by uni-directional CTC mechanisms entailing delays in both directions. We review evidence suggesting that inter-areal influences in the feedforward and feedback directions are segregated both anatomically and spectrally."
},
{
"pmid": "2200075",
"title": "On the control of automatic processes: a parallel distributed processing account of the Stroop effect.",
"abstract": "Traditional views of automaticity are in need of revision. For example, automaticity often has been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. A model of attention is presented to address these issues. Within a parallel distributed processing framework, it is proposed that the attributes of automaticity depend on the strength of a processing pathway and that strength increases with training. With the Stroop effect as an example, automatic processes are shown to be continuous and to emerge gradually with practice. Specifically, a computational model of the Stroop task simulates the time course of processing as well as the effects of learning. This was accomplished by combining the cascade mechanism described by McClelland (1979) with the backpropagation learning algorithm (Rumelhart, Hinton, & Williams, 1986). The model can simulate performance in the standard Stroop task, as well as aspects of performance in variants of this task that manipulate stimulus-onset asynchrony, response set, and degree of practice. The model presented is contrasted against other models, and its relation to many of the central issues in the literature on attention, automaticity, and interference is discussed."
},
{
"pmid": "23889930",
"title": "The expected value of control: an integrative theory of anterior cingulate cortex function.",
"abstract": "The dorsal anterior cingulate cortex (dACC) has a near-ubiquitous presence in the neuroscience of cognitive control. It has been implicated in a diversity of functions, from reward processing and performance monitoring to the execution of control and action selection. Here, we propose that this diversity can be understood in terms of a single underlying function: allocation of control based on an evaluation of the expected value of control (EVC). We present a normative model of EVC that integrates three critical factors: the expected payoff from a controlled process, the amount of control that must be invested to achieve that payoff, and the cost in terms of cognitive effort. We propose that dACC integrates this information, using it to determine whether, where and how much control to allocate. We then consider how the EVC model can explain the diverse array of findings concerning dACC function."
},
{
"pmid": "22134477",
"title": "Reversal learning as a measure of impulsive and compulsive behavior in addictions.",
"abstract": "BACKGROUND\nOur ability to measure the cognitive components of complex decision-making across species has greatly facilitated our understanding of its neurobiological mechanisms. One task in particular, reversal learning, has proven valuable in assessing the inhibitory processes that are central to executive control. Reversal learning measures the ability to actively suppress reward-related responding and to disengage from ongoing behavior, phenomena that are biologically and descriptively related to impulsivity and compulsivity. Consequently, reversal learning could index vulnerability for disorders characterized by impulsivity such as proclivity for initial substance abuse as well as the compulsive aspects of dependence.\n\n\nOBJECTIVE\nThough we describe common variants and similar tasks, we pay particular attention to discrimination reversal learning, its supporting neural circuitry, neuropharmacology and genetic determinants. We also review the utility of this task in measuring impulsivity and compulsivity in addictions.\n\n\nMETHODS\nWe restrict our review to instrumental, reward-related reversal learning studies as they are most germane to addiction.\n\n\nCONCLUSION\nThe research reviewed here suggests that discrimination reversal learning may be used as a diagnostic tool for investigating the neural mechanisms that mediate impulsive and compulsive aspects of pathological reward-seeking and -taking behaviors. Two interrelated mechanisms are posited for the neuroadaptations in addiction that often translate to poor reversal learning: frontocorticostriatal circuitry dysregulation and poor dopamine (D2 receptor) modulation of this circuitry. These data suggest new approaches to targeting inhibitory control mechanisms in addictions."
},
{
"pmid": "15134842",
"title": "The neuropsychology of ventral prefrontal cortex: decision-making and reversal learning.",
"abstract": "Converging evidence from human lesion, animal lesion, and human functional neuroimaging studies implicates overlapping neural circuitry in ventral prefrontal cortex in decision-making and reversal learning. The ascending 5-HT and dopamine neurotransmitter systems have a modulatory role in both processes. There is accumulating evidence that measures of decision-making and reversal learning may be useful as functional markers of ventral prefrontal cortex integrity in psychiatric and neurological disorders. Whilst existing measures of decision-making may have superior sensitivity, reversal learning may offer superior selectivity, particularly within prefrontal cortex. Effective decision-making on existing measures requires the ability to adapt behaviour on the basis of changes in emotional significance, and this may underlie the shared neural circuitry with reversal learning."
},
{
"pmid": "26231622",
"title": "Phase-clustering bias in phase-amplitude cross-frequency coupling and its removal.",
"abstract": "BACKGROUND\nCross-frequency coupling methods allow for the identification of non-linear interactions across frequency bands, which are thought to reflect a fundamental principle of how electrophysiological brain activity is temporally orchestrated. In this paper we uncover a heretofore unknown source of bias in a commonly used method that quantifies cross-frequency coupling (phase-amplitude-coupling, or PAC).\n\n\nNEW METHOD\nWe demonstrate that non-uniform phase angle distributions--a phenomenon that can readily occur in real data--can under some circumstances produce statistical errors and uninterpretable results when using PAC. We propose a novel debiasing procedure that, through a simple linear subtraction, effectively ameliorates this phase clustering bias.\n\n\nRESULTS\nSimulations showed that debiased PAC (dPAC) accurately detected the presence of coupling. This was true even in the presence of moderate noise levels, which inflated the phase clustering bias. Finally, dPAC was applied to intracranial sleep recordings from a macaque monkey, and to hippocampal LFP data from a freely moving rat, revealing robust cross-frequency coupling in both data sets.\n\n\nCOMPARISON WITH EXISTING METHODS\nCompared to dPAC, regular PAC showed inflated or deflated estimations and statistically negative coupling values, depending on the strength of the bias and the angle of coupling. Noise increased these unwanted effects. Two other frequently used phase-amplitude coupling methods (the Modulation Index and Phase Locking Value) were also affected by the bias, though allowed for statistical inferences that were similar to dPAC.\n\n\nCONCLUSION\nWe conclude that dPAC provides a simple modification of PAC, and thereby offers a cleaner and possibly more sensitive alternative method, to more accurately assess phase-amplitude coupling."
}
] |
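Several of the entries above (e.g., PMIDs 17548233, 23522038, and 26231622) describe theta-gamma cross-frequency coupling and a debiased phase-amplitude coupling (PAC) estimate. As a purely illustrative aside, not taken from any of the cited studies, a generic mean-vector-length PAC estimate with the phase-clustering debiasing step could be sketched as follows; the frequency bands, filter settings, and synthetic signal are assumptions.

```python
# Illustrative sketch only (assumptions: band edges, filter order, synthetic data).
# Mean-vector-length phase-amplitude coupling between theta phase and gamma amplitude,
# with an optional subtraction of the mean phase vector to reduce phase-clustering bias.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def pac_mean_vector(x, fs, phase_band=(5, 10), amp_band=(35, 55), debias=True):
    """|mean(a_t * exp(i*phi_t))| using theta phase and the gamma amplitude envelope."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    phasors = np.exp(1j * phase)
    if debias:
        phasors = phasors - phasors.mean()   # remove bias from non-uniform phase distributions
    return np.abs(np.mean(amp * phasors))

# Toy usage: gamma bursts whose amplitude follows theta phase, plus noise
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 7 * t)
coupled_gamma = (1 + theta) * np.sin(2 * np.pi * 45 * t)
signal = theta + 0.5 * coupled_gamma + 0.2 * np.random.default_rng(1).normal(size=t.size)
print(pac_mean_vector(signal, fs))
```

In practice, coupling values of this kind are usually tested against surrogate data (e.g., time-shifted amplitude series), a step the sketch omits.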
Frontiers in Computational Neuroscience | 31507398 | PMC6718726 | 10.3389/fncom.2019.00058 | A Multi-parametric MRI-Based Radiomics Signature and a Practical ML Model for Stratifying Glioblastoma Patients Based on Survival Toward Precision Oncology | Purpose: Predicting patients' survival outcomes is recognized as being of key importance to clinicians in oncology for determining an ideal course of treatment and patient management. This study applies radiomics analysis to pre-operative multi-parametric MRI of patients with glioblastoma from multiple institutions to identify a signature and a practical machine learning model for stratifying patients into groups based on overall survival. Methods: This study included data from 163 patients with glioblastoma, collected by the BRATS 2018 Challenge from multiple institutions. In the proposed method, a set of 147 radiomics image features was extracted locally from three tumor sub-regions on standardized pre-operative multi-parametric MR images. LASSO regression was applied to identify an informative subset of features, whereas a Cox model was used to obtain the coefficients of the selected features. Then, a radiomics signature model of 9 features was constructed on the discovery set and its performance was evaluated for patient stratification into short- (<10 months), medium- (10–15 months), and long-survivor (>15 months) groups. Eight ML classification models, trained and then cross-validated, were tested to assess a range of survival prediction performance as a function of the choice of features. Results: The proposed mpMRI radiomics signature model had a statistically significant association with survival (P < 0.001) in the training set, but this was not confirmed (P = 0.110) in the validation cohort. Its performance in the validation set had a sensitivity of 0.476 (short-), 0.231 (medium-), and 0.600 (long-survivors), and a specificity of 0.667 (short-), 0.732 (medium-), and 0.794 (long-survivors). Among the tested ML classifiers, the ensemble learning model showed superior performance in predicting the survival classes, with an overall accuracy of 57.8% and AUC of 0.81 for short-, 0.47 for medium-, and 0.72 for long-survivors using the LASSO-selected features combined with clinical factors. Conclusion: A derived GLCM feature, representing intra-tumoral inhomogeneity, was found to have a high association with survival. Clinical factors, when added to the radiomics image features, boosted the performance of the ML classification model in predicting an individual glioblastoma patient's survival prognosis, which can improve prognostic quality, a further step toward precision oncology. | The Related Works: Many studies have been conducted to identify tumor phenotypic radiomics signatures and/or to develop practical machine learning (ML) models for glioblastoma patient stratification based on survival, using pre-operative multi-parametric MRI sequences from single or multiple institutions. Recognizing patients who would or would not benefit from standard treatment, as well as identifying patients who need more aggressive treatment at the time of diagnosis, is essential for the management of glioblastoma through personalized medicine. In this section, the author reviews some of the most relevant works recently published in this field. Macyszyn et al. (2016) used image analysis and ML models to establish imaging patterns that are predictive of overall survival (OS) and molecular subtype using preoperative mpMRI sequences of patients with GBM. 
The developed system achieved an overall accuracy of 80% in stratifying patients into long-, medium-, and short-term survivors in the prospective cohort from a single institution. Prasanna et al. (2017) applied texture feature analysis to assess the efficacy of peritumoral brain zone features from pre-operative MRI in classifying GBM patient survival as long-term (>18 months) vs. short-term (<7 months). The study identified a subset of 10 features that were predictive of long- vs. short-term survival when compared with known clinical factors. Ingrisch et al. (2017) investigated whether radiomics analysis with random survival forests can predict overall survival from MRI scans of newly diagnosed glioblastoma patients. Their results demonstrated that low predicted individual mortality was a favorable prognostic factor for OS and indicated that baseline MRI contains prognostic information that can be accessed by radiomics analysis. Most recently, Chaddad et al. (2018) proposed multiscale texture features for predicting GBM patients' progression-free survival and overall survival on T1 and T2-FLAIR MRIs using random forests. The study results showed that the identified seven-feature set, when combined with clinical factors, improved model performance, yielding an AUC of 85.54% for OS prediction. Kickingereder et al. (2018) investigated the impact of mpMRI radiomics features on predicting survival in newly diagnosed GBM patients before treatment. Their results revealed that the constructed eight-feature radiomics signature increased prediction accuracy for OS beyond the alternative approaches. Sanghani et al. (2018) studied survival prediction of glioblastoma patients for two-class (short- vs. long-term) and three-class (short-, medium-, and long-term) survival groups using Support Vector Machines (SVMs). The results showed prediction accuracies of 98.7 and 88.95% for the two-class and three-class OS groups, respectively. Chen et al. (2019) developed a post-contrast T1-weighted MRI-based prognostic radiomics classification system for GBM patients to assess whether it could stratify patients into low- or high-risk groups. Their results showed that the developed system classified patients' survival with improved performance (AUC of 0.851 for 12-month survival) compared with conventional risk models. The majority of those studies were performed on single-institution data, and survival grouping typically followed a two-class rather than a three-class approach. Moreover, implementing a particular feature selection method and testing various machine learning classification models allows greater flexibility for exploring distinct approaches. The purpose of this work is to quantitatively study radiomics features from pre-operative multi-parametric MRI of de novo glioblastoma tumors on multi-institutional datasets, and then to apply radiomics analysis on mpMRI to identify a signature and a practical machine learning model for stratifying patients into short-, medium-, and long-survivor groups. For machine learning, different models were tested to assess a range of performance as a function of the choice of features.
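The workflow described above (LASSO-based feature selection, a Cox model for the signature coefficients, and cross-validated ML classifiers for short-, medium-, and long-survivor classes) can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic feature matrix, survival times, column names, and the choice of a gradient-boosted ensemble as the classifier are all assumptions for demonstration, and the sketch assumes scikit-learn and lifelines are available.

```python
# Minimal illustrative sketch (not the authors' code) of the pipeline described above:
# LASSO-style selection of radiomics features, a Cox model to obtain signature
# coefficients, and a cross-validated ensemble classifier for three survival classes.
# The data, feature names, and classifier choice below are placeholders/assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n_patients, n_features = 163, 147                      # sizes quoted in the abstract
X = pd.DataFrame(rng.normal(size=(n_patients, n_features)),
                 columns=[f"feat_{i}" for i in range(n_features)])
# Placeholder survival times (months) weakly driven by two features; all events observed
surv_months = np.exp(2.5 + 0.3 * X["feat_0"] - 0.3 * X["feat_1"]
                     + 0.2 * rng.normal(size=n_patients))
event = np.ones(n_patients, dtype=int)

# 1) LASSO regression to pick an informative subset of features
lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), surv_months)
selected = X.columns[lasso.coef_ != 0]

# 2) Cox model on the selected features -> per-patient radiomics signature (risk score)
cox_df = X[selected].copy()
cox_df["T"], cox_df["E"] = surv_months, event
cph = CoxPHFitter().fit(cox_df, duration_col="T", event_col="E")
signature = cph.predict_partial_hazard(X[selected])

# 3) Three survival classes (short <10, medium 10-15, long >15 months) and a
#    cross-validated ensemble classifier on the selected features
y = np.digitize(surv_months, bins=[10, 15])             # 0=short, 1=medium, 2=long
acc = cross_val_score(GradientBoostingClassifier(), X[selected], y,
                      cv=5, scoring="accuracy")
print(f"{len(selected)} selected features, mean CV accuracy {acc.mean():.2f}")
```

In the study itself, eight classifiers were compared and an ensemble learner performed best; the gradient-boosted model above merely stands in for such an ensemble, and the clinical factors mentioned in the abstract would be appended as additional columns alongside the selected radiomics features to reproduce the combined model.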
"25434380",
"28872634",
"29993848",
"27781499",
"28325002",
"31165039",
"30660472",
"14972397",
"26579733",
"29292533",
"11723374",
"28079702",
"29036412",
"28871110",
"30366279",
"29430935",
"26188015",
"27774518",
"25494501",
"15977639",
"27778090",
"29572492",
"30449497",
"27863561",
"9044528",
"20445000",
"26520762",
"28280088"
] | [
{
"pmid": "25434380",
"title": "Molecular and cellular heterogeneity: the hallmark of glioblastoma.",
"abstract": "There has been increasing awareness that glioblastoma, which may seem histopathologically similar across many tumors, actually represents a group of molecularly distinct tumors. Emerging evidence suggests that cells even within the same tumor exhibit wide-ranging molecular diversity. Parallel to the discoveries of molecular heterogeneity among tumors and their individual cells, intense investigation of the cellular biology of glioblastoma has revealed that not all cancer cells within a given tumor behave the same. The identification of a subpopulation of brain tumor cells termed \"glioblastoma cancer stem cells\" or \"tumor-initiating cells\" has implications for the management of glioblastoma. This focused review will therefore summarize emerging concepts on the molecular and cellular heterogeneity of glioblastoma and emphasize that we should begin to consider each individual glioblastoma to be an ensemble of molecularly distinct subclones that reflect a spectrum of dynamic cell states."
},
{
"pmid": "28872634",
"title": "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features.",
"abstract": "Gliomas belong to a group of central nervous system tumors, and consist of various sub-regions. Gold standard labeling of these sub-regions in radiographic imaging is essential for both clinical and computational studies, including radiomic and radiogenomic analyses. Towards this end, we release segmentation labels and radiomic features for all pre-operative multimodal magnetic resonance imaging (MRI) (n=243) of the multi-institutional glioma collections of The Cancer Genome Atlas (TCGA), publicly available in The Cancer Imaging Archive (TCIA). Pre-operative scans were identified in both glioblastoma (TCGA-GBM, n=135) and low-grade-glioma (TCGA-LGG, n=108) collections via radiological assessment. The glioma sub-region labels were produced by an automated state-of-the-art method and manually revised by an expert board-certified neuroradiologist. An extensive panel of radiomic features was extracted based on the manually-revised labels. This set of labels and features should enable i) direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as ii) performance evaluation of computer-aided segmentation methods, and comparison to our state-of-the-art method."
},
{
"pmid": "29993848",
"title": "Novel Radiomic Features Based on Joint Intensity Matrices for Predicting Glioblastoma Patient Survival Time.",
"abstract": "This paper presents a novel set of image texture features generalizing standard grey-level co-occurrence matrices (GLCM) to multimodal image data through joint intensity matrices (JIMs). These are used to predict the survival of glioblastoma multiforme (GBM) patients from multimodal MRI data. The scans of 73 GBM patients from the Cancer Imaging Archive are used in our study. Necrosis, active tumor, and edema/invasion subregions of GBM phenotypes are segmented using the coregistration of contrast-enhanced T1-weighted (CE-T1) images and its corresponding fluid-attenuated inversion recovery (FLAIR) images. Texture features are then computed from the JIM of these GBM subregions and a random forest model is employed to classify patients into short or long survival groups. Our survival analysis identified JIM features in necrotic (e.g., entropy and inverse-variance) and edema (e.g., entropy and contrast) subregions that are moderately correlated with survival time (i.e., Spearman rank correlation of 0.35). Moreover, nine features were found to be associated with GBM survival with a Hazard-ratio range of 0.38-2.1 and a significance level of p < 0.05 following Holm-Bonferroni correction. These features also led to the highest accuracy in a univariate analysis for predicting the survival group of patients, with AUC values in the range of 68-70%. Considering multiple features for this task, JIM features led to significantly higher AUC values than those based on standard GLCMs and gene expression. Furthermore, an AUC of 77.56% with p = 0.003 was achieved when combining JIM, GLCM, and gene expression features into a single radiogenomic signature. In summary, our study demonstrated the usefulness of modeling the joint intensity characteristics of CE-T1 and FLAIR images for predicting the prognosis of patients with GBM."
},
{
"pmid": "27781499",
"title": "A quantitative study of shape descriptors from glioblastoma multiforme phenotypes for predicting survival outcome.",
"abstract": "OBJECTIVE\nPredicting the survival outcome of patients with glioblastoma multiforme (GBM) is of key importance to clinicians for selecting the optimal course of treatment. The goal of this study was to evaluate the usefulness of geometric shape features, extracted from MR images, as a potential non-invasive way to characterize GBM tumours and predict the overall survival times of patients with GBM.\n\n\nMETHODS\nThe data of 40 patients with GBM were obtained from the Cancer Genome Atlas and Cancer Imaging Archive. The T1 weighted post-contrast and fluid-attenuated inversion-recovery volumes of patients were co-registered and segmented into delineate regions corresponding to three GBM phenotypes: necrosis, active tumour and oedema/invasion. A set of two-dimensional shape features were then extracted slicewise from each phenotype region and combined over slices to describe the three-dimensional shape of these phenotypes. Thereafter, a Kruskal-Wallis test was employed to identify shape features with significantly different distributions across phenotypes. Moreover, a Kaplan-Meier analysis was performed to find features strongly associated with GBM survival. Finally, a multivariate analysis based on the random forest model was used for predicting the survival group of patients with GBM.\n\n\nRESULTS\nOur analysis using the Kruskal-Wallis test showed that all but one shape feature had statistically significant differences across phenotypes, with p-value < 0.05, following Holm-Bonferroni correction, justifying the analysis of GBM tumour shapes on a per-phenotype basis. Furthermore, the survival analysis based on the Kaplan-Meier estimator identified three features derived from necrotic regions (i.e. Eccentricity, Extent and Solidity) that were significantly correlated with overall survival (corrected p-value < 0.05; hazard ratios between 1.68 and 1.87). In the multivariate analysis, features from necrotic regions gave the highest accuracy in predicting the survival group of patients, with a mean area under the receiver-operating characteristic curve (AUC) of 63.85%. Combining the features of all three phenotypes increased the mean AUC to 66.99%, suggesting that shape features from different phenotypes can be used in a synergic manner to predict GBM survival.\n\n\nCONCLUSION\nResults show that shape features, in particular those extracted from necrotic regions, can be used effectively to characterize GBM tumours and predict the overall survival of patients with GBM. Advances in knowledge: Simple volumetric features have been largely used to characterize the different phenotypes of a GBM tumour (i.e. active tumour, oedema and necrosis). This study extends previous work by considering a wide range of shape features, extracted in different phenotypes, for the prediction of survival in patients with GBM."
},
{
"pmid": "28325002",
"title": "Radiomic analysis of multi-contrast brain MRI for the prediction of survival in patients with glioblastoma multiforme.",
"abstract": "Image texture features are effective at characterizing the microstructure of cancerous tissues. This paper proposes predicting the survival times of glioblastoma multiforme (GBM) patients using texture features extracted in multi-contrast brain MRI images. Texture features are derived locally from contrast enhancement, necrosis and edema regions in T1-weighted post-contrast and fluid-attenuated inversion-recovery (FLAIR) MRIs, based on the gray-level co-occurrence matrix representation. A statistical analysis based on the Kaplan-Meier method and log-rank test is used to identify the texture features related with the overall survival of GBM patients. Results are presented on a dataset of 39 GBM patients. For FLAIR images, four features (Energy, Correlation, Variance and Inverse of Variance) from contrast enhancement regions and a feature (Homogeneity) from edema regions were shown to be associated with survival times (p-value <; 0.01). Likewise, in T1-weighted images, three features (Energy, Correlation, and Variance) from contrast enhancement regions were found to be useful for predicting the overall survival of GBM patients. These preliminary results show the advantages of texture analysis in predicting the prognosis of GBM patients from multi-contrast brain MRI."
},
{
"pmid": "31165039",
"title": "Radiomics in Glioblastoma: Current Status and Challenges Facing Clinical Implementation.",
"abstract": "Radiomics analysis has had remarkable progress along with advances in medical imaging, most notability in central nervous system malignancies. Radiomics refers to the extraction of a large number of quantitative features that describe the intensity, texture and geometrical characteristics attributed to the tumor radiographic data. These features have been used to build predictive models for diagnosis, prognosis, and therapeutic response. Such models are being combined with clinical, biological, genetics and proteomic features to enhance reproducibility. Broadly, the four steps necessary for radiomic analysis are: (1) image acquisition, (2) segmentation or labeling, (3) feature extraction, and (4) statistical analysis. Major methodological challenges remain prior to clinical implementation. Essential steps include: adoption of an optimized standard imaging process, establishing a common criterion for performing segmentation, fully automated extraction of radiomic features without redundancy, and robust statistical modeling validated in the prospective setting. This review walks through these steps in detail, as it pertains to high grade gliomas. The impact on precision medicine will be discussed, as well as the challenges facing clinical implementation of radiomic in the current management of glioblastoma."
},
{
"pmid": "30660472",
"title": "Development and Validation of a MRI-Based Radiomics Prognostic Classifier in Patients with Primary Glioblastoma Multiforme.",
"abstract": "RATIONALE AND OBJECTIVES\nGlioblastoma multiforme (GBM) is the most common and deadly type of primary malignant tumor of the central nervous system. Accurate risk stratification is vital for a more personalized approach in GBM management. The purpose of this study is to develop and validate a MRI-based prognostic quantitative radiomics classifier in patients with newly diagnosed GBM and to evaluate whether the classifier allows stratification with improved accuracy over the clinical and qualitative imaging features risk models.\n\n\nMETHODS\nClinical and MR imaging data of 127 GBM patients were obtained from the Cancer Genome Atlas and the Cancer Imaging Archive. Regions of interest corresponding to high signal intensity portions of tumor were drawn on postcontrast T1-weighted imaging (post-T1WI) on the 127 patients (allocated in a 2:1 ratio into a training [n = 85] or validation [n = 42] set), then 3824 radiomics features per patient were extracted. The dimension of these radiomics features were reduced using the minimum redundancy maximum relevance algorithm, then Cox proportional hazard regression model was used to build a radiomics classifier for predicting overall survival (OS). The value of the radiomics classifier beyond clinical (gender, age, Karnofsky performance status, radiation therapy, chemotherapy, and type of resection) and VASARI features for OS was assessed with multivariate Cox proportional hazards model. Time-dependent receiver operating characteristic curve analysis was used to assess the predictive accuracy.\n\n\nRESULTS\nA classifier using four post-T1WI-MRI radiomics features built on the training dataset could successfully separate GBM patients into low- or high-risk group with a significantly different OS in training (HR, 6.307 [95% CI, 3.475-11.446]; p < 0.001) and validation set (HR, 3.646 [95% CI, 1.709-7.779]; p < 0.001). The area under receiver operating characteristic curve of radiomics classifier (training, 0.799; validation, 0.815 for 12-month) was higher compared to that of the clinical risk model (Karnofsky performance status, radiation therapy; training, 0.749; validation, 0.670 for 12-month), and none of the qualitative imaging features was associated with OS. The predictive accuracy was further improved when combined the radiomics classifier with clinical data (training, 0.819; validation: 0.851 for 12-month).\n\n\nCONCLUSION\nA classifier using radiomics features allows preoperative prediction of survival and risk stratification of patients with GBM, and it shows improved performance compared to that of clinical and qualitative imaging features models."
},
{
"pmid": "14972397",
"title": "Influence of MRI acquisition protocols and image intensity normalization methods on texture classification.",
"abstract": "Texture analysis methods quantify the spatial variations in gray level values within an image and thus can provide useful information on the structures observed. However, they are sensitive to acquisition conditions due to the use of different protocols and to intra- and interscanner variations in the case of MRI. The influence was studied of two protocols and four different conditions of normalization of gray levels on the discrimination power of texture analysis methods applied to soft cheeses. Thirty-two samples of soft cheese were chosen at two different ripening periods (16 young and 16 old samples) in order to obtain two different microscopic structures of the protein gel. Proton density and T(2)-weighted MR images were acquired using a spin echo sequence on a 0.2 T scanner. Gray levels were normalized according to four methods: original gray levels, same maximum for all images, same mean for all images, and dynamics limited to micro +/- 3sigma. Regions of interest were automatically defined, and texture descriptors were then computed for the co-occurrence matrix, run length matrix, gradient matrix, autoregressive model, and wavelet transform. The features with the lowest probability of error and average correlation coefficient were selected and used for classification with 1-nearest neighbor (1-NN) classifier. The best results were obtained when using the limitation of dynamics to micro +/- 3sigma, which enhanced the differences between the two classes. The results demonstrated the influence of the normalization method and of the acquisition protocol on the effectiveness of the classification and also on the parameters selected for classification. These results indicate the need to evaluate sensitivity to MR acquisition protocols and to gray level normalization methods when texture analysis is required."
},
{
"pmid": "26579733",
"title": "Radiomics: Images Are More than Pictures, They Are Data.",
"abstract": "In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and an increase in data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard of care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer."
},
{
"pmid": "29292533",
"title": "Variable selection - A review and recommendations for the practicing statistician.",
"abstract": "Statistical models support medical research by facilitating individualized outcome prognostication conditional on independent variables or by estimating effects of risk factors adjusted for covariates. Theory of statistical models is well-established if the set of independent variables to consider is fixed and small. Hence, we can assume that effect estimates are unbiased and the usual methods for confidence interval estimation are valid. In routine work, however, it is not known a priori which covariates should be included in a model, and often we are confronted with the number of candidate variables in the range 10-30. This number is often too large to be considered in a statistical model. We provide an overview of various available variable selection methods that are based on significance or information criteria, penalized likelihood, the change-in-estimate criterion, background knowledge, or combinations thereof. These methods were usually developed in the context of a linear regression model and then transferred to more generalized linear models or models for censored survival data. Variable selection, in particular if used in explanatory modeling where effect estimates are of central interest, can compromise stability of a final model, unbiasedness of regression coefficients, and validity of p-values or confidence intervals. Therefore, we give pragmatic recommendations for the practicing statistician on application of variable selection methods in general (low-dimensional) modeling problems and on performing stability investigations and inference. We also propose some quantities based on resampling the entire variable selection process to be routinely reported by software packages offering automated variable selection algorithms."
},
{
"pmid": "11723374",
"title": "Progenitor cells and glioma formation.",
"abstract": "The gliomas are a collection of tumors that arise within the central nervous system and have characteristics similar to astrocytes, oligodendrocytes, or their precursors. Whether or not the glial characteristics of these tumors mean that they arise from the differentiated glia that they resemble or their precursors has been debated. Even under normal circumstances the cells within the central nervous system of an adult can trans-differentiate to other cell types. In addition, mutations found in gliomas further destabilize the differentiation status of these cells making a determination of what cell type gives rise to a given tumor histology difficult. Lineage tracing studies in animals can be used to correlate some specific cell characteristics with the histology of gliomas that arise from these cells. From these experiments it appears that undifferentiated cells are more sensitive to the oncogenic effects of certain signaling abnormalities than are differentiated cells, but that with the appropriate genetic abnormalities differentiated astrocytes can act as the cell-of-origin for gliomas. These data imply that small molecules that promote differentiation may be a rational component of glioma therapy in combination with other drugs aimed at specific molecular signaling targets."
},
{
"pmid": "28079702",
"title": "Radiomic Analysis Reveals Prognostic Information in T1-Weighted Baseline Magnetic Resonance Imaging in Patients With Glioblastoma.",
"abstract": "OBJECTIVES\nThe aim of this study was to investigate whether radiomic analysis with random survival forests (RSFs) can predict overall survival from T1-weighted contrast-enhanced baseline magnetic resonance imaging (MRI) scans in a cohort of glioblastoma multiforme (GBM) patients with uniform treatment.\n\n\nMATERIALS AND METHODS\nThis retrospective study was approved by the institutional review board and informed consent was waived. The MRI scans from 66 patients with newly diagnosed GBM from a previous prospective study were analyzed. Tumors were segmented manually on contrast-enhanced 3-dimensional T1-weighted images. Using these segmentations, P = 208 quantitative image features characterizing tumor shape, signal intensity, and texture were calculated in an automated fashion. On this data set, an RSF was trained using 10-fold cross validation to establish a link between image features and overall survival, and the individual risk for each patient was predicted. The mean concordance index was assessed as a measure of prediction accuracy. Association of individual risk with overall survival was assessed using Kaplan-Meier analysis and a univariate proportional hazards model.\n\n\nRESULTS\nMean overall survival was 14 months (range, 0.8-85 months). Mean concordance index of the 10-fold cross-validated RSF was 0.67. Kaplan-Meier analysis clearly distinguished 2 patient groups with high and low predicted individual risk (P = 5.5 × 10). Low predicted individual mortality was found to be a favorable prognostic factor for overall survival in a univariate Cox proportional hazards model (hazards ratio, 1.038; 95% confidence interval, 1.015-1.062; P = 0.0059).\n\n\nCONCLUSIONS\nThis study demonstrates that baseline MRI in GBM patients contains prognostic information, which can be accessed by radiomic analysis using RSFs."
},
{
"pmid": "29036412",
"title": "Radiomic subtyping improves disease stratification beyond key molecular, clinical, and standard imaging characteristics in patients with glioblastoma.",
"abstract": "Background\nThe purpose of this study was to analyze the potential of radiomics for disease stratification beyond key molecular, clinical, and standard imaging features in patients with glioblastoma.\n\n\nMethods\nQuantitative imaging features (n = 1043) were extracted from the multiparametric MRI of 181 patients with newly diagnosed glioblastoma prior to standard-of-care treatment (allocated to a discovery and a validation set, 2:1 ratio). A subset of 386/1043 features were identified as reproducible (in an independent MRI test-retest cohort) and selected for analysis. A penalized Cox model with 10-fold cross-validation (Coxnet) was fitted on the discovery set to construct a radiomic signature for predicting progression-free and overall survival (PFS and OS). The incremental value of a radiomic signature beyond molecular (O6-methylguanine-DNA methyltransferase [MGMT] promoter methylation, DNA methylation subgroups), clinical (patient's age, KPS, extent of resection, adjuvant treatment), and standard imaging parameters (tumor volumes) for stratifying PFS and OS was assessed with multivariate Cox models (performance quantified with prediction error curves).\n\n\nResults\nThe radiomic signature (constructed from 8/386 features identified through Coxnet) increased the prediction accuracy for PFS and OS (in both discovery and validation sets) beyond the assessed molecular, clinical, and standard imaging parameters (P ≤ 0.01). Prediction errors decreased by 36% for PFS and 37% for OS when adding the radiomic signature (compared with 29% and 27%, respectively, with molecular + clinical features alone). The radiomic signature was-along with MGMT status-the only parameter with independent significance on multivariate analysis (P ≤ 0.01).\n\n\nConclusions\nOur study stresses the role of integrating radiomics into a multilayer decision framework with key molecular and clinical features to improve disease stratification and to potentially advance personalized treatment of patients with glioblastoma."
},
{
"pmid": "28871110",
"title": "A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme.",
"abstract": "Traditional radiomics models mainly rely on explicitly-designed handcrafted features from medical images. This paper aimed to investigate if deep features extracted via transfer learning can generate radiomics signatures for prediction of overall survival (OS) in patients with Glioblastoma Multiforme (GBM). This study comprised a discovery data set of 75 patients and an independent validation data set of 37 patients. A total of 1403 handcrafted features and 98304 deep features were extracted from preoperative multi-modality MR images. After feature selection, a six-deep-feature signature was constructed by using the least absolute shrinkage and selection operator (LASSO) Cox regression model. A radiomics nomogram was further presented by combining the signature and clinical risk factors such as age and Karnofsky Performance Score. Compared with traditional risk factors, the proposed signature achieved better performance for prediction of OS (C-index = 0.710, 95% CI: 0.588, 0.932) and significant stratification of patients into prognostically distinct groups (P < 0.001, HR = 5.128, 95% CI: 2.029, 12.960). The combined model achieved improved predictive performance (C-index = 0.739). Our study demonstrates that transfer learning-based deep features are able to generate prognostic imaging signature for OS prediction and patient stratification for GBM, indicating the potential of deep imaging feature-based biomarker in preoperative care of GBM patients."
},
{
"pmid": "30366279",
"title": "A radiomic signature as a non-invasive predictor of progression-free survival in patients with lower-grade gliomas.",
"abstract": "OBJECTIVE\nThe aim of this study was to develop a radiomics signature for prediction of progression-free survival (PFS) in lower-grade gliomas and to investigate the genetic background behind the radiomics signature.\n\n\nMETHODS\nIn this retrospective study, training (n = 216) and validation (n = 84) cohorts were collected from the Chinese Glioma Genome Atlas and the Cancer Genome Atlas, respectively. For each patient, a total of 431 radiomics features were extracted from preoperative T2-weighted magnetic resonance images. A radiomics signature was generated in the training cohort, and its prognostic value was evaluated in both the training and validation cohorts. The genetic characteristics of the group with high-risk scores were identified by radiogenomic analysis, and a nomogram was established for prediction of PFS.\n\n\nRESULTS\nThere was a significant association between the radiomics signature (including 9 screened radiomics features) and PFS, which was independent of other clinicopathologic factors in both the training (P < 0.001, multivariable Cox regression) and validation (P = 0.045, multivariable Cox regression) cohorts. Radiogenomic analysis revealed that the radiomics signature was associated with the immune response, programmed cell death, cell proliferation, and vasculature development. A nomogram established using the radiomics signature and clinicopathologic risk factors demonstrated high accuracy and good calibration for prediction of PFS in both the training (C-index, 0.684) and validation (C-index, 0.823) cohorts.\n\n\nCONCLUSIONS\nPFS can be predicted non-invasively in patients with LGGs by a group of radiomics features that could reflect the biological processes of these tumors."
},
{
"pmid": "29430935",
"title": "The effect of glioblastoma heterogeneity on survival stratification: a multimodal MR imaging texture analysis.",
"abstract": "Background Quantitative evaluation of the effect of glioblastoma (GBM) heterogeneity on survival stratification would be critical for the diagnosis, treatment decision, and follow-up management. Purpose To evaluate the effect of GBM heterogeneity on survival stratification, using texture analysis on multimodal magnetic resonance (MR) imaging. Material and Methods A total of 119 GBM patients (65 in long-term and 54 in short-term survival group, separated by overall survival of 12 months) were selected from the Cancer Genome Atlas, who underwent the T1-weighted (T1W) contrast-enhanced (CE), T1W, T2-weighted (T2W), and FLAIR sequences. For each sequence, the co-occurrence matrix, run-length matrix, and histogram features were extracted to reflect GBM heterogeneity on different scale. The recursive feature elimination based support vector machine was adopted to find an optimal subset. Then the stratification performance of four MR sequences was evaluated, both alone and in combination. Results When each sequence used alone, the T1W-CE sequence performed best, with an area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity of 0.7915, 80.67%, 78.45%, and 83.33%, respectively. When the four sequences combined, the stratification performance was basically equal to that of T1W-CE sequence. In the optimal subset of features extracted from multimodality, those from the T2W sequence weighted the most. Conclusion All the four sequences could reflect heterogeneous distribution of GBM and thereby affect the survival stratification, especially T1W-CE and T2W sequences. However, the stratification performance using only the T1W-CE sequence can be preserved with omission of other three sequences, when investigating the effect of GBM heterogeneity on survival stratification."
},
{
"pmid": "26188015",
"title": "Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques.",
"abstract": "BACKGROUND\nMRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB).\n\n\nMETHODS\nOne hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients.\n\n\nRESULTS\nSurvival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy.\n\n\nCONCLUSIONS\nBy employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood-brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients."
},
{
"pmid": "27774518",
"title": "Magnetic Resonance Imaging-Based Radiomic Profiles Predict Patient Prognosis in Newly Diagnosed Glioblastoma Before Therapy.",
"abstract": "Magnetic resonance imaging (MRI) is used to diagnose and monitor brain tumors. Extracting additional information from medical imaging and relating it to a clinical variable of interest is broadly defined as radiomics. Here, multiparametric MRI radiomic profiles (RPs) of de novo glioblastoma (GBM) brain tumors is related with patient prognosis. Clinical imaging from 81 patients with GBM before surgery was analyzed. Four MRI contrasts were aligned, masked by margins defined by gadolinium contrast enhancement and T2/fluid attenuated inversion recovery hyperintensity, and contoured based on image intensity. These segmentations were combined for visualization and quantification by assigning a 4-digit numerical code to each voxel to indicate the segmented RP. Each RP volume was then compared with overall survival. A combined classifier was then generated on the basis of significant RPs and optimized volume thresholds. Five RPs were predictive of overall survival before therapy. Combining the RP classifiers into a single prognostic score predicted patient survival better than each alone (P < .005). Voxels coded with 1 RP associated with poor prognosis were pathologically confirmed to contain hypercellular tumor. This study applies radiomic profiling to de novo patients with GBM to determine imaging signatures associated with poor prognosis at tumor diagnosis. This tool may be useful for planning surgical resection or radiation treatment margins."
},
{
"pmid": "25494501",
"title": "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).",
"abstract": "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource."
},
{
"pmid": "15977639",
"title": "Population-based studies on incidence, survival rates, and genetic alterations in astrocytic and oligodendroglial gliomas.",
"abstract": "Published data on prognostic and predictive factors in patients with gliomas are largely based on clinical trials and hospital-based studies. This review summarizes data on incidence rates, survival, and genetic alterations from population-based studies of astrocytic and oligodendrogliomas that were carried out in the Canton of Zurich, Switzerland (approximately 1.16 million inhabitants). A total of 987 cases were diagnosed between 1980 and 1994 and patients were followed up at least until 1999. While survival rates for pilocytic astrocytomas were excellent (96% at 10 years), the prognosis of diffusely infiltrating gliomas was poorer, with median survival times (MST) of 5.6 years for low-grade astrocytoma WHO grade II, 1.6 years for anaplastic astrocytoma grade III, and 0.4 years for glioblastoma. For oligodendrogliomas the MSTwas 11.6 years for grade II and 3.5 years for grade III. TP53 mutations were most frequent in gemistocytic astrocytomas (88%), followed by fibrillary astrocytomas (53%) and oligoastrocytomas (44%), but infrequent (13%) in oligodendrogliomas. LOH 1p/19q typically occurred in tumors without TP53 mutations and were most frequent in oligodendrogliomas (69%), followed by oligoastrocytomas (45%), but were rare in fibrillary astrocytomas (7%) and absent in gemistocytic astrocytomas. Glioblastomas were most frequent (3.55 cases per 100,000 persons per year) adjusted to the European Standard Population, amounting to 69% of total incident cases. Observed survival rates were 42.4% at 6 months, 17.7% at one year, and 3.3% at 2 years. For all age groups, survival was inversely correlated with age, ranging from an MST of 8.8 months (<50 years) to 1.6 months (>80 years). In glioblastomas, LOH 10q was the most frequent genetic alteration (69%), followed by EGFR amplification (34%), TP53 mutations (31%), p16INK4a deletion (31%), and PTEN mutations (24%). LOH 10q occurred in association with any of the other genetic alterations, and was the only alteration associated with shorter survival of glioblastoma patients. Primary (de novo) glioblastomas prevailed (95%), while secondary glioblastomas that progressed from low-grade or anaplastic gliomas were rare (5%). Secondary glioblastomas were characterized by frequent LOH 10q (63%) and TP53 mutations (65%). Of the TP53 mutations in secondary glioblastomas, 57% were in hot-spot codons 248 and 273, while in primary glioblastomas, mutations were more evenly distributed. G:C-->A:T mutations at CpG sites were more frequent in secondary than primary glioblastomas, suggesting that the acquisition of TP53 mutations in these glioblastoma subtypes may occur through different mechanisms."
},
{
"pmid": "27778090",
"title": "Radiomic features from the peritumoral brain parenchyma on treatment-naïve multi-parametric MR imaging predict long versus short-term survival in glioblastoma multiforme: Preliminary findings.",
"abstract": "OBJECTIVE\nDespite 90 % of glioblastoma (GBM) recurrences occurring in the peritumoral brain zone (PBZ), its contribution in patient survival is poorly understood. The current study leverages computerized texture (i.e. radiomic) analysis to evaluate the efficacy of PBZ features from pre-operative MRI in predicting long- (>18 months) versus short-term (<7 months) survival in GBM.\n\n\nMETHODS\nSixty-five patient examinations (29 short-term, 36 long-term) with gadolinium-contrast T1w, FLAIR and T2w sequences from the Cancer Imaging Archive were employed. An expert manually segmented each study as: enhancing lesion, PBZ and tumour necrosis. 402 radiomic features (capturing co-occurrence, grey-level dependence and directional gradients) were obtained for each region. Evaluation was performed using threefold cross-validation, such that a subset of studies was used to select the most predictive features, and the remaining subset was used to evaluate their efficacy in predicting survival.\n\n\nRESULTS\nA subset of ten radiomic 'peritumoral' MRI features, suggestive of intensity heterogeneity and textural patterns, was found to be predictive of survival (p = 1.47 × 10-5) as compared to features from enhancing tumour, necrotic regions and known clinical factors.\n\n\nCONCLUSION\nOur preliminary analysis suggests that radiomic features from the PBZ on routine pre-operative MRI may be predictive of long- versus short-term survival in GBM.\n\n\nKEY POINTS\n• Radiomic features from peritumoral regions can capture glioblastoma heterogeneity to predict outcome. • Peritumoral radiomics along with clinical factors are highly predictive of glioblastoma outcome. • Identifying prognostic markers can assist in making personalized therapy decisions in glioblastoma."
},
{
"pmid": "29572492",
"title": "Radiomic MRI signature reveals three distinct subtypes of glioblastoma with different clinical and molecular characteristics, offering prognostic value beyond IDH1.",
"abstract": "The remarkable heterogeneity of glioblastoma, across patients and over time, is one of the main challenges in precision diagnostics and treatment planning. Non-invasive in vivo characterization of this heterogeneity using imaging could assist in understanding disease subtypes, as well as in risk-stratification and treatment planning of glioblastoma. The current study leveraged advanced imaging analytics and radiomic approaches applied to multi-parametric MRI of de novo glioblastoma patients (n = 208 discovery, n = 53 replication), and discovered three distinct and reproducible imaging subtypes of glioblastoma, with differential clinical outcome and underlying molecular characteristics, including isocitrate dehydrogenase-1 (IDH1), O6-methylguanine-DNA methyltransferase, epidermal growth factor receptor variant III (EGFRvIII), and transcriptomic subtype composition. The subtypes provided risk-stratification substantially beyond that provided by WHO classifications. Within IDH1-wildtype tumors, our subtypes revealed different survival (p < 0.001), thereby highlighting the synergistic consideration of molecular and imaging measures for prognostication. Moreover, the imaging characteristics suggest that subtype-specific treatment of peritumoral infiltrated brain tissue might be more effective than current uniform standard-of-care. Finally, our analysis found subtype-specific radiogenomic signatures of EGFRvIII-mutated tumors. The identified subtypes and their clinical and molecular correlates provide an in vivo portrait of phenotypic heterogeneity in glioblastoma, which points to the need for precision diagnostics and personalized treatment."
},
{
"pmid": "30449497",
"title": "Overall survival prediction in glioblastoma multiforme patients from volumetric, shape and texture features using machine learning.",
"abstract": "Glioblastoma multiforme (GBM) are aggressive brain tumors, which lead to poor overall survival (OS) of patients. OS prediction of GBM patients provides useful information for surgical and treatment planning. Radiomics research attempts at predicting disease prognosis, thus providing beneficial information for personalized treatment from a variety of imaging features extracted from multiple MR images. In this study, MR image derived texture features, tumor shape and volumetric features, and patient age were obtained for 163 patients. OS group prediction was performed for both 2-class (short and long) and 3-class (short, medium and long) survival groups. Support vector machine classification based recursive feature elimination method was used to perform feature selection. The performance of the classification model was assessed using 5-fold cross-validation. The 2-class and 3-class OS group prediction accuracy obtained were 98.7% and 88.95% respectively. The shape features used in this work have been evaluated for OS prediction of GBM patients for the first time. The feature selection and prediction scheme implemented in this study yielded high accuracy for both 2-class and 3-class OS group predictions. This study was performed using routinely acquired MR images for GBM patients, thus making the translation of this work into a clinical setup convenient."
},
{
"pmid": "27863561",
"title": "Precision Medicine and PET/Computed Tomography: Challenges and Implementation.",
"abstract": "Precision Medicine is about selecting the right therapy for the right patient, at the right time, specific to the molecular targets expressed by disease or tumors, in the context of patient's environment and lifestyle. Some of the challenges for delivery of precision medicine in oncology include biomarkers for patient selection for enrichment-precision diagnostics, mapping out tumor heterogeneity that contributes to therapy failures, and early therapy assessment to identify resistance to therapies. PET/computed tomography offers solutions in these important areas of challenges and facilitates implementation of precision medicine."
},
{
"pmid": "9044528",
"title": "The lasso method for variable selection in the Cox model.",
"abstract": "I propose a new method for variable selection and shrinkage in Cox's proportional hazards model. My proposal minimizes the log partial likelihood subject to the sum of the absolute values of the parameters being bounded by a constant. Because of the nature of this constraint, it shrinks coefficients and produces some coefficients that are exactly zero. As a result it reduces the estimation variance while providing an interpretable final model. The method is a variation of the 'lasso' proposal of Tibshirani, designed for the linear regression context. Simulations indicate that the lasso can be more accurate than stepwise selection in this setting."
},
{
"pmid": "20445000",
"title": "Exciting new advances in neuro-oncology: the avenue to a cure for malignant glioma.",
"abstract": "Malignant gliomas are the most common and deadly brain tumors. Nevertheless, survival for patients with glioblastoma, the most aggressive glioma, although individually variable, has improved from an average of 10 months to 14 months after diagnosis in the last 5 years due to improvements in the standard of care. Radiotherapy has been of key importance to the treatment of these lesions for decades, and the ability to focus the beam and tailor it to the irregular contours of brain tumors and minimize the dose to nearby critical structures with intensity-modulated or image-guided techniques has improved greatly. Temozolomide, an alkylating agent with simple oral administration and a favorable toxicity profile, is used in conjunction with and after radiotherapy. Newer surgical techniques, such as fluorescence-guided resection and neuroendoscopic approaches, have become important in the management of malignant gliomas. Furthermore, new discoveries are being made in basic and translational research, which are likely to improve this situation further in the next 10 years. These include agents that block 1 or more of the disordered tumor proliferation signaling pathways, and that overcome resistance to already existing treatments. Targeted therapies such as antiangiogenic therapy with antivascular endothelial growth factor antibodies (bevacizumab) are finding their way into clinical practice. Large-scale research efforts are ongoing to provide a comprehensive understanding of all the genetic alterations and gene expression changes underlying glioma formation. These have already refined the classification of glioblastoma into 4 distinct molecular entities that may lead to different treatment regimens. The role of cancer stem-like cells is another area of active investigation. There is definite hope that by 2020, new cocktails of drugs will be available to target the key molecular pathways involved in gliomas and reduce their mortality and morbidity, a positive development for patients, their families, and medical professionals alike."
},
{
"pmid": "26520762",
"title": "Evaluation of tumor-derived MRI-texture features for discrimination of molecular subtypes and prediction of 12-month survival status in glioblastoma.",
"abstract": "PURPOSE\nGlioblastoma multiforme (GBM) is the most common and aggressive primary brain cancer. Four molecular subtypes of GBM have been described but can only be determined by an invasive brain biopsy. The goal of this study is to evaluate the utility of texture features extracted from magnetic resonance imaging (MRI) scans as a potential noninvasive method to characterize molecular subtypes of GBM and to predict 12-month overall survival status for GBM patients.\n\n\nMETHODS\nThe authors manually segmented the tumor regions from postcontrast T1 weighted and T2 fluid-attenuated inversion recovery (FLAIR) MRI scans of 82 patients with de novo GBM. For each patient, the authors extracted five sets of computer-extracted texture features, namely, 48 segmentation-based fractal texture analysis (SFTA) features, 576 histogram of oriented gradients (HOGs) features, 44 run-length matrix (RLM) features, 256 local binary patterns features, and 52 Haralick features, from the tumor slice corresponding to the maximum tumor area in axial, sagittal, and coronal planes, respectively. The authors used an ensemble classifier called random forest on each feature family to predict GBM molecular subtypes and 12-month survival status (a dichotomized version of overall survival at the 12-month time point indicating if the patient was alive or not at 12 months). The performance of the prediction was quantified and compared using receiver operating characteristic (ROC) curves.\n\n\nRESULTS\nWith the appropriate combination of texture feature set, image plane (axial, coronal, or sagittal), and MRI sequence, the area under ROC curve values for predicting different molecular subtypes and 12-month survival status are 0.72 for classical (with Haralick features on T1 postcontrast axial scan), 0.70 for mesenchymal (with HOG features on T2 FLAIR axial scan), 0.75 for neural (with RLM features on T2 FLAIR axial scan), 0.82 for proneural (with SFTA features on T1 postcontrast coronal scan), and 0.69 for 12-month survival status (with SFTA features on T1 postcontrast coronal scan).\n\n\nCONCLUSIONS\nThe authors evaluated the performance of five types of texture features in predicting GBM molecular subtypes and 12-month survival status. The authors' results show that texture features are predictive of molecular subtypes and survival status in GBM. These results indicate the feasibility of using tumor-derived imaging features to guide genomically informed interventions without the need for invasive biopsies."
},
{
"pmid": "28280088",
"title": "Radiomics Features of Multiparametric MRI as Novel Prognostic Factors in Advanced Nasopharyngeal Carcinoma.",
"abstract": "Purpose: To identify MRI-based radiomics as prognostic factors in patients with advanced nasopharyngeal carcinoma (NPC).Experimental Design: One-hundred and eighteen patients (training cohort: n = 88; validation cohort: n = 30) with advanced NPC were enrolled. A total of 970 radiomics features were extracted from T2-weighted (T2-w) and contrast-enhanced T1-weighted (CET1-w) MRI. Least absolute shrinkage and selection operator (LASSO) regression was applied to select features for progression-free survival (PFS) nomograms. Nomogram discrimination and calibration were evaluated. Associations between radiomics features and clinical data were investigated using heatmaps.Results: The radiomics signatures were significantly associated with PFS. A radiomics signature derived from joint CET1-w and T2-w images showed better prognostic performance than signatures derived from CET1-w or T2-w images alone. One radiomics nomogram combined a radiomics signature from joint CET1-w and T2-w images with the TNM staging system. This nomogram showed a significant improvement over the TNM staging system in terms of evaluating PFS in the training cohort (C-index, 0.761 vs. 0.514; P < 2.68 × 10-9). Another radiomics nomogram integrated the radiomics signature with all clinical data, and thereby outperformed a nomogram based on clinical data alone (C-index, 0.776 vs. 0.649; P < 1.60 × 10-7). Calibration curves showed good agreement. Findings were confirmed in the validation cohort. Heatmaps revealed associations between radiomics features and tumor stages.Conclusions: Multiparametric MRI-based radiomics nomograms provided improved prognostic ability in advanced NPC. These results provide an illustrative example of precision medicine and may affect treatment strategies. Clin Cancer Res; 23(15); 4259-69. ©2017 AACR."
}
] |
Scientific Reports | 31481737 | PMC6722103 | 10.1038/s41598-019-48892-w | Training Optimization for Gate-Model Quantum Neural Networks | Gate-based quantum computations represent an essential to realize near-term quantum computer architectures. A gate-model quantum neural network (QNN) is a QNN implemented on a gate-model quantum computer, realized via a set of unitaries with associated gate parameters. Here, we define a training optimization procedure for gate-model QNNs. By deriving the environmental attributes of the gate-model quantum network, we prove the constraint-based learning models. We show that the optimal learning procedures are different if side information is available in different directions, and if side information is accessible about the previous running sequences of the gate-model QNN. The results are particularly convenient for gate-model quantum computer implementations. | Related Works
Gate-model quantum computers
A theoretical background on the realizations of quantum computations in a gate-model quantum computer environment can be found in 15 and 16. For a summary of the related references 1–3,13,15–17,54,55, we suggest 56.
Quantum neural networks
In 14, the formalism of a gate-model quantum neural network is defined. The gate-model quantum neural network is a quantum neural network implemented on a gate-model quantum computer. A particular problem analyzed by the authors is the classification of classical data sets which consist of bitstrings with binary labels. In 44, the authors studied the subject of quantum deep learning. As the authors found, the application of quantum computing can reduce the time required to train a deep restricted Boltzmann machine. The work also concluded that quantum computing provides a strong framework for deep learning, and the application of quantum computing can lead to significant performance improvements in comparison to classical computing. In 45, the authors defined a quantum generalization of feedforward neural networks. In the proposed system model, the classical neurons are generalized to being quantum reversible. As the authors showed, the defined quantum network can be trained efficiently using gradient descent to perform quantum generalizations of classical tasks. In 46, the authors defined a model of a quantum neuron to perform machine learning tasks on quantum computers. The authors proposed a small quantum circuit to simulate neurons with threshold activation. As the authors found, the proposed quantum circuit realizes a "quantum neuron". The authors showed an application of the defined quantum neuron model in feedforward networks. The work concluded that the quantum neuron model can learn a function if trained with superposition of inputs and the corresponding output. The proposed training method also suffices to learn the function on all individual inputs separately. In 25, the authors studied the structure of an artificial quantum neural network. The work focused on the model of quantum neurons and studied the logical elements and tests of convolutional networks. The authors defined a model of an artificial neural network that uses quantum-mechanical particles as a neuron, and set up a Monte-Carlo integration method to simulate the proposed quantum-mechanical system. The work also studied the implementation of logical elements based on introduced quantum particles, and the implementation of a simple convolutional network. In 26, the authors defined the model of a universal quantum perceptron as an efficient unitary approximator.
The authors studied the implementation of a quantum perceptron with a sigmoid activation function as a reversible many-body unitary operation. In the proposed system model, the response of the quantum perceptron is parameterized by the potential exerted by other neurons. The authors showed that the proposed quantum neural network model is a universal approximator of continuous functions, with at least the same power as classical neural networks.
Quantum machine learning
In 57, the authors analyzed a Markov process connected to a classical probabilistic algorithm 58. A performance evaluation also has been included in the work to compare the performance of the quantum and classical algorithm. In 19, the authors studied quantum algorithms for supervised and unsupervised machine learning. This particular work focuses on the problem of cluster assignment and cluster finding via quantum algorithms. As a main conclusion of the work, via the utilization of quantum computers and quantum machine learning, an exponential speed-up can be reached over classical algorithms. In 20, the authors defined a method for the analysis of an unknown quantum state. The authors showed that it is possible to perform "quantum principal component analysis" by creating quantum coherence among different copies, and the relevant attributes can be revealed exponentially faster than it is possible by any existing algorithm. In 21, the authors studied the application of a quantum support vector machine in Big Data classification. The authors showed that a quantum version of the support vector machine (optimized binary classifier) can be implemented on a quantum computer. As the work concluded, the complexity of the quantum algorithm is only logarithmic in the size of the vectors and the number of training examples that provides a significant advantage over classical support machines. In 22, the problem of quantum-based analysis of big data sets is studied by the authors. As the authors concluded, the proposed quantum algorithms provide an exponential speedup over classical algorithms for topological data analysis. The problem of quantum generative adversarial learning is studied in 51. In generative adversarial networks a generator entity creates statistics for data that mimics those of a valid data set, and a discriminator unit distinguishes between the valid and non-valid data. As a main conclusion of the work, a quantum computer allows us to realize quantum adversarial networks with an exponential advantage over classical adversarial networks. In 54, super-polynomial and exponential improvements for quantum-enhanced reinforcement learning are studied. In 55, the authors proposed strategies for quantum computing molecular energies using the unitary coupled cluster ansatz. The authors of 56 provided demonstrations of quantum advantage in machine learning problems. In 57, the authors study the subject of quantum speedup in machine learning. As a particular problem, the work focuses on finding Boolean functions for classification tasks. | [
"28905912",
"28905917",
"27488798",
"26941315",
"24759412",
"12066177",
"27437573",
"9912632"
] | [
{
"pmid": "28905912",
"title": "Quantum computational supremacy.",
"abstract": "The field of quantum algorithms aims to find ways to speed up the solution of computational problems by using a quantum computer. A key milestone in this field will be when a universal quantum computer performs a computational task that is beyond the capability of any classical computer, an event known as quantum supremacy. This would be easier to achieve experimentally than full-scale quantum computing, but involves new theoretical challenges. Here we present the leading proposals to achieve quantum supremacy, and discuss how we can reliably compare the power of a classical computer to the power of a quantum computer."
},
{
"pmid": "28905917",
"title": "Quantum machine learning.",
"abstract": "Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable."
},
{
"pmid": "27488798",
"title": "Demonstration of a small programmable quantum computer with atomic qubits.",
"abstract": "Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels."
},
{
"pmid": "26941315",
"title": "Realization of a scalable Shor algorithm.",
"abstract": "Certain algorithms for quantum computers are able to outperform their classical counterparts. In 1994, Peter Shor came up with a quantum algorithm that calculates the prime factors of a large number vastly more efficiently than a classical computer. For general scalability of such algorithms, hardware, quantum error correction, and the algorithmic realization itself need to be extensible. Here we present the realization of a scalable Shor algorithm, as proposed by Kitaev. We factor the number 15 by effectively employing and controlling seven qubits and four \"cache qubits\" and by implementing generalized arithmetic operations, known as modular multipliers. This algorithm has been realized scalably within an ion-trap quantum computer and returns the correct factors with a confidence level exceeding 99%."
},
{
"pmid": "24759412",
"title": "Superconducting quantum circuits at the surface code threshold for fault tolerance.",
"abstract": "A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits."
},
{
"pmid": "12066177",
"title": "Architecture for a large-scale ion-trap quantum computer.",
"abstract": "Among the numerous types of architecture being explored for quantum computers are systems utilizing ion traps, in which quantum bits (qubits) are formed from the electronic states of trapped ions and coupled through the Coulomb interaction. Although the elementary requirements for quantum computation have been demonstrated in this system, there exist theoretical and technical obstacles to scaling up the approach to large numbers of qubits. Therefore, recent efforts have been concentrated on using quantum communication to link a number of small ion-trap quantum systems. Developing the array-based approach, we show how to achieve massively parallel gate operation in a large-scale quantum computer, based on techniques already demonstrated for manipulating small quantum registers. The use of decoherence-free subspaces significantly reduces decoherence during ion transport, and removes the requirement of clock synchronization between the interaction regions."
},
{
"pmid": "27437573",
"title": "Extending the lifetime of a quantum bit with error correction in superconducting circuits.",
"abstract": "Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The 'break-even' point of QEC--at which the lifetime of a qubit exceeds the lifetime of the constituents of the system--has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 longer than the lifetime of the best physical qubit (the |0〉f and |1〉f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system."
}
] |
International Journal of Biological Sciences | 31523183 | PMC6743289 | 10.7150/ijbs.32142 | SWPepNovo: An Efficient De Novo Peptide Sequencing Tool for Large-scale MS/MS Spectra Analysis | Tandem mass spectrometry (MS/MS)-based de novo peptide sequencing is a powerful method for high-throughput protein analysis. However, the explosively increasing size of MS/MS spectra dataset inevitably and exponentially raises the computational demand of existing de novo peptide sequencing methods, which is an issue urgently to be solved in computational biology. This paper introduces an efficient tool based on SW26010 many-core processor, namely SWPepNovo, to process the large-scale peptide MS/MS spectra using a parallel peptide spectrum matches (PSMs) algorithm. Our design employs a two-level parallelization mechanism: (1) the task-level parallelism between MPEs using MPI based on a data transformation method and a dynamic feedback task scheduling algorithm, (2) the thread-level parallelism across CPEs using asynchronous task transfer and multithreading. Moreover, three optimization strategies, including vectorization, double buffering and memory access optimizations, have been employed to overcome both the compute-bound and the memory-bound bottlenecks in the parallel PSMs algorithm. The results of experiments conducted on multiple spectra datasets demonstrate the performance of SWPepNovo against three state-of-the-art tools for peptide sequencing, including PepNovo+, PEAKS and DeepNovo-DIA. The SWPepNovo also shows high scalability in experiments on extremely large datasets sized up to 11.22 GB. The software and the parameter settings are available at https://github.com/ChuangLi99/SWPepNovo. | Related Works
Previous research has shown that many efforts to accelerate protein identification have focused on parallelizing database search-based peptide sequencing. Lee adopted a graph-based in-memory distributed system to develop a novel sequence alignment algorithm 26. Qi You developed a fast tool that can analyze genome-editing datasets with high efficiency 27. In 28, Li considered the redundant candidate peptides in PSMs and adopted an inverted index strategy to speed up tandem mass spectrometry analysis. In addition, some of the prevalent peptide sequencing methods adopted high performance computing (HPC) technology and cloud computing 29. Notably, Zhu presented an efficient OpenGL-based multiple peptide sequence alignment implementation on GPU hardware 30. In 31, a GPU-based feature detection algorithm was presented by Hussong to reduce the running time of PSMs.
As another powerful method for protein analysis, de novo peptide sequencing has drawn limited attention in proteomics. Pioneering research on speeding up de novo peptide sequencing was done by Frank 32. In 32, Frank presented a discriminative boost-ranking-based match scoring algorithm, which uses machine learning ranking algorithms to achieve a speedup while producing the same identification results. Another efficient real-time de novo sequencing algorithm, namely Novor, was recently presented by Ma 33. Compared with other de novo peptide sequencing methods, Novor shows a very fast sequencing speed. PEAKS 8, which was developed by Ma, does the best job of accelerating de novo peptide sequencing.
Although PEAKS achieves great performance in both the speed and accuracy of de novo peptide sequencing analyses, it is commercial software that is not freely available to academic users.
Recently, the Sunway TaihuLight supercomputer has provided tremendous compute power to researchers. There are a few early development experiences on the Sunway TaihuLight supercomputer. Chen et al. 34 designed and implemented a parallel AES algorithm, and the results show that the parallel AES algorithm achieved good speed-up performance. Fang et al. 35 implemented and optimized a library, namely swDNN, which supports efficient deep neural network (DNN) implementations on the Sunway TaihuLight supercomputer. In 36, an SW26010-based programming framework was presented for the Sea Ice Model (SIM) algorithm. According to the experimental results, the programming framework for the SIM algorithm offers up to a 40% performance increase. | [
"21999834",
"24226387",
"14976030",
"15817687",
"15858974",
"14558135",
"20329752",
"23766417",
"14568614",
"10582570",
"15987094",
"29989106",
"17721543",
"16401509",
"29989077",
"20187083",
"19447788",
"19231891",
"26122521"
] | [
{
"pmid": "21999834",
"title": "Algorithms for the de novo sequencing of peptides from tandem mass spectra.",
"abstract": "Proteomics is the study of proteins, their time- and location-dependent expression profiles, as well as their modifications and interactions. Mass spectrometry is useful to investigate many of the questions asked in proteomics. Database search methods are typically employed to identify proteins from complex mixtures. However, databases are not often available or, despite their availability, some sequences are not readily found therein. To overcome this problem, de novo sequencing can be used to directly assign a peptide sequence to a tandem mass spectrometry spectrum. Many algorithms have been proposed for de novo sequencing and a selection of them are detailed in this article. Although a standard accuracy measure has not been agreed upon in the field, relative algorithm performance is discussed. The current state of the de novo sequencing is assessed thereafter and, finally, examples are used to construct possible future perspectives of the field."
},
{
"pmid": "24226387",
"title": "An approach to correlate tandem mass spectral data of peptides with amino acid sequences in a protein database.",
"abstract": "A method to correlate the uninterpreted tandem mass spectra of peptides produced under low energy (10-50 eV) collision conditions with amino acid sequences in the Genpept database has been developed. In this method the protein database is searched to identify linear amino acid sequences within a mass tolerance of ±1 u of the precursor ion molecular weight A cross-correlation function is then used to provide a measurement of similarity between the mass-to-charge ratios for the fragment ions predicted from amino acid sequences obtained from the database and the fragment ions observed in the tandem mass spectrum. In general, a difference greater than 0.1 between the normalized cross-correlation functions of the first- and second-ranked search results indicates a successful match between sequence and spectrum. Searches of species-specific protein databases with tandem mass spectra acquired from peptides obtained from the enzymatically digested total proteins of E. coli and S. cerevisiae cells allowed matching of the spectra to amino acid sequences within proteins of these organisms. The approach described in this manuscript provides a convenient method to interpret tandem mass spectra with known sequences in a protein database."
},
{
"pmid": "14976030",
"title": "TANDEM: matching proteins with tandem mass spectra.",
"abstract": "SUMMARY\nTandem mass spectra obtained from fragmenting peptide ions contain some peptide sequence specific information, but often there is not enough information to sequence the original peptide completely. Several proprietary software applications have been developed to attempt to match the spectra with a list of protein sequences that may contain the sequence of the peptide. The application TANDEM was written to provide the proteomics research community with a set of components that can be used to test new methods and algorithms for performing this type of sequence-to-data matching.\n\n\nAVAILABILITY\nThe source code and binaries for this software are available at http://www.proteome.ca/opensource.html, for Windows, Linux and Macintosh OSX. The source code is made available under the Artistic License, from the authors."
},
{
"pmid": "15817687",
"title": "pFind: a novel database-searching software system for automated peptide and protein identification via tandem mass spectrometry.",
"abstract": "Research in proteomics requires powerful database-searching software to automatically identify protein sequences in a complex protein mixture via tandem mass spectrometry. In this paper, we describe a novel database-searching software system called pFind (peptide/protein Finder), which employs an effective peptide-scoring algorithm that we reported earlier. The pFind server is implemented with the C++ STL, .Net and XML technologies. As a result, high speed and good usability of the software are achieved."
},
{
"pmid": "15858974",
"title": "PepNovo: de novo peptide sequencing via probabilistic network modeling.",
"abstract": "We present a novel scoring method for de novo interpretation of peptides from tandem mass spectrometry data. Our scoring method uses a probabilistic network whose structure reflects the chemical and physical rules that govern the peptide fragmentation. We use a likelihood ratio hypothesis test to determine whether the peaks observed in the mass spectrum are more likely to have been produced under our fragmentation model than under a model that treats peaks as random events. We tested our de novo algorithm PepNovo on ion trap data and achieved results that are superior to popular de novo peptide sequencing algorithms. PepNovo can be accessed via the URL http://www-cse.ucsd.edu/groups/bioinformatics/software.html."
},
{
"pmid": "14558135",
"title": "PEAKS: powerful software for peptide de novo sequencing by tandem mass spectrometry.",
"abstract": "A number of different approaches have been described to identify proteins from tandem mass spectrometry (MS/MS) data. The most common approaches rely on the available databases to match experimental MS/MS data. These methods suffer from several drawbacks and cannot be used for the identification of proteins from unknown genomes. In this communication, we describe a new de novo sequencing software package, PEAKS, to extract amino acid sequence information without the use of databases. PEAKS uses a new model and a new algorithm to efficiently compute the best peptide sequences whose fragment ions can best interpret the peaks in the MS/MS spectrum. The output of the software gives amino acid sequences with confidence scores for the entire sequences, as well as an additional novel positional scoring scheme for portions of the sequences. The performance of PEAKS is compared with Lutefisk, a well-known de novo sequencing software, using quadrupole-time-of-flight (Q-TOF) data obtained for several tryptic peptides from standard proteins."
},
{
"pmid": "20329752",
"title": "pNovo: de novo peptide sequencing and identification using HCD spectra.",
"abstract": "De novo peptide sequencing has improved remarkably in the past decade as a result of better instruments and computational algorithms. However, de novo sequencing can correctly interpret only approximately 30% of high- and medium-quality spectra generated by collision-induced dissociation (CID), which is much less than database search. This is mainly due to incomplete fragmentation and overlap of different ion series in CID spectra. In this study, we show that higher-energy collisional dissociation (HCD) is of great help to de novo sequencing because it produces high mass accuracy tandem mass spectrometry (MS/MS) spectra without the low-mass cutoff associated with CID in ion trap instruments. Besides, abundant internal and immonium ions in the HCD spectra can help differentiate similar peptide sequences. Taking advantage of these characteristics, we developed an algorithm called pNovo for efficient de novo sequencing of peptides from HCD spectra. pNovo gave correct identifications to 80% or more of the HCD spectra identified by database search. The number of correct full-length peptides sequenced by pNovo is comparable with that obtained by database search. A distinct advantage of de novo sequencing is that deamidated peptides and peptides with amino acid mutations can be identified efficiently without extra cost in computation. In summary, implementation of the HCD characteristics makes pNovo an excellent tool for de novo peptide sequencing from HCD spectra."
},
{
"pmid": "23766417",
"title": "UniNovo: a universal tool for de novo peptide sequencing.",
"abstract": "MOTIVATION\nMass spectrometry (MS) instruments and experimental protocols are rapidly advancing, but de novo peptide sequencing algorithms to analyze tandem mass (MS/MS) spectra are lagging behind. Although existing de novo sequencing tools perform well on certain types of spectra [e.g. Collision Induced Dissociation (CID) spectra of tryptic peptides], their performance often deteriorates on other types of spectra, such as Electron Transfer Dissociation (ETD), Higher-energy Collisional Dissociation (HCD) spectra or spectra of non-tryptic digests. Thus, rather than developing a new algorithm for each type of spectra, we develop a universal de novo sequencing algorithm called UniNovo that works well for all types of spectra or even for spectral pairs (e.g. CID/ETD spectral pairs). UniNovo uses an improved scoring function that captures the dependences between different ion types, where such dependencies are learned automatically using a modified offset frequency function.\n\n\nRESULTS\nThe performance of UniNovo is compared with PepNovo+, PEAKS and pNovo using various types of spectra. The results show that the performance of UniNovo is superior to other tools for ETD spectra and superior or comparable with others for CID and HCD spectra. UniNovo also estimates the probability that each reported reconstruction is correct, using simple statistics that are readily obtained from a small training dataset. We demonstrate that the estimation is accurate for all tested types of spectra (including CID, HCD, ETD, CID/ETD and HCD/ETD spectra of trypsin, LysC or AspN digested peptides).\n\n\nAVAILABILITY\nUniNovo is implemented in JAVA and tested on Windows, Ubuntu and OS X machines. UniNovo is available at http://proteomics.ucsd.edu/Software/UniNovo.html along with the manual."
},
{
"pmid": "14568614",
"title": "Peptide and protein de novo sequencing by mass spectrometry.",
"abstract": "Although the advent of large-scale genomic sequencing has greatly simplified the task of determining the primary structures of peptides and proteins, the genomic sequences of many organisms are still unknown. Even for those that are known, modifications such as post-translational events may prevent the identification of all or part of the protein sequence. Thus, complete characterization of the protein primary structure often requires determination of the protein sequence by mass spectrometry with minimal assistance from genomic data - de novo protein sequencing. This task has been facilitated by technical developments during the past few years: 'soft' ionization techniques, new forms of chemical modification (derivatization), new types of mass spectrometer and improved software."
},
{
"pmid": "10582570",
"title": "De novo peptide sequencing via tandem mass spectrometry.",
"abstract": "Peptide sequencing via tandem mass spectrometry (MS/MS) is one of the most powerful tools in proteomics for identifying proteins. Because complete genome sequences are accumulating rapidly, the recent trend in interpretation of MS/MS spectra has been database search. However, de novo MS/MS spectral interpretation remains an open problem typically involving manual interpretation by expert mass spectrometrists. We have developed a new algorithm, SHERENGA, for de novo interpretation that automatically learns fragment ion types and intensity thresholds from a collection of test spectra generated from any type of mass spectrometer. The test data are used to construct optimal path scoring in the graph representations of MS/MS spectra. A ranked list of high scoring paths corresponds to potential peptide sequences. SHERENGA is most useful for interpreting sequences of peptides resulting from unknown proteins and for validating the results of database search algorithms in fully automated, high-throughput peptide sequencing."
},
{
"pmid": "15987094",
"title": "Discovering known and unanticipated protein modifications using MS/MS database searching.",
"abstract": "We present an MS/MS database search algorithm with the following novel features: (1) a novel protein database structure containing extensive preindexing and (2) zone modification searching, which enables the rapid discovery of protein modifications of known (i.e., user-specified) and unanticipated delta masses. All of these features are implemented in Interrogator, the search engine that runs behind the Pro ID, Pro ICAT, and Pro QUANT software products. Speed benchmarks demonstrate that our modification-tolerant database search algorithm is 100-fold faster than traditional database search algorithms when used for comprehensive searches for a broad variety of modification species. The ability to rapidly search for a large variety of known as well as unanticipated modifications allows a significantly greater percentage of MS/MS scans to be identified. We demonstrate this with an example in which, out of a total of 473 identified MS/MS scans, 315 of these scans correspond to unmodified peptides, while 158 scans correspond to a wide variety of modified peptides. In addition, we provide specific examples where the ability to search for unanticipated modifications allows the scientist to discover: unexpected modifications that have biological significance; amino acid mutations; salt-adducted peptides in a sample that has nominally been desalted; peptides arising from nontryptic cleavage in a sample that has nominally been digested using trypsin; other unintended consequences of sample handling procedures."
},
{
"pmid": "29989106",
"title": "Special issue on Computational Resources and Methods in Biological Sciences.",
"abstract": "This special issue covers a wide range of topics in computational biology, such as database construction, sequence analysis and function prediction with machine learning methods, disease-related diagnosis, drug-target and drug discovery, and electronic health record system construction."
},
{
"pmid": "17721543",
"title": "Higher-energy C-trap dissociation for peptide modification analysis.",
"abstract": "Peptide sequencing is the basis of mass spectrometry-driven proteomics. Here we show that in the linear ion trap-orbitrap mass spectrometer (LTQ Orbitrap) peptide ions can be efficiently fragmented by high-accuracy and full-mass-range tandem mass spectrometry (MS/MS) via higher-energy C-trap dissociation (HCD). Immonium ions generated via HCD pinpoint modifications such as phosphotyrosine with very high confidence. Additionally we show that an added octopole collision cell facilitates de novo sequencing."
},
{
"pmid": "16401509",
"title": "Collision-induced dissociation (CID) of peptides and proteins.",
"abstract": "The most commonly used activation method in the tandem mass spectrometry (MS) of peptides and proteins is energetic collisions with a neutral target gas. The overall process of collisional activation followed by fragmentation of the ion is commonly referred to as collision-induced dissociation (CID). The structural information that results from CID of a peptide or protein ion is highly dependent on the conditions used to effect CID. These include, for example, the relative translational energy of the ion and target, the nature of the target, the number of collisions that is likely to take place, and the observation window of the apparatus. This chapter summarizes the key experimental parameters in the CID of peptide and protein ions, as well as the conditions that tend to prevail in the most commonly employed tandem mass spectrometers."
},
{
"pmid": "29989077",
"title": "CRISPRMatch: An Automatic Calculation and Visualization Tool for High-throughput CRISPR Genome-editing Data Analysis.",
"abstract": "Custom-designed nucleases, including CRISPR-Cas9 and CRISPR-Cpf1, are widely used to realize the precise genome editing. The high-coverage, low-cost and quantifiability make high-throughput sequencing (NGS) to be an effective method to assess the efficiency of custom-designed nucleases. However, contrast to standardized transcriptome protocol, the NGS data lacks a user-friendly pipeline connecting different tools that can automatically calculate mutation, evaluate editing efficiency and realize in a more comprehensive dataset that can be visualized. Here, we have developed an automatic stand-alone toolkit based on python script, namely CRISPRMatch, to process the high-throughput genome-editing data of CRISPR nuclease transformed protoplasts by integrating analysis steps like mapping reads and normalizing reads count, calculating mutation frequency (deletion and insertion), evaluating efficiency and accuracy of genome-editing, and visualizing the results (tables and figures). Both of CRISPR-Cas9 and CRISPR-Cpf1 nucleases are supported by CRISPRMatch toolkit and the integrated code has been released on GitHub (https://github.com/zhangtaolab/CRISPRMatch)."
},
{
"pmid": "20187083",
"title": "Speeding up tandem mass spectrometry based database searching by peptide and spectrum indexing.",
"abstract": "Database searching is the technique of choice for shotgun proteomics, and to date much research effort has been spent on improving its effectiveness. However, database searching faces a serious challenge of efficiency, considering the large numbers of mass spectra and the ever fast increase in peptide databases resulting from genome translations, enzymatic digestions, and post-translational modifications. In this study, we conducted systematic research on speeding up database search engines for protein identification and illustrate the key points with the specific design of the pFind 2.1 search engine as a running example. Firstly, by constructing peptide indexes, pFind achieves a speedup of two to three compared with that without peptide indexes. Secondly, by constructing indexes for observed precursor and fragment ions, pFind achieves another speedup of two. As a result, pFind compares very favorably with predominant search engines such as Mascot, SEQUEST and X!Tandem."
},
{
"pmid": "19447788",
"title": "Highly accelerated feature detection in proteomics data sets using modern graphics processing units.",
"abstract": "MOTIVATION\nMass spectrometry (MS) is one of the most important techniques for high-throughput analysis in proteomics research. Due to the large number of different proteins and their post-translationally modified variants, the amount of data generated by a single wet-lab MS experiment can easily exceed several gigabytes. Hence, the time necessary to analyze and interpret the measured data is often significantly larger than the time spent on sample preparation and the wet-lab experiment itself. Since the automated analysis of this data is hampered by noise and baseline artifacts, more sophisticated computational techniques are required to handle the recorded mass spectra. Obviously, there is a clear tradeoff between performance and quality of the analysis, which is currently one of the most challenging problems in computational proteomics.\n\n\nRESULTS\nUsing modern graphics processing units (GPUs), we implemented a feature finding algorithm based on a hand-tailored adaptive wavelet transform that drastically reduces the computation time. A further speedup can be achieved exploiting the multi-core architecture of current computing devices, which leads to up to an approximately 200-fold speed-up in our computational experiments. In addition, we will demonstrate that several approximations necessary on the CPU to keep run times bearable, become obsolete on the GPU, yielding not only faster, but also improved results.\n\n\nAVAILABILITY\nAn open source implementation of the CUDA-based algorithm is available via the software framework OpenMS (http://www.openms.de).\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "19231891",
"title": "A ranking-based scoring function for peptide-spectrum matches.",
"abstract": "The analysis of the large volume of tandem mass spectrometry (MS/MS) proteomics data that is generated these days relies on automated algorithms that identify peptides from their mass spectra. An essential component of these algorithms is the scoring function used to evaluate the quality of peptide-spectrum matches (PSMs). In this paper, we present new approach to scoring of PSMs. We argue that since this problem is at its core a ranking task (especially in the case of de novo sequencing), it can be solved effectively using machine learning ranking algorithms. We developed a new discriminative boosting-based approach to scoring. Our scoring models draw upon a large set of diverse feature functions that measure different qualities of PSMs. Our method improves the performance of our de novo sequencing algorithm beyond the current state-of-the-art, and also greatly enhances the performance of database search programs. Furthermore, by increasing the efficiency of tag filtration and improving the sensitivity of PSM scoring, we make it practical to perform large-scale MS/MS analysis, such as proteogenomic search of a six-frame translation of the human genome (in which we achieve a reduction of the running time by a factor of 15 and a 60% increase in the number of identified peptides, compared to the InsPecT database search tool). Our scoring function is incorporated into PepNovo+ which is available for download or can be run online at http://bix.ucsd.edu."
},
{
"pmid": "26122521",
"title": "Novor: real-time peptide de novo sequencing software.",
"abstract": "De novo sequencing software has been widely used in proteomics to sequence new peptides from tandem mass spectrometry data. This study presents a new software tool, Novor, to greatly improve both the speed and accuracy of today's peptide de novo sequencing analyses. To improve the accuracy, Novor's scoring functions are based on two large decision trees built from a peptide spectral library with more than 300,000 spectra with machine learning. Important knowledge about peptide fragmentation is extracted automatically from the library and incorporated into the scoring functions. The decision tree model also enables efficient score calculation and contributes to the speed improvement. To further improve the speed, a two-stage algorithmic approach, namely dynamic programming and refinement, is used. The software program was also carefully optimized. On the testing datasets, Novor sequenced 7%-37% more correct residues than the state-of-the-art de novo sequencing tool, PEAKS, while being an order of magnitude faster. Novor can de novo sequence more than 300 MS/MS spectra per second on a laptop computer. The speed surpasses the acquisition speed of today's mass spectrometer and, therefore, opens a new possibility to de novo sequence in real time while the spectrometer is acquiring the spectral data. Graphical Abstract ᅟ."
}
] |
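The de novo sequencing and database-search tools described in the reference abstracts above (e.g., PepNovo, PEAKS, pNovo, UniNovo, Novor) all score candidate peptides by comparing theoretical fragment-ion masses with the peaks observed in an MS/MS spectrum. None of their scoring code is shown here; the sketch below is only a minimal, hypothetical illustration of the fragment-mass arithmetic that such scoring builds on, assuming approximate monoisotopic residue masses and covering only the residues needed for the example.

```python
# Minimal sketch (not the algorithm of any tool cited above): compute the
# singly charged b- and y-ion ladders of a peptide from approximate
# monoisotopic residue masses.
PROTON = 1.00728   # approximate proton mass (Da)
WATER = 18.01056   # approximate mass of H2O (Da)
RESIDUE_MASS = {   # approximate monoisotopic residue masses (Da), abbreviated table
    "P": 97.05276, "E": 129.04259, "T": 101.04768,
    "I": 113.08406, "D": 115.02694,
}

def fragment_ladders(peptide):
    """Return the singly charged b- and y-ion m/z ladders of a peptide."""
    masses = [RESIDUE_MASS[aa] for aa in peptide]
    b_ions, y_ions = [], []
    prefix = 0.0
    for m in masses[:-1]:            # b_i, i = 1 .. n-1 (N-terminal fragments)
        prefix += m
        b_ions.append(prefix + PROTON)
    suffix = 0.0
    for m in reversed(masses[1:]):   # y_i, i = 1 .. n-1 (C-terminal fragments)
        suffix += m
        y_ions.append(suffix + WATER + PROTON)
    return b_ions, y_ions

b, y = fragment_ladders("PEPTIDE")
print("b ions:", [round(x, 3) for x in b])
print("y ions:", [round(x, 3) for x in y])
```

A real scoring engine additionally models other ion types (e.g., a ions, internal and immonium ions), peak intensities, and mass tolerances, as the abstracts above discuss.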
JMIR Medical Informatics | 31516126 | PMC6746103 | 10.2196/14830 | Fine-Tuning Bidirectional Encoder Representations From Transformers (BERT)–Based Models on Large-Scale Electronic Health Record Notes: An Empirical Study | BackgroundThe bidirectional encoder representations from transformers (BERT) model has achieved great success in many natural language processing (NLP) tasks, such as named entity recognition and question answering. However, little prior work has explored this model to be used for an important task in the biomedical and clinical domains, namely entity normalization.ObjectiveWe aim to investigate the effectiveness of BERT-based models for biomedical or clinical entity normalization. In addition, our second objective is to investigate whether the domains of training data influence the performances of BERT-based models as well as the degree of influence.MethodsOur data was comprised of 1.5 million unlabeled electronic health record (EHR) notes. We first fine-tuned BioBERT on this large collection of unlabeled EHR notes. This generated our BERT-based model trained using 1.5 million electronic health record notes (EhrBERT). We then further fine-tuned EhrBERT, BioBERT, and BERT on three annotated corpora for biomedical and clinical entity normalization: the Medication, Indication, and Adverse Drug Events (MADE) 1.0 corpus, the National Center for Biotechnology Information (NCBI) disease corpus, and the Chemical-Disease Relations (CDR) corpus. We compared our models with two state-of-the-art normalization systems, namely MetaMap and disease name normalization (DNorm).ResultsEhrBERT achieved 40.95% F1 in the MADE 1.0 corpus for mapping named entities to the Medical Dictionary for Regulatory Activities and the Systematized Nomenclature of Medicine—Clinical Terms (SNOMED-CT), which have about 380,000 terms. In this corpus, EhrBERT outperformed MetaMap by 2.36% in F1. For the NCBI disease corpus and CDR corpus, EhrBERT also outperformed DNorm by improving the F1 scores from 88.37% and 89.92% to 90.35% and 93.82%, respectively. Compared with BioBERT and BERT, EhrBERT outperformed them on the MADE 1.0 corpus and the CDR corpus.ConclusionsOur work shows that BERT-based models have achieved state-of-the-art performance for biomedical and clinical entity normalization. BERT-based models can be readily fine-tuned to normalize any kind of named entities. | Related WorkPrevious work has studied various language models. For instance, the n-gram language model [2] assumes that the current word can be predicted via previous n words. Bengio et al [14] utilized feed-forward neural networks to build a language model, but their approach was limited to a fixed-length context. Mikolov et al [15] employed recurrent neural networks to represent languages, which can theoretically utilize an arbitrary-length context.Besides language models, researchers have also explored the problem of word representations. The bag-of-words model [16] assumes that a word can be represented by its neighbor words. Brown et al [17] proposed a clustering algorithm to group words into clusters that are semantically related. Their approach can be considered as a discrete version of distributed word representations. As deep learning develops, some researchers leveraged neural networks to generate word representations [16,18].Recently, researchers have found that many downstream applications can benefit from the word representations generated by pretrained models [11,12]. 
ELMo utilizes bidirectional recurrent neural networks to generate word representations [12]. Compared with word2vec [16], its word representations are contextualized and contain subword information. BERT [11] uses two pretraining objectives, masked language modeling and next-sentence prediction, which allow it to benefit naturally from large amounts of unlabeled data. The BERT input consists of three parts: word pieces, positions, and segments. BERT uses bidirectional transformers to generate word representations that are jointly conditioned on both the left and the right context in all layers. BERT and its derivatives, such as BioBERT [13], have achieved new state-of-the-art results on various NLP and biomedical NLP tasks (eg, question answering, named entity recognition, and relation extraction) through simple fine-tuning. In this paper, we investigated the effectiveness of this approach on a new task, namely biomedical and clinical entity normalization. In the biomedical and clinical domains, MetaMap [19] is a widely used tool for extracting terms and linking them to the Unified Medical Language System (UMLS) Metathesaurus [3]. Researchers have applied MetaMap in various scenarios, such as medical concept identification in electronic health record (EHR) notes [20], vocabulary construction for consumer health [21], and text mining of patent data [22]. In this paper, we employed MetaMap as one of our baselines. Previous work on entity normalization can be roughly divided into three types: (1) rule-based approaches [7] depend on manually designed rules and cannot cover all situations; (2) similarity-based approaches [23] compute similarities between entity mentions and terms, but their performance depends heavily on the choice of similarity metric; and (3) machine learning-based approaches [1,8-10] can perform better, but they usually require sufficient annotated data to train models from scratch. In this paper, we fine-tuned pretrained representation-learning models on the entity normalization task to show that they are more effective than traditional supervised approaches. | [
"23969135",
"14681409",
"23043124",
"27283952",
"28984180",
"28369171",
"20442139",
"29358159",
"17478413",
"31038462",
"26232443",
"30649735",
"24393765"
] | [
{
"pmid": "23969135",
"title": "DNorm: disease name normalization with pairwise learning to rank.",
"abstract": "MOTIVATION\nDespite the central role of diseases in biomedical research, there have been much fewer attempts to automatically determine which diseases are mentioned in a text-the task of disease name normalization (DNorm)-compared with other normalization tasks in biomedical text mining research.\n\n\nMETHODS\nIn this article we introduce the first machine learning approach for DNorm, using the NCBI disease corpus and the MEDIC vocabulary, which combines MeSH® and OMIM. Our method is a high-performing and mathematically principled framework for learning similarities between mentions and concept names directly from training data. The technique is based on pairwise learning to rank, which has not previously been applied to the normalization task but has proven successful in large optimization problems for information retrieval.\n\n\nRESULTS\nWe compare our method with several techniques based on lexical normalization and matching, MetaMap and Lucene. Our algorithm achieves 0.782 micro-averaged F-measure and 0.809 macro-averaged F-measure, an increase over the highest performing baseline method of 0.121 and 0.098, respectively.\n\n\nAVAILABILITY\nThe source code for DNorm is available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/DNorm, along with a web-based demonstration and links to the NCBI disease corpus. Results on PubMed abstracts are available in PubTator: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator ."
},
{
"pmid": "14681409",
"title": "The Unified Medical Language System (UMLS): integrating biomedical terminology.",
"abstract": "The Unified Medical Language System (http://umlsks.nlm.nih.gov) is a repository of biomedical vocabularies developed by the US National Library of Medicine. The UMLS integrates over 2 million names for some 900,000 concepts from more than 60 families of biomedical vocabularies, as well as 12 million relations among these concepts. Vocabularies integrated in the UMLS Metathesaurus include the NCBI taxonomy, Gene Ontology, the Medical Subject Headings (MeSH), OMIM and the Digital Anatomist Symbolic Knowledge Base. UMLS concepts are not only inter-related, but may also be linked to external resources such as GenBank. In addition to data, the UMLS includes tools for customizing the Metathesaurus (MetamorphoSys), for generating lexical variants of concept names (lvg) and for extracting UMLS concepts from text (MetaMap). The UMLS knowledge sources are updated quarterly. All vocabularies are available at no fee for research purposes within an institution, but UMLS users are required to sign a license agreement. The UMLS knowledge sources are distributed on CD-ROM and by FTP."
},
{
"pmid": "23043124",
"title": "Using rule-based natural language processing to improve disease normalization in biomedical text.",
"abstract": "BACKGROUND AND OBJECTIVE\nIn order for computers to extract useful information from unstructured text, a concept normalization system is needed to link relevant concepts in a text to sources that contain further information about the concept. Popular concept normalization tools in the biomedical field are dictionary-based. In this study we investigate the usefulness of natural language processing (NLP) as an adjunct to dictionary-based concept normalization.\n\n\nMETHODS\nWe compared the performance of two biomedical concept normalization systems, MetaMap and Peregrine, on the Arizona Disease Corpus, with and without the use of a rule-based NLP module. Performance was assessed for exact and inexact boundary matching of the system annotations with those of the gold standard and for concept identifier matching.\n\n\nRESULTS\nWithout the NLP module, MetaMap and Peregrine attained F-scores of 61.0% and 63.9%, respectively, for exact boundary matching, and 55.1% and 56.9% for concept identifier matching. With the aid of the NLP module, the F-scores of MetaMap and Peregrine improved to 73.3% and 78.0% for boundary matching, and to 66.2% and 69.8% for concept identifier matching. For inexact boundary matching, performances further increased to 85.5% and 85.4%, and to 73.6% and 73.3% for concept identifier matching.\n\n\nCONCLUSIONS\nWe have shown the added value of NLP for the recognition and normalization of diseases with MetaMap and Peregrine. The NLP module is general and can be applied in combination with any concept normalization system. Whether its use for concept types other than disease is equally advantageous remains to be investigated."
},
{
"pmid": "27283952",
"title": "TaggerOne: joint named entity recognition and normalization with semi-Markov Models.",
"abstract": "MOTIVATION\nText mining is increasingly used to manage the accelerating pace of the biomedical literature. Many text mining applications depend on accurate named entity recognition (NER) and normalization (grounding). While high performing machine learning methods trainable for many entity types exist for NER, normalization methods are usually specialized to a single entity type. NER and normalization systems are also typically used in a serial pipeline, causing cascading errors and limiting the ability of the NER system to directly exploit the lexical information provided by the normalization.\n\n\nMETHODS\nWe propose the first machine learning model for joint NER and normalization during both training and prediction. The model is trainable for arbitrary entity types and consists of a semi-Markov structured linear classifier, with a rich feature approach for NER and supervised semantic indexing for normalization. We also introduce TaggerOne, a Java implementation of our model as a general toolkit for joint NER and normalization. TaggerOne is not specific to any entity type, requiring only annotated training data and a corresponding lexicon, and has been optimized for high throughput.\n\n\nRESULTS\nWe validated TaggerOne with multiple gold-standard corpora containing both mention- and concept-level annotations. Benchmarking results show that TaggerOne achieves high performance on diseases (NCBI Disease corpus, NER f-score: 0.829, normalization f-score: 0.807) and chemicals (BioCreative 5 CDR corpus, NER f-score: 0.914, normalization f-score 0.895). These results compare favorably to the previous state of the art, notwithstanding the greater flexibility of the model. We conclude that jointly modeling NER and normalization greatly improves performance.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe TaggerOne source code and an online demonstration are available at: http://www.ncbi.nlm.nih.gov/bionlp/taggerone\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "28984180",
"title": "CNN-based ranking for biomedical entity normalization.",
"abstract": "BACKGROUND\nMost state-of-the-art biomedical entity normalization systems, such as rule-based systems, merely rely on morphological information of entity mentions, but rarely consider their semantic information. In this paper, we introduce a novel convolutional neural network (CNN) architecture that regards biomedical entity normalization as a ranking problem and benefits from semantic information of biomedical entities.\n\n\nRESULTS\nThe CNN-based ranking method first generates candidates using handcrafted rules, and then ranks the candidates according to their semantic information modeled by CNN as well as their morphological information. Experiments on two benchmark datasets for biomedical entity normalization show that our proposed CNN-based ranking method outperforms traditional rule-based method with state-of-the-art performance.\n\n\nCONCLUSIONS\nWe propose a CNN architecture that regards biomedical entity normalization as a ranking problem. Comparison results show that semantic information is beneficial to biomedical entity normalization and can be well combined with morphological information in our CNN architecture for further improvement."
},
{
"pmid": "28369171",
"title": "A transition-based joint model for disease named entity recognition and normalization.",
"abstract": "MOTIVATION\nDisease named entities play a central role in many areas of biomedical research, and automatic recognition and normalization of such entities have received increasing attention in biomedical research communities. Existing methods typically used pipeline models with two independent phases: (i) a disease named entity recognition (DER) system is used to find the boundaries of mentions in text and (ii) a disease named entity normalization (DEN) system is used to connect the mentions recognized to concepts in a controlled vocabulary. The main problems of such models are: (i) there is error propagation from DER to DEN and (ii) DEN is useful for DER, but pipeline models cannot utilize this.\n\n\nMETHODS\nWe propose a transition-based model to jointly perform disease named entity recognition and normalization, casting the output construction process into an incremental state transition process, learning sequences of transition actions globally, which correspond to joint structural outputs. Beam search and online structured learning are used, with learning being designed to guide search. Compared with the only existing method for joint DEN and DER, our method allows non-local features to be used, which significantly improves the accuracies.\n\n\nRESULTS\nWe evaluate our model on two corpora: the BioCreative V Chemical Disease Relation (CDR) corpus and the NCBI disease corpus. Experiments show that our joint framework achieves significantly higher performances compared to competitive pipeline baselines. Our method compares favourably to other state-of-the-art approaches.\n\n\nAVAILABILITY AND IMPLEMENTATION\nData and code are available at https://github.com/louyinxia/jointRN.\n\n\nCONTACT\[email protected]."
},
{
"pmid": "20442139",
"title": "An overview of MetaMap: historical perspective and recent advances.",
"abstract": "MetaMap is a widely available program providing access to the concepts in the unified medical language system (UMLS) Metathesaurus from biomedical text. This study reports on MetaMap's evolution over more than a decade, concentrating on those features arising out of the research needs of the biomedical informatics community both within and outside of the National Library of Medicine. Such features include the detection of author-defined acronyms/abbreviations, the ability to browse the Metathesaurus for concepts even tenuously related to input text, the detection of negation in situations in which the polarity of predications is important, word sense disambiguation (WSD), and various technical and algorithmic features. Near-term plans for MetaMap development include the incorporation of chemical name recognition and enhanced WSD."
},
{
"pmid": "29358159",
"title": "A Natural Language Processing System That Links Medical Terms in Electronic Health Record Notes to Lay Definitions: System Development Using Physician Reviews.",
"abstract": "BACKGROUND\nMany health care systems now allow patients to access their electronic health record (EHR) notes online through patient portals. Medical jargon in EHR notes can confuse patients, which may interfere with potential benefits of patient access to EHR notes.\n\n\nOBJECTIVE\nThe aim of this study was to develop and evaluate the usability and content quality of NoteAid, a Web-based natural language processing system that links medical terms in EHR notes to lay definitions, that is, definitions easily understood by lay people.\n\n\nMETHODS\nNoteAid incorporates two core components: CoDeMed, a lexical resource of lay definitions for medical terms, and MedLink, a computational unit that links medical terms to lay definitions. We developed innovative computational methods, including an adapted distant supervision algorithm to prioritize medical terms important for EHR comprehension to facilitate the effort of building CoDeMed. Ten physician domain experts evaluated the user interface and content quality of NoteAid. The evaluation protocol included a cognitive walkthrough session and a postsession questionnaire. Physician feedback sessions were audio-recorded. We used standard content analysis methods to analyze qualitative data from these sessions.\n\n\nRESULTS\nPhysician feedback was mixed. Positive feedback on NoteAid included (1) Easy to use, (2) Good visual display, (3) Satisfactory system speed, and (4) Adequate lay definitions. Opportunities for improvement arising from evaluation sessions and feedback included (1) improving the display of definitions for partially matched terms, (2) including more medical terms in CoDeMed, (3) improving the handling of terms whose definitions vary depending on different contexts, and (4) standardizing the scope of definitions for medicines. On the basis of these results, we have improved NoteAid's user interface and a number of definitions, and added 4502 more definitions in CoDeMed.\n\n\nCONCLUSIONS\nPhysician evaluation yielded useful feedback for content validation and refinement of this innovative tool that has the potential to improve patient EHR comprehension and experience using patient portals. Future ongoing work will develop algorithms to handle ambiguous medical terms and test and evaluate NoteAid with patients."
},
{
"pmid": "17478413",
"title": "Term identification methods for consumer health vocabulary development.",
"abstract": "BACKGROUND\nThe development of consumer health information applications such as health education websites has motivated the research on consumer health vocabulary (CHV). Term identification is a critical task in vocabulary development. Because of the heterogeneity and ambiguity of consumer expressions, term identification for CHV is more challenging than for professional health vocabularies.\n\n\nOBJECTIVE\nFor the development of a CHV, we explored several term identification methods, including collaborative human review and automated term recognition methods.\n\n\nMETHODS\nA set of criteria was established to ensure consistency in the collaborative review, which analyzed 1893 strings. Using the results from the human review, we tested two automated methods-C-value formula and a logistic regression model.\n\n\nRESULTS\nThe study identified 753 consumer terms and found the logistic regression model to be highly effective for CHV term identification (area under the receiver operating characteristic curve = 95.5%).\n\n\nCONCLUSIONS\nThe collaborative human review and logistic regression methods were effective for identifying terms for CHV development."
},
{
"pmid": "31038462",
"title": "Technological Innovations in Disease Management: Text Mining US Patent Data From 1995 to 2017.",
"abstract": "BACKGROUND\nPatents are important intellectual property protecting technological innovations that inspire efficient research and development in biomedicine. The number of awarded patents serves as an important indicator of economic growth and technological innovation. Researchers have mined patents to characterize the focuses and trends of technological innovations in many fields.\n\n\nOBJECTIVE\nTo expand patent mining to biomedicine and facilitate future resource allocation in biomedical research for the United States, we analyzed US patent documents to determine the focuses and trends of protected technological innovations across the entire disease landscape.\n\n\nMETHODS\nWe analyzed more than 5 million US patent documents between 1995 and 2017, using summary statistics and dynamic topic modeling. More specifically, we investigated the disease coverage and latent topics in patent documents over time. We also incorporated the patent data into the calculation of our recently developed Research Opportunity Index (ROI) and Public Health Index (PHI), to recalibrate the resource allocation in biomedical research.\n\n\nRESULTS\nOur analysis showed that protected technological innovations have been primarily focused on socioeconomically critical diseases such as \"other cancers\" (malignant neoplasm of head, face, neck, abdomen, pelvis, or limb; disseminated malignant neoplasm; Merkel cell carcinoma; and malignant neoplasm, malignant carcinoid tumors, neuroendocrine tumor, and carcinoma in situ of an unspecified site), diabetes mellitus, and obesity. The United States has significantly improved resource allocation to biomedical research and development over the past 17 years, as illustrated by the decreasing PHI. Diseases with positive ROI, such as ankle and foot fracture, indicate potential research opportunities for the future. Development of novel chemical or biological drugs and electrical devices for diagnosis and disease management is the dominating topic in patented inventions.\n\n\nCONCLUSIONS\nThis multifaceted analysis of patent documents provides a deep understanding of the focuses and trends of technological innovations in disease management in patents. Our findings offer insights into future research and innovation opportunities and provide actionable information to facilitate policy makers, payers, and investors to make better evidence-based decisions regarding resource allocation in biomedicine."
},
{
"pmid": "26232443",
"title": "Normalizing clinical terms using learned edit distance patterns.",
"abstract": "BACKGROUND\nVariations of clinical terms are very commonly encountered in clinical texts. Normalization methods that use similarity measures or hand-coded approximation rules for matching clinical terms to standard terminologies have limited accuracy and coverage.\n\n\nMATERIALS AND METHODS\nIn this paper, a novel method is presented that automatically learns patterns of variations of clinical terms from known variations from a resource such as the Unified Medical Language System (UMLS). The patterns are first learned by computing edit distances between the known variations, which are then appropriately generalized for normalizing previously unseen terms. The method was applied and evaluated on the disease and disorder mention normalization task using the dataset of SemEval 2014 and compared with the normalization ability of the MetaMap system and a method based on cosine similarity.\n\n\nRESULTS\nExcluding the mentions that already exactly match in UMLS and the training dataset, the proposed method obtained 64.7% accuracy on the rest of the test dataset. The accuracy was calculated as the number of mentions that correctly matched the gold-standard concept unique identifiers (CUIs) or correctly matched to be without a CUI. In comparison, MetaMap's accuracy was 41.9% and cosine similarity's accuracy was 44.6%. When only the output CUIs were evaluated, the proposed method obtained 54.4% best F-measure (at 92.1% precision and 38.6% recall) while MetaMap obtained 19.4% best F-measure (at 38.0% precision and 13.0% recall) and cosine similarity obtained 38.1% best F-measure (at 70.3% precision and 26.1% recall).\n\n\nCONCLUSIONS\nThe novel method was found to perform much better than the MetaMap system and the cosine similarity based method in normalizing disease mentions in clinical text that did not exactly match in UMLS. The method is also general and can be used for normalizing clinical terms of other semantic types as well."
},
{
"pmid": "30649735",
"title": "Overview of the First Natural Language Processing Challenge for Extracting Medication, Indication, and Adverse Drug Events from Electronic Health Record Notes (MADE 1.0).",
"abstract": "INTRODUCTION\nThis work describes the Medication and Adverse Drug Events from Electronic Health Records (MADE 1.0) corpus and provides an overview of the MADE 1.0 2018 challenge for extracting medication, indication, and adverse drug events (ADEs) from electronic health record (EHR) notes.\n\n\nOBJECTIVE\nThe goal of MADE is to provide a set of common evaluation tasks to assess the state of the art for natural language processing (NLP) systems applied to EHRs supporting drug safety surveillance and pharmacovigilance. We also provide benchmarks on the MADE dataset using the system submissions received in the MADE 2018 challenge.\n\n\nMETHODS\nThe MADE 1.0 challenge has released an expert-annotated cohort of medication and ADE information comprising 1089 fully de-identified longitudinal EHR notes from 21 randomly selected patients with cancer at the University of Massachusetts Memorial Hospital. Using this cohort as a benchmark, the MADE 1.0 challenge designed three shared NLP tasks. The named entity recognition (NER) task identifies medications and their attributes (dosage, route, duration, and frequency), indications, ADEs, and severity. The relation identification (RI) task identifies relations between the named entities: medication-indication, medication-ADE, and attribute relations. The third shared task (NER-RI) evaluates NLP models that perform the NER and RI tasks jointly. In total, 11 teams from four countries participated in at least one of the three shared tasks, and 41 system submissions were received in total.\n\n\nRESULTS\nThe best systems F1 scores for NER, RI, and NER-RI were 0.82, 0.86, and 0.61, respectively. Ensemble classifiers using the team submissions improved the performance further, with an F1 score of 0.85, 0.87, and 0.66 for the three tasks, respectively.\n\n\nCONCLUSION\nMADE results show that recent progress in NLP has led to remarkable improvements in NER and RI tasks for the clinical domain. However, some room for improvement remains, particularly in the NER-RI task."
},
{
"pmid": "24393765",
"title": "NCBI disease corpus: a resource for disease name recognition and concept normalization.",
"abstract": "Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information, however, the development of powerful, highly effective tools to automatically detect central biomedical concepts such as diseases is conditional on the availability of annotated corpora. This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH®) or Online Mendelian Inheritance in Man (OMIM®). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. In this setting, a high inter-annotator agreement was observed. Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency. The public release of the NCBI disease corpus contains 6892 disease mentions, which are mapped to 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the rest contain an OMIM identifier. We were able to link 91% of the mentions to a single disease concept, while the rest are described as a combination of concepts. In order to help researchers use the corpus to design and test disease identification methods, we have prepared the corpus as training, testing and development sets. To demonstrate its utility, we conducted a benchmarking experiment where we compared three different knowledge-based disease normalization methods with a best performance in F-measure of 63.7%. These results show that the NCBI disease corpus has the potential to significantly improve the state-of-the-art in disease name recognition and normalization research, by providing a high-quality gold standard thus enabling the development of machine-learning based approaches for such tasks. The NCBI disease corpus, guidelines and other associated resources are available at: http://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/."
}
] |
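The EhrBERT study summarized above fine-tunes BERT-based encoders for entity normalization, but its training code is not reproduced in this record. The snippet below is only a rough, hypothetical sketch of the general recipe, casting normalization as N-way classification of a mention string into concept identifiers, and assuming the Hugging Face transformers library with PyTorch; the model name, concept-vocabulary size, and example mentions are placeholders rather than values from the paper.

```python
# Hypothetical sketch: fine-tune a BERT-style encoder to map entity mentions
# to concept IDs by treating normalization as N-way classification.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"   # placeholder; a biomedical/clinical BERT variant could be used instead
NUM_CONCEPTS = 1000                # placeholder size of the target concept vocabulary

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_CONCEPTS)

# Toy training pairs (mention text, concept index) -- placeholders only.
mentions = ["heart attack", "high blood pressure"]
labels = torch.tensor([0, 1])

enc = tokenizer(mentions, padding=True, truncation=True, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                    batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few epochs, as is typical for fine-tuning
    for input_ids, attention_mask, y in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Inference: pick the highest-scoring concept index for a new mention.
model.eval()
with torch.no_grad():
    query = tokenizer(["myocardial infarction"], return_tensors="pt")
    predicted = model(**query).logits.argmax(dim=-1).item()
print("predicted concept index:", predicted)
```

In practice, a candidate-generation step (e.g., dictionary lookup) is often combined with such a classifier when the concept vocabulary is very large, in line with the candidate-ranking approaches cited in the related-work text above.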
Scientific Reports | 31527599 | PMC6746744 | 10.1038/s41598-019-49573-4 | A robust method for automatic identification of landmarks on surface models of the pelvis | The recognition of bony landmarks of the pelvis is a required operation in patient-specific orthopedics, subject-specific biomechanics or morphometrics. A fully automatic detection is preferable to a subjective and time-consuming manual identification. In this paper, a new approach, called the iterative tangential plane (ITP) method, for fully automatic identification of landmarks on surface models of the pelvis is introduced. The method includes the landmarks to construct the two most established anatomical reference frames of the pelvis: the anterior pelvic plane (APP) coordinate system and superior inferior spine plane (SISP) coordinate system. The ITP method proved to be robust against the initial alignment of the pelvis in space. A comparison to a manual identification was performed that showed minor but significant (p < 0.05) median differences below 3 mm for the position of the landmarks and below 1° for the orientation of the APP coordinate system. Whether these differences are acceptable, has to be evaluated for each specific use case. There were no significant differences for the orientation of the SISP coordinate system recommended by the International Society of Biomechanics. | Related WorkA list of all abbreviations used in this paper can be found as Supplementary Table S1.Ehrhardt et al. proposed an atlas-based approach with a curvature-based refinement of the landmarks1. The atlas consists of gray value data and a surface model of the pelvis with labeled anatomical areas and landmarks. Initially, the atlas is non-rigidly registered to the computed tomography (CT) data of the subject using the gray value data. Afterwards, the atlas and the subject mesh are locally cut out within a certain radius for each landmark and the registration of each cut-out is refined by a combination of an affine and a non-linear registration algorithm taking the Euclidean distance, the normals and the curvature of the surfaces into account. However, a manually labeled atlas and CT data of the subject are necessary for this approach. The gray value-based initialization of the atlas is error-prone due to large anatomical variations between subjects. Moreover, the limited number of seven subjects and the missing comparison to a manual landmark identification impede the assessment of Ehrhardt’s method.Seim et al. compared three methods for the identification of the landmarks of the anterior pelvic plane (APP). The APP is usually constructed by the anterior superior iliac spines (ASISs) and the pubic tubercles (PTs) or the pubic symphysis (PS). Seim et al. evaluated one convex hull-based method and two statistical shape model (SSM)-based methods15. All methods are based on a previous SSM-based segmentation and a graph-based optimized reconstruction of the subject’s pelvis from CT data. During this segmentation process, the pelvis is subdivided into regions such as the iliac and pubic bones. For the first method, the face of the convex hull of the pelvis with the vertices with minimal distance to the iliac and pubic regions defines the APP. This method is limited to landmarks that are part of the convex hull. The SSM-based methods only differ in the number of manually labeled data sets in the training data of the SSM. 
Both methods transfer the landmarks from the SSM to the optimized reconstruction of the subject's pelvis because both meshes share the same topology. However, all three methods of Seim et al. require a sufficient amount of training data for the creation of the SSM as well as CT data of the subject. In their study, they used 50 datasets for the generation of the SSM. In addition to Ehrhardt's and Seim's methods, further methods for the detection of pelvic landmarks on raw image data have been proposed [12,16,17]. In contrast, our study focuses on fully automatic pelvic landmark identification on surface models of the pelvis, considering scenarios with many subjects from different sources in which the volume data might not be available, for instance for reasons of data protection. The following studies addressed this issue. Subburaj et al. presented a curvature-based approach in combination with a spatial relationship matrix of the landmarks [18]. The surface of the mesh is grouped into different regions (peaks, ridges, pits and ravines) based on the curvature value. Subsequently, the regions are iteratively selected and labeled considering the spatial relationship matrix of the landmarks. However, the spatial relationship matrix depends on the alignment of the pelvis in space, and its initialization is unclear. Moreover, the approach was tested on only one subject. Our study will show that there is a trade-off between the detection rate and the accuracy depending on the number of landmarks in the spatial relationship matrix. Several studies focused on the detection of the landmarks that are necessary to construct the APP coordinate system. Kai et al. proposed a method to identify the APP purely based on the surface of the pelvis [10]. The surface is transformed to its principal axes of inertia and subsequently subdivided into four parts by a sagittal and a transverse plane. The most anterior points of the four parts define the landmarks of the APP. However, due to the variability of pelvic morphology, the principal axes do not ensure a unified orientation for all subjects. Hence, the most anterior points are not necessarily the landmark points of the APP, which is defined by placing the anterior side of the pelvis on a table. Higgens et al., Zhang et al. and Chen et al. presented iterative refinements of the APP identification [6,19,20]. Still, none of the three approaches is fully automatic; they are based on an initial manual selection of the approximate landmarks of the APP. In this paper, we introduce a fully automatic approach, hereafter referred to as the iterative tangential plane (ITP) method, for the identification of landmarks on a surface model of the pelvis. In addition to the APP, the ITP method identifies the posterior superior iliac spines (PSISs), the ischial spines (ISs) and the sacral promontory (SP). The ASISs and the midpoint between the PSISs define the superior iliac spine plane (SISP) recommended by the International Society of Biomechanics [11]. Any automatic landmark identification has to take into account that the anatomical planes of the patient can deviate considerably from the CT coordinate system and that the reference systems of medical imaging systems are not standardized. We hypothesized that the ITP method robustly identifies the pelvic landmarks independently of the initial orientation or position of the surface model of the pelvis and without significant differences from a manual identification of the landmarks. | [
"15472752",
"20849368",
"24641349",
"21295307",
"25378504",
"29773405",
"17503448",
"24456665",
"11934426",
"15894511",
"15639401",
"19195896",
"22224793",
"25366904",
"28207829",
"29269225",
"24220210",
"19345065",
"12111881"
] | [
{
"pmid": "15472752",
"title": "Atlas-based recognition of anatomical structures and landmarks and the automatic computation of orthopedic parameters.",
"abstract": "OBJECTIVE\nThis paper describes methods for the automatic atlas-based segmentation of bone structures of the hip, the automatic detection of anatomical point landmarks and the computation of orthopedic parameters to avoid the interactive, time-consuming pre-processing steps for the virtual planning of hip operations.\n\n\nMETHODS\nBased on the CT data of the Visible Human Data Sets, two three-dimensional atlases of the human pelvis have been built. The atlases consist of labeled CT data sets, 3D surface models of the separated structures and associated anatomical point landmarks. The atlas information is transferred to the patient data by a non-linear gray value-based registration algorithm. A surface-based registration algorithm was developed to detect the anatomical landmarks on the patient's bone structures. Furthermore, a software tool for the automatic computation of orthopedic parameters is presented. Finally, methods for an evaluation of the atlas-based segmentation and the atlas-based landmark detection are explained.\n\n\nRESULTS\nA first evaluation of the presented atlas-based segmentation method shows the correct labeling of 98.5% of the bony voxels. The presented landmark detection algorithm enables the precise and reliable localization of orthopedic landmarks. The accuracy of the landmark detection is below 2.5 mm.\n\n\nCONCLUSION\nThe atlas-based segmentation of bone structures, the atlas-based landmark detection and the automatic computation of orthopedic measures are suitable to essentially reduce the time-consuming user interaction during the pre-processing of the CT data for the virtual three-dimensional planning of hip operations."
},
{
"pmid": "20849368",
"title": "Integration of CAD/CAM planning into computer assisted orthopaedic surgery.",
"abstract": "Modern Computer Aided Design/Modeling (CAD/CAM) software allows complex surgical simulations, but it is often difficult to transfer and execute precisely the planned scenarios during actual operations. We describe a new method of integrating CAD/CAM surgical plans directly into a computer surgical navigation system, and demonstrate its use to guide three complex orthopaedic surgical procedures: a periacetabular osteotomy of a dysplastic hip, a corrective osteotomy of a post-traumatic tibial deformity, and a multi-planar resection of a distal femoral tumor followed by reconstruction with a CAD custom prosthesis."
},
{
"pmid": "24641349",
"title": "Computed tomography-based joint locations affect calculation of joint moments during gait when compared to scaling approaches.",
"abstract": "Hip joint moments are an important parameter in the biomechanical evaluation of orthopaedic surgery. Joint moments are generally calculated using scaled generic musculoskeletal models. However, due to anatomical variability or pathology, such models may differ from the patient's anatomy, calling into question the accuracy of the resulting joint moments. This study aimed to quantify the potential joint moment errors caused by geometrical inaccuracies in scaled models, during gait, for eight test subjects. For comparison, a semi-automatic computed tomography (CT)-based workflow was introduced to create models with subject-specific joint locations and inertial parameters. 3D surface models of the femora and hemipelves were created by segmentation and the hip joint centres and knee axes were located in these models. The scaled models systematically located the hip joint centre (HJC) up to 33.6 mm too inferiorly. As a consequence, significant and substantial peak hip extension and abduction moment differences were recorded, with, respectively, up to 23.1% and 15.8% higher values in the image-based models. These findings reaffirm the importance of accurate HJC estimation, which may be achieved using CT- or radiography-based subject-specific modelling. However, obesity-related gait analysis marker placement errors may have influenced these results and more research is needed to overcome these artefacts."
},
{
"pmid": "21295307",
"title": "Level of subject-specific detail in musculoskeletal models affects hip moment arm length calculation during gait in pediatric subjects with increased femoral anteversion.",
"abstract": "Biomechanical parameters of gait such as muscle's moment arm length (MAL) and muscle-tendon length are known to be sensitive to anatomical variability. Nevertheless, most studies rely on rescaled generic models (RGMo) constructed from averaged data of cadaveric measurements in a healthy adult population. As an alternative, deformable generic models (DGMo) have been proposed. These models integrate a higher level of subject-specific detail by applying characteristic deformations to the musculoskeletal geometry. In contrast, musculoskeletal models based on magnetic resonance (MR) images (MRMo) reflect the involved subject's characteristics in every level of the model. This study investigated the effect of the varying levels of subject-specific detail in these three model types on the calculated hip MAL during gait in a pediatric population of seven cerebral palsy subjects presenting aberrant femoral geometry. Our results show large percentage differences in calculated MAL between RGMo and MRMo. Furthermore, the use of DGMo did not uniformly reduce inter-model differences in calculated MAL. The magnitude of these percentage differences stresses the need to take these effects into account when selecting the level of subject-specific detail one wants to integrate in musculoskeletal. Furthermore, the variability of these differences between subjects and between muscles makes it very difficult to a priori estimate their importance for a biomechanical analysis of a certain muscle in a given subject."
},
{
"pmid": "25378504",
"title": "A novel approach for determining three-dimensional acetabular orientation: results from two hundred subjects.",
"abstract": "BACKGROUND\nThe inherently complex three-dimensional morphology of both the pelvis and acetabulum create difficulties in accurately determining acetabular orientation. Our objectives were to develop a reliable and accurate methodology for determining three-dimensional acetabular orientation and to utilize it to describe relevant characteristics of a large population of subjects without apparent hip pathology.\n\n\nMETHODS\nHigh-resolution computed tomography studies of 200 patients previously receiving pelvic scans for indications not related to orthopaedic conditions were selected from our institution's database. Three-dimensional models of each osseous pelvis were generated to extract specific anatomical data sets. A novel computational method was developed to determine standard measures of three-dimensional acetabular orientation within an automatically identified anterior pelvic plane reference frame. Automatically selected points on the osseous ridge of the acetabulum were used to generate a best-fit plane for describing acetabular orientation.\n\n\nRESULTS\nOur method showed excellent interobserver and intraobserver agreement (an intraclass correlation coefficient [ICC] of >0.999) and achieved high levels of accuracy. A significant difference between males and females in both anteversion (average, 3.5°; 95% confidence interval [CI], 1.9° to 5.1° across all angular definitions; p < 0.0001) and inclination (1.4°; 95% CI, 0.6° to 2.3° for anatomic angular definition; p < 0.002) was observed. Intrapatient asymmetry in anatomic measures showed bilateral differences in anteversion (maximum, 12.1°) and in inclination (maximum, 10.9°).\n\n\nCONCLUSIONS\nSignificant differences in acetabular orientation between the sexes can be detected only with accurate measurements that account for the entire acetabulum. While a wide range of interpatient acetabular orientations was observed, the majority of subjects had acetabula that were relatively symmetrical in both inclination and anteversion.\n\n\nCLINICAL RELEVANCE\nA highly accurate and reproducible method for determining the orientation of the acetabulum's aperture will benefit both surgeons and patients, by further refining the distinctions between normal and abnormal hip characteristics. Enhanced understanding of the acetabulum could be useful in the diagnostic, planning, and execution stages for surgical procedures of the hip or in advancing the design of new implant systems."
},
{
"pmid": "29773405",
"title": "Gender differences in knee morphology and the prospects for implant design in total knee replacement.",
"abstract": "BACKGROUND\nMorphological differences between female and male knees have been reported in the literature, which led to the development of so-called gender-specific implants. However, detailed morphological descriptions covering the entire joint are rare and little is known regarding whether gender differences are real sexual dimorphisms or can be explained by overall differences in size.\n\n\nMETHODS\nWe comprehensively analysed knee morphology using 33 features of the femur and 21 features of the tibia to quantify knee shape. The landmark recognition and feature extraction based on three-dimensional surface data were fully automatically applied to 412 pathological (248 female and 164 male) knees undergoing total knee arthroplasty. Subsequently, an exploratory statistical analysis was performed and linear correlation analysis was used to investigate normalization factors and gender-specific differences.\n\n\nRESULTS\nStatistically significant differences between genders were observed. These were pronounced for distance measurements and negligible for angular (relative) measurements. Female knees were significantly narrower at the same depth compared to male knees. The correlation analysis showed that linear correlations were higher for distance measurements defined in the same direction. After normalizing the distance features according to overall dimensions in the direction of their definition, gender-specific differences disappeared or were smaller than the related confidence intervals.\n\n\nCONCLUSIONS\nImplants should not be linearly scaled according to one dimension. Instead, features in medial/lateral and anterior/posterior directions should be normalized separately (non-isotropic scaling). However, large inter-individual variations of the features remain after normalization, suggesting that patient-specific design solutions are required for an improved implant design, regardless of gender."
},
{
"pmid": "17503448",
"title": "The problem of assessing landmark error in geometric morphometrics: theory, methods, and modifications.",
"abstract": "Geometric morphometric methods rely on the accurate identification and quantification of landmarks on biological specimens. As in any empirical analysis, the assessment of inter- and intra-observer error is desirable. A review of methods currently being employed to assess measurement error in geometric morphometrics was conducted and three general approaches to the problem were identified. One such approach employs Generalized Procrustes Analysis to superimpose repeatedly digitized landmark configurations, thereby establishing whether repeat measures fall within an acceptable range of variation. The potential problem of this error assessment method (the \"Pinocchio effect\") is demonstrated and its effect on error studies discussed. An alternative approach involves employing Euclidean distances between the configuration centroid and repeat measures of a landmark to assess the relative repeatability of individual landmarks. This method is also potentially problematic as the inherent geometric properties of the specimen can result in misleading estimates of measurement error. A third approach involved the repeated digitization of landmarks with the specimen held in a constant orientation to assess individual landmark precision. This latter approach is an ideal method for assessing individual landmark precision, but is restrictive in that it does not allow for the incorporation of instrumentally defined or Type III landmarks. Hence, a revised method for assessing landmark error is proposed and described with the aid of worked empirical examples."
},
{
"pmid": "24456665",
"title": "Automatic construction of an anatomical coordinate system for three-dimensional bone models of the lower extremities--pelvis, femur, and tibia.",
"abstract": "Automated methods for constructing patient-specific anatomical coordinate systems (ACSs) for the pelvis, femur and tibia were developed based on the bony geometry of each, derived from computed tomography (CT). The methods used principal axes of inertia, principal component analysis (PCA), cross-sectional area, and spherical and ellipsoidal surface fitting to eliminate the influence of rater's bias on reference landmark selection. Automatic ACSs for the pelvis, femur, and tibia were successfully constructed on each 3D bone model using the developed algorithm. All constructions were performed within 30s; furthermore, between- and within- rater errors were zero for a given CT-based 3D bone model, owing to the automated nature of the algorithm. ACSs recommended by the International Society of Biomechanics (ISB) were compared with the automatically constructed ACS, to evaluate the potential differences caused by the selection of the coordinate system. The pelvis ACSs constructed using the ISB-recommended system were tilted significantly more anteriorly than those constructed automatically (range, 9.6-18.8°). There were no significant differences between the two methods for the femur. For the tibia, significant differences were found in the direction of the anteroposterior axis; the anteroposterior axes identified by ISB were more external than those in the automatic ACS (range, 17.5-25.0°)."
},
{
"pmid": "11934426",
"title": "ISB recommendation on definitions of joint coordinate system of various joints for the reporting of human joint motion--part I: ankle, hip, and spine. International Society of Biomechanics.",
"abstract": "The Standardization and Terminology Committee (STC) of the International Society of Biomechanics (ISB) proposes a general reporting standard for joint kinematics based on the Joint Coordinate System (JCS), first proposed by Grood and Suntay for the knee joint in 1983 (J. Biomech. Eng. 105 (1983) 136). There is currently a lack of standard for reporting joint motion in the field of biomechanics for human movement, and the JCS as proposed by Grood and Suntay has the advantage of reporting joint motions in clinically relevant terms. In this communication, the STC proposes definitions of JCS for the ankle, hip, and spine. Definitions for other joints (such as shoulder, elbow, hand and wrist, temporomandibular joint (TMJ), and whole body) will be reported in later parts of the series. The STC is publishing these recommendations so as to encourage their use, to stimulate feedback and discussion, and to facilitate further revisions. For each joint, a standard for the local axis system in each articulating bone is generated. These axes then standardize the JCS. Adopting these standards will lead to better communication among researchers and clinicians."
},
{
"pmid": "15894511",
"title": "Localization of anatomical point landmarks in 3D medical images by fitting 3D parametric intensity models.",
"abstract": "We introduce a new approach for the localization of 3D anatomical point landmarks. This approach is based on 3D parametric intensity models which are directly fitted to 3D images. To efficiently model tip-like, saddle-like, and sphere-like anatomical structures we introduce analytic intensity models based on the Gaussian error function in conjunction with 3D rigid transformations as well as deformations. To select a suitable size of the region-of-interest (ROI) where model fitting is performed, we also propose a new scheme for automatic selection of an optimal 3D ROI size based on the dominant gradient direction. In addition, to achieve a higher level of automation we present an algorithm for automatic initialization of the model parameters. Our approach has been successfully applied to accurately localize anatomical landmarks in 3D synthetic data as well as 3D MR and 3D CT image data. We have also compared the experimental results with the results of a previously proposed 3D differential approach. It turns out that the new approach significantly improves the localization accuracy."
},
{
"pmid": "15639401",
"title": "Human movement analysis using stereophotogrammetry. Part 4: assessment of anatomical landmark misplacement and its effects on joint kinematics.",
"abstract": "Estimating the effects of different sources of error on joint kinematics is crucial for assessing the reliability of human movement analysis. The goal of the present paper is to review the different approaches dealing with joint kinematics sensitivity to rotation axes and the precision of anatomical landmark determination. Consistent with the previous papers in this series, the review is limited to studies performed with video-based stereophotogrammetric systems. Initially, studies dealing with estimates of precision in determining the location of both palpable and internal anatomical landmarks are reviewed. Next, the effects of anatomical landmark position uncertainty on anatomical frames are shown. Then, methods reported in the literature for estimating error propagation from anatomical axes location to joint kinematics are described. Interestingly, studies carried out using different approaches reported a common conclusion: when joint rotations occur mainly in a single plane, minor rotations out of this plane are strongly affected by errors introduced at the anatomical landmark identification level and are prone to misinterpretation. Finally, attempts at reducing joint kinematics errors due to anatomical landmark position uncertainty are reported. Given the relevance of this source of errors in the determination of joint kinematics, it is the authors' opinion that further efforts should be made in improving the reliability of the joint axes determination."
},
{
"pmid": "19195896",
"title": "How precise can bony landmarks be determined on a CT scan of the knee?",
"abstract": "The purpose of this study was to describe the intra- and inter-observer variability of the registration of bony landmarks and alignment axes on a Computed Axial Tomography (CT) scan. Six cadaver specimens were scanned. Three-dimensional surface models of the knee were created. Three observers marked anatomic surface landmarks and alignment landmarks. The intra- and inter-observer variability of the point and axis registration was performed. Mean intra-observer precision ranks around 1 mm for all landmarks. The intra-class correlation coefficient (ICC) for inter-observer variability ranked higher than 0.98 for all landmarks. The highest recorded intra- and inter-observer variability was 1.3 mm and 3.5 mm respectively and was observed for the lateral femoral epicondyle. The lowest variability in the determination of axes was found for the femoral mechanical axis (intra-observer 0.12 degrees and inter-observer 0.19 degrees) and for the tibial mechanical axis (respectively 0.15 degrees and 0.28 degrees). In the horizontal plane the lowest variability was observed for the posterior condylar line of the femur (intra-observer 0.17 degrees and inter-observer 0.78 degrees) and for the transverse axis (respectively 1.89 degrees and 2.03) on the tibia. This study demonstrates low intra- and inter-observer variability in the CT registration of landmarks that define the coordinate system of the femur and the tibia. In the femur, the horizontal plane projections of the posterior condylar line and the surgical and anatomical transepicondylar axis can be determined precisely on a CT scan, using the described methodology. In the tibia, the best result is obtained for the tibial transverse axis."
},
{
"pmid": "22224793",
"title": "Automated pelvic anatomical coordinate system is reproducible for determination of anterior pelvic plane.",
"abstract": "Most of computer-assisted planning systems need to determine the anatomical axis based on the anterior pelvic plane (APP). We analysed that our new system is more reproducible for determination of APP than previous methods. A pelvic model bone and two subjects suffering from hip osteoarthritis were evaluated. Multidetector-row computed tomography (MDCT) images were scanned with various rotations by MDCT scanner. The pelvic rotation was calibrated using silhouette images. APP was determined by an optimisation technique. The values of variation of APP caused by pelvic rotation were analysed with statistical analysis. APP determination with calibration and optimisation was most reproducible.The values of variance of APP were within 0.05° in model bone and 0.2° even in patient pelvis. Furthermore, the values of variance of APP with calibration/optimisation were significantly lower in comparison without calibration/optimisation. Both calibration and optimisation are actually required for determination of APP. This system could contribute to the evaluation of hip joint kinematics and computer-assisted surgery."
},
{
"pmid": "25366904",
"title": "FACTS: Fully Automatic CT Segmentation of a Hip Joint.",
"abstract": "Extraction of surface models of a hip joint from CT data is a pre-requisite step for computer assisted diagnosis and planning (CADP) of periacetabular osteotomy (PAO). Most of existing CADP systems are based on manual segmentation, which is time-consuming and hard to achieve reproducible results. In this paper, we present a Fully Automatic CT Segmentation (FACTS) approach to simultaneously extract both pelvic and femoral models. Our approach works by combining fast random forest (RF) regression based landmark detection, multi-atlas based segmentation, with articulated statistical shape model (aSSM) based fitting. The two fundamental contributions of our approach are: (1) an improved fast Gaussian transform (IFGT) is used within the RF regression framework for a fast and accurate landmark detection, which then allows for a fully automatic initialization of the multi-atlas based segmentation; and (2) aSSM based fitting is used to preserve hip joint structure and to avoid penetration between the pelvic and femoral models. Taking manual segmentation as the ground truth, we evaluated the present approach on 30 hip CT images (60 hips) with a 6-fold cross validation. When the present approach was compared to manual segmentation, a mean segmentation accuracy of 0.40, 0.36, and 0.36 mm was found for the pelvis, the left proximal femur, and the right proximal femur, respectively. When the models derived from both segmentations were used to compute the PAO diagnosis parameters, a difference of 2.0 ± 1.5°, 2.1 ± 1.6°, and 3.5 ± 2.3% were found for anteversion, inclination, and acetabular coverage, respectively. The achieved accuracy is regarded as clinically accurate enough for our target applications."
},
{
"pmid": "28207829",
"title": "Three-dimensional acetabular orientation measurement in a reliable coordinate system among one hundred Chinese.",
"abstract": "Determining three-dimensional (3D) acetabular orientation is important for several orthopaedic scenarios, but the complex geometries of both pelvis and acetabulum make measurements of orientation unreliable. Acetabular orientation may also differ between the sexes or racial groups. We aimed to (1) establish and evaluate a novel method for measuring 3D acetabular orientation, (2) apply this new method to a large population of Chinese subjects, and (3) report relevant characteristics of native acetabular orientation in this population. We obtained computed tomography scans taken for non-orthopaedic indications in 100 Chinese subjects (50 male, 50 female). A novel algorithm tailored to segmentation of the hip joint was used to construct 3D pelvic models from these scans. We developed a surface-based method to establish a reliable 3D pelvic coordinate system and software to semi-automatically measure 3D acetabular orientation. Differences in various acetabular orientations were compared within and between subjects, between male and female subjects, and between our subjects and subjects previously reported by another group. The reported method was reliable (intraclass correlation coefficient >0.999). Acetabular orientations were symmetrical within subjects, but ranged widely between subjects. The sexes differed significantly in acetabular anteversion (average difference, 3.0°; p < 0.001) and inclination (1.5°; p < 0.03). Acetabular anteversion and inclination were substantially smaller among our Chinese subjects than previously reported for American subjects. Thus, our method was reliable and sensitive, and we detected sex differences in 3D acetabular orientation. Awareness of differences between the sexes and races is the first step towards better reconstruction of the hip joint for all individuals and could also be applied to other orthopaedic scenarios."
},
{
"pmid": "29269225",
"title": "A surface-based approach to determine key spatial parameters of the acetabulum in a standardized pelvic coordinate system.",
"abstract": "Accurately determining the spatial relationship between the pelvis and acetabulum is challenging due to their inherently complex three-dimensional (3D) anatomy. A standardized 3D pelvic coordinate system (PCS) and the precise assessment of acetabular orientation would enable the relationship to be determined. We present a surface-based method to establish a reliable PCS and develop software for semi-automatic measurement of acetabular spatial parameters. Vertices on the acetabular rim were manually extracted as an eigenpoint set after 3D models were imported into the software. A reliable PCS consisting of the anterior pelvic plane, midsagittal pelvic plane, and transverse pelvic plane was then computed by iteration on mesh data. A spatial circle was fitted as a succinct description of the acetabular rim. Finally, a series of mutual spatial parameters between the pelvis and acetabulum were determined semi-automatically, including the center of rotation, radius, and acetabular orientation. Pelvic models were reconstructed based on high-resolution computed tomography images. Inter- and intra-rater correlations for measurements of mutual spatial parameters were almost perfect, showing our method affords very reproducible measurements. The approach will thus be useful for analyzing anatomic data and has potential applications for preoperative planning in individuals receiving total hip arthroplasty."
},
{
"pmid": "24220210",
"title": "The virtual skeleton database: an open access repository for biomedical research and collaboration.",
"abstract": "BACKGROUND\nStatistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets, required to develop these models, is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled resulting in replication of work.\n\n\nOBJECTIVE\nTo solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared.\n\n\nMETHODS\nThe VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes their productivity.\n\n\nRESULTS\nTo illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to achieve improvement in cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of the segmentation of brain tumors.\n\n\nCONCLUSIONS\nThe VSD is a novel system for scientific collaboration for the medical image community with a data-centric concept and semantically driven search option for anatomical structures. The repository has been proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, or to enhance segmentation algorithms."
},
{
"pmid": "19345065",
"title": "Automated identification of anatomical landmarks on 3D bone models reconstructed from CT scan images.",
"abstract": "Identification of anatomical landmarks on skeletal tissue reconstructed from CT/MR images is indispensable in patient-specific preoperative planning (tumour referencing, deformity evaluation, resection planning, and implant alignment and anchoring) as well as intra-operative navigation (bone registration and instruments referencing). Interactive localisation of landmarks on patient-specific anatomical models is time-consuming and may lack in repeatability and accuracy. We present a computer graphics-based method for automatic localisation and identification (labelling) of anatomical landmarks on a 3D model of bone reconstructed from CT images of a patient. The model surface is segmented into different landmark regions (peak, ridge, pit and ravine) based on surface curvature. These regions are labelled automatically by an iterative process using a spatial adjacency relationship matrix between the landmarks. The methodology has been implemented in a software program and its results (automatically identified landmarks) are compared with those manually palpated by three experienced orthopaedic surgeons, on three 3D reconstructed bone models. The variability in location of landmarks was found to be in the range of 2.15-5.98 mm by manual method (inter surgeon) and 1.92-4.88 mm by our program. Both methods performed well in identifying sharp features. Overall, the performance of the automated methodology was better or similar to the manual method and its results were reproducible. It is expected to have a variety of applications in surgery planning and intra-operative navigation."
},
{
"pmid": "12111881",
"title": "Sample size requirements for estimating intraclass correlations with desired precision.",
"abstract": "A method is developed to calculate the approximate number of subjects required to obtain an exact confidence interval of desired width for certain types of intraclass correlations in one-way and two-way ANOVA models. The sample size approximation is shown to be very accurate."
}
] |
Frontiers in Pharmacology | 31551780 | PMC6747929 | 10.3389/fphar.2019.00975 | Ontological and Non-Ontological Resources for Associating Medical Dictionary for Regulatory Activities Terms to SNOMED Clinical Terms With Semantic Properties |
Background: Formal definitions allow selecting terms (e.g., identifying all terms related to “Infectious disease” using the query “has causative agent organism”) and terminological reasoning (e.g., “hepatitis B” is a “hepatitis” and is an “infectious disease”). However, the standard international terminology Medical Dictionary for Regulatory Activities (MedDRA), used for coding adverse drug reactions in pharmacovigilance databases, does not benefit from such formal definitions. Our objective was to evaluate the potential for reusing ontological and non-ontological resources to generate such definitions for MedDRA.
Methods: We developed several methods that collectively allow a semiautomatic semantic enrichment of MedDRA: 1) using MedDRA-to-SNOMED Clinical Terms (SNOMED CT) mappings (available in the Unified Medical Language System metathesaurus or other mapping resources; e.g., the MedDRA preferred term “hepatitis B” is associated with the SNOMED CT concept “type B viral hepatitis”) to extract term definitions (e.g., “hepatitis B” is associated with the following properties: has finding site liver structure, has associated morphology inflammation morphology, and has causative agent hepatitis B virus); 2) using MedDRA labels and lexical/syntactic methods for the automatic decomposition of complex MedDRA terms (e.g., the MedDRA system organ class “blood and lymphatic system disorders” is decomposed into blood system disorders and lymphatic system disorders) or for automatic suggestions of properties (e.g., the string “cyclic” in the preferred term “cyclic neutropenia” leads to the property has clinical course cyclic).
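To make the lexical/syntactic step concrete, here is a minimal, hypothetical Python sketch (not the authors' implementation): it assumes the shared head phrase of a coordinated label (e.g., “system disorders”) is supplied by a curator or a syntactic parser, and the keyword-to-property table is purely illustrative.

```python
# Illustrative sketch only (not the authors' implementation). It assumes the
# shared head phrase of a coordinated label (e.g., "system disorders") is
# supplied by a curator or a syntactic parser; the keyword-to-property table
# below is a purely hypothetical example.

def decompose_coordinated_label(label: str, head: str) -> list[str]:
    """Split a coordinated MedDRA label that ends with a shared head phrase.

    decompose_coordinated_label("blood and lymphatic system disorders",
                                "system disorders")
    -> ["blood system disorders", "lymphatic system disorders"]
    """
    body = label[: -len(head)].rstrip()              # "blood and lymphatic"
    conjuncts = [c.strip() for c in body.split(" and ")]
    return [f"{c} {head}" for c in conjuncts]


# Hypothetical lexical cues mapped to SNOMED CT-style (property, value) pairs.
KEYWORD_TO_PROPERTY = {
    "cyclic": ("has clinical course", "cyclic"),
    "congenital": ("has occurrence", "congenital"),
    "chronic": ("has clinical course", "chronic"),
}


def suggest_properties(preferred_term: str) -> list[tuple[str, str]]:
    """Suggest candidate properties from strings found in a MedDRA label."""
    words = preferred_term.lower().split()
    return [pair for keyword, pair in KEYWORD_TO_PROPERTY.items() if keyword in words]


if __name__ == "__main__":
    print(decompose_coordinated_label("blood and lymphatic system disorders",
                                      "system disorders"))
    print(suggest_properties("cyclic neutropenia"))
    # ['blood system disorders', 'lymphatic system disorders']
    # [('has clinical course', 'cyclic')]
```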
Results: The Unified Medical Language System metathesaurus was the main ontological resource reusable for generating formal definitions for MedDRA terms. The non-ontological resources (another mapping resource provided by Nadkarni and Darer in 2010 and the MedDRA labels) allowed only a few additional preferred terms to be defined. While the Ci4SeR tool helped the curator define 1,935 terms by suggesting potential supplemental relations based on the parents’ and siblings’ semantic definitions, manually defining all MedDRA terms remains time-consuming.
Discussion: Several ontological and non-ontological resources are available for associating MedDRA terms to SNOMED CT concepts with semantic properties, but providing manual definitions remains necessary. The Ontology of Adverse Events is a possible alternative but does not cover all MedDRA terms either. Future work will aim to implement more efficient techniques to find additional logical relations between SNOMED CT and MedDRA in an automated way. | Related Work in Medical Informatics
He et al. (2014) introduced the Ontology of Adverse Events (OAE). OAE was originally targeted at vaccine adverse events (Marcos et al., 2013) and now also includes adverse drug events. In practice, using OAE to select case reports in the Vaccine Adverse Event Reporting System proved difficult: “AE data stored in Vaccine Adverse Event Reporting System are annotated using MedDRA” (Marcos et al., 2013). The authors complained that “many disadvantages of MedDRA, including the lack of term definitions and a well-defined hierarchical and logical structure, prevent its effective usage in VAE (vaccine adverse event) term classification.” Therefore, for an efficient analysis, they performed a mapping between MedDRA and OAE (Sarntivijai et al., 2012). OAE contains about 2,300 AE entities but only 1,900 MedDRA mappings (9% of all MedDRA PTs). For example, there is a single term for upper gastrointestinal hemorrhage in OAE (He et al., 2014), whereas several exist in MedDRA (see the section Rationale for Supplementing MedDRA With Formal Definitions, where we identified 27 using OntoADR). Furthermore, OAE formal definitions are limited to anatomical and physiopathological descriptions. He and colleagues proposed extensions to OAE such as the Ontology of Drug Neuropathy Adverse Events (Guo et al., 2016), which suggests that providing supplementary MedDRA mappings is possible using the same methodology. One advantage of OAE is that it is available in open access, which allows wide dissemination to users, whereas legal issues related to the ownership of MedDRA and SNOMED CT must be resolved before we can make OntoADR available. The Adverse Events Reporting Ontology aims to allow the storage of pharmacovigilance data related to anaphylaxis according to guidelines defined by the Brighton Collaboration (Courtot et al., 2014) but may also be extended to other safety topics, e.g., malaria (Courtot et al., 2013). Nevertheless, ADRs are not formally defined in the Adverse Events Reporting Ontology. While we did not find any available resource providing definitions for every ADR in MedDRA, there are more general resources with a formal representation of clinical terms. In order not to build the definitions of ADRs from scratch, we needed a trustworthy, standardized, and reliable formal resource. We chose SNOMED CT for three main reasons: first, pharmacovigilance concepts generally do not differ from those used in other medical fields. Second, SNOMED CT is the most complete and most detailed terminology of medicine with a formal semantic foundation currently available (Elkin et al., 2006), and it shares common fields with MedDRA (medical pathologies in all medical specialties, signs and symptoms, laboratory test results, and some diagnostic and therapeutic procedures). Finally, SNOMED CT has the advantage of covering, to a large extent if not entirely, other standard medical terminologies such as the International Classification of Diseases, 10th edition (ICD-10); in particular, more than 50% of MedDRA terms (excluding LLTs) are associated with a SNOMED CT concept in the UMLS (Bodenreider, 2009), a degree of coverage that, to our knowledge, no other current medical ontology matches. We found in the literature several examples of mappings from a terminology to SNOMED CT (Vikström et al., 2007; Merabti et al., 2009; Nyström et al., 2010; Dhombres and Bodenreider, 2016; Fung et al., 2017). 
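As an illustration of how such formal definitions can support term selection (for instance, gathering the upper gastrointestinal hemorrhage PTs mentioned above), the following hedged sketch queries a hypothetical OntoADR-like RDF export with rdflib; the file name, namespace, and property IRIs are assumptions, not the actual OntoADR schema.

```python
# Hedged sketch: selecting MedDRA PTs by their semantic definition rather than
# by hierarchy or string search. The file name, namespace, and property IRIs
# are hypothetical placeholders for an OntoADR-like RDF export; only the
# rdflib calls (Graph, parse, query) are standard rdflib API.
from rdflib import Graph

g = Graph()
g.parse("ontoadr_export.ttl", format="turtle")   # assumed local export

QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX onto: <http://example.org/ontoadr#>       # hypothetical namespace
SELECT ?pt ?label WHERE {
    ?pt a onto:MedDRAPreferredTerm ;
        rdfs:label ?label ;
        onto:hasFindingSite          onto:UpperGastrointestinalTractStructure ;
        onto:hasAssociatedMorphology onto:Hemorrhage .
}
"""

# Each result would be one MedDRA PT whose formal definition matches the
# "upper gastrointestinal hemorrhage" case definition.
for row in g.query(QUERY):
    print(row.pt, row.label)
```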
However, in those mapping efforts the objective was usually to integrate a terminology into SNOMED CT or to map the terminology to SNOMED CT, not to enrich it by means of formal definitions. The lexically assign, logically refine method is an example of an automated approach in which Logical Observation Identifiers Names and Codes (LOINC) and SNOMED terms are first decomposed and then refined by means of knowledge-based methods, which allowed LOINC and SNOMED to be mapped together (Dolin et al., 1998). In another work, Adamusiak and Bodenreider (2012) developed an OWL version of both LOINC and SNOMED CT and made use of mappings between LOINC and SNOMED CT terms to identify redundancy and inconsistencies in the LOINC multi-axial hierarchy. Roldán-García et al. (2016) implemented Dione, an OWL representation of ICD-10-CM in which formal definitions were obtained thanks to mappings between ICD-10-CM and SNOMED CT available in the UMLS and BioPortal. More recently, Nikiema et al. (2017) benefited from SNOMED CT logical definitions to find mappings between ICD-10 and ICD-O3 concepts in the domain of cancer diagnosis terminologies. It is usually recommended to build medical terminologies following the model of clinical terminologies that obey Cimino's desiderata (Cimino, 1998; Bales et al., 2006). Such a model brings several advantages, such as improving the maintenance of large terminologies (Cimino et al., 1994), and formal definitions have been implemented in several terminologies such as the NCI Thesaurus (Hartel et al., 2005). Our approach is more in line with what is recommended by Ingenerf and Giere (1998), that is to say, keeping terminologies with the disjoint classes required for statistics (in a clinical terminology, the same term may be present in several separate categories because of multiple inheritance and be counted more than once) and instead implementing a mapping of the terms of a first-generation system to a formal system. This allows the MedDRA terminology to be kept in its current format, ADRs to be counted according to predefined categories that are standardized and replicable at the international level, and new categories to be built on demand using knowledge engineering methods. This is what we have done in our implementation of OntoADR (Bousquet et al., 2014), both as an OWL-DL file and as a database (Souvignet et al., 2016b). We are not aware of other work in which the formalization of complex terms involving AND/OR relations has been performed in an automated way. We have not proposed formal definitions for LLTs because this level is reserved for the coding of case reports, in order to improve coding accuracy, and is not used for grouping data for analysis (which is performed at the PT level). Although the analysis of pharmacovigilance databases is performed preferentially at the PT level, it could also be important to define the upper levels: SOC, HLGT, and HLT. This formalization would bring several advantages: i) preferred terms could inherit properties from their parents, which would give them a formal definition when the synonymous SNOMED CT concept has no definition or when no SNOMED CT concept is mapped to the PT in the UMLS; ii) the high-level MedDRA categories in which PTs should be included could be computed by terminological reasoning, thereby restoring the multiple inheritance that does not exist in MedDRA. A minimal illustration of this kind of reasoning is sketched below. 
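The sketch below illustrates, under strong simplifications, the terminological reasoning just described: definitions are modeled as sets of (property, value) pairs plus a toy value hierarchy, and a PT is classified under every category whose definition it satisfies. All names are illustrative; this is neither SNOMED CT content nor the authors' reasoner.

```python
# Minimal sketch of the terminological reasoning described above, under strong
# simplifications: a definition is a set of (property, value) pairs plus a toy
# IS-A hierarchy for values, and a term is classified under every category
# whose definition it satisfies. All names are illustrative; this is neither
# SNOMED CT content nor the authors' reasoner.

# Toy IS-A hierarchy for property values: value -> set of its ancestors.
VALUE_ANCESTORS = {
    "hepatitis B virus": {"virus", "organism"},
    "liver structure": {"digestive organ structure"},
}


def is_a(value: str, candidate: str) -> bool:
    """True if `value` equals `candidate` or has it as an ancestor."""
    return value == candidate or candidate in VALUE_ANCESTORS.get(value, set())


DEFINITIONS = {
    "hepatitis B": {("finding site", "liver structure"),
                    ("associated morphology", "inflammation"),
                    ("causative agent", "hepatitis B virus")},
    "hepatitis": {("finding site", "liver structure"),
                  ("associated morphology", "inflammation")},
    "infectious disease": {("causative agent", "organism")},
}


def subsumed_by(term: str, category: str) -> bool:
    """A term is classified under a category if every (property, value) pair
    required by the category is satisfied by some pair of the term."""
    return all(
        any(prop == c_prop and is_a(value, c_value)
            for prop, value in DEFINITIONS[term])
        for c_prop, c_value in DEFINITIONS[category]
    )


if __name__ == "__main__":
    for category in ("hepatitis", "infectious disease"):
        print(f"hepatitis B is a(n) {category}: {subsumed_by('hepatitis B', category)}")
    # hepatitis B is a(n) hepatitis: True
    # hepatitis B is a(n) infectious disease: True
```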
However, it is advisable to remain cautious insofar as the relations between a PT and the higher hierarchical levels to which it is attached are not always taxonomic in nature. | [
"23304386",
"24239752",
"17108616",
"17911807",
"19007441",
"30200874",
"27692980",
"16122973",
"29179777",
"15649103",
"24680984",
"10082069",
"12580645",
"7719786",
"9865037",
"24667848",
"22874155",
"18801700",
"26865946",
"25411633",
"25937883",
"9524353",
"24739596",
"16770974",
"27259657",
"15802483",
"17911788",
"28269853",
"22874157",
"17481964",
"15797001",
"16970507",
"25093068",
"29329592",
"16185681",
"25025130",
"17108617",
"9865051",
"29743102",
"29726439",
"8412823",
"20647054",
"24279920",
"28969675",
"10566373",
"19745303",
"17604415",
"28844750",
"20618919",
"29316968",
"25949785",
"25488031",
"27737720",
"18755993",
"22024315",
"9865053",
"16501181",
"22080554",
"23974561",
"29295238",
"16697710",
"23304363",
"27348725",
"27369567",
"30792654",
"19757412",
"29908358",
"20529942",
"25785185"
] | [
{
"pmid": "23304386",
"title": "Quality assurance in LOINC using Description Logic.",
"abstract": "OBJECTIVE\nTo assess whether errors can be found in LOINC by changing its representation to OWL DL and comparing its classification to that of SNOMED CT.\n\n\nMETHODS\nWe created Description Logic definitions for LOINC concepts in OWL and merged the ontology with SNOMED CT to enrich the relatively flat hierarchy of LOINC parts. LOINC - SNOMED CT mappings were acquired through UMLS. The resulting ontology was classified with the ConDOR reasoner.\n\n\nRESULTS\nTransformation into DL helped to identify 427 sets of logically equivalent LOINC codes, 676 sets of logically equivalent LOINC parts, and 239 inconsistencies in LOINC multiaxial hierarchy. Automatic classification of LOINC and SNOMED CT combined increased the connectivity within LOINC hierarchy and increased its coverage by an additional 9,006 LOINC codes.\n\n\nCONCLUSIONS\nLOINC is a well-maintained terminology. While only a relatively small number of logical inconsistencies were found, we identified a number of areas where LOINC could benefit from the application of Description Logic."
},
{
"pmid": "24239752",
"title": "Contrasting lexical similarity and formal definitions in SNOMED CT: consistency and implications.",
"abstract": "OBJECTIVE\nTo quantify the presence of and evaluate an approach for detection of inconsistencies in the formal definitions of SNOMED CT (SCT) concepts utilizing a lexical method.\n\n\nMATERIAL AND METHOD\nUtilizing SCT's Procedure hierarchy, we algorithmically formulated similarity sets: groups of concepts with similar lexical structure of their fully specified name. We formulated five random samples, each with 50 similarity sets, based on the same parameter: number of parents, attributes, groups, all the former as well as a randomly selected control sample. All samples' sets were reviewed for types of formal definition inconsistencies: hierarchical, attribute assignment, attribute target values, groups, and definitional.\n\n\nRESULTS\nFor the Procedure hierarchy, 2111 similarity sets were formulated, covering 18.1% of eligible concepts. The evaluation revealed that 38 (Control) to 70% (Different relationships) of similarity sets within the samples exhibited significant inconsistencies. The rate of inconsistencies for the sample with different relationships was highly significant compared to Control, as well as the number of attribute assignment and hierarchical inconsistencies within their respective samples.\n\n\nDISCUSSION AND CONCLUSION\nWhile, at this time of the HITECH initiative, the formal definitions of SCT are only a minor consideration, in the grand scheme of sophisticated, meaningful use of captured clinical data, they are essential. However, significant portion of the concepts in the most semantically complex hierarchy of SCT, the Procedure hierarchy, are modeled inconsistently in a manner that affects their computability. Lexical methods can efficiently identify such inconsistencies and possibly allow for their algorithmic resolution."
},
{
"pmid": "17108616",
"title": "Mapping of the WHO-ART terminology on Snomed CT to improve grouping of related adverse drug reactions.",
"abstract": "The WHO-ART and MedDRA terminologies used for coding adverse drug reactions (ADR) do not provide formal definitions of terms. In order to improve groupings, we propose to map ADR terms to equivalent Snomed CT concepts through UMLS Metathesaurus. We performed such mappings on WHO-ART terms and can automatically classify them using a description logic definition expressing their synonymies. Our gold standard was a set of 13 MedDRA special search categories restricted to ADR terms available in WHO-ART. The overlapping of the groupings within the new structure of WHO-ART on the manually built MedDRA search categories showed a 71% success rate. We plan to improve our method in order to retrieve associative relations between WHO-ART terms."
},
{
"pmid": "17911807",
"title": "PharmARTS: terminology web services for drug safety data coding and retrieval.",
"abstract": "MedDRA and WHO-ART are the terminologies used to encode drug safety reports. The standardisation achieved with these terminologies facilitates: 1) The sharing of safety databases; 2) Data mining for the continuous reassessment of benefit-risk ratio at national or international level or in the pharmaceutical industry. There is some debate about the capacity of these terminologies for retrieving case reports related to similar medical conditions. We have developed a resource that allows grouping similar medical conditions more effectively than WHO-ART and MedDRA. We describe here a software tool facilitating the use of this terminological resource thanks to an RDF framework with support for RDF Schema inferencing and querying. This tool eases coding and data retrieval in drug safety."
},
{
"pmid": "19007441",
"title": "A case report: using SNOMED CT for grouping Adverse Drug Reactions Terms.",
"abstract": "BACKGROUND\nWHO-ART and MedDRA are medical terminologies used for the coding of adverse drug reactions in pharmacovigilance databases. MedDRA proposes 13 Special Search Categories (SSC) grouping terms associated to specific medical conditions. For instance, the SSC \"Haemorrhage\" includes 346 MedDRA terms among which 55 are also WHO-ART terms. WHO-ART itself does not provide such groupings. Our main contention is the possibility of classifying WHO-ART terms in semantic categories by using knowledge extracted from SNOMED CT. A previous paper presents the way WHO-ART term definitions have been automatically generated in a description logics formalism by using their corresponding SNOMED CT synonyms. Based on synonymy and relative position of WHO-ART terms in SNOMED CT, specialization or generalization relationships could be inferred. This strategy is successful for grouping the WHO-ART terms present in most MedDRA SSCs. However the strategy failed when SSC were organized on other basis than taxonomy.\n\n\nMETHODS\nWe propose a new method that improves the previous WHO-ART structure by integrating the associative relationships included in SNOMED CT.\n\n\nRESULTS\nThe new method improves the groupings. For example, none of the 55 WHO-ART terms in the Haemorrhage SSC were matched using the previous method. With the new method, we improve the groupings and obtain 87% coverage of the Haemorrhage SSC.\n\n\nCONCLUSION\nSNOMED CT's terminological structure can be used to perform automated groupings in WHO-ART. This work proves that groupings already present in the MedDRA SSCs (e.g. the haemorrhage SSC) may be retrieved using classification in SNOMED CT."
},
{
"pmid": "30200874",
"title": "Linked open data-based framework for automatic biomedical ontology generation.",
"abstract": "BACKGROUND\nFulfilling the vision of Semantic Web requires an accurate data model for organizing knowledge and sharing common understanding of the domain. Fitting this description, ontologies are the cornerstones of Semantic Web and can be used to solve many problems of clinical information and biomedical engineering, such as word sense disambiguation, semantic similarity, question answering, ontology alignment, etc. Manual construction of ontology is labor intensive and requires domain experts and ontology engineers. To downsize the labor-intensive nature of ontology generation and minimize the need for domain experts, we present a novel automated ontology generation framework, Linked Open Data approach for Automatic Biomedical Ontology Generation (LOD-ABOG), which is empowered by Linked Open Data (LOD). LOD-ABOG performs concept extraction using knowledge base mainly UMLS and LOD, along with Natural Language Processing (NLP) operations; and applies relation extraction using LOD, Breadth first Search (BSF) graph method, and Freepal repository patterns.\n\n\nRESULTS\nOur evaluation shows improved results in most of the tasks of ontology generation compared to those obtained by existing frameworks. We evaluated the performance of individual tasks (modules) of proposed framework using CDR and SemMedDB datasets. For concept extraction, evaluation shows an average F-measure of 58.12% for CDR corpus and 81.68% for SemMedDB; F-measure of 65.26% and 77.44% for biomedical taxonomic relation extraction using datasets of CDR and SemMedDB, respectively; and F-measure of 52.78% and 58.12% for biomedical non-taxonomic relation extraction using CDR corpus and SemMedDB, respectively. Additionally, the comparison with manually constructed baseline Alzheimer ontology shows F-measure of 72.48% in terms of concepts detection, 76.27% in relation extraction, and 83.28% in property extraction. Also, we compared our proposed framework with ontology-learning framework called \"OntoGain\" which shows that LOD-ABOG performs 14.76% better in terms of relation extraction.\n\n\nCONCLUSION\nThis paper has presented LOD-ABOG framework which shows that current LOD sources and technologies are a promising solution to automate the process of biomedical ontology generation and extract relations to a greater extent. In addition, unlike existing frameworks which require domain experts in ontology development process, the proposed approach requires involvement of them only for improvement purpose at the end of ontology life cycle."
},
{
"pmid": "27692980",
"title": "[Automated grouping of terms associated to cardiac valve fibrosis in MedDRA].",
"abstract": "AIM\nTo propose an alternative approach for building custom groupings of terms that complements the usual approach based on both hierarchical method (selection of reference groupings in medical dictionary for regulatory activities [MedDRA]) and/or textual method (string search), for case reports extraction from a pharmacovigilance database in response to a safety problem. Here we take cardiac valve fibrosis as an example.\n\n\nMETHODS\nThe list of terms obtained by an automated approach, based on querying ontology of adverse drug reactions (OntoADR), a knowledge base defining MedDRA terms through relationships with systematized nomenclature of medicine-clinical terms (SNOMED CT) concepts, was compared with the reference list consisting of 53 preferred terms obtained by hierarchical and textual method. Two queries were performed on OntoADR by using a dedicated software: OntoADR query tools. Both queries excluded congenital diseases, and included a procedure or an auscultation method performed on cardiac valve structures. Query 1 also considered MedDRA terms related to fibrosis, narrowing or calcification of heart valves, and query 2 MedDRA terms described according to one of these four SNOMED CT terms: \"Insufficiency\", \"Valvular sclerosis\", \"Heart valve calcification\" or \"Heart valve stenosis\".\n\n\nRESULTS\nThe reference grouping consisted of 53 MedDRA preferred terms. Our automated method achieved recall of 79% and precision of 100% for query 1 privileging morphological abnormalities, and recall of 100% and precision of 96% for query 2 privileging functional abnormalities.\n\n\nCONCLUSION\nAn alternative approach to MedDRA reference groupings for building custom groupings is feasible for cardiac valve fibrosis. OntoADR is still in development. Its application to other adverse reactions would require significant work for a knowledge engineer to define every MedDRA term, but such definitions could then be queried as many times as necessary by pharmacovigilance professionals."
},
{
"pmid": "16122973",
"title": "Qualitative assessment of the International Classification of Functioning, Disability, and Health with respect to the desiderata for controlled medical vocabularies.",
"abstract": "BACKGROUND\nThe International Classification of Functioning, Disability, and Health (ICF), a classification system published in 2001 by the World Health Organization (WHO), provides a common language and framework for describing functional status information (FSI) in health records.\n\n\nMETHODS\nInformed by ongoing research in coding FSI in patient records, this paper qualitatively assesses the ICF framework with respect to the desiderata for controlled medical vocabularies, an enumerated a list of desirable qualities for controlled medical vocabularies proposed by Cimino [J.J. Cimino, Desiderata for controlled medical vocabularies in the twenty-first century, Meth. Inform. Med. 37 (1998) 394-403].\n\n\nRESULTS\nThe ICF satisfies 5 of the 12 desiderata. Five points were not satisfied and two points could not be evaluated.\n\n\nCONCLUSION\nThe ICF is a rich source of relevant terms, concepts, and relationships, but it was not developed in consideration of requirements for formal terminologies. Therefore, it could serve as a base from which to develop a formal terminology of functioning and disability. This assessment is a key next step in the development of the ICF as a sensitive, universal measure of functional status."
},
{
"pmid": "29179777",
"title": "A document-centric approach for developing the tolAPC ontology.",
"abstract": "BACKGROUND\nThere are many challenges associated with ontology building, as the process often touches on many different subject areas; it needs knowledge of the problem domain, an understanding of the ontology formalism, software in use and, sometimes, an understanding of the philosophical background. In practice, it is very rare that an ontology can be completed by a single person, as they are unlikely to combine all of these skills. So people with these skills must collaborate. One solution to this is to use face-to-face meetings, but these can be expensive and time-consuming for teams that are not co-located. Remote collaboration is possible, of course, but one difficulty here is that domain specialists use a wide-variety of different \"formalisms\" to represent and share their data - by the far most common, however, is the \"office file\" either in the form of a word-processor document or a spreadsheet. Here we describe the development of an ontology of immunological cell types; this was initially developed by domain specialists using an Excel spreadsheet for collaboration. We have transformed this spreadsheet into an ontology using highly-programmatic and pattern-driven ontology development. Critically, the spreadsheet remains part of the source for the ontology; the domain specialists are free to update it, and changes will percolate to the end ontology.\n\n\nRESULTS\nWe have developed a new ontology describing immunological cell lines built by instantiating ontology design patterns written programmatically, using values from a spreadsheet catalogue.\n\n\nCONCLUSIONS\nThis method employs a spreadsheet that was developed by domain experts. The spreadsheet is unconstrained in its usage and can be freely updated resulting in a new ontology. This provides a general methodology for ontology development using data generated by domain specialists."
},
{
"pmid": "15649103",
"title": "Appraisal of the MedDRA conceptual structure for describing and grouping adverse drug reactions.",
"abstract": "Computerised queries in spontaneous reporting systems for pharmacovigilance require reliable and reproducible coding of adverse drug reactions (ADRs). The aim of the Medical Dictionary for Regulatory Activities (MedDRA) terminology is to provide an internationally approved classification for efficient communication of ADR data between countries. Several studies have evaluated the domain completeness of MedDRA and whether encoded terms are coherent with physicians' original verbatim descriptions of the ADR. MedDRA terms are organised into five levels: system organ class (SOC), high level group terms (HLGTs), high level terms (HLTs), preferred terms (PTs) and low level terms (LLTs). Although terms may belong to different SOCs, no PT is related to more than one HLT within the same SOC. This hierarchical property ensures that terms cannot be counted twice in statistical studies, though it does not allow appropriate semantic grouping of PTs. For this purpose, special search categories (SSCs) [collections of PTs assembled from various SOCs] have been introduced in MedDRA to group terms with similar meanings. However, only a small number of categories are currently available and the criteria used to construct these categories have not been clarified. The objective of this work is to determine whether MedDRA contains the structural and terminological properties to group semantically linked adverse events in order to improve the performance of spontaneous reporting systems. Rossi Mori classifies terminological systems in three categories: first-generation systems, which represent terms as strings; second-generation systems, which dissect terminological phrases into a set of simpler terms; and third-generation systems, which provide advanced features to automatically retrieve the position of new terms in the classification and group sets of meaning-related terms. We applied Cimino's desiderata to show that MedDRA is not compatible with the properties of third-generation systems. Consequently, no tool can help for the automated positioning of new terms inside the hierarchy and SSCs have to be entered manually rather than automatically using the MedDRA files. One solution could be to link MedDRA to a third-generation system. This would allow the current MedDRA structure to be kept to ensure that end users have a common view on the same data and the addition of new computational properties to MedDRA."
},
{
"pmid": "24680984",
"title": "Formalizing MedDRA to support semantic reasoning on adverse drug reaction terms.",
"abstract": "Although MedDRA has obvious advantages over previous terminologies for coding adverse drug reactions and discovering potential signals using data mining techniques, its terminological organization constrains users to search terms according to predefined categories. Adding formal definitions to MedDRA would allow retrieval of terms according to a case definition that may correspond to novel categories that are not currently available in the terminology. To achieve semantic reasoning with MedDRA, we have associated formal definitions to MedDRA terms in an OWL file named OntoADR that is the result of our first step for providing an \"ontologized\" version of MedDRA. MedDRA five-levels original hierarchy was converted into a subsumption tree and formal definitions of MedDRA terms were designed using several methods: mappings to SNOMED-CT, semi-automatic definition algorithms or a fully manual way. This article presents the main steps of OntoADR conception process, its structure and content, and discusses problems and limits raised by this attempt to \"ontologize\" MedDRA."
},
{
"pmid": "10082069",
"title": "The medical dictionary for regulatory activities (MedDRA).",
"abstract": "The International Conference on Harmonisation has agreed upon the structure and content of the Medical Dictionary for Regulatory Activities (MedDRA) version 2.0 which should become available in the early part of 1999. This medical terminology is intended for use in the pre- and postmarketing phases of the medicines regulatory process, covering diagnoses, symptoms and signs, adverse drug reactions and therapeutic indications, the names and qualitative results of investigations, surgical and medical procedures, and medical/social history. It can be used for recording adverse events and medical history in clinical trials, in the analysis and tabulations of data from these trials and in the expedited submission of safety data to government regulatory authorities, as well as in constructing standard product information and documentation for applications for marketing authorisation. After licensing of a medicine, it may be used in pharmacovigilance and is expected to be the preferred terminology for international electronic regulatory communication. MedDRA is a hierarchical terminology with 5 levels and is multiaxial: terms may exist in more than 1 vertical axis, providing specificity of terms for data entry and flexibility in data retrieval. Terms in MedDRA were derived from several sources including the WHO's adverse reaction terminology (WHO-ART), Coding Symbols for a Thesaurus of Adverse Reaction Terms (COSTART), International Classification of Diseases (ICD) 9 and ICD9-CM. It will be maintained, further developed and distributed by a Maintenance Support Services Organisation (MSSO). It is anticipated that using MedDRA will improve the quality of data captured on databases, support effective analysis by providing clinically relevant groupings of terms and facilitate electronic communication of data, although as a new tool, users will need to invest time in gaining expertise in its use."
},
{
"pmid": "12580645",
"title": "Methods and pitfalls in searching drug safety databases utilising the Medical Dictionary for Regulatory Activities (MedDRA).",
"abstract": "The Medical Dictionary for Regulatory Activities (MedDRA) is a unified standard terminology for recording and reporting adverse drug event data. Its introduction is widely seen as a significant improvement on the previous situation, where a multitude of terminologies of widely varying scope and quality were in use. However, there are some complexities that may cause difficulties, and these will form the focus for this paper. Two methods of searching MedDRA-coded databases are described: searching based on term selection from all of MedDRA and searching based on terms in the safety database. There are several potential traps for the unwary in safety searches. There may be multiple locations of relevant terms within a system organ class (SOC) and lack of recognition of appropriate group terms; the user may think that group terms are more inclusive than is the case. MedDRA may distribute terms relevant to one medical condition across several primary SOCs. If the database supports the MedDRA model, it is possible to perform multiaxial searching: while this may help find terms that might have been missed, it is still necessary to consider the entire contents of the SOCs to find all relevant terms and there are many instances of incomplete secondary linkages. It is important to adjust for multiaxiality if data are presented using primary and secondary locations. Other sources for errors in searching are non-intuitive placement and the selection of terms as preferred terms (PTs) that may not be widely recognised. Some MedDRA rules could also result in errors in data retrieval if the individual is unaware of these: in particular, the lack of multiaxial linkages for the Investigations SOC, Social circumstances SOC and Surgical and medical procedures SOC and the requirement that a PT may only be present under one High Level Term (HLT) and one High Level Group Term (HLGT) within any single SOC. Special Search Categories (collections of PTs assembled from various SOCs by searching all of MedDRA) are limited by the small number available and by lack of clarity about criteria applied in their construction. Difficulties in database searching may be addressed by suitable user training and experience, and by central reporting of detected deficiencies in MedDRA. Other remedies may include regulatory guidance on implementation and use of MedDRA. Further systematic review of MedDRA is needed and generation of standardised searches that may be used 'off the shelf' will help, particularly where the same search is performed repeatedly on multiple data sets. Until these enhancements are widely available, MedDRA users should take great care when searching a safety database to ensure that cases are not inadvertently missed."
},
{
"pmid": "7719786",
"title": "Knowledge-based approaches to the maintenance of a large controlled medical terminology.",
"abstract": "OBJECTIVE\nDevelop a knowledge-based representation for a controlled terminology of clinical information to facilitate creation, maintenance, and use of the terminology.\n\n\nDESIGN\nThe Medical Entities Dictionary (MED) is a semantic network, based on the Unified Medical Language System (UMLS), with a directed acyclic graph to represent multiple hierarchies. Terms from four hospital systems (laboratory, electrocardiography, medical records coding, and pharmacy) were added as nodes in the network. Additional knowledge about terms, added as semantic links, was used to assist in integration, harmonization, and automated classification of disparate terminologies.\n\n\nRESULTS\nThe MED contains 32,767 terms and is in active clinical use. Automated classification was successfully applied to terms for laboratory specimens, laboratory tests, and medications. One benefit of the approach has been the automated inclusion of medications into multiple pharmacologic and allergenic classes that were not present in the pharmacy system. Another benefit has been the reduction of maintenance efforts by 90%.\n\n\nCONCLUSION\nThe MED is a hybrid of terminology and knowledge. It provides domain coverage, synonymy, consistency of views, explicit relationships, and multiple classification while preventing redundancy, ambiguity (homonymy) and misclassification."
},
{
"pmid": "9865037",
"title": "Desiderata for controlled medical vocabularies in the twenty-first century.",
"abstract": "Builders of medical informatics applications need controlled medical vocabularies to support their applications and it is to their advantage to use available standards. In order to do so, however, these standards need to address the requirements of their intended users. Over the past decade, medical informatics researchers have begun to articulate some of these requirements. This paper brings together some of the common themes which have been described, including: vocabulary content, concept orientation, concept permanence, nonsemantic concept identifiers, polyhierarchy, formal definitions, rejection of \"not elsewhere classified\" terms, multiple granularities, multiple consistent views, context representation, graceful evolution, and recognized redundancy. Standards developers are beginning to recognize and address these desiderata and adapt their offerings to meet them."
},
{
"pmid": "24667848",
"title": "The logic of surveillance guidelines: an analysis of vaccine adverse event reports from an ontological perspective.",
"abstract": "BACKGROUND\nWhen increased rates of adverse events following immunization are detected, regulatory action can be taken by public health agencies. However to be interpreted reports of adverse events must be encoded in a consistent way. Regulatory agencies rely on guidelines to help determine the diagnosis of the adverse events. Manual application of these guidelines is expensive, time consuming, and open to logical errors. Representing these guidelines in a format amenable to automated processing can make this process more efficient.\n\n\nMETHODS AND FINDINGS\nUsing the Brighton anaphylaxis case definition, we show that existing clinical guidelines used as standards in pharmacovigilance can be logically encoded using a formal representation such as the Adverse Event Reporting Ontology we developed. We validated the classification of vaccine adverse event reports using the ontology against existing rule-based systems and a manually curated subset of the Vaccine Adverse Event Reporting System. However, we encountered a number of critical issues in the formulation and application of the clinical guidelines. We report these issues and the steps being taken to address them in current surveillance systems, and in the terminological standards in use.\n\n\nCONCLUSIONS\nBy standardizing and improving the reporting process, we were able to automate diagnosis confirmation. By allowing medical experts to prioritize reports such a system can accelerate the identification of adverse reactions to vaccines and the response of regulatory agencies. This approach of combining ontology and semantic technologies can be used to improve other areas of vaccine adverse event reports analysis and should inform both the design of clinical guidelines and how they are used in the future.\n\n\nAVAILABILITY\nSufficient material to reproduce our results is available, including documentation, ontology, code and datasets, at http://purl.obolibrary.org/obo/aero."
},
{
"pmid": "22874155",
"title": "Automatic generation of MedDRA terms groupings using an ontology.",
"abstract": "In the context of PROTECT European project, we have developed an ontology of adverse drug reactions (OntoADR) based on the original MedDRA hierarchy and a query-based method to achieve automatic MedDRA terms groupings for improving pharmacovigilance signal detection. Those groupings were evaluated against standard handmade MedDRA groupings corresponding to first priority pharmacovigilance safety topics. Our results demonstrate that this automatic method allows catching most of the terms present in the reference groupings, and suggest that it could offer an important saving of time for the achievement of pharmacovigilance groupings. This paper describes the theoretical context of this work, the evaluation methodology, and presents the principal results."
},
{
"pmid": "18801700",
"title": "Morphosemantic parsing of medical compound words: transferring a French analyzer to English.",
"abstract": "PURPOSE\nMedical language, as many technical languages, is rich with morphologically complex words, many of which take their roots in Greek and Latin--in which case they are called neoclassical compounds. Morphosemantic analysis can help generate definitions of such words. The similarity of structure of those compounds in several European languages has also been observed, which seems to indicate that a same linguistic analysis could be applied to neo-classical compounds from different languages with minor modifications.\n\n\nMETHODS\nThis paper reports work on the adaptation of a morphosemantic analyzer dedicated to French (DériF) to analyze English medical neo-classical compounds. It presents the principles of this transposition and its current performance.\n\n\nRESULTS\nThe analyzer was tested on a set of 1299 compounds extracted from the WHO-ART terminology. 859 could be decomposed and defined, 675 of which successfully.\n\n\nCONCLUSION\nAn advantage of this process is that complex linguistic analyses designed for French could be successfully transposed to the analysis of English medical neoclassical compounds, which confirmed our hypothesis of transferability. The fact that the method was successfully applied to a Germanic language such as English suggests that performances would be at least as high if experimenting with Romance languages such as Spanish. Finally, the resulting system can produce more complete analyses of English medical compounds than existing systems, including a hierarchical decomposition and semantic gloss of each word."
},
{
"pmid": "26865946",
"title": "Interoperability between phenotypes in research and healthcare terminologies--Investigating partial mappings between HPO and SNOMED CT.",
"abstract": "BACKGROUND\nIdentifying partial mappings between two terminologies is of special importance when one terminology is finer-grained than the other, as is the case for the Human Phenotype Ontology (HPO), mainly used for research purposes, and SNOMED CT, mainly used in healthcare.\n\n\nOBJECTIVES\nTo investigate and contrast lexical and logical approaches to deriving partial mappings between HPO and SNOMED CT.\n\n\nMETHODS\n1) Lexical approach-We identify modifiers in HPO terms and attempt to map demodified terms to SNOMED CT through UMLS; 2) Logical approach-We leverage subsumption relations in HPO to infer partial mappings to SNOMED CT; 3) Comparison-We analyze the specific contribution of each approach and evaluate the quality of the partial mappings through manual review.\n\n\nRESULTS\nThere are 7358 HPO concepts with no complete mapping to SNOMED CT. We identified partial mappings lexically for 33% of them and logically for 82%. We identified partial mappings both lexically and logically for 27%. The clinical relevance of the partial mappings (for a cohort selection use case) is 49% for lexical mappings and 67% for logical mappings.\n\n\nCONCLUSIONS\nThrough complete and partial mappings, 92% of the 10,454 HPO concepts can be mapped to SNOMED CT (30% complete and 62% partial). Equivalence mappings between HPO and SNOMED CT allow for interoperability between data described using these two systems. However, due to differences in focus and granularity, equivalence is only possible for 30% of HPO classes. In the remaining cases, partial mappings provide a next-best approach for traversing between the two systems. Both lexical and logical mapping techniques produce mappings that cannot be generated by the other technique, suggesting that the two techniques are complementary to each other. Finally, this work demonstrates interesting properties (both lexical and logical) of HPO and SNOMED CT and illustrates some limitations of mapping through UMLS."
},
{
"pmid": "25411633",
"title": "An effective method of large scale ontology matching.",
"abstract": "BACKGROUND\nWe are currently facing a proliferation of heterogeneous biomedical data sources accessible through various knowledge-based applications. These data are annotated by increasingly extensive and widely disseminated knowledge organisation systems ranging from simple terminologies and structured vocabularies to formal ontologies. In order to solve the interoperability issue, which arises due to the heterogeneity of these ontologies, an alignment task is usually performed. However, while significant effort has been made to provide tools that automatically align small ontologies containing hundreds or thousands of entities, little attention has been paid to the matching of large sized ontologies in the life sciences domain.\n\n\nRESULTS\nWe have designed and implemented ServOMap, an effective method for large scale ontology matching. It is a fast and efficient high precision system able to perform matching of input ontologies containing hundreds of thousands of entities. The system, which was included in the 2012 and 2013 editions of the Ontology Alignment Evaluation Initiative campaign, performed very well. It was ranked among the top systems for the large ontologies matching.\n\n\nCONCLUSIONS\nWe proposed an approach for large scale ontology matching relying on Information Retrieval (IR) techniques and the combination of lexical and machine learning contextual similarity computing for the generation of candidate mappings. It is particularly adapted to the life sciences domain as many of the ontologies in this domain benefit from synonym terms taken from the Unified Medical Language System and that can be used by our IR strategy. The ServOMap system we implemented is able to deal with hundreds of thousands entities with an efficient computation time."
},
{
"pmid": "25937883",
"title": "TermGenie - a web-application for pattern-based ontology class generation.",
"abstract": "BACKGROUND\nBiological ontologies are continually growing and improving from requests for new classes (terms) by biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up, while manually processing these requests and create classes.\n\n\nRESULTS\nTermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationship to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue with only 70 (1.4%) obsoleted classes.\n\n\nCONCLUSION\nTermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and in some cases inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training."
},
{
"pmid": "9524353",
"title": "Evaluation of a \"lexically assign, logically refine\" strategy for semi-automated integration of overlapping terminologies.",
"abstract": "OBJECTIVE\nTo evaluate a \"lexically assign, logically refine\" (LALR) strategy for merging overlapping healthcare terminologies. This strategy combines description logic classification with lexical techniques that propose initial term definitions. The lexically suggested initial definitions are manually refined by domain experts to yield description logic definitions for each term in the overlapping terminologies of interest. Logic-based techniques are then used to merge defined terms.\n\n\nMETHODS\nA LALR strategy was applied to 7,763 LOINC and 2,050 SNOMED procedure terms using a common set of defining relationships taken from the LOINC data model. Candidate value restrictions were derived by lexically comparing the procedure's name with other terms contained in the reference SNOMED topography, living organism, function, and chemical axes. These candidate restrictions were reviewed by a domain expert, transformed into terminologic definitions for each of the terms, and then algorithmically classified.\n\n\nRESULTS\nThe authors successfully defined 5,724 (73%) LOINC and 1,151 (56%) SNOMED procedure terms using a LALR strategy. Algorithmic classification of the defined concepts resulted in an organization mirroring that of the reference hierarchies. The classification techniques appropriately placed more detailed LOINC terms underneath the corresponding SNOMED terms, thus forming a complementary relationship between the LOINC and SNOMED terms.\n\n\nDISCUSSION\nLALR is a successful strategy for merging overlapping terminologies in a test case where both terminologies can be defined using the same defining relationships, and where value restrictions can be drawn from a single reference hierarchy. Those concepts not having lexically suggested value restrictions frequently indicate gaps in the reference hierarchy."
},
{
"pmid": "24739596",
"title": "Exploitation of semantic methods to cluster pharmacovigilance terms.",
"abstract": "Pharmacovigilance is the activity related to the collection, analysis and prevention of adverse drug reactions (ADRs) induced by drugs. This activity is usually performed within dedicated databases (national, European, international...), in which the ADRs declared for patients are usually coded with a specific controlled terminology MedDRA (Medical Dictionary for Drug Regulatory Activities). Traditionally, the detection of adverse drug reactions is performed with data mining algorithms, while more recently the groupings of close ADR terms are also being exploited. The Standardized MedDRA Queries (SMQs) have become a standard in pharmacovigilance. They are created manually by international boards of experts with the objective to group together the MedDRA terms related to a given safety topic. Within the MedDRA version 13, 84 SMQs exist, although several important safety topics are not yet covered. The objective of our work is to propose an automatic method for assisting the creation of SMQs using the clustering of semantically close MedDRA terms. The experimented method relies on semantic approaches: semantic distance and similarity algorithms, terminology structuring methods and term clustering. The obtained results indicate that the proposed unsupervised methods appear to be complementary for this task, they can generate subsets of the existing SMQs and make this process systematic and less time consuming."
},
{
"pmid": "16770974",
"title": "Evaluation of the content coverage of SNOMED CT: ability of SNOMED clinical terms to represent clinical problem lists.",
"abstract": "OBJECTIVE\nTo evaluate the ability of SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) version 1.0 to represent the most common problems seen at the Mayo Clinic in Rochester, Minn.\n\n\nMATERIAL AND METHODS\nWe selected the 4996 most common nonduplicated text strings from the Mayo Master Sheet Index that describe patient problems associated with inpatient and outpatient episodes of care. From July 2003 through January 2004, 2 physician reviewers compared the Master Sheet Index text with the SNOMED CT terms that were automatically mapped by a vocabulary server or that they identified using a vocabulary browser and rated the \"correctness\" of the match. If the 2 reviewers disagreed, a third reviewer adjudicated. We evaluated the specificity, sensitivity, and positive predictive value of SNOMED CT.\n\n\nRESULTS\nOf the 4996 problems in the test set, SNOMED CT correctly identified 4568 terms (true-positive results); 36 terms were true negatives, 9 terms were false positives, and 383 terms were false negatives. SNOMED CT had a sensitivity of 92.3%, a specificity of 80.0%, and a positive predictive value of 99.8%.\n\n\nCONCLUSION\nSNOMED CT, when used as a compositional terminology, can exactly represent most (92.3%) of the terms used commonly in medical problem lists. Improvements to synonymy and adding missing modifiers would lead to greater coverage of common problem statements. Health care organizations should be encouraged and provided incentives to begin adopting SNOMED CT to drive their decision-support applications."
},
{
"pmid": "27259657",
"title": "The Orthology Ontology: development and applications.",
"abstract": "BACKGROUND\nComputational comparative analysis of multiple genomes provides valuable opportunities to biomedical research. In particular, orthology analysis can play a central role in comparative genomics; it guides establishing evolutionary relations among genes of organisms and allows functional inference of gene products. However, the wide variations in current orthology databases necessitate the research toward the shareability of the content that is generated by different tools and stored in different structures. Exchanging the content with other research communities requires making the meaning of the content explicit.\n\n\nDESCRIPTION\nThe need for a common ontology has led to the creation of the Orthology Ontology (ORTH) following the best practices in ontology construction. Here, we describe our model and major entities of the ontology that is implemented in the Web Ontology Language (OWL), followed by the assessment of the quality of the ontology and the application of the ORTH to existing orthology datasets. This shareable ontology enables the possibility to develop Linked Orthology Datasets and a meta-predictor of orthology through standardization for the representation of orthology databases. The ORTH is freely available in OWL format to all users at http://purl.org/net/orth .\n\n\nCONCLUSIONS\nThe Orthology Ontology can serve as a framework for the semantic standardization of orthology content and it will contribute to a better exploitation of orthology resources in biomedical research. The results demonstrate the feasibility of developing shareable datasets using this ontology. Further applications will maximize the usefulness of this ontology."
},
{
"pmid": "15802483",
"title": "Integrating SNOMED CT into the UMLS: an exploration of different views of synonymy and quality of editing.",
"abstract": "OBJECTIVE\nThe integration of SNOMED CT into the Unified Medical Language System (UMLS) involved the alignment of two views of synonymy that were different because the two vocabulary systems have different intended purposes and editing principles. The UMLS is organized according to one view of synonymy, but its structure also represents all the individual views of synonymy present in its source vocabularies. Despite progress in knowledge-based automation of development and maintenance of vocabularies, manual curation is still the main method of determining synonymy. The aim of this study was to investigate the quality of human judgment of synonymy.\n\n\nDESIGN\nSixty pairs of potentially controversial SNOMED CT synonyms were reviewed by 11 domain vocabulary experts (six UMLS editors and five noneditors), and scores were assigned according to the degree of synonymy.\n\n\nMEASUREMENTS\nThe synonymy scores of each subject were compared to the gold standard (the overall mean synonymy score of all subjects) to assess accuracy. Agreement between UMLS editors and noneditors was measured by comparing the mean synonymy scores of editors to noneditors.\n\n\nRESULTS\nAverage accuracy was 71% for UMLS editors and 75% for noneditors (difference not statistically significant). Mean scores of editors and noneditors showed significant positive correlation (Spearman's rank correlation coefficient 0.654, two-tailed p < 0.01) with a concurrence rate of 75% and an interrater agreement kappa of 0.43.\n\n\nCONCLUSION\nThe accuracy in the judgment of synonymy was comparable for UMLS editors and nonediting domain experts. There was reasonable agreement between the two groups."
},
{
"pmid": "17911788",
"title": "Combining lexical and semantic methods of inter-terminology mapping using the UMLS.",
"abstract": "The need for inter-terminology mapping is constantly increasing with the growth in the volume of electronically captured biomedical data and the demand to re-use the same data for secondary purposes. Using the UMLS as a knowledge base, semantically-based and lexically-based mappings were generated from SNOMED CT to ICD9CM terms and compared to a gold standard. Semantic mapping performed better than lexical mapping in terms of coverage, recall and precision. As the two mapping methods are orthogonal, the two sets of mappings can be used to validate and enhance each other. A method of combining the mappings based on the precision level of sub-categories in each method was derived. The combined method outperformed both methods, achieving coverage of 91%, recall of 43% and precision of 27%. It is also possible to customize the method of combination to optimize performance according to the task at hand."
},
{
"pmid": "28269853",
"title": "Leveraging Lexical Matching and Ontological Alignment to Map SNOMED CT Surgical Procedures to ICD-10-PCS.",
"abstract": "In 2015 ICD-10-PCS replaced ICD-9-CM for coding medical procedures in the U.S. We explored two methods to automatically map SNOMED CT surgical procedures to ICD-10-PCS. First, we used MetaMap to lexically map ICD-10-PCS index terms to SNOMED CT. Second, we made use of the axial structure of ICD-10-PCS and aligned them to defining attributes in SNOMED CT. Lexical mapping produced 45% of correct maps and 44% of broader maps. Ontological mappings were 40% correct and 5% broader. Both correct and broader maps will be useful in assisting mappers to create the map. When the two mapping methods agreed, the accuracy increased to 93%. Reviewing the MetaMap generated body part mappings and using additional information in the SNOMED CT names and definitions can lead to better results for the ontological map."
},
{
"pmid": "22874157",
"title": "Mapping SNOMED CT to ICD-10.",
"abstract": "A collaboration between the International Health Terminology Standards Development Organisation (IHTSDO®) and the World Health Organization (WHO) has resulted in a priority set of cross maps from SNOMED CT® to ICD-10® to support the epidemiological, statistical and administrative reporting needs of the IHTSDO member countries, WHO Collaborating Centres, and other interested parties. Overseen by the Joint Advisory Group (JAG), approximately 20,000 SNOMED CT concepts have been mapped to ICD-10 using a stand-alone mapping tool. The IHTSDO Map Special Interest Group (MapSIG) developed the mapping heuristics and established the validation process in conjunction with the JAG. Mapping team personnel were selected and then required to participate in a training session using the heuristics and tool. Quality metrics were used to assess the training program. An independent validation of cross map content was conducted under the supervision of the American Health Information Management Association. Lessons learned are being incorporated into the plans to complete the mapping of the remaining SNOMED CT concepts to ICD-10."
},
{
"pmid": "17481964",
"title": "Serious adverse events with infliximab: analysis of spontaneously reported adverse events.",
"abstract": "BACKGROUND & AIMS\nSerious adverse events such as bowel obstruction, heart failure, infection, lymphoma, and neuropathy have been reported with infliximab. The aims of this study were to explore adverse event signals with infliximab by using a long period of post-marketing experience, stratifying by indication.\n\n\nMETHODS\nThe relative reporting of infliximab adverse events to the U.S. Food and Drug Administration (FDA) was assessed with the public release version of the adverse event reporting system (AERS) database from 1968 to third quarter 2005. On the basis of a systematic review of adverse events, Medical Dictionary for Regulatory Activities (MedDRA) terms were mapped to predefined categories of adverse events, including death, heart failure, hepatitis, infection, infusion reaction, lymphoma, myelosuppression, neuropathy, and obstruction. Disproportionality analysis was used to calculate the empiric Bayes geometric mean (EBGM) and corresponding 90% confidence intervals (EB05, EB95) for adverse event categories.\n\n\nRESULTS\nInfliximab was identified as the suspect medication in 18,220 reports in the FDA AERS database. We identified a signal for lymphoma (EB05 = 6.9), neuropathy (EB05 = 3.8), infection (EB05 = 2.9), and bowel obstruction (EB05 = 2.8). The signal for granulomatous infections was stronger than the signal for non-granulomatous infections (EB05 = 12.6 and 2.4, respectively). The signals for bowel obstruction and infusion reaction were specific to patients with IBD; this suggests potential confounding by indication, especially for bowel obstruction.\n\n\nCONCLUSIONS\nIn light of this additional evidence of risk of lymphoma, neuropathy, and granulomatous infections, clinicians should stress this risk in the shared decision-making process."
},
{
"pmid": "15797001",
"title": "Modeling a description logic vocabulary for cancer research.",
"abstract": "The National Cancer Institute has developed the NCI Thesaurus, a biomedical vocabulary for cancer research, covering terminology across a wide range of cancer research domains. A major design goal of the NCI Thesaurus is to facilitate translational research. We describe: the features of Ontylog, a description logic used to build NCI Thesaurus; our methodology for enhancing the terminology through collaboration between ontologists and domain experts, and for addressing certain real world challenges arising in modeling the Thesaurus; and finally, we describe the conversion of NCI Thesaurus from Ontylog into Web Ontology Language Lite. Ontylog has proven well suited for constructing big biomedical vocabularies. We have capitalized on the Ontylog constructs Kind and Role in the collaboration process described in this paper to facilitate communication between ontologists and domain experts. The artifacts and processes developed by NCI for collaboration may be useful in other biomedical terminology development efforts."
},
{
"pmid": "25093068",
"title": "OAE: The Ontology of Adverse Events.",
"abstract": "BACKGROUND\nA medical intervention is a medical procedure or application intended to relieve or prevent illness or injury. Examples of medical interventions include vaccination and drug administration. After a medical intervention, adverse events (AEs) may occur which lie outside the intended consequences of the intervention. The representation and analysis of AEs are critical to the improvement of public health.\n\n\nDESCRIPTION\nThe Ontology of Adverse Events (OAE), previously named Adverse Event Ontology (AEO), is a community-driven ontology developed to standardize and integrate data relating to AEs arising subsequent to medical interventions, as well as to support computer-assisted reasoning. OAE has over 3,000 terms with unique identifiers, including terms imported from existing ontologies and more than 1,800 OAE-specific terms. In OAE, the term 'adverse event' denotes a pathological bodily process in a patient that occurs after a medical intervention. Causal adverse events are defined by OAE as those events that are causal consequences of a medical intervention. OAE represents various adverse events based on patient anatomic regions and clinical outcomes, including symptoms, signs, and abnormal processes. OAE has been used in the analysis of several different sorts of vaccine and drug adverse event data. For example, using the data extracted from the Vaccine Adverse Event Reporting System (VAERS), OAE was used to analyse vaccine adverse events associated with the administrations of different types of influenza vaccines. OAE has also been used to represent and classify the vaccine adverse events cited in package inserts of FDA-licensed human vaccines in the USA.\n\n\nCONCLUSION\nOAE is a biomedical ontology that logically defines and classifies various adverse events occurring after medical interventions. OAE has successfully been applied in several adverse event studies. The OAE ontological framework provides a platform for systematic representation and analysis of adverse events and of the factors (e.g., vaccinee age) important for determining their clinical outcomes."
},
{
"pmid": "29329592",
"title": "The eXtensible ontology development (XOD) principles and tool implementation to support ontology interoperability.",
"abstract": "Ontologies are critical to data/metadata and knowledge standardization, sharing, and analysis. With hundreds of biological and biomedical ontologies developed, it has become critical to ensure ontology interoperability and the usage of interoperable ontologies for standardized data representation and integration. The suite of web-based Ontoanimal tools (e.g., Ontofox, Ontorat, and Ontobee) support different aspects of extensible ontology development. By summarizing the common features of Ontoanimal and other similar tools, we identified and proposed an \"eXtensible Ontology Development\" (XOD) strategy and its associated four principles. These XOD principles reuse existing terms and semantic relations from reliable ontologies, develop and apply well-established ontology design patterns (ODPs), and involve community efforts to support new ontology development, promoting standardized and interoperable data and knowledge representation and integration. The adoption of the XOD strategy, together with robust XOD tool development, will greatly support ontology interoperability and robust ontology applications to support data to be Findable, Accessible, Interoperable and Reusable (i.e., FAIR)."
},
{
"pmid": "16185681",
"title": "Building an ontology of adverse drug reactions for automated signal generation in pharmacovigilance.",
"abstract": "Automated signal generation in pharmacovigilance implements unsupervised statistical machine learning techniques in order to discover unknown adverse drug reactions (ADR) in spontaneous reporting systems. The impact of the terminology used for coding ADRs has not been addressed previously. The Medical Dictionary for Regulatory Activities (MedDRA) used worldwide in pharmacovigilance cases does not provide formal definitions of terms. We have built an ontology of ADRs to describe semantics of MedDRA terms. Ontological subsumption and approximate matching inferences allow a better grouping of medically related conditions. Signal generation performances are significantly improved but time consumption related to modelization remains very important."
},
{
"pmid": "25025130",
"title": "OMIT: dynamic, semi-automated ontology development for the microRNA domain.",
"abstract": "As a special class of short non-coding RNAs, microRNAs (a.k.a. miRNAs or miRs) have been reported to perform important roles in various biological processes by regulating respective target genes. However, significant barriers exist during biologists' conventional miR knowledge discovery. Emerging semantic technologies, which are based upon domain ontologies, can render critical assistance to this problem. Our previous research has investigated the construction of a miR ontology, named Ontology for MIcroRNA Target Prediction (OMIT), the very first of its kind that formally encodes miR domain knowledge. Although it is unavoidable to have a manual component contributed by domain experts when building ontologies, many challenges have been identified for a completely manual development process. The most significant issue is that a manual development process is very labor-intensive and thus extremely expensive. Therefore, we propose in this paper an innovative ontology development methodology. Our contributions can be summarized as: (i) We have continued the development and critical improvement of OMIT, solidly based on our previous research outcomes. (ii) We have explored effective and efficient algorithms with which the ontology development can be seamlessly combined with machine intelligence and be accomplished in a semi-automated manner, thus significantly reducing large amounts of human efforts. A set of experiments have been conducted to thoroughly evaluate our proposed methodology."
},
{
"pmid": "17108617",
"title": "Knowledge acquisition for computation of semantic distance between WHO-ART terms.",
"abstract": "Computation of semantic distance between adverse drug reactions terms may be an efficient way to group related medical conditions in pharmacovigilance case reports. Previous experience with ICD-10 on a semantic distance tool highlighted a bottleneck related to manual description of formal definitions in large terminologies. We propose a method based on acquisition of formal definitions by knowledge extraction from UMLS and morphosemantic analysis. These formal definitions are expressed with SNOMED International terms. We provide formal definitions for 758 WHO-ART terms: 321 terms defined from UMLS, 320 terms defined using morphosemantic analysis and 117 terms defined after expert evaluation. Computation of semantic distance (e.g. k-nearest neighbours) was implemented in J2EE terminology services. Similar WHO-ART terms defined by automated knowledge acquisition and ICD terms defined manually show similar behaviour in the semantic distance tool. Our knowledge acquisition method can help us to generate new formal definitions of medical terms for our semantic distance terminology services."
},
{
"pmid": "9865051",
"title": "Concept-oriented standardization and statistics-oriented classification: continuing the classification versus nomenclature controversy.",
"abstract": "Nowadays, most activities on controlled medical vocabularies focus on the provision of a sufficient atomic-level granularity for representing clinical data. Amongst others, clinical vocabularies should be concept oriented, compositional and should also reject \"Not Elsewhere Classified\". We strongly share the opinion that there is a need to deal with serious deficits of existing manually created vocabularies and with new demands for computer-based advanced processing and exchange of medical language data. However, we do not share the opinion that methodological requirements like observational and structural comparability needed for sound statistics should not be included in desiderata of controlled medical vocabularies. Statistical-oriented classifications are not developed for representing detailed clinical data but for providing purpose-dependent classes where cases of interest are assigned uniquely. Either statistical classifications are not included into the set of controlled medical vocabularies in the sense of Cimino, or his desiderata are misleading. We argue that statistical classifications should be linked to (formal) concept systems, but again this linkage does not change their different natures. With this article we continue the \"classification versus nomenclature\" controversy referring to Coté."
},
{
"pmid": "29743102",
"title": "Extending the DIDEO ontology to include entities from the natural product drug interaction domain of discourse.",
"abstract": "BACKGROUND\nPrompted by the frequency of concomitant use of prescription drugs with natural products, and the lack of knowledge regarding the impact of pharmacokinetic-based natural product-drug interactions (PK-NPDIs), the United States National Center for Complementary and Integrative Health has established a center of excellence for PK-NPDI. The Center is creating a public database to help researchers (primarly pharmacologists and medicinal chemists) to share and access data, results, and methods from PK-NPDI studies. In order to represent the semantics of the data and foster interoperability, we are extending the Drug-Drug Interaction and Evidence Ontology (DIDEO) to include definitions for terms used by the data repository. This is feasible due to a number of similarities between pharmacokinetic drug-drug interactions and PK-NPDIs.\n\n\nMETHODS\nTo achieve this, we set up an iterative domain analysis in the following steps. In Step 1 PK-NPDI domain experts produce a list of terms and definitions based on data from PK-NPDI studies, in Step 2 an ontology expert creates ontologically appropriate classes and definitions from the list along with class axioms, in Step 3 there is an iterative editing process during which the domain experts and the ontology experts review, assess, and amend class labels and definitions and in Step 4 the ontology expert implements the new classes in the DIDEO development branch. This workflow often results in different labels and definitions for the new classes in DIDEO than the domain experts initially provided; the latter are preserved in DIDEO as separate annotations.\n\n\nRESULTS\nStep 1 resulted in a list of 344 terms. During Step 2 we found that 9 of these terms already existed in DIDEO, and 6 existed in other OBO Foundry ontologies. These 6 were imported into DIDEO; additional terms from multiple OBO Foundry ontologies were also imported, either to serve as superclasses for new terms in the initial list or to build axioms for these terms. At the time of writing, 7 terms have definitions ready for review (Step 2), 64 are ready for implementation (Step 3) and 112 have been pushed to DIDEO (Step 4). Step 2 also suggested that 26 terms of the original list were redundant and did not need implementation; the domain experts agreed to remove them. Step 4 resulted in many terms being added to DIDEO that help to provide an additional layer of granularity in describing experimental conditions and results, e.g. transfected cultured cells used in metabolism studies and chemical reactions used in measuring enzyme activity. These terms also were integrated into the NaPDI repository.\n\n\nCONCLUSION\nWe found DIDEO to provide a sound foundation for semantic representation of PK-NPDI terms, and we have shown the novelty of the project in that DIDEO is the only ontology in which NPDI terms are formally defined."
},
{
"pmid": "29726439",
"title": "Evaluation of SNOMED CT Content Coverage: A Systematic Literature Review.",
"abstract": "BACKGROUND\nOne of the most important features studied for adoption of terminologies is content coverage. The content coverage of SNOMED CT as a large scale terminology system has been evaluated in different domains by various methods.\n\n\nOBJECTIVES\nThis study provided an overview of studies evaluating SNOMED CT content coverage.\n\n\nMETHODS\nThis systematic literature review covered Scopus, Embase, PubMed and Web of Science. It included studies in English language with accessible full-text from the beginning of 2002 to November 2017.\n\n\nRESULTS\nReviewing 62 studies revealed that 76 percent of studies were carried out in the US and other countries started to study in this regard from 2007. Most of the studies focused on the comparison of SNOMED CT with disease classifications in the domain of \"diagnosis and problem list\".\n\n\nCONCLUSION\nStudying the trend of studies in different countries shows that SNOMED CT content coverage is not limited to the early stages of SNOMED CT adoption. However, evaluation methods are likely different due to the stage of SNOMED CT implementation. Therefore, it is recommended to identify and compare evaluation methods of SNOMED CT content coverage in future studies."
},
{
"pmid": "8412823",
"title": "The Unified Medical Language System.",
"abstract": "In 1986, the National Library of Medicine began a long-term research and development project to build the Unified Medical Language System (UMLS). The purpose of the UMLS is to improve the ability of computer programs to \"understand\" the biomedical meaning in user inquiries and to use this understanding to retrieve and integrate relevant machine-readable information for users. Underlying the UMLS effort is the assumption that timely access to accurate and up-to-date information will improve decision making and ultimately the quality of patient care and research. The development of the UMLS is a distributed national experiment with a strong element of international collaboration. The general strategy is to develop UMLS components through a series of successive approximations of the capabilities ultimately desired. Three experimental Knowledge Sources, the Metathesaurus, the Semantic Network, and the Information Sources Map have been developed and are distributed annually to interested researchers, many of whom have tested and evaluated them in a range of applications. The UMLS project and current developments in high-speed, high-capacity international networks are converging in ways that have great potential for enhancing access to biomedical information."
},
{
"pmid": "20647054",
"title": "Natural Language Processing methods and systems for biomedical ontology learning.",
"abstract": "While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they must achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships as well as difficulty in updating the ontology as knowledge changes. Methodologies developed in the fields of Natural Language Processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents. In this article, we review existing methodologies and developed systems, and discuss how existing methods can benefit the development of biomedical ontologies."
},
{
"pmid": "24279920",
"title": "The Ontology of Vaccine Adverse Events (OVAE) and its usage in representing and analyzing adverse events associated with US-licensed human vaccines.",
"abstract": "BACKGROUND\nLicensed human vaccines can induce various adverse events (AE) in vaccinated patients. Due to the involvement of the whole immune system and complex immunological reactions after vaccination, it is difficult to identify the relations among vaccines, adverse events, and human populations in different age groups. Many known vaccine adverse events (VAEs) have been recorded in the package inserts of US-licensed commercial vaccine products. To better represent and analyze VAEs, we developed the Ontology of Vaccine Adverse Events (OVAE) as an extension of the Ontology of Adverse Events (OAE) and the Vaccine Ontology (VO).\n\n\nRESULTS\nLike OAE and VO, OVAE is aligned with the Basic Formal Ontology (BFO). The commercial vaccines and adverse events in OVAE are imported from VO and OAE, respectively. A new population term 'human vaccinee population' is generated and used to define VAE occurrence. An OVAE design pattern is developed to link vaccine, adverse event, vaccinee population, age range, and VAE occurrence. OVAE has been used to represent and classify the adverse events recorded in package insert documents of commercial vaccines licensed by the USA Food and Drug Administration (FDA). OVAE currently includes over 1,300 terms, including 87 distinct types of VAEs associated with 63 human vaccines licensed in the USA. For each vaccine, occurrence rates for every VAE in different age groups have been logically represented in OVAE. SPARQL scripts were developed to query and analyze the OVAE knowledge base data. To demonstrate the usage of OVAE, the top 10 vaccines accompanying with the highest numbers of VAEs and the top 10 VAEs most frequently observed among vaccines were identified and analyzed. Asserted and inferred ontology hierarchies classify VAEs in different levels of AE groups. Different VAE occurrences in different age groups were also analyzed.\n\n\nCONCLUSIONS\nThe ontology-based data representation and integration using the FDA-approved information from the vaccine package insert documents enables the identification of adverse events from vaccination in relation to predefined parts of the population (age groups) and certain groups of vaccines. The resulting ontology-based VAE knowledge base classifies vaccine-specific VAEs and supports better VAE understanding and future rational AE prevention and treatment."
},
{
"pmid": "28969675",
"title": "A histological ontology of the human cardiovascular system.",
"abstract": "BACKGROUND\nIn this paper, we describe a histological ontology of the human cardiovascular system developed in collaboration among histology experts and computer scientists.\n\n\nRESULTS\nThe histological ontology is developed following an existing methodology using Conceptual Models (CMs) and validated using OOPS!, expert evaluation with CMs, and how accurately the ontology can answer the Competency Questions (CQ). It is publicly available at http://bioportal.bioontology.org/ontologies/HO and https://w3id.org/def/System .\n\n\nCONCLUSIONS\nThe histological ontology is developed to support complex tasks, such as supporting teaching activities, medical practices, and bio-medical research or having natural language interactions."
},
{
"pmid": "10566373",
"title": "Barriers to the clinical implementation of compositionality.",
"abstract": "BACKGROUND\nCompositional mechanisms for the entry of clinically relevant controlled vocabularies have been suggested as a possible solution to providing adequate descriptive precision while keeping term vocabulary redundancy under control. As of yet, there are no widely accepted term navigators that allow physicians to enter problem lists utilizing controlled vocabularies with compositionality.\n\n\nMETHODS\nWe report on the results of a usability trial of 5 physicians using our most recent attempt at developing the Mayo Problem List Manager. We tested the implementation of an automated term composition, and hierarchical term dissection.\n\n\nRESULTS\nParticipants found acceptable terms 96% of the time and found automated term composition helpful in 85% of the case scenarios. There was significant confusion about the terminology used to describe compositional elements (kernel concepts, modifiers, and qualifiers) however participants used the functions appropriately. Speed of entry was universally stated as the limiting factor.\n\n\nCONCLUSIONS\nThe variety of methods that our participants used to enter terms highlights the need for multiple ways to accomplish the task of data entry. Successful implementation of user directed compositionality could be accomplished with further improvement of the user interface and the underlying terminology."
},
{
"pmid": "19745303",
"title": "Projection and inheritance of SNOMED CT relations between MeSH terms.",
"abstract": "This paper proposes a methodology to achieve the automatic inheritance of SNOMED CT relations applied to MeSH preferred terms using UMLS as knowledge source server. We propose an interoperability wildcard to achieve this objective. A quantitative and a qualitative analysis were performed on top four SNOMED CT relations inherited between MeSH preferred terms. A total of 12,030 couples of MeSH preferred terms are in relation via at least one SNOMED CT relationship. For the top-four relations inherited between MeSH preferred terms, overall 79.25% of them are relevant, 16.25% as intermediate and 4.5% as irrelevant, as judged by a medical librarian. This work should lead to an optimization of multi-terminology indexing tools, multi-terminology information retrieval and navigation among a multi-terminology server."
},
{
"pmid": "17604415",
"title": "Standardised MedDRA queries: their role in signal detection.",
"abstract": "Standardised MedDRA (Medical Dictionary for Regulatory Activities) queries (SMQs) are a newly developed tool to assist in the retrieval of cases of interest from a MedDRA-coded database. SMQs contain terms related to signs, symptoms, diagnoses, syndromes, physical findings, laboratory and other physiological test data etc, that are associated with the medical condition of interest. They are being developed jointly by CIOMS and the MedDRA Maintenance and Support Services Organization (MSSO) and are provided as an integral part of a MedDRA subscription. During their development, SMQs undergo testing to assure that they are able to retrieve cases of interest within the defined scope of the SMQ. This paper describes the features of SMQs that allow for flexibility in their application, such as 'narrow' and 'broad' sub-searches, hierarchical grouping of sub-searches and search algorithms. In addition, as with MedDRA, users can request changes to SMQs. SMQs are maintained in synchrony with MedDRA versions by internal maintenance processes in the MSSO. The list of safety topics to be developed into SMQs is long and comprehensive. The CIOMS Working Group retains a list of topics to be developed and periodically reviews the list for priority and relevance. As of mid-2007, 37 SMQs are in production use and several more are under development. The potential uses of SMQs in safety analysis will be discussed including their role in signal detection and evaluation."
},
{
"pmid": "28844750",
"title": "Integrating cancer diagnosis terminologies based on logical definitions of SNOMED CT concepts.",
"abstract": "In oncology, the reuse of data is confronted with the heterogeneity of terminologies. It is necessary to semantically integrate these distinct terminologies. The semantic integration by using a third terminology as a support is a conventional approach for the integration of two terminologies that are not very structured. The aim of our study was to use SNOMED CT for integrating ICD-10 and ICD-O3. We used two complementary resources, mapping tables provided by SNOMED CT and the NCI Metathesaurus, in order to find mappings between ICD-10 or ICD-O3 concepts and SNOMED CT concepts. We used the SNOMED CT structure to filter inconsistent mappings, as well as to disambiguate multiple mappings. Based on the remaining mappings, we used semantic relations from SNOMED CT to establish links between ICD-10 and ICD-O3. Overall, the coverage of ICD-O3 and ICD10 codes was over 88%. Finally, we obtained an integration of 24% (203/852) of ICD-10 concepts with 86% (888/1032) of ICD-O3 morphology concepts combined to 39% (127/330) of ICD-O3 topography concepts. Comparing our results with the 23,684 ICD-O3 pairs mapped to ICD-10 concepts in the SEER conversion file, we found 17,447 pairs of ICD-O3 concepts in common among which 11,932 pairs were integrated with the same ICD-10 concept as the SEER conversion file. The automated process leverages logical definitions of SNOMED CT concepts. While the low quality of some of these definitions impacted negatively the integration process, the identification of such situations made it possible to indirectly audit the structure of SNOMED CT."
},
{
"pmid": "20618919",
"title": "Enriching a primary health care version of ICD-10 using SNOMED CT mapping.",
"abstract": "BACKGROUND\nIn order to satisfy different needs, medical terminology systems must have richer structures. This study examines whether a Swedish primary health care version of the mono-hierarchical ICD-10 (KSH97-P) may obtain a richer structure using category and chapter mappings from KSH97-P to SNOMED CT and SNOMED CT's structure. Manually-built mappings from KSH97-P's categories and chapters to SNOMED CT's concepts are used as a starting point.\n\n\nRESULTS\nThe mappings are manually evaluated using computer-produced information and a small number of mappings are updated. A new and poly-hierarchical chapter division of KSH97-P's categories has been created using the category and chapter mappings and SNOMED CT's generic structure. In the new chapter division, most categories are included in their original chapters. A considerable number of concepts are included in other chapters than their original chapters. Most of these inclusions can be explained by ICD-10's design. KSH97-P's categories are also extended with attributes using the category mappings and SNOMED CT's defining attribute relationships. About three-fourths of all concepts receive an attribute of type Finding site and about half of all concepts receive an attribute of type Associated morphology. Other types of attributes are less common.\n\n\nCONCLUSIONS\nIt is possible to use mappings from KSH97-P to SNOMED CT and SNOMED CT's structure to enrich KSH97-P's mono-hierarchical structure with a poly-hierarchical chapter division and attributes of type Finding site and Associated morphology. The final mappings are available as additional files for this paper."
},
{
"pmid": "29316968",
"title": "Improving the interoperability of biomedical ontologies with compound alignments.",
"abstract": "BACKGROUND\nOntologies are commonly used to annotate and help process life sciences data. Although their original goal is to facilitate integration and interoperability among heterogeneous data sources, when these sources are annotated with distinct ontologies, bridging this gap can be challenging. In the last decade, ontology matching systems have been evolving and are now capable of producing high-quality mappings for life sciences ontologies, usually limited to the equivalence between two ontologies. However, life sciences research is becoming increasingly transdisciplinary and integrative, fostering the need to develop matching strategies that are able to handle multiple ontologies and more complex relations between their concepts.\n\n\nRESULTS\nWe have developed ontology matching algorithms that are able to find compound mappings between multiple biomedical ontologies, in the form of ternary mappings, finding for instance that \"aortic valve stenosis\"(HP:0001650) is equivalent to the intersection between \"aortic valve\"(FMA:7236) and \"constricted\" (PATO:0001847). The algorithms take advantage of search space filtering based on partial mappings between ontology pairs, to be able to handle the increased computational demands. The evaluation of the algorithms has shown that they are able to produce meaningful results, with precision in the range of 60-92% for new mappings. The algorithms were also applied to the potential extension of logical definitions of the OBO and the matching of several plant-related ontologies.\n\n\nCONCLUSIONS\nThis work is a first step towards finding more complex relations between multiple ontologies. The evaluation shows that the results produced are significant and that the algorithms could satisfy specific integration needs."
},
{
"pmid": "25949785",
"title": "Formalizing biomedical concepts from textual definitions.",
"abstract": "BACKGROUND\nOntologies play a major role in life sciences, enabling a number of applications, from new data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so that it ensures global consistency and support of complex reasoning tasks. Most biomedical ontologies and taxonomies on the other hand define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features and outputs formal definitions that follow the structure of SNOMED CT concept definitions.\n\n\nRESULTS\nWe evaluate our method on three benchmarks and test both the underlying relation extraction component as well as the overall quality of output concept definitions. In addition, we provide an analysis on the following aspects: (1) How do definitions mined from the Web and literature differ from the ones mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., the restrictions of relations' domain and range, impact on the generated definition quality?, (3) How do different machine learning algorithms compare to each other for the task of formal definition generation?, and, (4) What is the influence of the learning data size to the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. In addition, the results show that the choice of corpora, lexical features, learning algorithm and data size do not impact the performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations' domain and range pairs do not overlap, this information is most valuable in formalizing textual definitions.\n\n\nCONCLUSIONS\nThe analysis presented in this manuscript implies that automated methods can provide a valuable contribution to the formalization of biomedical knowledge, thus paving the way for future applications that go beyond retrieval and into complex reasoning. The method is implemented and accessible to the public from: https://github.com/alifahsyamsiyah/learningDL."
},
{
"pmid": "25488031",
"title": "Approaching the axiomatic enrichment of the Gene Ontology from a lexical perspective.",
"abstract": "OBJECTIVE\nThe main goal of this work is to measure how lexical regularities in biomedical ontology labels can be used for the automatic creation of formal relationships between classes, and to evaluate the results of applying our approach to the Gene Ontology (GO).\n\n\nMETHODS\nIn recent years, we have developed a method for the lexical analysis of regularities in biomedical ontology labels, and we showed that the labels can present a high degree of regularity. In this work, we extend our method with a cross-products extension (CPE) metric, which estimates the potential interest of a specific regularity for axiomatic enrichment in the lexical analysis, using information on exact matches in external ontologies. The GO consortium recently enriched the GO by using so-called cross-product extensions. Cross-products are generated by establishing axioms that relate a given GO class with classes from the GO or other biomedical ontologies. We apply our method to the GO and study how its lexical analysis can identify and reconstruct the cross-products that are defined by the GO consortium.\n\n\nRESULTS\nThe label of the classes of the GO are highly regular in lexical terms, and the exact matches with labels of external ontologies affect 80% of the GO classes. The CPE metric reveals that 31.48% of the classes that exhibit regularities have fragments that are classes into two external ontologies that are selected for our experiment, namely, the Cell Ontology and the Chemical Entities of Biological Interest ontology, and 18.90% of them are fully decomposable into smaller parts. Our results show that the CPE metric permits our method to detect GO cross-product extensions with a mean recall of 62% and a mean precision of 28%. The study is completed with an analysis of false positives to explain this precision value.\n\n\nCONCLUSIONS\nWe think that our results support the claim that our lexical approach can contribute to the axiomatic enrichment of biomedical ontologies and that it can provide new insights into the engineering of biomedical ontologies."
},
{
"pmid": "27737720",
"title": "Dione: An OWL representation of ICD-10-CM for classifying patients' diseases.",
"abstract": "BACKGROUND\nSystematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) has been designed as standard clinical terminology for annotating Electronic Health Records (EHRs). EHRs textual information is used to classify patients' diseases into an International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) category (usually by an expert). Improving the accuracy of classification is the main purpose of using ontologies and OWL representations at the core of classification systems. In the last few years some ontologies and OWL representations for representing ICD-10-CM categories have been developed. However, they were not designed to be the basis for an automatic classification tool nor do they model ICD-10-CM inclusion terms as Web Ontology Language (OWL) axioms, which enables automatic classification. In this context we have developed Dione, an OWL representation of ICD-10-CM.\n\n\nRESULTS\nDione is the first OWL representation of ICD-10-CM, which is logically consistent, whose axioms define the ICD-10-CM inclusion terms by means of a methodology based on SNOMED CT/ICD-10-CM mappings. The ICD-10-CM exclusions are handled with these mappings. Dione currently contains 391,669 classes, 391,720 entity annotation axioms and 11,795 owl:equivalentClass axioms which have been constructed using 104,646 relationships extracted from the SNOMED CT/ICD-10-CM and BioPortal mappings included in Dione using the owl:intersectionOf and the owl:someValuesFrom statements. The resulting OWL representation has been classified and its consistency tested with the ELK reasoner. We have also taken three clinical records from the Virgen de la Victoria Hospital (Málaga, Spain) which have been manually annotated using SNOMED CT. These annotations have been included as instances to be classified by the reasoner. The classified instances show that Dione could be a promising ICD-10-CM OWL representation to support the classification of patients' diseases.\n\n\nCONCLUSIONS\nDione is a first step towards the automatic classification of patients' diseases by using SNOMED CT annotations embedded in Electronic Health Records (EHRs). The purpose of Dione is to standardise and formalise a medical terminology, thereby enabling new kinds of tools and new sets of functionalities to be developed. This in turn assists health specialists by providing classified information from EHRs and enables the automatic annotation of patients' diseases with ICD-10-CM codes."
},
{
"pmid": "18755993",
"title": "Why do it the hard way? The case for an expressive description logic for SNOMED.",
"abstract": "There has been major progress both in description logics and ontology design since SNOMED was originally developed. The emergence of the standard Web Ontology language in its latest revision, OWL 1.1 is leading to a rapid proliferation of tools. Combined with the increase in computing power in the past two decades, these developments mean that many of the restrictions that limited SNOMED's original formulation no longer need apply. We argue that many of the difficulties identified in SNOMED could be more easily dealt with using a more expressive language than that in which SNOMED was originally, and still is, formulated. The use of a more expressive language would bring major benefits including a uniform structure for context and negation. The result would be easier to use and would simplify developing software and formulating queries."
},
{
"pmid": "22024315",
"title": "Lexically suggest, logically define: quality assurance of the use of qualifiers and expected results of post-coordination in SNOMED CT.",
"abstract": "A study of the use of common qualifiers in SNOMED CT definitions and the resulting classification was undertaken using combined lexical and semantic techniques. The accuracy of SNOMED authors in formulating definitions for pre-coordinated concepts was taken as a proxy for the expected accuracy of users formulating post-coordinated expressions. The study focused on \"acute\" and \"chronic\" as used within a module based on the UMLS CORE Problem List and using the pattern of SNOMED CT's definition Acute disease and Chronic disease. Scripts were used to identify potential candidate concepts whose names suggested that they should be classified as acute or chronic findings. The potential candidates were filtered by local clinical experts to eliminate spurious lexical matches. Scripts were then use to determine which of the filtered candidates were not classified under acute or chronic findings as expected. The results were that 28% and 20% of candidate chronic and acute concepts, respectively, were not so classified. Of these candidate misclassifications, the large majority occurred because \"acute\" and \"chronic\" are sometimes specified by qualifiers for clinical course and sometimes for morphology, a fact mentioned but not fully detailed in the User Guide distributed with the SNOMED releases. This heterogeneous representation reflects a potential conflict between common usage in patient care and SNOMED's origins in pathology. Other incidental findings included questions about the qualifier hierarchies themselves and issues with the underlying model for anatomy. The effort required for the study was kept modest by using module extraction and scripts, showing that such quality assurance of SNOMED is practical. The results of a preliminary study using proxy measures must be taken with caution. However, the high rate of misclassification indicates that, until the specifications for qualifiers are better documented and/or brought more in line with common clinical usage, anyone attempting to use post-coordination in SNOMED CT must be aware that there are significant pitfalls."
},
{
"pmid": "9865053",
"title": "Standards to support development of terminological systems for healthcare telematics.",
"abstract": "The Technical Committee on \"Medical Informatics\" of the European Committee for Standardization (CEN/TC251) is supporting developers of terminological systems in healthcare by a series of standards. The dream of \"universal\" coding system was abandoned in favor of a coherent family of terminologies, diversified according to tasks; two ideas were introduced: (1) the \"categorical structure\", i.e. a model of semantic categories and their relations within a subject field and (2) the \"cross-thesaurus\", i.e. a system of descriptors to build a systematic representation (called here \"dissection\") for each terminological phrase, coherent across diverse terminologies on a given subject field. The goal is to assure coexistence and interoperability (and reciprocal support for development and maintenance) to three generations of systems: (1) traditional paper-based systems (first generation); (2) compositional systems built according to a categorical structure and a cross-thesaurus (second generation) and (3) formal models (third generation). Various scenarios are presented, on the exploitation of computer-based terminological systems. The idea of \"operational meaning\" of terminological phrases within administrative and organizational contexts and the idea of \"task-oriented details\" are also introduced, to justify and exploit design constraints on terminological systems."
},
{
"pmid": "16501181",
"title": "Interface terminologies: facilitating direct entry of clinical data into electronic health record systems.",
"abstract": "Previous investigators have defined clinical interface terminology as a systematic collection of health care-related phrases (terms) that supports clinicians' entry of patient-related information into computer programs, such as clinical \"note capture\" and decision support tools. Interface terminologies also can facilitate display of computer-stored patient information to clinician-users. Interface terminologies \"interface\" between clinicians' own unfettered, colloquial conceptualizations of patient descriptors and the more structured, coded internal data elements used by specific health care application programs. The intended uses of a terminology determine its conceptual underpinnings, structure, and content. As a result, the desiderata for interface terminologies differ from desiderata for health care-related terminologies used for storage (e.g., SNOMED-CT), information retrieval (e.g., MeSH), and classification (e.g., ICD9-CM). Necessary but not sufficient attributes for an interface terminology include adequate synonym coverage, presence of relevant assertional knowledge, and a balance between pre- and post-coordination. To place interface terminologies in context, this article reviews historical goals and challenges of clinical terminology development in general and then focuses on the unique features of interface terminologies."
},
{
"pmid": "22080554",
"title": "Disease Ontology: a backbone for disease semantic integration.",
"abstract": "The Disease Ontology (DO) database (http://disease-ontology.org) represents a comprehensive knowledge base of 8043 inherited, developmental and acquired human diseases (DO version 3, revision 2510). The DO web browser has been designed for speed, efficiency and robustness through the use of a graph database. Full-text contextual searching functionality using Lucene allows the querying of name, synonym, definition, DOID and cross-reference (xrefs) with complex Boolean search strings. The DO semantically integrates disease and medical vocabularies through extensive cross mapping and integration of MeSH, ICD, NCI's thesaurus, SNOMED CT and OMIM disease-specific terms and identifiers. The DO is utilized for disease annotation by major biomedical databases (e.g. Array Express, NIF, IEDB), as a standard representation of human disease in biomedical ontologies (e.g. IDO, Cell line ontology, NIFSTD ontology, Experimental Factor Ontology, Influenza Ontology), and as an ontological cross mappings resource between DO, MeSH and OMIM (e.g. GeneWiki). The DO project (http://diseaseontology.sf.net) has been incorporated into open source tools (e.g. Gene Answers, FunDO) to connect gene and disease biomedical data through the lens of human disease. The next iteration of the DO web browser will integrate DO's extended relations and logical definition representation along with these biomedical resource cross-mappings."
},
{
"pmid": "23974561",
"title": "Formal ontologies in biomedical knowledge representation.",
"abstract": "OBJECTIVES\nMedical decision support and other intelligent applications in the life sciences depend on increasing amounts of digital information. Knowledge bases as well as formal ontologies are being used to organize biomedical knowledge and data. However, these two kinds of artefacts are not always clearly distinguished. Whereas the popular RDF(S) standard provides an intuitive triple-based representation, it is semantically weak. Description logics based ontology languages like OWL-DL carry a clear-cut semantics, but they are computationally expensive, and they are often misinterpreted to encode all kinds of statements, including those which are not ontological.\n\n\nMETHOD\nWe distinguish four kinds of statements needed to comprehensively represent domain knowledge: universal statements, terminological statements, statements about particulars and contingent statements. We argue that the task of formal ontologies is solely to represent universal statements, while the non-ontological kinds of statements can nevertheless be connected with ontological representations. To illustrate these four types of representations, we use a running example from parasitology.\n\n\nRESULTS\nWe finally formulate recommendations for semantically adequate ontologies that can efficiently be used as a stable framework for more context-dependent biomedical knowledge representation and reasoning applications like clinical decision support systems."
},
{
"pmid": "29295238",
"title": "Interface Terminologies, Reference Terminologies and Aggregation Terminologies: A Strategy for Better Integration.",
"abstract": "The time has come to end unproductive competitions among different types of biomedical terminology artefacts. Tools and strategies to create the foundation of a seamless environment covering clinical jargon, clinical terminologies, and classifications are necessary. Whereas language processing relies on human interface terminologies, which represent clinical jargon, their link to reference terminologies such as SNOMED CT is essential to guarantee semantic interoperability. There is also a need for interoperation between reference and aggregation terminologies. Simple mappings between nodes are not enough, because the three kinds of terminology systems represent different things: reference terminologies focus on context-free descriptions of classes of entities of a domain; aggregation terminologies contain rules that enforce the principle of single hierarchies and disjoint classes; interface terminologies represent the language used in a domain. We propose a model that aims at providing a better flow of standardized information, addressing multiple use cases in health care including clinical research, epidemiology, care management, and reimbursement."
},
{
"pmid": "16697710",
"title": "NCI Thesaurus: a semantic model integrating cancer-related clinical and molecular information.",
"abstract": "Over the last 8 years, the National Cancer Institute (NCI) has launched a major effort to integrate molecular and clinical cancer-related information within a unified biomedical informatics framework, with controlled terminology as its foundational layer. The NCI Thesaurus is the reference terminology underpinning these efforts. It is designed to meet the growing need for accurate, comprehensive, and shared terminology, covering topics including: cancers, findings, drugs, therapies, anatomy, genes, pathways, cellular and subcellular processes, proteins, and experimental organisms. The NCI Thesaurus provides a partial model of how these things relate to each other, responding to actual user needs and implemented in a deductive logic framework that can help maintain the integrity and extend the informational power of what is provided. This paper presents the semantic model for cancer diseases and its uses in integrating clinical and molecular knowledge, more briefly examines the models and uses for drug, biochemical pathway, and mouse terminology, and discusses limits of the current approach and directions for future work."
},
{
"pmid": "23304363",
"title": "Evaluation of automated term groupings for detecting anaphylactic shock signals for drugs.",
"abstract": "Signal detection in pharmacovigilance should take into account all terms related to a medical concept rather than a single term. We built an OWL-DL file with formal definitions of MedDRA and SNOMED-CT concepts and performed two queries, Query 1 and 2, to retrieve narrow and broad terms within the Standard MedDRA Query (SMQ) related to 'anaphylactic shock' and the terms from the High Level Term (HLT) grouping related to 'anaphylaxis'. We compared values of the EB05 (EBGM) statistical test for disproportionality with 50 active ingredients randomly selected in the public version of the FDA pharmacovigilance database. Coefficient of correlation was R(2) = 1.00 between Query 1 and HLT; R(2) = 0.98 between Query 1 and SMQ narrow; R(2) = 0.89 between Query 2 and SMQ Narrow+Broad. Generating automated groupings of terms for signal detection is feasible but requires additional efforts in modeling MedDRA terms in order to improve precision and recall of these groupings."
},
{
"pmid": "27348725",
"title": "MedDRA® automated term groupings using OntoADR: evaluation with upper gastrointestinal bleedings.",
"abstract": "OBJECTIVE\nTo propose a method to build customized sets of MedDRA terms for the description of a medical condition. We illustrate this method with upper gastrointestinal bleedings (UGIB).\n\n\nRESEARCH DESIGN AND METHODS\nWe created a broad list of MedDRA terms related to UGIB and defined a gold standard with the help of experts. MedDRA terms were formally described in a semantic resource named OntoADR. We report the use of two semantic queries that automatically select candidate terms for UGIB. Query 1 is a combination of two SNOMED CT concepts describing both morphology 'Hemorrhage' and finding site 'Upper digestive tract structure'. Query 2 complements Query 1 by taking into account MedDRA terms associated to SNOMED CT concepts describing clinical manifestations 'Melena' or 'Hematemesis'.\n\n\nRESULTS\nWe compared terms in queries and our gold standard achieving a recall of 71.0% and a precision of 81.4% for query 1 (F1 score 0.76); and a recall of 96.7% and a precision of 77.0% for query 2 (F1 score 0.86).\n\n\nCONCLUSIONS\nOur results demonstrate the feasibility of applying knowledge engineering techniques for building customized sets of MedDRA terms. Additional work is necessary to improve precision and recall, and confirm the interest of the proposed strategy."
},
{
"pmid": "27369567",
"title": "OntoADR a semantic resource describing adverse drug reactions to support searching, coding, and information retrieval.",
"abstract": "INTRODUCTION\nEfficient searching and coding in databases that use terminological resources requires that they support efficient data retrieval. The Medical Dictionary for Regulatory Activities (MedDRA) is a reference terminology for several countries and organizations to code adverse drug reactions (ADRs) for pharmacovigilance. Ontologies that are available in the medical domain provide several advantages such as reasoning to improve data retrieval. The field of pharmacovigilance does not yet benefit from a fully operational ontology to formally represent the MedDRA terms. Our objective was to build a semantic resource based on formal description logic to improve MedDRA term retrieval and aid the generation of on-demand custom groupings by appropriately and efficiently selecting terms: OntoADR.\n\n\nMETHODS\nThe method consists of the following steps: (1) mapping between MedDRA terms and SNOMED-CT, (2) generation of semantic definitions using semi-automatic methods, (3) storage of the resource and (4) manual curation by pharmacovigilance experts.\n\n\nRESULTS\nWe built a semantic resource for ADRs enabling a new type of semantics-based term search. OntoADR adds new search capabilities relative to previous approaches, overcoming the usual limitations of computation using lightweight description logic, such as the intractability of unions or negation queries, bringing it closer to user needs. Our automated approach for defining MedDRA terms enabled the association of at least one defining relationship with 67% of preferred terms. The curation work performed on our sample showed an error level of 14% for this automated approach. We tested OntoADR in practice, which allowed us to build custom groupings for several medical topics of interest.\n\n\nDISCUSSION\nThe methods we describe in this article could be adapted and extended to other terminologies which do not benefit from a formal semantic representation, thus enabling better data retrieval performance. Our custom groupings of MedDRA terms were used while performing signal detection, which suggests that the graphical user interface we are currently implementing to process OntoADR could be usefully integrated into specialized pharmacovigilance software that rely on MedDRA."
},
{
"pmid": "30792654",
"title": "Semantic Queries Expedite MedDRA Terms Selection Thanks to a Dedicated User Interface: A Pilot Study on Five Medical Conditions.",
"abstract": "Background: Searching into the MedDRA terminology is usually limited to a hierarchical search, and/or a string search. Our objective was to compare user performances when using a new kind of user interface enabling semantic queries versus classical methods, and evaluating term selection improvement in MedDRA. Methods: We implemented a forms-based web interface: OntoADR Query Tools (OQT). It relies on OntoADR, a formal resource describing MedDRA terms using SNOMED CT concepts and corresponding semantic relations, enabling terminological reasoning. We then compared time spent on five examples of medical conditions using OQT or the MedDRA web-based browser (MWB), and precision and recall of the term selection. Results: OntoADR Query Tools allows the user to search in MedDRA: One may enter search criteria by selecting one semantic property from a dropdown list and one or more SNOMED CT concepts related to the range of the chosen property. The user is assisted in building his query: he can add criteria and combine them. Then, the interface displays the set of MedDRA terms matching the query. Meanwhile, on average, the time spent on OQT (about 4 min 30 s) is significantly lower (-35%; p < 0.001) than time spent on MWB (about 7 min). The results of the System Usability Scale (SUS) gave a score of 62.19 for OQT (rated as good). We also demonstrated increased precision (+27%; p = 0.01) and recall (+34%; p = 0.02). Computed \"performance\" (correct terms found per minute) is more than three times better with OQT than with MWB. Discussion: This pilot study establishes the feasibility of our approach based on our initial assumption: performing MedDRA queries on the five selected medical conditions, using terminological reasoning, expedites term selection, and improves search capabilities for pharmacovigilance end users. Evaluation with a larger number of users and medical conditions are required in order to establish if OQT is appropriate for the needs of different user profiles, and to check if conclusions can be extended to other kinds of medical conditions. The application is currently limited by the non-exhaustive coverage of MedDRA by OntoADR, but nevertheless shows good performance which encourages continuing in the same direction."
},
{
"pmid": "19757412",
"title": "Data mining on electronic health record databases for signal detection in pharmacovigilance: which events to monitor?",
"abstract": "PURPOSE\nData mining on electronic health records (EHRs) has emerged as a promising complementary method for post-marketing drug safety surveillance. The EU-ADR project, funded by the European Commission, is developing techniques that allow mining of EHRs for adverse drug events across different countries in Europe. Since mining on all possible events was considered to unduly increase the number of spurious signals, we wanted to create a ranked list of high-priority events.\n\n\nMETHODS\nScientific literature, medical textbooks, and websites of regulatory agencies were reviewed to create a preliminary list of events that are deemed important in pharmacovigilance. Two teams of pharmacovigilance experts independently rated each event on five criteria: 'trigger for drug withdrawal', 'trigger for black box warning', 'leading to emergency department visit or hospital admission', 'probability of event to be drug-related', and 'likelihood of death'. In case of disagreement, a consensus score was obtained. Ordinal scales between 0 and 3 were used for rating the criteria, and an overall score was computed to rank the events.\n\n\nRESULTS\nAn initial list comprising 23 adverse events was identified. After rating all the events and calculation of overall scores, a ranked list was established. The top-ranking events were: cutaneous bullous eruptions, acute renal failure, anaphylactic shock, acute myocardial infarction, and rhabdomyolysis.\n\n\nCONCLUSIONS\nA ranked list of 23 adverse drug events judged as important in pharmacovigilance was created to permit focused data mining. The list will need to be updated periodically as knowledge on drug safety evolves and new issues in drug safety arise."
},
{
"pmid": "29908358",
"title": "From lexical regularities to axiomatic patterns for the quality assurance of biomedical terminologies and ontologies.",
"abstract": "Ontologies and terminologies have been identified as key resources for the achievement of semantic interoperability in biomedical domains. The development of ontologies is performed as a joint work by domain experts and knowledge engineers. The maintenance and auditing of these resources is also the responsibility of such experts, and this is usually a time-consuming, mostly manual task. Manual auditing is impractical and ineffective for most biomedical ontologies, especially for larger ones. An example is SNOMED CT, a key resource in many countries for codifying medical information. SNOMED CT contains more than 300000 concepts. Consequently its auditing requires the support of automatic methods. Many biomedical ontologies contain natural language content for humans and logical axioms for machines. The 'lexically suggest, logically define' principle means that there should be a relation between what is expressed in natural language and as logical axioms, and that such a relation should be useful for auditing and quality assurance. Besides, the meaning of this principle is that the natural language content for humans could be used to generate the logical axioms for the machines. In this work, we propose a method that combines lexical analysis and clustering techniques to (1) identify regularities in the natural language content of ontologies; (2) cluster, by similarity, labels exhibiting a regularity; (3) extract relevant information from those clusters; and (4) propose logical axioms for each cluster with the support of axiom templates. These logical axioms can then be evaluated with the existing axioms in the ontology to check their correctness and completeness, which are two fundamental objectives in auditing and quality assurance. In this paper, we describe the application of the method to two SNOMED CT modules, a 'congenital' module, obtained using concepts exhibiting the attribute Occurrence - Congenital, and a 'chronic' module, using concepts exhibiting the attribute Clinical course - Chronic. We obtained a precision and a recall of respectively 75% and 28% for the 'congenital' module, and 64% and 40% for the 'chronic' one. We consider these results to be promising, so our method can contribute to the support of content editors by using automatic methods for assuring the quality of biomedical ontologies and terminologies."
},
{
"pmid": "20529942",
"title": "Semi-automated ontology generation within OBO-Edit.",
"abstract": "MOTIVATION\nOntologies and taxonomies have proven highly beneficial for biocuration. The Open Biomedical Ontology (OBO) Foundry alone lists over 90 ontologies mainly built with OBO-Edit. Creating and maintaining such ontologies is a labour-intensive, difficult, manual process. Automating parts of it is of great importance for the further development of ontologies and for biocuration.\n\n\nRESULTS\nWe have developed the Dresden Ontology Generator for Directed Acyclic Graphs (DOG4DAG), a system which supports the creation and extension of OBO ontologies by semi-automatically generating terms, definitions and parent-child relations from text in PubMed, the web and PDF repositories. DOG4DAG is seamlessly integrated into OBO-Edit. It generates terms by identifying statistically significant noun phrases in text. For definitions and parent-child relations it employs pattern-based web searches. We systematically evaluate each generation step using manually validated benchmarks. The term generation leads to high-quality terms also found in manually created ontologies. Up to 78% of definitions are valid and up to 54% of child-ancestor relations can be retrieved. There is no other validated system that achieves comparable results. By combining the prediction of high-quality terms, definitions and parent-child relations with the ontology editor OBO-Edit we contribute a thoroughly validated tool for all OBO ontology engineers.\n\n\nAVAILABILITY\nDOG4DAG is available within OBO-Edit 2.1 at http://www.oboedit.org.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "25785185",
"title": "Ontorat: automatic generation of new ontology terms, annotations, and axioms based on ontology design patterns.",
"abstract": "BACKGROUND\nIt is time-consuming to build an ontology with many terms and axioms. Thus it is desired to automate the process of ontology development. Ontology Design Patterns (ODPs) provide a reusable solution to solve a recurrent modeling problem in the context of ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process targeted at generating new ontology classes following the same pattern, using term templates in a spreadsheet format.\n\n\nRESULTS\nInspired by the ODPs and QTTs, the Ontorat web application is developed to automatically generate new ontology terms, annotations of terms, and logical axioms based on a specific ODP(s). The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and a target ontology (optional). The axiom expression settings can be saved as a predesigned Ontorat setting format text file for reuse. The input data file is generated based on a template file created by a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion. Different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms with both logical axioms and rich annotation axioms in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by ENCODE project. Ontorat was also used to add missing annotations to all existing Biobank specific terms in the Biobank Ontology. A collection of ODPs and templates with examples are provided on the Ontorat website and can be reused to facilitate ontology development.\n\n\nCONCLUSIONS\nWith ever increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating a large number of ontology terms by following design patterns.\n\n\nAVAILABILITY\nhttp://ontorat.hegroup.org/."
}
] |
JMIR mHealth and uHealth | 31482852 | PMC6751099 | 10.2196/14474 | User Experience of 7 Mobile Electroencephalography Devices: Comparative Study | Background: Registration of brain activity has become increasingly popular and offers a way to identify the mental state of the user, prevent inappropriate workload, and control other devices by means of brain-computer interfaces. However, electroencephalography (EEG) is often related to user acceptance issues regarding the measuring technique. Meanwhile, emerging mobile EEG technology offers the possibility of gel-free signal acquisition and wireless signal transmission. Nonetheless, user experience research about the new devices is lacking. Objective: This study aimed to evaluate user experience aspects of emerging mobile EEG devices and, in particular, to investigate wearing comfort and issues related to emotional design. Methods: We considered 7 mobile EEG devices and compared them for their wearing comfort, type of electrodes, visual appearance, and subjects’ preference for daily use. A total of 24 subjects participated in our study and tested every device independently of the others. The devices were selected in a randomized order and worn on consecutive day sessions of 60-min duration. At the end of each session, subjects rated the devices by means of questionnaires. Results: Results indicated a highly significant change in maximal possible wearing duration among the EEG devices (χ²₆=40.2, n=24; P<.001). Regarding the visual perception of devices’ headset design, results indicated a significant change in the subjects’ ratings (χ²₆=78.7, n=24; P<.001). Results of the subjects’ ratings regarding the practicability of the devices indicated highly significant differences among the EEG devices (χ²₆=83.2, n=24; P<.001). Ranking order and posthoc tests offered more insight and indicated that pin electrodes had the lowest wearing comfort, in particular, when coupled with a rigid, heavy headset. Finally, multiple linear regression for each device separately revealed that users were not willing to accept less comfort for a more attractive headset design. Conclusions: The study offers a differentiated look at emerging mobile and gel-free EEG technology and the relation between user experience aspects and device preference. Our research could be seen as a precondition for the development of usable applications with wearables and contributes to consumer health informatics and health-enabling technologies. Furthermore, our results provided guidance for the technological development direction of new EEG devices related to the aspects of emotional design. | In recent years, advances in sensor technology have promoted research on the usability of emerging EEG devices. Most of the published papers concentrated only on device functionality and on comparing the signal quality of traditional gel-based electrodes with that of the new dry electrodes [7,10-12]. Only a small number of studies were concerned with devices’ wearing comfort and design requirements. Nikulin et al [13] reported that, when designing a new kind of electrode, they considered not only signal quality but also the electrodes’ visual appearance and wearing comfort. They put effort into creating extremely light and small electrodes that could be applied with a small amount of conductive gel directly on the head, without any cap or headset. During the study, subjects reported that the electrodes were neither noticeable nor visually detectable by other people. 
Subjects felt less observed and therefore more at ease. Nikulin et al argued that this was particularly important when working outside the laboratory and subjects were asked to behave naturally and freely, in particular during field experiments in real work environments. However, the main limitation was that the electrodes had to be applied with gel. This application procedure was time consuming and required specific knowledge about the electrodes’ precise positions on the head. Hence, it had to be done by an experienced investigator and could not be done by the subjects themselves. A further limitation was that the subjects did not have the opportunity to compare the new electrode device with another.
Similarly, Grozea et al [14] reported on their work on new electrodes with fine, flexible, metal-coated polymer bristles. The bristles should allow for good contact through the hair and, at the same time, be comfortable to wear. The researchers tested the electrodes on subjects (ie, colleagues) who had previous experience with other kinds of electrodes (eg, gel-based and pin electrodes). The subjects concluded that although the bristle electrodes were better than the pin electrodes, the bristles could have been softer and more flexible to increase comfort. Limitations of the study were the small number of participating subjects and the lack of a direct comparison among the different kinds of electrodes; instead, subjects recalled wearing comfort from previous experiences.
Comparison studies among different commercial EEG devices regarding user experience were rare. A study by Ekandem et al [15] dealt with the comparison between Emotiv’s EPOC device and NeuroSky’s MindWave device. Research questions concerned wearing comfort, preparation, and application time. The latter was less than 5 min for both devices and thus clearly shorter than for traditional EEG devices. After 15 min of wearing, subjects were asked to answer questions about the overall comfort of the worn device, the length of time they would be able to wear it, and the type of discomfort [15]. The EPOC device was rated as more comfortable than the MindWave device. A main limitation of the study concerned the wearing time of 15 min, which could be insufficient for determining discomfort issues.
A study by Izdebski et al [16] was divided into 2 similar experiments that tested 7 devices in total. Of these, 4 devices (g.tec’s g.SAHARA, Emotiv’s EPOC, ANT Neuro’s asalab, and Brain Products’ [Brain Products GmbH] actiCAP) were tested by 4 subjects, and the remaining 3 devices (BioSemi’s ActiveTwo, Cognionics’ Dry System, and Cognionics’ Wet System) were tested by 9 subjects. Duration of the sessions varied between 1 and 3 hours, and usability was assessed at the end of each session by a questionnaire. Surveyed usability aspects were comfort, cap fit, mood, and movement restriction. Izdebski et al reported that the gel-based electrode headsets asalab and actiCAP induced general discomfort, although participants reported neither an unpleasant feeling under the cap nor high electrode pressure. Regarding cap fit, the ActiveTwo and systems without adjustment possibilities received negative ratings. The EPOC, g.SAHARA, and asalab devices yielded a more negative mood at the end of the session, whereas the wired systems asalab and actiCAP were rated as more movement restricting. 
A limitation of the study concerns the lack of a consistent within-subject design and the very different session durations.
Hairston et al [17] conducted a usability experiment with a wearing duration of 60 min. They compared 4 EEG devices: 3 wireless EEG systems (Emotiv’s EPOC, Advanced Brain Monitoring’s B-Alert X10, and QUASAR’s HMS) and 1 wired, laboratory-grade device (BioSemi’s ActiveTwo). The main user experience aspects they focused on, besides signal quality issues, were the adaptability of the devices to different head sizes, comfort, and subjects’ device preference. They found that subjects preferred the B-Alert X10 device over the other 2 wireless systems, although it had gel-based electrodes. Subjects reported that the gel-infused pads of the B-Alert X10 device were more comfortable than the others. Finally, Hairston et al stated that future work was needed to systematically study usability factors and improve development efforts for new systems.
To compare the usability of a brain-computer interface for communication, Nijboer et al [18] tested 3 different EEG headsets (g.tec’s g.SAHARA, Emotiv’s EPOC, and BioSemi’s ActiveTwo). Apart from signal quality, Nijboer et al also assessed the speed and ease of headset setup, subjects’ ratings of their own appearance with the headset, comfort, and general device preference. Nijboer et al obtained the highest setup time for the gel-based ActiveTwo device, the best aesthetic ratings for the EPOC device, and the best comfort ratings for the gel-based ActiveTwo and pin-based g.SAHARA devices. Although the EPOC device yielded the worst comfort ratings, it was the device of choice in the preference ranking. Nijboer et al assumed that aesthetics and ease of use could be more important factors than comfort when it comes to preference ranking. They stated that more research was needed to understand which user experience aspects influence subjects’ preference choice.
Table 1 summarizes the above-mentioned studies in a uniform format. To conclude, considering that registration sessions, and thus device wearing, can last a long time, comfort requirements are particularly important. Existing studies regarding the usability of EEG headsets indicated that, to ensure user acceptance, devices should be lightweight, comfortable, not painful to wear, and unobtrusive in design. However, limitations of these studies were a limited number of participants, a lack of comparisons among different devices, or too short a wearing duration of the EEG headsets. Most of the studies focused primarily on wearing comfort and neglected user experience aspects such as emotional design. In our study, we considered these aspects and systematically compared 7 different EEG devices.
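As an illustration of the kind of within-subject comparison reported in the abstract above (chi-square distributed omnibus statistics with 6 degrees of freedom over 7 devices and 24 subjects, followed by posthoc tests), a minimal sketch in Python could look as follows. The ratings array, random seed, and variable names are placeholder assumptions for illustration only; they are not data or code from the study.

```python
# Hypothetical sketch: omnibus Friedman test across 7 devices rated by 24 subjects,
# followed by Bonferroni-corrected Wilcoxon signed-rank posthoc comparisons.
# The ratings below are random placeholders, not the study's measurements.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_devices = 24, 7
# ratings[i, j] = rating of subject i for device j (placeholder values on a 1-7 scale)
ratings = rng.integers(1, 8, size=(n_subjects, n_devices)).astype(float)

# Omnibus within-subject comparison; the statistic is chi-square distributed
# with df = n_devices - 1 (here 6), matching the degrees of freedom in the abstract.
chi2, p = stats.friedmanchisquare(*[ratings[:, j] for j in range(n_devices)])
print(f"Friedman chi2({n_devices - 1}) = {chi2:.1f}, p = {p:.4f}")

# Posthoc: pairwise Wilcoxon signed-rank tests with a Bonferroni correction.
pairs = list(combinations(range(n_devices), 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    w, p_pair = stats.wilcoxon(ratings[:, a], ratings[:, b])
    if p_pair < alpha_corrected:
        print(f"device {a} vs device {b}: W = {w:.1f}, p = {p_pair:.4f} (significant)")
```

Pairwise Wilcoxon signed-rank tests with a Bonferroni correction are one common posthoc choice after a significant Friedman test; the exact posthoc procedure used in the study is not restated here.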
Table 1. Literature review regarding user experience of emerging electroencephalography technology.
Nikulin et al 2010 [13]. Devices tested: proprietary development, traditional EEG cap. Electrode type and number: miniaturized C-electrodes with gel (3) and standard electrodes with gel (3). Set size: 4 subjects. Wearing duration: 40-60 min. User aspects and items: wearing comfort, tactile sensation, shame. Results: no tactile sensations associated with C-electrode wearing, no negative emotional impact in the presence of others, and no discomfort.
Grozea et al [14]. Devices tested: proprietary development. Electrode type and number: dry bristle electrodes; no information about number of electrodes. Set size: 8 colleagues (2 of them excluded). Wearing duration: <1 hour. User aspects and items: comfort issues. Results: most subjects reported them to be more advanced than the previously known electrodes.
Ekandem et al [15]. Devices tested: Emotiv’s EPOC, NeuroSky’s MindWave. Electrode type and number: saline-based (14) and dry (1), respectively. Set size: 13 subjects (2 of them excluded). Wearing duration: 15 min. User aspects and items: comfort and wearing duration. Results: EPOC more comfortable; at least 20 min possible.
Izdebski et al [16]. Devices tested: g.tec’s g.SAHARA, Emotiv’s EPOC, Cognionics’ Dry System, ANT Neuro’s asalab, Brain Products’ actiCAP, BioSemi’s ActiveTwo, and Cognionics’ Wet System. Electrode type and number: dry (32), saline-based (14), dry (64), gel (128), gel (64), gel (128), and gel (64), respectively. Set size: 4 subjects (g.SAHARA, EPOC, asalab, and actiCAP); 9 subjects (ActiveTwo, Cognionics’ Dry System, and Cognionics’ Wet System). Wearing duration: 2-3 hours (4 subjects); 1-2 hours (9 subjects). User aspects and items: comfort, cap fit, mood, and movement restriction. Results: asalab and actiCAP induced general discomfort although participants did not report unpleasant feeling under cap nor high pressure of electrodes; ActiveTwo and systems without adjustment possibilities received negative ratings regarding cap fit; EPOC, g.SAHARA, and asalab yielded a more negative mood at the end of the session; the wired systems asalab and actiCAP were rated as more movement restricting.
Hairston et al [17]. Devices tested: Emotiv’s EPOC, Advanced Brain Monitoring’s B-Alert X10, QUASAR’s HMS, and BioSemi’s ActiveTwo. Electrode type and number: saline-based (14), gel (9), dry (9), and gel (64), respectively. Set size: 16 subjects (3-4 of them excluded). Wearing duration: 60 min. User aspects and items: comfort, preference. Results: most preferred: B-Alert; comfortable to wear.
Nijboer et al [18]. Devices tested: g.tec’s g.SAHARA, Emotiv’s EPOC, BioSemi’s ActiveTwo. Electrode type and number: dry (8), saline-based (14), and gel (32), respectively. Set size: 13 subjects. Wearing duration: ~1 hour. User aspects and items: speed and ease of setup, appearance with headset, comfort, and general preference. Results: highest setup time for ActiveTwo; best aesthetic ratings for EPOC; best comfort ratings for ActiveTwo and g.SAHARA; in general, most preferred: EPOC.
EEG: electroencephalography. | [
"26020164",
"23013047",
"20227914",
"21436526",
"22506831",
"24980915",
"29491841",
"28761162",
"23247157"
] | [
{
"pmid": "26020164",
"title": "Measurement of neural signals from inexpensive, wireless and dry EEG systems.",
"abstract": "Electroencephalography (EEG) is challenged by high cost, immobility of equipment and the use of inconvenient conductive gels. We compared EEG recordings obtained from three systems that are inexpensive, wireless, and/or dry (no gel), against recordings made with a traditional, research-grade EEG system, in order to investigate the ability of these 'non-traditional' systems to produce recordings of comparable quality to a research-grade system. The systems compared were: Emotiv EPOC (inexpensive and wireless), B-Alert (wireless), g.Sahara (dry) and g.HIamp (research-grade). We compared the ability of the systems to demonstrate five well-studied neural phenomena: (1) enhanced alpha activity with eyes closed versus open; (2) visual steady-state response (VSSR); (3) mismatch negativity; (4) P300; and (5) event-related desynchronization/synchronization. All systems measured significant alpha augmentation with eye closure, and were able to measure VSSRs (although these were smaller with g.Sahara). The B-Alert and g.Sahara were able to measure the three time-locked phenomena equivalently to the g.HIamp. The Emotiv EPOC did not have suitably located electrodes for two of the tasks and synchronization considerations meant that data from the time-locked tasks were not assessed. The results show that inexpensive, wireless, or dry systems may be suitable for experimental studies using EEG, depending on the research paradigm, and within the constraints imposed by their limited electrode placement and number."
},
{
"pmid": "23013047",
"title": "How about taking a low-cost, small, and wireless EEG for a walk?",
"abstract": "To build a low-cost, small, and wireless electroencephalogram (EEG) system suitable for field recordings, we merged consumer EEG hardware with an EEG electrode cap. Auditory oddball data were obtained while participants walked outdoors on university campus. Single-trial P300 classification with linear discriminant analysis revealed high classification accuracies for both indoor (77%) and outdoor (69%) recording conditions. We conclude that good quality, single-trial EEG data suitable for mobile brain-computer interfaces can be obtained with affordable hardware."
},
{
"pmid": "20227914",
"title": "Miniaturized electroencephalographic scalp electrode for optimal wearing comfort.",
"abstract": "OBJECTIVE\nCurrent mainstream EEG electrode setups permit efficient recordings, but are often bulky and uncomfortable for subjects. Here we introduce a novel type of EEG electrode, which is designed for an optimal wearing comfort. The electrode is referred to as C-electrode where \"C\" stands for comfort.\n\n\nMETHODS\nThe C-electrode does not require any holder/cap for fixation on the head nor does it use traditional pads/lining of disposable electrodes - thus, it does not disturb subjects. Fixation of the C-electrode on the scalp is based entirely on the adhesive interaction between the very light C-electrode/wire construction (<35 mg) and a droplet of EEG paste/gel. Moreover, because of its miniaturization, both C-electrode (diameter 2-3mm) and a wire (diameter approximately 50 microm) are minimally (or not at all) visible to an external observer. EEG recordings with standard and C-electrodes were performed during rest condition, self-paced movements and median nerve stimulation.\n\n\nRESULTS\nThe quality of EEG recordings for all three types of experimental conditions was similar for standard and C-electrodes, i.e., for near-DC recordings (Bereitschaftspotential), standard rest EEG spectra (1-45 Hz) and very fast oscillations approximately 600 Hz (somatosensory evoked potentials). The tests showed also that once being placed on a subject's head, C-electrodes can be used for 9h without any loss in EEG recording quality. Furthermore, we showed that C-electrodes can be effectively utilized for Brain-Computer Interfacing. C-electrodes proved to posses a high stability of mechanical fixation (stayed attached with 2.5 g accelerations). Subjects also reported not having any tactile sensations associated with wearing of C-electrodes.\n\n\nCONCLUSION\nC-electrodes provide optimal wearing comfort without any loss in the quality of EEG recordings.\n\n\nSIGNIFICANCE\nWe anticipate that C-electrodes can be used in a wide range of clinical, research and emerging neuro-technological environments."
},
{
"pmid": "21436526",
"title": "Bristle-sensors--low-cost flexible passive dry EEG electrodes for neurofeedback and BCI applications.",
"abstract": "In this paper, we present a new, low-cost dry electrode for EEG that is made of flexible metal-coated polymer bristles. We examine various standard EEG paradigms, such as capturing occipital alpha rhythms, testing for event-related potentials in an auditory oddball paradigm and performing a sensory motor rhythm-based event-related (de-) synchronization paradigm to validate the performance of the novel electrodes in terms of signal quality. Our findings suggest that the dry electrodes that we developed result in high-quality EEG recordings and are thus suitable for a wide range of EEG studies and BCI applications. Furthermore, due to the flexibility of the novel electrodes, greater comfort is achieved in some subjects, this being essential for long-term use."
},
{
"pmid": "22506831",
"title": "Evaluating the ergonomics of BCI devices for research and experimentation.",
"abstract": "The use of brain computer interface (BCI) devices in research and applications has exploded in recent years. Applications such as lie detectors that use functional magnetic resonance imaging (fMRI) to video games controlled using electroencephalography (EEG) are currently in use. These developments, coupled with the emergence of inexpensive commercial BCI headsets, such as the Emotiv EPOC ( http://emotiv.com/index.php ) and the Neurosky MindWave, have also highlighted the need of performing basic ergonomics research since such devices have usability issues, such as comfort during prolonged use, and reduced performance for individuals with common physical attributes, such as long or coarse hair. This paper examines the feasibility of using consumer BCIs in scientific research. In particular, we compare user comfort, experiment preparation time, signal reliability and ease of use in light of individual differences among subjects for two commercially available hardware devices, the Emotiv EPOC and the Neurosky MindWave. Based on these results, we suggest some basic considerations for selecting a commercial BCI for research and experimentation. STATEMENT OF RELEVANCE: Despite increased usage, few studies have examined the usability of commercial BCI hardware. This study assesses usability and experimentation factors of two commercial BCI models, for the purpose of creating basic guidelines for increased usability. Finding that more sensors can be less comfortable and accurate than devices with fewer sensors."
},
{
"pmid": "24980915",
"title": "Usability of four commercially-oriented EEG systems.",
"abstract": "Electroencephalography (EEG) holds promise as a neuroimaging technology that can be used to understand how the human brain functions in real-world, operational settings while individuals move freely in perceptually-rich environments. In recent years, several EEG systems have been developed that aim to increase the usability of the neuroimaging technology in real-world settings. Here, the usability of three wireless EEG systems from different companies are compared to a conventional wired EEG system, BioSemi's ActiveTwo, which serves as an established laboratory-grade 'gold standard' baseline. The wireless systems compared include Advanced Brain Monitoring's B-Alert X10, Emotiv Systems' EPOC and the 2009 version of QUASAR's Dry Sensor Interface 10-20. The design of each wireless system is discussed in relation to its impact on the system's usability as a potential real-world neuroimaging system. Evaluations are based on having participants complete a series of cognitive tasks while wearing each of the EEG acquisition systems. This report focuses on the system design, usability factors and participant comfort issues that arise during the experimental sessions. In particular, the EEG systems are assessed on five design elements: adaptability of the system for differing head sizes, subject comfort and preference, variance in scalp locations for the recording electrodes, stability of the electrical connection between the scalp and electrode, and timing integration between the EEG system, the stimulus presentation computer and other external events."
},
{
"pmid": "29491841",
"title": "Signal Quality Evaluation of Emerging EEG Devices.",
"abstract": "Electroencephalogram (EEG) registration as a direct measure of brain activity has unique potentials. It is one of the most reliable and predicative indicators when studying human cognition, evaluating a subject's health condition, or monitoring their mental state. Unfortunately, standard signal acquisition procedures limit the usability of EEG devices and narrow their application outside the lab. Emerging sensor technology allows gel-free EEG registration and wireless signal transmission. Thus, it enables quick and easy application of EEG devices by users themselves. Although a main requirement for the interpretation of an EEG is good signal quality, there is a lack of research on this topic in relation to new devices. In our work, we compared the signal quality of six very different EEG devices. On six consecutive days, 24 subjects wore each device for 60 min and completed tasks and games on the computer. The registered signals were evaluated in the time and frequency domains. In the time domain, we examined the percentage of artifact-contaminated EEG segments and the signal-to-noise ratios. In the frequency domain, we focused on the band power variation in relation to task demands. The results indicated that the signal quality of a mobile, gel-based EEG system could not be surpassed by that of a gel-free system. However, some of the mobile dry-electrode devices offered signals that were almost comparable and were very promising. This study provided a differentiated view of the signal quality of emerging mobile and gel-free EEG recording technology and allowed an assessment of the functionality of the new devices. Hence, it provided a crucial prerequisite for their general application, while simultaneously supporting their further development."
},
{
"pmid": "28761162",
"title": "Hearables: Multimodal physiological in-ear sensing.",
"abstract": "Future health systems require the means to assess and track the neural and physiological function of a user over long periods of time, and in the community. Human body responses are manifested through multiple, interacting modalities - the mechanical, electrical and chemical; yet, current physiological monitors (e.g. actigraphy, heart rate) largely lack in cross-modal ability, are inconvenient and/or stigmatizing. We address these challenges through an inconspicuous earpiece, which benefits from the relatively stable position of the ear canal with respect to vital organs. Equipped with miniature multimodal sensors, it robustly measures the brain, cardiac and respiratory functions. Comprehensive experiments validate each modality within the proposed earpiece, while its potential in wearable health monitoring is illustrated through case studies spanning these three functions. We further demonstrate how combining data from multiple sensors within such an integrated wearable device improves both the accuracy of measurements and the ability to deal with artifacts in real-world scenarios."
},
{
"pmid": "23247157",
"title": "The in-the-ear recording concept: user-centered and wearable brain monitoring.",
"abstract": "The integration of brain monitoring based on electroencephalography (EEG) into everyday life has been hindered by the limited portability and long setup time of current wearable systems as well as by the invasiveness of implanted systems (e.g. intracranial EEG). We explore the potential to record EEG in the ear canal, leading to a discreet, unobtrusive, and user-centered approach to brain monitoring. The in-the-ear EEG (Ear-EEG) recording concept is tested using several standard EEG paradigms, benchmarked against standard onscalp EEG, and its feasibility proven. Such a system promises a number of advantages, including fixed electrode positions, user comfort, robustness to electromagnetic interference, feedback to the user, and ease of use. The Ear-EEG platform could also support additional biosensors, extending its reach beyond EEG to provide a powerful health-monitoring system for those applications that require long recording periods in a natural environment."
}
] |
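The reference_info cell above is a JSON array in which each entry holds a pmid, title, and abstract for one cited paper. The short Python sketch below shows one way such a cell can be parsed once it has been extracted as a string; the key names are taken from the entries shown here, while the stand-in string and variable names are illustrative assumptions rather than part of the dataset tooling.

```python
import json

# Assumption: the reference_info cell has already been extracted as a JSON
# string (a shortened stand-in is used here); only the pmid/title/abstract
# keys are taken from the entries shown above.
reference_info_json = """
[
  {
    "pmid": "23247157",
    "title": "The in-the-ear recording concept: user-centered and wearable brain monitoring.",
    "abstract": "The integration of brain monitoring based on electroencephalography (EEG) ..."
  }
]
"""

references = json.loads(reference_info_json)
for ref in references:
    # Each entry carries the cited paper's PubMed ID, title, and abstract text.
    print(f'{ref["pmid"]}: {ref["title"]}')
```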