id (string, 7 chars) | title (string, 3-578 chars) | abstract (string, 0-16.7k chars) | keyphrases (sequence) | prmu (sequence)
---|---|---|---|---
-:q4V6u | Evaluation of Folksonomy Induction Algorithms | Algorithms for constructing hierarchical structures from user-generated metadata have caught the interest of the academic community in recent years. In social tagging systems, the output of these algorithms is usually referred to as folksonomies (from folk-generated taxonomies). Evaluation of folksonomies and folksonomy induction algorithms is a challenging issue, complicated by the lack of gold standards, the lack of comprehensive methods and tools, and the lack of research and empirical/simulation studies applying these methods. In this article, we report results from a broad comparative study of state-of-the-art folksonomy induction algorithms that we have applied and evaluated in the context of five social tagging systems. In addition to adopting semantic evaluation techniques, we present and adopt a new technique that can be used to evaluate the usefulness of folksonomies for navigation. Our work sheds new light on the properties and characteristics of state-of-the-art folksonomy induction algorithms and introduces a new pragmatic approach to folksonomy evaluation, while at the same time identifying some important limitations and challenges of folksonomy evaluation. Our results show that folksonomy induction algorithms specifically developed to capture the intuitions of social tagging systems outperform traditional hierarchical clustering techniques. To the best of our knowledge, this work represents the largest and most comprehensive evaluation study of state-of-the-art folksonomy induction algorithms to date. | [
"evaluation",
"folksonomies",
"algorithms",
"social tagging systems",
"taxonomies",
"experimentation"
] | [
"P",
"P",
"P",
"P",
"P",
"U"
] |
2qDGJwJ | TELEPORTATION OF N-QUDIT STATE | In this paper, we study the teleportation of an arbitrary N-qudit state with the tensor representation. The necessary and sufficient condition for realizing a successful or perfect teleportation is obtained; as will be shown, it is determined by the measurement matrix T-delta and the quantum channel parameter matrix X. The general expressions of the measurement matrix T-delta are written out and the quantum channel parameter matrix X is discussed. As an example, we show the details of three-ququart state teleportation. | [
"measurement matrix",
"qudit state",
"ququart state",
"channel parameter matrix (cpm)",
"transformation matrix"
] | [
"P",
"M",
"M",
"M",
"M"
] |
4YLrZLU | Bio-Interactive Healthcare Service System Using Lifelog Based Context Computing | Intelligent bio-sensor information processing was developed using lifelog based context aware technology to provide a flexible and dynamic range of diagnostic capabilities to satisfy healthcare requirements in ubiquitous and mobile computing environments. To accomplish this, various noise signals were grouped into six categories by context estimation, and noise reduction filters were effectively reconfigured by a neural network and a genetic algorithm. The neural network-based control module effectively selected an optimal filter block by noise context-based clustering in running mode, and filtering performance was improved by the genetic algorithm in evolution mode. Due to its adaptive criteria, the genetic algorithm was used to explore the action configuration for each identified bio-context to implement our concept. Our proposed bio-interactive healthcare service system adopts the concepts of biological context-awareness with evolutionary computations in working environments modeled and identified as bio-sensor-based environmental contexts. We used an unsupervised learning algorithm for lifelog based context modeling and a supervised learning algorithm for context identification. | [
"context awareness",
"biometric interaction",
"interactive healthcare"
] | [
"P",
"U",
"M"
] |
-24p8PQ | SMB: Collision detection based on temporal coherence | The paper presents a novel collision detection algorithm, termed the sort moving boxes (SMB), for a large number of moving 2D/3D objects which are represented by their axis-aligned bounding boxes (AABBs). The main feature of the algorithm is the full exploitation of the temporal coherence of the objects exhibited in a dynamic environment. In the algorithm, the AABBs are first projected to each Cartesian axis. The projected intervals on the axes are separately sorted by the diminishing increment sort (DIS) and further divided into subsections. By processing all the intervals within the subsections to check if they overlap, a complete contact list can be built. The SMB is a fast and robust collision detection algorithm, particularly for systems involving a large number of moving AABBs, and also supports the dynamic insertion and deletion of objects. Its performance in terms of both expected total detection time and memory requirements is proportional to the total number of AABBs, N, and is not influenced by size differences of AABBs, the space size or the packing density over a large range of up to a tenfold difference. The only assumption made is that the sorted list at one time step will remain an almost sorted list at the next time step, which is valid for most applications in which the movement and deformation of each AABB and the dynamic change of the total number N are approximately continuous. | [
"collision detection",
"temporal coherence",
"sort",
"moving",
"axis-aligned bounding boxes (aabbs)",
"contact search"
] | [
"P",
"P",
"P",
"P",
"P",
"M"
] |
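The axis-sweep core of SMB can be sketched in a few lines of Python. This is an illustration, not the authors' implementation: a plain insertion sort stands in for the diminishing increment sort (both approach linear time on the almost-sorted input that temporal coherence provides), and the paper's subsection partitioning is omitted.

```python
# Illustrative sketch of the SMB sweep idea (not the authors' code).
# Each AABB is a tuple of (lo, hi) intervals, one per axis.

def insertion_sort(items, key):
    # Near O(N) on almost-sorted lists, the case temporal coherence yields.
    for i in range(1, len(items)):
        j, cur = i, items[i]
        while j > 0 and key(items[j - 1]) > key(cur):
            items[j] = items[j - 1]
            j -= 1
        items[j] = cur
    return items

def overlapping_pairs_1d(intervals):
    # intervals: (lo, hi, box_id); sweep in order of lower bounds.
    order = insertion_sort(list(intervals), key=lambda t: t[0])
    active, pairs = [], set()
    for lo, hi, bid in order:
        active = [(l, h, b) for (l, h, b) in active if h >= lo]
        pairs.update((min(b, bid), max(b, bid)) for (_, _, b) in active)
        active.append((lo, hi, bid))
    return pairs

def contact_list(boxes):
    # A pair is a contact candidate only if it overlaps on every axis.
    dims = len(next(iter(boxes.values())))
    per_axis = [
        overlapping_pairs_1d([(b[d][0], b[d][1], i) for i, b in boxes.items()])
        for d in range(dims)
    ]
    return set.intersection(*per_axis)

boxes = {0: ((0, 2), (0, 2)), 1: ((1, 3), (1, 3)), 2: ((5, 6), (5, 6))}
print(contact_list(boxes))  # {(0, 1)}
```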
-om9P6- | An international analysis of the extensions to the IEEE LOMv1.0 metadata standard | We analyzed 44 works using the IEEE LOMv1.0 standard and found 15 types of extensions made to it. Due to interoperability difficulties in Mexico, we compared its extensions with those of the rest of the world. We found that local extensions do not help to increase systems' interoperability. We found that the most important action after implementing extensions is to publish them. | [
"extensions",
"metadata",
"ieee lomv1.0 standard",
"interoperability",
"learning objects",
"metadata application profiles"
] | [
"P",
"P",
"P",
"P",
"U",
"M"
] |
-PKBF7f | A unifying approach to goal-directed evaluation | Goal-directed evaluation, as embodied in Icon and Snobol, is built on the notions of backtracking and of generating successive results, and therefore it has always been something of a challenge to specify and implement. In this article, we address this challenge using computational monads and partial evaluation. We consider a subset of Icon and we specify it with a monadic semantics and a list monad. We then consider a spectrum of monads that also fit the bill, and we relate them to each other. For example, we derive a continuation monad as a Church encoding of the list monad. The resulting semantics coincides with Gudeman's continuation semantics of Icon. We then compile Icon programs by specializing their interpreter (i.e., by using the first Futamura projection), using type-directed partial evaluation. Through various back ends, including a run-time code generator, we generate ML code, C code, and OCaml byte code. Binding-time analysis and partial evaluation of the continuation-based interpreter automatically give rise to C programs that coincide with the result of Proebsting's optimized compiler. | [
"computational monads",
"continuations",
"(type-directed) partial evaluation",
"(run-time) code generation",
"code templates"
] | [
"P",
"P",
"R",
"R",
"M"
] |
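Icon-style goal-directed evaluation maps naturally onto lazy generators, which behave like the list monad the article starts from. A minimal Python sketch (illustrative only; `to` and `add` are hypothetical stand-ins for Icon's `i to j` and `+`):

```python
# Goal-directed evaluation as generators: every result of a subexpression
# is tried in turn, and exhaustion drives backtracking, as in Icon.

def to(i, j):
    # Icon's `i to j`: a generator of successive results.
    yield from range(i, j + 1)

def add(xs, ys):
    # Monadic bind over both operands: all combinations, produced lazily.
    # Operands are thunks so the right generator can be restarted per x.
    for x in xs():
        for y in ys():
            yield x + y

# Icon: every write(1 to 2 + 10 to 11)
for r in add(lambda: to(1, 2), lambda: to(10, 11)):
    print(r)  # 11, 12, 12, 13
```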
25BEGgN | An architectural history of metaphors | This paper presents a review and an historical perspective on the architectural metaphor. It identifies common characteristics and peculiarities as they apply to given historical periods and analyses the similarities and divergences. The review provides a vocabulary, which will facilitate an appreciation of existing and new metaphors. | [
"architecture",
"metaphor",
"art",
"traditional or classical art",
"ancient prehistoric",
"modern and contemporary architecture"
] | [
"P",
"P",
"U",
"U",
"U",
"M"
] |
2Pk3Lmi | time-based query performance predictors | Query performance prediction is aimed at predicting the retrieval effectiveness that a query will achieve with respect to a particular ranking model. In this paper, we study query performance prediction for a ranking model that explicitly incorporates the time dimension into ranking. Different time-based predictors are proposed as analogous to existing keyword-based predictors. In order to improve predicting performance, we combine different predictors using linear regression and neural networks. Extensive experiments are conducted using queries and relevance judgments obtained by crowdsourcing. | [
"query performance prediction",
"time-aware ranking"
] | [
"P",
"M"
] |
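The combination step is plain supervised regression over per-query predictor scores. A minimal sketch with synthetic data (the paper's actual predictors and effectiveness labels from crowdsourced judgments differ):

```python
# Combining several per-query predictor scores with linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_queries = 200
# Columns: e.g. one keyword-based predictor and two time-based analogues.
scores = rng.normal(size=(n_queries, 3))
# Ground-truth retrieval effectiveness (synthetic stand-in).
effectiveness = (scores @ np.array([0.5, 0.3, 0.2])
                 + rng.normal(scale=0.1, size=n_queries))

model = LinearRegression().fit(scores, effectiveness)
print(model.coef_, model.score(scores, effectiveness))
```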
-T3xn6G | Asymptotically sufficient partitions and quantizations | We consider quantizations of observations represented by finite partitions of observation spaces. Partitions usually decrease the sensitivity of observations to their probability distributions. A sequence of quantizations is considered to be asymptotically sufficient for a statistical problem if the loss of sensitivity is asymptotically negligible. The sensitivity is measured by f-divergences of distributions or the closely related f-informations including the classical Shannon information. It is demonstrated that in some cases the maximization of f-divergences means the same as minimization of distortion of observations in the classical sense considered in mathematical statistics and information theory. The main result of the correspondence is a general sufficient condition for the asymptotic sufficiency of quantizations. Selected applications of this condition are studied leading to new simple criteria of asymptotic optimality for quantizations of vector-valued observations and observations on general Poisson processes. | [
"asymptotically sufficient partitions",
"f-divergences",
"f-informations",
"general poisson processes",
"abstract observation spaces",
"asymptotically sufficient quantizations",
"euclidean observation spaces",
"optimal quantizations",
"sufficient statistics"
] | [
"P",
"P",
"P",
"P",
"M",
"R",
"M",
"R",
"R"
] |
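For reference, the f-divergences and f-informations used here as sensitivity measures have the standard textbook definitions (f convex with f(1) = 0; Shannon information is the special case f(t) = t log t):

```latex
D_f(P \,\|\, Q) = \int f\!\left(\frac{\mathrm{d}P}{\mathrm{d}Q}\right)\mathrm{d}Q ,
\qquad
I_f(X;Y) = D_f\!\left(P_{XY} \,\|\, P_X \otimes P_Y\right) .
```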
-JXVYZr | The topology aware file distribution problem | We present theoretical results for large-file distribution on general networks of known topology (known link bandwidths and router locations). We show that the problem of distributing a file in minimum time is NP-hard in this model, and we give an O(log n) approximation algorithm, where n is the number of workstations that require the file. We also characterize our method as optimal amongst the class of "no-link-sharing" algorithms. | [
"file distribution",
"network",
"approximation"
] | [
"P",
"P",
"P"
] |
1BdjySJ | Achieving quality assurance functionality in the food industry using a hybrid case-based reasoning and fuzzy logic approach | Quality control of food inventories in the warehouse is complex as well as challenging due to the fact that food can easily deteriorate. Currently, this difficult storage problem is managed mostly by using a human dependent quality assurance and decision making process. This has, however, occasionally led to unimaginative, arduous and inconsistent decisions due to the injection of subjective human intervention into the process. Therefore, it could be said that current practice is not powerful enough to support high-quality inventory management. In this paper, the development of an integrative prototype decision support system, namely the Intelligent Food Quality Assurance System (IFQAS), is described which assists the process by automating the human based decision making process in the quality control of food storage. The system, which is composed of a Case-based Reasoning (CBR) engine and a Fuzzy rule-based Reasoning (FBR) engine, starts with the receipt of incoming food inventory. With the CBR engine, certain quality assurance operations can be suggested based on the attributes of the food received. Furthermore, the FBR engine can make suggestions on the optimal storage conditions of inventory by systematically evaluating the food conditions when the food is received. With the assistance of the system, holistic monitoring of quality control in the receiving operations and the storage conditions of the food in the warehouse can be performed. It provides consistent and systematic Quality Assurance Guidelines for quality control, which leads to improved customer satisfaction and a minimized defect rate. | [
"case-based reasoning",
"fuzzy logic",
"decision support system",
"food quality",
"storage conditions",
"operation guidelines"
] | [
"P",
"P",
"P",
"P",
"P",
"R"
] |
1THMJTd | Coverage and connectivity in three-dimensional underwater sensor networks | Unlike a terrestrial network, an underwater sensor network can have significant height, which makes it a three-dimensional network. There are many important sensor network design problems where the physical dimensionality of the network plays a significant role. One such problem is determining how to deploy a minimum number of sensor nodes so that all points inside the network are within the sensing range of at least one sensor and all sensor nodes can communicate with each other, possibly over a multi-hop path. The solution to this problem depends on the ratio of the communication range and the sensing range of each sensor. Under a sphere-based communication and sensing model, placing a node at the center of each virtual cell created by truncated-octahedron-based tessellation solves this problem when this ratio is greater than 1.7889. However, for smaller values of this ratio, the solution depends on how much communication redundancy the network needs. We provide solutions for both limited and full communication redundancy requirements. Copyright (C) 2008 John Wiley & Sons, Ltd. | [
"coverage",
"connectivity",
"three-dimensional",
"polyhedron",
"node placement",
"sphere-based sensing and communication"
] | [
"P",
"P",
"P",
"U",
"M",
"R"
] |
14AuWZY | error correction of voicemail transcripts in scanmail | Despite its widespread use, voicemail presents numerous usability challenges: People must listen to messages in their entirety, they cannot search by keywords, and audio files do not naturally support visual skimming. SCANMail overcomes these flaws by automatically generating text transcripts of voicemail messages and presenting them in an email-like interface. Transcripts facilitate quick browsing and permanent archiving. However, errors from the automatic speech recognition (ASR) hinder the usefulness of the transcripts. The work presented here specifically addresses these problems by evaluating user-initiated error correction of transcripts. User studies of two editor interfaces (a grammar-assisted menu and simple replacement by typing) reveal reduced audio playback times and an emphasis on editing important words with the menu, suggesting its value in mobile environments where limited input capabilities are the norm and user privacy is essential. The study also adds to the scarce body of work on ASR confidence shading, suggesting that shading may be more helpful than previously reported. | [
"error correction",
"voicemail",
"speech recognition",
"confidence shading",
"editor interfaces"
] | [
"P",
"P",
"P",
"P",
"R"
] |
1Qz3bXp | Study of stress waves in geomedia and effect of a soil cover layer on wave attenuation using a 1-D finite-difference method | The propagation and attenuation of blast-induced stress waves differs between geomedia such as rock or soil mass. This paper numerically studies the propagation and attenuation of blast-induced elastoplastic waves in deep geomedia by using a one-dimensional (1-D) finite-difference code. Firstly, the elastoplastic Cap models for rock and soil masses are introduced into the governing equations of spherical wave motion and a FORTRAN code based on the finite difference method is developed. Secondly, an underground spherical blast is simulated with this code and verified with the software RENEWTO. The propagation of stress-waves in rock and soil masses is numerically investigated, respectively. Finally, the effect of a soil cover layer on the attenuation of stress waves in the rear rock mass is studied. It is determined that large plastic deformation of geomedia can effectively dissipate the energy of stress-waves inward, and the developed 1-D finite difference code coupled with elastoplastic Cap models is convenient and effective in numerical simulations of underground spherical explosions. (c) 2005 Elsevier Ltd. All rights reserved. | [
"geomedia",
"soil cover layer",
"attenuation",
"elastoplastic cap model",
"finite difference method",
"stress-waves"
] | [
"P",
"P",
"P",
"P",
"P",
"P"
] |
2bWzZ6c | Keyed hash function based on a dynamic lookup table of functions | In this paper, we present a novel keyed hash function based on a dynamic lookup table of functions. More specifically, we first exploit the piecewise linear chaotic map (PWLCM) with secret keys used for producing four 32-bit initial buffers and then elaborate the lookup table of functions used for selecting composite functions associated with messages. Next, we convert the divided message blocks into ASCII code values, check the equivalent indices and then find the associated composite functions in the lookup table of functions. For each message block, the four buffers are reassigned by the corresponding composite function and then the lookup table of functions is dynamically updated. After all the message blocks are processed, the final 128-bit hash value is obtained by cascading the last reassigned four buffers. Finally, we evaluate our hash function and the results demonstrate that the proposed hash algorithm has good statistical properties, strong collision resistance, high efficiency, and better statistical performance compared with existing chaotic hash functions. | [
"keyed hash function",
"lookup table of functions",
"piecewise linear chaotic map",
"composite function",
"chaos",
"transfer function"
] | [
"P",
"P",
"P",
"P",
"U",
"M"
] |
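The PWLCM named above has a standard closed form. The sketch below shows the map and one hypothetical way to derive four 32-bit buffers from a key by iterating it; the key-to-seed mapping, parameter p and burn-in count are assumptions, not the paper's exact schedule.

```python
# Piecewise linear chaotic map (PWLCM), control parameter 0 < p < 0.5.

def pwlcm(x, p):
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)  # the map is symmetric on the upper half

def initial_buffers(key, p=0.23, burn_in=100):
    # Hypothetical key-to-seed mapping, for illustration only.
    x = (sum(key) % 997) / 997.0 + 1e-3
    buffers = []
    for _ in range(4):
        for _ in range(burn_in):
            x = pwlcm(x, p)
        buffers.append(int(x * 0xFFFFFFFF) & 0xFFFFFFFF)
    return buffers

print([hex(b) for b in initial_buffers(b"secret key")])
```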
3FdJxLR | Exploring hierarchical multidimensional data with unified views of distribution and correlation | Data analysts explore data by inspecting features such as clustering, distribution and correlation. Much existing research has focused on different visualisations for different data exploration tasks. For example, a data analyst might inspect clustering and correlation with scatterplots, but use histograms to inspect a distribution. Such visualisations allow an analyst to confirm prior expectations. For example, a scatterplot may confirm an expected correlation or may show deviations from the expected correlation. In order to better facilitate discovery of unexpected features in data, however, a combination of different perspectives may be needed. In this paper, we combine distributional and correlational views of hierarchical multidimensional data. Our unified view supports the simultaneous exploration of data distribution and correlation. By presenting a unified view, we aim to increase the chances of discovery of unexpected data features, and to provide the means to explore such features in detail. Further, our unified view is equipped with a small number of primitive interaction operators which a user composes to facilitate smooth and flexible exploration. (C) 2013 Elsevier Ltd. All rights reserved. | [
"multidimensional data",
"correlation",
"data distribution",
"data analysis"
] | [
"P",
"P",
"P",
"M"
] |
-VuHg1& | Application driven network-on-chip architecture exploration & refinement for a complex SoC | This article presents an overview of the design process of an interconnection network, using the technology proposed by Arteris. Section 2 summarizes the various features a NoC is required to implement to be integrated in modern SoCs. Section 3 describes the proposed top-down approach, based on the progressive refinement of the NoC description, from its functional specification (Sect. 4) to its verification (Sect. 8). The approach is illustrated by a typical use-case of a NoC embedded in a hand-held gaming device. The methodology relies on the definition of the performance behavior and expectations (Sect. 5), which can be simulated early and efficiently against various NoC architectures. The system architect is then able to identify bottlenecks and converge towards the NoC implementation fulfilling the requirements of the target application (Sect. 6). | [
"architecture exploration",
"multimedia system-on-chip (soc)",
"network-on-chip (noc)",
"memory-mapped transaction interconnect",
"dynamic memory scheduling",
"quality-of-service (qos)",
"performance verification",
"systemc transaction level modeling (tlm)"
] | [
"P",
"M",
"R",
"M",
"U",
"M",
"R",
"M"
] |
3DKLyFJ | Real-valued MVDR beamforming using spherical arrays with frequency invariant characteristic | Complex-valued minimum variance distortionless response (MVDR) beamforming for wideband signals has a very high computational cost. In this paper, we design a novel real-valued MVDR beamformer for spherical arrays. The dependence of the array steering matrix on source signal directions and frequencies is decoupled using spherical harmonic decomposition. Then a compensation network is designed to remove the frequency dependence of the array response and to obtain a new array response determined only by the spherical harmonics of the source directions. All frequency bins of wideband signals can be used together instead of being processed independently. By exploiting the property of the conjugate spherical harmonics, a unitary transform can be found to acquire a real-valued frequency invariant steering matrix (FISM). Based on the FISM, real-valued MVDR (RV-MVDR) is developed to obtain good performance at low computational cost. Simulation results demonstrate the performance of our proposed method for beamforming and direction-of-arrival (DOA) estimation by comparison with the complex-valued and real-weighted MVDR methods. | [
"spherical arrays",
"spherical harmonic decomposition",
"unitary transform",
"real-valued minimum variance distortionless response (mvdr)",
"frequency invariant beamforming"
] | [
"P",
"P",
"P",
"R",
"R"
] |
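For orientation, the classical complex-valued narrowband MVDR weights that RV-MVDR aims to compute more cheaply are w = R^{-1}a / (a^H R^{-1} a). A minimal numpy sketch for a generic array (not the paper's spherical-harmonic formulation):

```python
import numpy as np

def mvdr_weights(R, a):
    Ri_a = np.linalg.solve(R, a)       # R^{-1} a without an explicit inverse
    return Ri_a / (a.conj() @ Ri_a)    # normalized so that w^H a = 1

rng = np.random.default_rng(1)
M = 8                                  # number of sensors
a = np.exp(1j * np.pi * np.arange(M) * np.sin(0.3))   # steering vector
X = rng.normal(size=(M, 500)) + 1j * rng.normal(size=(M, 500))
R = X @ X.conj().T / 500               # sample covariance of the snapshots
w = mvdr_weights(R, a)
print(abs(w.conj() @ a))               # ~1.0: distortionless response
```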
-TwiNUF | PRIVATE DATABASE QUERIES USING QUANTUM STATES WITH LIMITED COHERENCE TIMES | We describe a method for private database queries using exchange of quantum states with bits encoded in mutually incompatible bases. For technology with limited coherence time, the database vendor can announce the encoding after a suitable delay to allow the user to privately learn one of two items in the database without the ability to also definitely infer the second item. This quantum approach also allows the user to choose to learn other functions of the items, such as the exclusive-or of their bits, but not to gain more information than equivalent to learning one item, on average. This method is especially useful for items consisting of a few bits by avoiding the substantial overhead of conventional cryptographic approaches. | [
"quantum computing",
"private data access",
"digital property rights"
] | [
"M",
"M",
"U"
] |
1WAsNHy | Scheduling for information gathering on sensor network | We investigate a unique wireless sensor network scheduling problem in which all nodes in a cluster send exactly one packet to a designated sink node in an effort to minimize transmission time. However, node transmissions must be sufficiently isolated either in time or in space to avoid collisions. The problem is formulated and solved via graph representation. We prove that an optimal transmission schedule can be obtained efficiently through a pipeline-like schedule when the underlying topology is either a line or a tree. The minimum time required for a line or tree topology with n nodes is 3(n-2). We further prove that our scheduling problem is NP-hard for general graphs. We propose a heuristic algorithm for general graphs. Our heuristic tries to schedule as many independent segments as possible to increase the degree of parallel transmissions. This algorithm is compared to an RTS/CTS based distributed algorithm. Preliminary simulation results indicate that our heuristic algorithm outperforms the RTS/CTS based distributed algorithm (up to 30%) and exhibits stable behavior. | [
"scheduling",
"sensor network",
"hybrid network",
"all-to-one information gathering"
] | [
"P",
"P",
"M",
"M"
] |
34p2NYX | synchronization analysis and control in chaos system based on complex network | For a certain kind of complex network, the Lorenz chaos system is used to describe the state equation of the nodes in the network. By constructing a Lyapunov function, it is proved that this network model can achieve synchronization under the adaptive control scheme. The control strategy is simple, effective and easy to apply in future engineering designs. The simulation results show the effectiveness of the control scheme. | [
"synchronization",
"chaos system",
"complex network",
"adaptive control"
] | [
"P",
"P",
"P",
"P"
] |
3wE4GVH | Improved property in organic light-emitting diode utilizing two Al/Alq3 layers | We report on the fabrication of organic light-emitting devices (OLEDs) utilizing two Al/Alq3 layers and two electrodes. This novel green device has the structure Al(110nm)/tris(8-hydroxyquinoline) aluminum (Alq3)(65nm)/Al(110nm)/Alq3(50nm)/N,N'-diphenyl-N,N'-bis(3-methylphenyl)-1,1'-biphenyl-4,4'-diamine (TPD)(60nm)/ITO(60nm)/Glass. TPD was used as the hole transporting layer (HTL), and Alq3 was used as the electron transporting layer (ETL); at the same time, Alq3 was also used as the emitting layer (EL). Al and ITO were used as cathode and anode, respectively. The results showed that the device containing the two Al/Alq3 layers and two electrodes had a higher brightness and electroluminescent efficiency than the devices without this layer. At a current density of 14mA/cm2, the brightness of the device with the two Al/Alq3 layers reached 3693cd/m2, which is higher than the 2537cd/m2 of the Al/Alq3/TPD:Alq3/ITO/Glass device and the 1504.0cd/m2 of the Al/Alq3/TPD/ITO/Glass device. The turn-on voltage of the device with two Al/Alq3 layers was 7V, which is lower than that of the others. | [
"oleds",
"transporting layer",
"emitting layer"
] | [
"P",
"P",
"P"
] |
1PgiXJR | Concept development for kindergarten children through a health simulation | According to many dental professionals, the decay process resulting from the accumulation of sugar on teeth is a very difficult concept for young children to learn. Playing the dental hygiene game with ThinkingTags not only brings context into the classroom, but also allows children to work with digital manipulatives that provide rich personal experiences and instant feedback. Instead of watching a demonstration of the accumulation of sugars on a computer screen, or being told about dental health, this simulation allows pre-school children to experience improving or decaying dental health without any real adverse health effects. Small, wearable, microprocessor-driven Tags were brought into the kindergarten classroom to simulate the decay process, providing information about sugars in foods and creating a discussion about teeth. Preliminary analyses suggest that this program was effective and enthusiastically received by this age group. | [
"simulation",
"pre-school",
"collaboration",
"dialogue",
"discourse analysis",
"wireless"
] | [
"P",
"P",
"U",
"U",
"U",
"U"
] |
UsWkxs9 | High output impedance current-mode four-function filter with reduced number of active and passive elements using the dual output current conveyor | This paper reports a new single-input multi-output current-mode multifunction filter which can simultaneously realise LP, HP, BP and BR filter functions, all at high impedance outputs. The circuit permits orthogonal adjustment of the quality factor Q and ω0, employs only five grounded passive components and no element matching conditions are imposed. A second order all-pass function can easily be obtained. The passive sensitivities are shown to be low. | [
"current conveyors",
"multifunction filters",
"current-mode circuits"
] | [
"P",
"P",
"R"
] |
5QCi7zc | constraint programming for itemset mining | The relationship between constraint-based mining and constraint programming is explored by showing how the typical constraints used in pattern mining can be formulated for use in constraint programming environments. The resulting framework is surprisingly flexible and allows us to combine a wide range of mining constraints in different ways. We implement this approach in off-the-shelf constraint programming systems and evaluate it empirically. The results show that the approach is not only very expressive, but also works well on complex benchmark problems. | [
"constraint programming",
"itemset mining"
] | [
"P",
"P"
] |
2CmTk8B | experiences mining open source release histories | Software releases form a critical part of the life cycle of a software project. Typically, each project produces releases in its own way, using various methods of versioning, archiving, announcing and publishing the release. Understanding the release history of a software project can shed light on the project history, as well as the release process used by that project, and how those processes change. However, many factors make automating the retrieval of release history information difficult, such as the many sources of data, a lack of relevant standards and a disparity of tools used to create releases. In spite of the large amount of raw data available, no attempt has been made to create a release history database of a large number of projects in the open source ecosystem. This paper presents our experiences, including the tools, techniques and pitfalls, in our early work to create a software release history database which will be of use to future researchers who want to study and model the release engineering process in greater depth. | [
"release engineering",
"data mining"
] | [
"P",
"R"
] |
3&Z-CZE | Balancing throughput and response time in online scientific Clouds via Ant Colony Optimization (SP2013/2013/00006) | The Cloud Computing paradigm focuses on the provisioning of reliable and scalable infrastructures (Clouds) delivering execution and storage services. The paradigm, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. The goal of this work is to study private Clouds to execute scientific experiments coming from multiple users, i.e., our work focuses on the Infrastructure as a Service (IaaS) model where custom Virtual Machines (VM) are launched in appropriate hosts available in a Cloud. Correctly scheduling Cloud hosts is therefore very important, and it is necessary to develop efficient scheduling strategies to appropriately allocate VMs to physical resources. The job scheduling problem is however NP-complete, and therefore many heuristics have been developed. In this work, we describe and evaluate a Cloud scheduler based on Ant Colony Optimization (ACO). The main performance metrics to study are the number of serviced users by the Cloud and the total number of created VMs in online (non-batch) scheduling scenarios. Besides, the number of intra-Cloud network messages sent is evaluated. Simulated experiments performed using CloudSim and job data from real scientific problems show that our scheduler succeeds in balancing the studied metrics compared to schedulers based on Random assignment and Genetic Algorithms. | [
"ant colony optimization",
"cloud computing",
"job scheduling",
"scientific problems",
"genetic algorithms",
"swarm intelligence"
] | [
"P",
"P",
"P",
"P",
"P",
"U"
] |
LjmySYF | Exploring the CSCW spectrum using process mining | Process mining techniques allow for extracting information from event logs. For example, the audit trails of a workflow management system or the transaction logs of an enterprise resource planning system can be used to discover models describing processes, organizations, and products. Traditionally, process mining has been applied to structured processes. In this paper, we argue that process mining can also be applied to less structured processes supported by computer supported cooperative work (CSCW) systems. In addition, the ProM framework is described. Using ProM a wide variety of process mining activities are supported ranging from process discovery and verification to conformance checking and social network analysis. | [
"cscw",
"process mining",
"business activity monitoring",
"business process intelligence",
"data mining"
] | [
"P",
"P",
"M",
"M",
"M"
] |
4tDX2vz | Effects of spatial and temporal variation in environmental conditions on simulation of wildfire spread | Implementation of a wildfire spread model based on the level set method. Investigation of wildfire propagation under stochastic wind and fuel conditions. Local variation in combustion condition slows the rate of propagation. Local variation in wind direction is found to increase flank spread. A harmonic mean is preferential for spatially varying parameters in spread models. | [
"simulation",
"modelling",
"level set",
"perimeter propagation",
"fire growth",
"spark"
] | [
"P",
"P",
"P",
"M",
"U",
"U"
] |
4w1BrPk | Utilization of spatial decision support systems decision-making in dryland agriculture: A Tifton burclover case study | FSAW delineated Wyoming agricultural land into relative ranks for burclover establishment. Defuzzification produced final output map with crisp scores and calculated centroid. Calculated centroid map demonstrated efficacy of SDSS in agricultural decision-making. Effective land suitability ranking validated value of ex-ante agricultural technologies. Presented information has potential to determine burclover feasibility in Wyoming. | [
"gis geographic information systems",
"idw inverse distance weighting",
"fsaw fuzzy simple additive weighting",
"madm multiple attribute decision-making",
"mcdm multiple criteria decision-making",
"sdss spatial decision support systems"
] | [
"M",
"U",
"M",
"M",
"M",
"R"
] |
4PU1:VM | Propagation engine prototyping with a domain specific language | Constraint propagation is at the heart of constraint solvers. Two main trends co-exist for its implementation: variable-oriented propagation engines and constraint-oriented propagation engines. Those two approaches ensure the same level of local consistency but their efficiency (computation time) can be quite different depending on the instance solved. However, it is usually accepted that there is no best approach in general, and modern constraint solvers implement only one. In this paper, we would like to go a step further providing a solver independent language at the modeling stage to enable the design of propagation engines. We validate our proposal with a reference implementation based on the Choco solver and the MiniZinc constraint modeling language. | [
"propagation",
"domain specific language",
"constraint solver",
"implementation"
] | [
"P",
"P",
"P",
"P"
] |
4jfAYZh | A Projection Pursuit framework for supervised dimension reduction of high dimensional small sample datasets | The analysis and interpretation of datasets with a large number of features and few examples has remained a challenging problem in the scientific community, owing to the difficulties associated with the curse-of-dimensionality phenomenon. Projection Pursuit (PP) has shown promise in circumventing this phenomenon by searching low-dimensional projections of the data where meaningful structures are exposed. However, PP faces computational difficulties in dealing with datasets containing thousands of features (typical in genomics and proteomics) due to the vast quantity of parameters to optimize. In this paper we describe and evaluate a PP framework aimed at relieving such difficulties and thus easing the construction of classifier systems. The framework is a two-stage approach, where the first stage performs a rapid compaction of the data and the second stage implements the PP search using an improved version of the SPP method (Guo et al., 2000, [32]). In an experimental evaluation with eight public microarray datasets we showed that some configurations of the proposed framework can clearly outperform eight well-established dimension reduction methods in their ability to pack more discriminatory information into fewer dimensions. | [
"projection pursuit",
"dimension reduction",
"classification",
"gene expression"
] | [
"P",
"P",
"U",
"U"
] |
1JwStM8 | Conservation Functions for 1-D Automata: Efficient Algorithms, New Results, and a Partial Taxonomy | We present theorems that can be used for improved efficiency in the calculation of conservation functions for cellular automata. We report results obtained from implementations of algorithms based on these theorems that show conservation laws for 1-D cellular automata of higher order than any previously known. We introduce the notion of trivial and core conservation functions to distinguish truly new conservation functions from simple extensions of lower-order ones. We then present the complete list of conservation functions up to order 16 for the 256 elementary 1-D binary cellular automata. These include CAs that were not previously known to have nontrivial conservation functions. | [
"conservation functions",
"taxonomy",
"cellular automata",
"linear algebra",
"classification scheme"
] | [
"P",
"P",
"P",
"U",
"U"
] |
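A concrete instance of a low-order conservation function: elementary rule 184 (the "traffic" rule) conserves the number of 1s on a periodic row. The check below is illustrative only; it is not the authors' algorithm for discovering higher-order conservation laws.

```python
import random

def step(row, rule=184):
    # One synchronous update of an elementary CA with periodic boundary.
    n = len(row)
    table = [(rule >> k) & 1 for k in range(8)]
    return [table[(row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]]
            for i in range(n)]

random.seed(0)
row = [random.randint(0, 1) for _ in range(64)]
for _ in range(100):
    new = step(row)
    assert sum(new) == sum(row)  # the additive quantity sum(row) is conserved
    row = new
print("rule 184 conserves sum(row) over 100 steps")
```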
g8Dx7se | A reference bacterial genome dataset generated on the MinION portable single-molecule nanopore sequencer | The MinION is a new, portable single-molecule sequencer developed by Oxford Nanopore Technologies. It measures four inches in length and is powered from the USB 3.0 port of a laptop computer. The MinION measures the change in current resulting from DNA strands interacting with a charged protein nanopore. These measurements can then be used to deduce the underlying nucleotide sequence. | [
"genomics",
"nanopore sequencing"
] | [
"P",
"P"
] |
2tAUEm& | Serial batching scheduling of deteriorating jobs in a two-stage supply chain to minimize the makespan | For the scheduling problem with a buffer, an optimal algorithm is developed for solving it. For the scheduling problem without a buffer, some useful properties are derived. A heuristic is designed for solving it, and a novel lower bound is also derived. Two special cases are analyzed in detail, and two optimal algorithms are developed for solving them, respectively. | [
"batch scheduling",
"deterioration",
"supply chain",
"heuristic",
"transportation"
] | [
"P",
"P",
"P",
"P",
"U"
] |
-e43jMi | Entanglement monotones and maximally entangled states in multipartite qubit systems | We present a method to construct entanglement measures for pure states of multipartite qubit systems. The key element of our approach is an antilinear operator that we call comb in reference to the hairy-ball theorem. For qubits (i.e. spin 1/2) the combs are automatically invariant under SL(2, C). This implies that the filters obtained from the combs are entanglement monotones by construction. We give alternative formulae for the concurrence and the 3-tangle as expectation values of certain antilinear operators. As an application we discuss inequivalent types of genuine four-, five- and six-qubit entanglement. | [
"entanglement monotones",
"multipartite entanglement",
"antilineax operators"
] | [
"P",
"R",
"M"
] |
32nCxE& | Automatic verification of Java programs with dynamic frames | Framing in the presence of data abstraction is a challenging and important problem in the verification of object-oriented programs (Leavens et al., Formal Aspects Comput 19:159-189, 2007). The dynamic frames approach is a promising solution to this problem. However, the approach is formalized in the context of an idealized logical framework. In particular, it is not clear that the solution is suitable for use within a program verifier for a Java-like language based on verification condition generation and automated, first-order theorem proving. In this paper, we demonstrate that the dynamic frames approach can be integrated into an automatic verifier based on verification condition generation and automated theorem proving. The approach has been proven sound and has been implemented in a verifier prototype. The prototype has been used to prove the correctness of several programming patterns considered challenging in related work. | [
"dynamic frames",
"data abstraction",
"program verification",
"frame problem"
] | [
"P",
"P",
"R",
"R"
] |
23HPKaP | Properties of the transmission of pulse sequences in a bistable chain of unidirectionally coupled neurons | We study the propagation of pulse sequences in a chain of neurons with sigmoidal input-output relations. The propagating speeds of pulse fronts depend on the widths of the preceding pulses and adjacent pulse fronts interact attractively. Sequences of pulse widths are then modulated through transmission. Equations for changes in pulse width sequences are derived with a kinematical model of propagating pulse fronts. The transmission of pulse width sequences in the chain is expressed as a linear system with additive noise. The gain of the system function increases exponentially with the number of neurons in a high-frequency region. The power spectrum of variations in pulse widths due to spatiotemporal noise also increases in the same manner. Further, the interaction between pulse fronts keeps the coherence and mutual information of initial and transmitted pulse sequences. Results of an experiment on an analog circuit confirm these properties. | [
"pulse",
"chain of neurons",
"noise",
"transmission line"
] | [
"P",
"P",
"P",
"M"
] |
19FKgP4 | Building geometric feature based maps for indoor service robots | This paper presents an efficient geometric approach to the Simultaneous Localization and Mapping problem based on an Extended Kalman Filter. The map representation and building process is formulated, fully implemented and successfully experimented in different indoor environments with different robots. The use of orthogonal shape constraints is proposed to deal with the inconsistency of the estimation. Built maps are successfully used for the navigation of two different service robots: an interactive tour guide robot, and an assistive walking aid for the frail elderly. | [
"service robot",
"simultaneous localization and mapping",
"extended kalman filter",
"inconsistency"
] | [
"P",
"P",
"P",
"P"
] |
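At the core of the EKF used here is the Kalman predict/update cycle; the EKF linearizes the motion and measurement models around the current estimate. A minimal linear-Gaussian sketch of that cycle (generic filter, not the paper's geometric feature-map formulation):

```python
import numpy as np

def predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # state correction
    P = (np.eye(len(x)) - K @ H) @ P     # covariance correction
    return x, P

# 1-D robot: state [position, velocity]; only position is measured.
F = np.array([[1.0, 1.0], [0.0, 1.0]]); Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]]);             R = np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:
    x, P = predict(x, P, F, Q)
    x, P = update(x, P, np.array([z]), H, R)
print(x)  # estimate approaches the unit-speed track
```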
-EiJt&D | The cross-entropy method with patching for rare-event simulation of large Markov chains | There are various importance sampling schemes to estimate rare event probabilities in Markovian systems such as Markovian reliability models and Jackson networks. In this work, we present a general state-dependent importance sampling method which partitions the state space and applies the cross-entropy method to each partition. We investigate two versions of our algorithm and apply them to several examples of reliability and queueing models. In all these examples we compare our method with other importance sampling schemes. The performance of the importance sampling schemes is measured by the relative error of the estimator and by the efficiency of the algorithm. The results from experiments show considerable improvements both in running time of the algorithm and the variance of the estimator. | [
"cross-entropy",
"importance sampling",
"rare events",
"large-scale markov chains"
] | [
"P",
"P",
"P",
"M"
] |
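The cross-entropy recipe itself is compact: repeatedly re-fit the proposal to elite samples, then estimate by importance sampling. A toy sketch for P(X > 4) with X ~ N(0,1); this is a single unpartitioned problem, whereas the paper applies the update per state-space partition, and the full method also weights the elites by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(42)
gamma, n, rho = 4.0, 10_000, 0.01   # threshold, sample size, elite fraction
v = 0.0                             # mean of the Gaussian proposal N(v, 1)

for _ in range(10):                 # cross-entropy iterations
    x = rng.normal(v, 1.0, n)
    level = min(np.quantile(x, 1 - rho), gamma)
    v = x[x >= level].mean()        # simplified CE update for a Gaussian mean
    if level >= gamma:
        break

x = rng.normal(v, 1.0, n)
w = np.exp(-v * x + 0.5 * v**2)     # likelihood ratio N(0,1) / N(v,1)
print(np.mean(w * (x > gamma)))     # close to P(X > 4) = 3.17e-5
```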
DwWnpnz | Combined simulation for process control: extension of a general purpose simulation tool | Combined discrete event and continuous views of production processes are important in designing computer control systems for both process industries and manufacturing. The paper presents an extension of the popular Matlab-Simulink simulation tool to facilitate the simulation of the discrete sequential control logic applied to continuous processes. The control system is modelled as a combined system where the discrete and the continuous parts of the system are separated and an interface is introduced between them. The sequential control logic is represented by a Sequential Function Chart (SFC). A SFC blockset is defined to enable graphical composition of the SFC and its integration into the Simulink environment. A simulation mechanism is implemented which is called periodically from the standard Simulink simulation engine and carries out the correct state transition sequence of the discrete model and executes corresponding SFC actions. Two simulation case studies are given to illustrate the possible application of the developed simulation environment: the simulation of a batch process cell, as an example from the area of process control and an example of a manufacturing system, i.e. the control of a laboratory scale modular production system. (C) 1999 Elsevier Science B.V. All rights reserved. | [
"simulation",
"hybrid systems",
"petri nets"
] | [
"P",
"M",
"U"
] |
1azL-Sm | Action recognition feedback-based framework for human pose reconstruction from monocular images | A novel framework based on action recognition feedback for pose reconstruction of articulated human body from monocular images is proposed in this paper. The intrinsic ambiguity caused by perspective projection makes it difficult to accurately recover articulated poses from monocular images. To alleviate such ambiguity, we exploit the high-level motion knowledge as action recognition feedback to discard those implausible estimates and generate more accurate pose candidates using large number of motion constraints during natural human movement. The motion knowledge is represented by both local and global motion constraints. The local spatial constraint captures motion correlation between body parts by multiple relevance vector machines while the global temporal constraint preserves temporal coherence between time-ordered poses via a manifold motion template. Experiments on the CMU Mocap database demonstrate that our method performs better on estimation accuracy than other methods without action recognition feedback. | [
"action recognition feedback",
"human pose reconstruction",
"motion correlation",
"manifold motion template"
] | [
"P",
"P",
"P",
"P"
] |
3b4XdUS | Burr size reduction in drilling by ultrasonic assistance | Accuracy and surface finish play an important role in modern industry. Undesired projections of materials, known as burrs, reduce the part quality and negatively affect the assembly process. A recent and promising method for reducing burr size in metal cutting is the use of ultrasonic assistance, where high-frequency and low-amplitude vibrations are added in the feed direction during cutting. Note that this cutting process is distinct from ultrasonic machining. This paper presents the design of an ultrasonically vibrated workpiece holder, and a two-stage experimental investigation of ultrasonically assisted drilling of A1100-0 aluminum workpieces. The results of 175 drilling experiments with uncoated and TiN-coated drills are reported and analyzed. The effect of ultrasonic assistance on burr size, chip formation, thrust forces and tool wear is studied. The results demonstrate that under suitable ultrasonic vibration conditions, the burr height and width can be reduced in comparison to conventional drilling. | [
"burr",
"drilling",
"ultrasonic assistance",
"metal cutting",
"ultrasonic assisted drilling",
"vibration assisted drilling"
] | [
"P",
"P",
"P",
"P",
"P",
"R"
] |
29fq&nB | Selecting Coherent and Relevant Plots in Large Scatterplot Matrices | The scatterplot matrix (SPLOM) is a well-established technique to visually explore high-dimensional data sets. It is characterized by the number of scatterplots (plots) of which it consists. Unfortunately, this number grows quadratically with the number of the data set's dimensions. Thus, an SPLOM scales very poorly. Consequently, the usefulness of SPLOMs is restricted to a small number of dimensions. For such small SPLOMs, several exploration approaches already exist. Those approaches address the scalability problem only indirectly, without solving it. Therefore, we introduce a new greedy approach to manage large SPLOMs with more than 100 dimensions. We establish a combined visualization and interaction scheme that produces intuitively interpretable SPLOMs by combining known quality measures, a pre-process reordering and a perception-based abstraction. With this scheme, the user can interactively find large amounts of relevant plots in large SPLOMs. | [
"scatterplot matrix",
"high-dimensional data",
"quality measure",
"visual analytics"
] | [
"P",
"P",
"P",
"M"
] |
1McQmTp | The random electrode selection ensemble for EEG signal classification | Pattern classification methods are a crucial direction in the current study of brain-computer interface (BCI) technology. A simple yet effective ensemble approach for electroencephalogram (EEG) signal classification named the random electrode selection ensemble (RESE) is developed, which aims to overcome the instability of Fisher discriminant feature extraction for BCI applications. Through the random selection of recording electrodes, reflecting the physiological background of user-intended mental activities, multiple individual classifiers are constructed. In a feature subspace determined by a couple of randomly selected electrodes, principal component analysis (PCA) is first used to carry out dimensionality reduction. Successively, the Fisher discriminant is adopted for feature extraction, and a Bayesian classifier with a Gaussian mixture model (GMM) approximating the feature distribution is trained. For a test sample the outputs from all the Bayesian classifiers are combined to give the final prediction for its label. Theoretical analysis and classification experiments with real EEG signals indicate that the RESE approach is both effective and efficient. | [
"eeg signal classification",
"bayesian classifier",
"gaussian mixture model",
"classifier ensemble",
"fisher discriminant analysis"
] | [
"P",
"P",
"P",
"R",
"R"
] |
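The random-subspace structure of RESE can be sketched with off-the-shelf components. The fragment below uses synthetic stand-in data, and an LDA posterior replaces the paper's GMM-based Bayesian classifier, so it illustrates the ensemble construction rather than reproducing RESE.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_trials, n_electrodes, n_samples = 120, 16, 10
X = rng.normal(size=(n_trials, n_electrodes * n_samples))
y = rng.integers(0, 2, n_trials)
X[y == 1, :40] += 0.8                  # inject a class difference

members, subsets = [], []
for _ in range(15):                    # one member per random electrode pair
    e1, e2 = rng.choice(n_electrodes, size=2, replace=False)
    cols = np.r_[e1 * n_samples:(e1 + 1) * n_samples,
                 e2 * n_samples:(e2 + 1) * n_samples]
    clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
    members.append(clf.fit(X[:, cols], y))
    subsets.append(cols)

proba = np.mean([m.predict_proba(X[:, c]) for m, c in zip(members, subsets)],
                axis=0)                # combine the members' posteriors
print("train accuracy:", np.mean(proba.argmax(axis=1) == y))
```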
31K8Xnq | Robust Schur stability of polynomials with polynomial parameter dependency | The paper considers the robust Schur stability verification of polynomials with coefficients depending polynomially on parameters varying in given intervals. A new algorithm is presented which relies on the expansion of a multivariate polynomial into Bernstein polynomials and is based on the decomposition of the family of polynomials into its symmetric and antisymmetric parts. It is shown how the inspection of both polynomial families on the upper half of the unit circle can be reduced to the analysis of two related polynomial families on the real interval [-1, 1]. Then the Bernstein expansion can be applied in order to check whether both polynomial families have a zero in this interval in common. | [
"schur stability",
"bernstein polynomials",
"robust stability"
] | [
"P",
"P",
"R"
] |
s5ouVg5 | Use of nano-scale double-gate MOSFETs in low-power tunable current mode analog circuits | Use of independently-driven nano-scale double gate (DG) MOSFETs for low-power analog circuits is emphasized and illustrated. In independent drive configuration, the top gate response of DG-MOSFETs can be altered by application of a control voltage on the bottom gate. We show that this could be a powerful method to conveniently tune the response of conventional CMOS analog circuits especially for current-mode design. Several examples of such circuits, including current mirrors, a differential current amplifier and differential integrators are illustrated and their performance gauged using TCAD simulations. The topologies and biasing schemes explored here show how the nano-scale DG-MOSFETs may pave way for efficient, mismatch-tolerant and smaller circuits with tunable characteristics. | [
"dg-mosfet",
"integrated circuits",
"tunable analog circuits",
"current mode circuits",
"mixed-mode simulations"
] | [
"P",
"R",
"R",
"R",
"M"
] |
53WAde- | Cooperative triangulation in MSBNs without revealing subnet structures | Multiply sectioned Bayesian networks (MSBNs) provide a coherent framework for probabilistic inference in a cooperative multiagent distributed interpretation system. Inference in MSBNs can be performed effectively using a compiled representation. The compilation involves the triangulation of the collective dependency structure (a graph) defined in terms of the union of a set of local dependency structures (a set of graphs). Privacy of agents eliminates the option to assemble these graphs at a central location and to triangulate their union. Earlier work solved distributed triangulation in a restricted case. The method is conceptually complex and the correctness of its extension to the general case is difficult to justify. In this paper, we present a new method that is conceptually simpler and is efficient. We prove its correctness in the general case and demonstrate its performance experimentally. (C) 2001 John Wiley & Sons, Inc. | [
"triangulation",
"multiply sectioned bayesian networks",
"bayesian networks",
"chordal graph",
"graph theory",
"distributed computation",
"multiagent systems",
"cooperation and coordination",
"approximate reasoning"
] | [
"P",
"P",
"P",
"M",
"M",
"M",
"R",
"M",
"U"
] |
3cL&sux | the complexity of parallel evaluation of linear recurrences | The concept of computers such as C.mmp and ILLIAC IV is to achieve computational speed-up by performing several operations simultaneously with parallel processors. This type of computer organization is referred to as a parallel computer. In this paper, we prove upper bounds on speed-ups achievable by parallel computers for a particular problem, the solution of first order linear recurrences. We consider this problem because it is important in practice and also because it is simply stated, so that we might obtain some insight into the nature of parallel computation by studying it. | [
"complexity",
"parallel",
"evaluation",
"concept",
"computation",
"operability",
"processor",
"organization",
"parallel computation",
"paper",
"order",
"practical"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
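The problem studied here, x_i = a_i x_{i-1} + b_i, parallelizes because composition of the affine maps t -> a*t + b is associative: the prefix compositions can be computed by a parallel scan in O(log n) steps. The sketch below verifies the algebra with a sequential scan.

```python
# First-order linear recurrence via prefix composition of affine maps.

def compose(f, g):
    # Apply f then g, where each map is t -> a*t + b.
    (a1, b1), (a2, b2) = f, g
    return (a1 * a2, a2 * b1 + b2)

def solve_recurrence(a, b, x0):
    acc, prefix = (1, 0), []          # (1, 0) is the identity map
    for m in zip(a, b):               # a scan; associativity allows a
        acc = compose(acc, m)         # logarithmic-depth parallel version
        prefix.append(acc)
    return [pa * x0 + pb for pa, pb in prefix]

a, b = [2, 3, 1, 2], [1, 0, 5, 1]
print(solve_recurrence(a, b, x0=1))   # [3, 9, 14, 29]
```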
3Dm-RbX | Leukocyte image segmentation using simulated visual attention | Computer-aided automatic analysis of microscopic leukocytes is a powerful diagnostic tool in biomedical fields which could reduce the effects of human error, improve diagnosis accuracy, and save manpower and time. However, it is challenging to segment entire leukocyte populations due to the changing features extracted in the leukocyte image, and this task remains an unsolved issue in blood cell image segmentation. This paper presents an efficient strategy to construct a segmentation model for any leukocyte image using simulated visual attention via learning by on-line sampling. In the sampling stage, two types of visual attention, bottom-up and top-down, together with the movement of the human eye are simulated. We focus on a few regions of interest and sample high gradient pixels to group training sets. In the learning stage, the SVM (support vector machine) model is trained in real-time to simulate the visual neuronal system and then classifies pixels and extracts leukocytes from the image. Experimental results show that the proposed method has better performance compared to marker controlled watershed algorithms with manual intervention and thresholding-based methods. | [
"leukocyte image",
"image segmentation",
"visual attention",
"svm",
"machine learning"
] | [
"P",
"P",
"P",
"P",
"R"
] |
1CG5mXA | On the construction of an aggregated measure of the development of interval data | We analyse some possibilities for constructing an aggregated measure of the development of socio-economic objects in terms of their composite phenomenon (i.e., a phenomenon described by many statistical features) if the relevant data are expressed as intervals. Such a measure, based on the deviation of the data structure for a given object from the benchmark of development, is a useful tool for ordering, comparing and clustering objects. We present the construction of a composite phenomenon when it is described by interval data and discuss various aspects of stimulation and normalization of the diagnostic features as well as a definition of a benchmark of development (based usually on optimum or expected levels of these features). Our investigation includes the following options for the realization of this purpose: transformation of the interval model into a single-valued version without any significant loss of its statistical properties, standardization of pure intervals, and definition of the interval ideal object. For the determination of a distance between intervals, the Hausdorff formula is applied. The simulation study conducted and the empirical analysis showed that the first two variants are especially useful in practice. | [
"interval data",
"multifeature objects",
"aggregated measure of development",
"hausdorff distance"
] | [
"P",
"M",
"R",
"R"
] |
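For closed real intervals, the Hausdorff distance used above reduces to a closed form: d_H([a1, b1], [a2, b2]) = max(|a1 - a2|, |b1 - b2|). A toy sketch of a deviation-from-benchmark measure built on it follows; the mean-based aggregation is our assumption, not the authors' exact construction.

```python
# Hausdorff distance between closed intervals plus a toy aggregated
# measure; the mean-based aggregation is an assumption for illustration.
def hausdorff(i, j):
    (a1, b1), (a2, b2) = i, j
    return max(abs(a1 - a2), abs(b1 - b2))

def development_measure(obj, benchmark):
    # obj, benchmark: one (already normalized) interval per feature
    d = [hausdorff(i, j) for i, j in zip(obj, benchmark)]
    return 1 - sum(d) / len(d)        # closer to 1 = closer to the benchmark

print(development_measure([(0.6, 0.8), (0.5, 0.7)],
                          [(0.9, 1.0), (0.8, 1.0)]))   # 0.7
```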
1PRL-i3 | user requirements for a web-based spreadsheet-mediated collaboration | This paper reports the initial results of a research project investigating how to develop a web-based spreadsheet-mediated business collaboration system that could notably enhance the business processes presently carried out by Small to Medium-sized Enterprises. Using a scenario-based design approach, a set of user requirements was extracted from an appropriate field study. These requirements were then analysed in the context of well-known usability principles, and a set of design implications was derived based on a selected set of HCI design patterns related to cooperative interaction design. Starting from that knowledge, suitable interactive collaboration scenarios were drawn up, from which a list of user interface requirements for a web-based spreadsheet-mediated collaboration system has been formulated. | [
"scenario-based design",
"usability principles",
"hci design patterns",
"artifact mediated collaboration"
] | [
"P",
"P",
"P",
"M"
] |
3:japx4 | Automated estimation and analyses of meteorological drought characteristics from monthly rainfall data | The paper describes a new software package for automated estimation, display and analyses of various drought indices (continuous functions of precipitation) that allow quantitative assessment of meteorological drought events to be made. The software at present allows up to five different drought indices to be estimated. They include the Decile Index (DI), the Effective Drought Index (EDI), the Standardized Precipitation Index (SPI) and deviations from the long-term mean and median value. Each index can be estimated from point and spatially averaged rainfall data, and a number of options are provided for the selection of months and the type of analysis, including a running mean, single value or multiple annual values. The software also allows spell/run analysis to be performed and maps of a specific index to be constructed. The software forms part of a comprehensive computer package, developed earlier, designed to perform a multitude of water resources analyses and hydro-meteorological data processing. The 7-step procedure of setting up and running a typical drought assessment application is described in detail. Examples of applications are given primarily in the specific context of South Asia, where the software has been used. | [
"drought indices",
"monthly rainfall time series",
"spatsim"
] | [
"P",
"M",
"U"
] |
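Of the five indices, the SPI is the most algorithmic: fit a gamma distribution to the rainfall totals and map each value through the fitted CDF onto standard normal quantiles. A hedged sketch with SciPy follows; zero-rainfall handling, per-calendar-month fitting, and the package's other indices are omitted.

```python
# Simplified SPI sketch: gamma fit, then probability-integral transform
# to standard normal quantiles. Real SPI code handles zero rainfall and
# fits per calendar month; both are skipped here.
import numpy as np
from scipy import stats

def spi(monthly_rain):
    rain = np.asarray(monthly_rain, dtype=float)
    a, loc, scale = stats.gamma.fit(rain, floc=0)      # location fixed at 0
    cdf = stats.gamma.cdf(rain, a, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)                         # SPI < -1: dry spell

print(spi([80, 120, 95, 30, 10, 60, 110, 70]).round(2))
```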
1Gydyeu | Introduction to the special issue on statistical signal extraction and filtering | The papers of the Special Issue on Statistical Signal Extraction and Filtering are introduced briefly, and the invitation to contribute to the next issue devoted to this topic is reiterated. There follows an account of the history and the current developments in the areas of Wiener-Kolmogorov and Kalman filtering, which is a leading topic of the present issue. Other topics will be treated in like manner in subsequent introductions. | [
"statistical signal extraction",
"kalman filtering",
"wienerkolmogorov filtering"
] | [
"P",
"P",
"R"
] |
492g8EV | Worst-case optimal approximation algorithms for maximizing triplet consistency within phylogenetic networks | The study of phylogenetic networks is of great interest to computational evolutionary biology and numerous different types of such structures are known. This article addresses the following question concerning rooted versions of phylogenetic networks. What is the maximum value of p ∈ [0, 1] such that for every input set T of rooted triplets, there exists some network N such that at least p|T| of the triplets are consistent with N? We call an algorithm that computes such a network (where p is maximum) worst-case optimal. Here we prove that the set containing all triplets (the full triplet set) in some sense defines p. Moreover, given a network N that obtains a fraction p' for the full triplet set (for any p'), we show how to efficiently modify N to obtain a fraction ≥ p' for any given triplet set T. We demonstrate the power of this insight by presenting a worst-case optimal result for level-1 phylogenetic networks, improving considerably upon the 5/12 fraction obtained recently by Jansson, Nguyen and Sung. For level-2 phylogenetic networks we show that p ≥ 0.61. We emphasize that, because we are taking |T| as a (trivial) upper bound on the size of an optimal solution for each specific input T, the results in this article do not exclude the existence of approximation algorithms that achieve an approximation ratio better than p. Finally, we note that all the results in this article also apply to weighted triplet sets. | [
"triplet",
"phylogenetic network",
"level-k network"
] | [
"P",
"P",
"M"
] |
1JaPT4h | Direct search of feasible region and application to a crashworthy helicopter seat | The paper proposes a novel approach to identifying the feasible region of a constrained optimisation problem. In engineering applications, the search for the feasible region turns out to be extremely useful for understanding the problem, since the feasible region defines the portion of the domain over which the design parameters can range while fulfilling the constraints imposed on performance, manufacturing and regulations. The search for the feasible region is not a trivial task, as non-convex, irregular and disjointed shapes can be found. The algorithm presented in this paper builds on the above considerations and proposes a recursive feasible-infeasible segment bisection algorithm combined with Support Vector Machine (SVM) techniques to reduce the overall computational effort. The method is discussed and then illustrated by means of three simple analytical test cases in the first part of the paper. A real-world application is finally presented: the search for the survivability zone of a crashworthy helicopter seat under different crash conditions. A finite element model, including an anthropomorphic dummy, is adopted to simulate impacts characterised by different deceleration pulses, and the proposed algorithm is used to investigate the influence of pulse shape on impact survivability. | [
"direct search",
"feasible region",
"crashworthiness",
"support vector machine"
] | [
"P",
"P",
"P",
"P"
] |
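The core primitive, recursive feasible-infeasible segment bisection, is easy to sketch: given one feasible and one infeasible design, repeatedly bisect the segment between them to localize a point on the feasible-region boundary. The constraint below is a placeholder; in the paper, an SVM trained on already-evaluated points stands in for most expensive feasibility checks.

```python
# Feasible-infeasible segment bisection; the constraint g(x) <= 0 is a
# placeholder for the expensive simulations the paper accelerates.
import numpy as np

def boundary_point(x_feas, x_infeas, g, tol=1e-6):
    a, b = np.asarray(x_feas, float), np.asarray(x_infeas, float)
    while np.linalg.norm(b - a) > tol:
        mid = 0.5 * (a + b)
        if g(mid) <= 0.0:       # mid is feasible: move the feasible end
            a = mid
        else:                   # mid is infeasible: move the other end
            b = mid
    return 0.5 * (a + b)

g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0          # unit disc as feasible set
print(boundary_point([0.0, 0.0], [2.0, 0.0], g))   # ~ [1.0, 0.0]
```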
vb5Wvct | feasibly constructive proofs and the propositional calculus (preliminary version) | The motivation for this work comes from two general sources. The first source is the basic open question in complexity theory of whether P equals NP (see [1] and [2]). Our approach is to try to show they are not equal, by trying to show that the set of tautologies is not in NP (of course its complement is in NP ). This is equivalent to showing that no proof system (in the general sense defined in [3]) for the tautologies is super in the sense that there is a short proof for every tautology. Extended resolution is an example of a powerful proof system for tautologies that can simulate most standard proof systems (see [3]). The Main Theorem (5.5) in this paper describes the power of extended resolution in a way that may provide a handle for showing it is not super. The second motivation comes from constructive mathematics. A constructive proof of, say, a statement ∀x A must provide an effective means of finding a proof of A for each value of x, but nothing is said about how long this proof is as a function of x. If the function is exponential or superexponential, then for short values of x the length of the proof of the instance of A may exceed the number of electrons in the universe. In section 2, I introduce the system PV for number theory, and it is this system which I suggest properly formalizes the notion of a feasibly constructive proof. | [
"version",
"motivation",
"general",
"complexity",
"theory",
"systems",
"examples",
"power",
"standardization",
"paper",
"mathematics",
"effect",
"value",
"value",
"functional",
"values"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
-KCw:sM | SINA: Semantic interpretation of user queries for question answering on interlinked data | The architectural choices underlying Linked Data have led to a compendium of data sources which contain both duplicated and fragmented information on a large number of domains. One way to enable non-expert users to access this data compendium is to provide keyword search frameworks that can capitalize on the inherent characteristics of Linked Data. Developing such systems is challenging for three main reasons. First, resources across different datasets or even within the same dataset can be homonyms. Second, different datasets employ heterogeneous schemas, and each one may contain only a part of the answer for a certain user query. Finally, constructing a federated formal query from keywords across different datasets requires exploiting links between the different datasets on both the schema and instance levels. We present SINA, a scalable keyword search system that can answer user queries by transforming user-supplied keywords or natural-language queries into conjunctive SPARQL queries over a set of interlinked data sources. SINA uses a hidden Markov model to determine the most suitable resources for a user-supplied query from different datasets. Moreover, our framework is able to construct federated queries by using the disambiguated resources and leveraging the link structure underlying the datasets to query. We evaluate SINA over three different datasets. We can answer 25 queries from QALD-1 correctly. Moreover, we perform as well as the best question answering system from the QALD-3 competition by answering 32 questions correctly, while also being able to answer queries on distributed sources. We study the runtime of SINA in its mono-core and parallel implementations and draw preliminary conclusions on the scalability of keyword search on Linked Data. | [
"question answering",
"keyword search",
"sparql",
"hidden markov model",
"disambiguation",
"rdf"
] | [
"P",
"P",
"P",
"P",
"P",
"U"
] |
1YhGb2L | Generalized median string computation by means of string embedding in vector spaces | In structural pattern recognition, the median string is an established and useful tool for representing a set of strings. However, its exact computation is complex and computationally expensive. In this paper we propose a new approach to the computation of the median string based on string embedding. Strings are embedded into a vector space and the median is computed in the vector domain. We apply three different inverse transformations to go from the vector domain back to the string domain in order to obtain a final approximation of the median string. All of them are based on the weighted mean of a pair of strings. Experiments show that we succeed in computing good approximations of the median string. | [
"generalized median",
"string",
"embedding",
"vector space",
"lower bound"
] | [
"P",
"P",
"P",
"P",
"U"
] |
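A toy version of the embedding route, assuming Levenshtein distance for the string metric and using "return the training string nearest to the vector-space mean" as a crude stand-in for the paper's weighted-mean inverse transformations.

```python
# Toy embedding-based median: embed strings by edit distances to
# prototypes, average in vector space, and return the training string
# nearest to that mean -- a crude stand-in for the paper's
# weighted-mean inverse transformations.
import numpy as np

def edit_distance(s, t):
    d = np.arange(len(t) + 1)
    for i, cs in enumerate(s, 1):
        prev, d[0] = d[0], i
        for j, ct in enumerate(t, 1):
            prev, d[j] = d[j], min(d[j] + 1,           # delete cs
                                   d[j - 1] + 1,       # insert ct
                                   prev + (cs != ct))  # substitute
    return int(d[-1])

def approx_median(strings, prototypes):
    emb = np.array([[edit_distance(s, p) for p in prototypes]
                    for s in strings], dtype=float)
    center = emb.mean(axis=0)                          # mean in vector space
    return strings[int(np.argmin(np.linalg.norm(emb - center, axis=1)))]

words = ["kitten", "mitten", "bitten", "sitting"]
print(approx_median(words, prototypes=words))
```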
2R:oUEp | efficient indexing of the historical, present, and future positions of moving objects | Although significant effort has been put into the development of efficient spatio-temporal indexing techniques for moving objects, little attention has been given to the development of techniques that efficiently support queries about the past, present, and future positions of objects. The provisioning of such techniques is challenging, both because of the nature of the data, which reflects continuous movement, and because of the types of queries to be supported. This paper proposes the BBx-index structure, which indexes the positions of moving objects, given as linear functions of time, at any time. The index stores linearized moving-object locations in a forest of B+-trees. The index supports queries that select objects based on temporal and spatial constraints, such as queries that retrieve all objects whose positions fall within a spatial range during a set of time intervals. Empirical experiments are reported that offer insight into the query and update performance of the proposed technique. | [
"indexing",
"b-tree",
"mobile objects"
] | [
"P",
"U",
"M"
] |
1zwDe:- | towards model-driven unit testing | The Model-Driven Architecture (MDA) approach for constructing software systems advocates a stepwise refinement and transformation process starting from high-level models down to concrete program code. In contrast to numerous research efforts that try to generate executable function code from models, we propose a novel approach termed model-driven monitoring. On the model level, the behavior of an operation is specified with a pair of UML composite structure diagrams (a visual contract), a visual notation for pre- and post-conditions. The specified behavior is implemented manually by a programmer. An automatic translation from our visual contracts to JML assertions allows for monitoring the hand-coded programs during their execution. In this paper we present how we extend our approach to allow for model-driven unit testing, where we utilize the generated JML assertions as test oracles. Further, we present an idea of how to generate sufficient test cases from our visual contracts with the help of model-checking techniques. | [
"visual contracts",
"test case generation",
"model checking",
"design by contract"
] | [
"P",
"R",
"M",
"M"
] |
-QZ7w8D | Sliding window-based frequent pattern mining over data streams | Finding frequent patterns in a continuous stream of transactions is critical for many applications such as retail market data analysis, network monitoring, web usage mining, and stock market prediction. Even though numerous frequent pattern mining algorithms have been developed over the past decade, new solutions for handling stream data are still required due to the continuous, unbounded, and ordered sequence of data elements generated at a rapid rate in a data stream. Therefore, extracting frequent patterns from more recent data can enhance the analysis of stream data. In this paper, we propose an efficient technique to discover the complete set of recent frequent patterns from a high-speed data stream over a sliding window. We develop a Compact Pattern Stream tree (CPS-tree) to capture the recent stream data content and efficiently remove the obsolete, old stream data content. We also introduce the concept of dynamic tree restructuring in our CPS-tree to produce a highly compact frequency-descending tree structure at runtime. The complete set of recent frequent patterns is obtained from the CPS-tree of the current window using an FP-growth mining technique. Extensive experimental analyses show that our CPS-tree is highly efficient in terms of memory and time complexity when finding recent frequent patterns from a high-speed data stream. | [
"sliding window",
"frequent pattern",
"data stream",
"tree restructuring"
] | [
"P",
"P",
"P",
"P"
] |
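The CPS-tree itself is too involved for a short sketch, but the sliding-window bookkeeping it accelerates can be shown directly: each arriving transaction enters the window, the oldest one leaves, and pattern supports are updated so that mining always reflects recent data. Restricting patterns to single items and pairs is our simplification.

```python
# Sliding-window bookkeeping only (no CPS-tree): supports are updated
# as transactions enter and leave the window; patterns are restricted
# to single items and pairs for brevity.
from collections import Counter, deque
from itertools import combinations

def patterns(txn):
    items = sorted(set(txn))
    return [(i,) for i in items] + list(combinations(items, 2))

def mine_stream(stream, window_size=3, min_support=2):
    window, support = deque(), Counter()
    for txn in stream:
        window.append(txn)
        support.update(patterns(txn))
        if len(window) > window_size:                 # slide: retire oldest
            support.subtract(patterns(window.popleft()))
        yield {p for p, c in support.items() if c >= min_support}

for frequent in mine_stream([["a", "b"], ["b", "c"], ["a", "b", "c"], ["c"]]):
    print(frequent)
```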
2b7GT5T | modeling cryptographic properties of voice and voice-based entity authentication | Strong and/or multi-factor entity authentication protocols are of crucial importance in building successful identity management architectures. Popular mechanisms to achieve these types of entity authentication are biometrics and, in particular, voice, for which there are especially interesting business cases in the telecommunication and financial industries, among others. Despite several studies on the suitability of voice within entity authentication protocols, there has been little or no formal analysis of any such methods. In this paper we embark on formal modeling of seemingly cryptographic properties of voice. The goal is to define a formal abstraction for voice, in terms of algorithms with certain properties of both combinatorial and cryptographic type. While we certainly do not expect to achieve the perfect mathematical model for a human phenomenon, we do hope that capturing some properties of voice in a formal model will help towards the design and analysis of voice-based cryptographic protocols, such as for entity authentication. In particular, in this model we design and formally analyze two voice-based entity authentication schemes, the first being a voice-based analogue of the conventional password-transmission entity authentication scheme. We also design and analyze, in the recently introduced bounded-retrieval model [4], one voice-and-password-based entity authentication scheme that is additionally secure against intrusions and brute-force attacks, including dictionary attacks. | [
"voice",
"entity authentication",
"biometrics",
"modeling human factors"
] | [
"P",
"P",
"P",
"M"
] |
M:aKVNY | Inference of finite-state transducers from regular languages | Finite-state transducers are models that are being used in different areas of pattern recognition and computational linguistics. One of these areas is machine translation, where approaches based on building models automatically from training examples are becoming more and more attractive. Finite-state transducers are very well suited to constrained tasks where training samples of pairs of sentences are available. A technique to infer finite-state transducers is proposed in this work. This technique is based on formal relations between finite-state transducers and finite-state grammars. Given a training corpus of input-output pairs of sentences, the proposed approach uses statistical alignment methods to produce a set of conventional strings from which a stochastic finite-state grammar is inferred. This grammar is finally transformed into the resulting finite-state transducer. The proposed methods are assessed through a series of machine translation experiments within the framework of the EUTRANS project. | [
"machine translation",
"grammatical inference",
"formal language theory",
"stochastic finite-state transducers",
"natural language processing"
] | [
"P",
"M",
"M",
"R",
"M"
] |
4jVhe3- | Particle swarm optimization with preference order ranking for multi-objective optimization | A new optimality criterion based on a preference order (PO) scheme is used to identify the best compromise in multi-objective particle swarm optimization (MOPSO). This scheme is more efficient than the Pareto ranking scheme, especially when the number of objectives is very large. Meanwhile, a novel updating formula for the particles' velocity is introduced to improve the search ability of the algorithm. The proposed algorithm has been compared with NSGA-II and two other MOPSO algorithms. The experimental results indicate that the proposed approach is effective on highly complex multi-objective optimization problems. | [
"particle swarm",
"preference order",
"multi-objective optimization",
"best compromise",
"pareto dominance"
] | [
"P",
"P",
"P",
"P",
"M"
] |
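For orientation, the canonical velocity/position update that MOPSO variants modify looks as follows; the inertia and acceleration coefficients are standard placeholder values, and the choice of gbest (in the paper, a leader picked by preference-order ranking rather than Pareto ranking) is abstracted away.

```python
# Canonical PSO update for reference; w, c1, c2 are standard placeholder
# values, and gbest abstracts the leader that preference-order ranking
# would select from the archive in the paper's MOPSO.
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

x = rng.random((10, 2))            # 10 particles in a 2-D decision space
v = np.zeros_like(x)
x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0].copy())
```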
4NZ9WJv | real-time deformation using modal analysis on graphics hardware | This paper presents an approach to fast simulation of deformable objects that is suitable for interactive applications in computer graphics. Linear modal analysis is often used to simulate small-amplitude deformation. Whereas traditional linear modal analysis uses the CPU to calculate the nodal displacements, current applications widely adopt the GPU vertex program instead. However, that calculation suffers from large errors due to the limited number of input registers in the GPU vertex pipeline. In our approach, we solve this problem with the fragment program. A series of 2D floating-point textures are used to hold the modal displacement matrix; the fragment program multiplies this matrix with the modal amplitudes and sums up the results. Experiments show that the proposed technique fully utilizes the parallel nature of the GPU and runs in real time even for complex models. | [
"deformation",
"modal analysis",
"graphics hardware",
"physically based modeling"
] | [
"P",
"P",
"P",
"M"
] |
46mP6-s | STABILITY ANALYSIS OF A CLASS OF GENERAL PERIODIC NEURAL NETWORKS WITH DELAYS AND IMPULSES | Based on inequality analysis, matrix theory and spectral theory, a class of general periodic neural networks with delays and impulses is studied. Some sufficient conditions are established for the existence and global exponential stability of a unique periodic solution. Furthermore, the results are applied to some typical impulsive neural network systems as special cases, with a real-life example to show the feasibility of our results. | [
"neural network",
"delay",
"impulse",
"global exponential stability",
"periodic solution"
] | [
"P",
"P",
"P",
"P",
"P"
] |
-CVnjBj | Neuroprotective properties of resveratrol and derivatives | Stilbenoid compounds consist of a family of resveratrol derivatives. They have demonstrated promising activities in vitro and in vivo that indicate they may be useful in the prevention of a wide range of pathologies, such as cardiovascular diseases and cancers, as well as having anti-aging effects. More recently, stilbenoid compounds have shown promise in the treatment and prevention of neurodegenerative disorders, such as Huntington's, Parkinson's, and Alzheimer's diseases. This paper primarily focuses on the impact of stilbenoids in Alzheimer's disease and, more specifically, on the inhibition of β-amyloid peptide aggregation. | [
"stilbenoid",
"?-amyloid peptide",
"alzheimers disease",
"inhibition of aggregation"
] | [
"P",
"P",
"R",
"R"
] |
3cdAthd | A new fuzzy multicriteria decision making method and its application in diversion of water | Taking account of uncertainty in multicriteria decision making problems is crucial because, depending on how it is done, the ranking of alternatives can be completely different. This paper utilizes linguistic values to evaluate the performance of qualitative criteria and proposes using appropriate shapes of fuzzy numbers to evaluate the performance of quantitative criteria for each problem with respect to its particular conditions. In addition, a process to determine the weights of criteria using fuzzy numbers is described, which considers their competition to gain greater weights and their influence on each other. A new fuzzy methodology is proposed to solve such problems, utilizing the parametric form of fuzzy numbers. A case study of the diversion of water into the Lake Urmia watershed, which is defined using triangular, trapezoidal, and bell-shaped fuzzy numbers, demonstrates the utility of the proposed method. | [
"multicriteria decision making",
"fuzzy numbers with different shapes",
"water resource planning and management"
] | [
"P",
"R",
"M"
] |
2PN4B9t | Scheduling divisible workloads on heterogeneous platforms | In this paper, we discuss several algorithms for scheduling divisible workloads on heterogeneous systems. Our main contributions are (i) new optimality results for single-round algorithms and (ii) the design of an asymptotically optimal multi-round algorithm. This multi-round algorithm automatically performs resource selection, a difficult task that was previously left to the user. Because it is periodic, it is simpler to implement, and more robust to changes in the speeds of the processors and/or communication links. On the theoretical side, to the best of our knowledge, this is the first published result assessing the absolute performance of a multi-round algorithm. On the practical side, extensive simulations reveal that our multi-round algorithm outperforms existing solutions on a large variety of platforms, especially when the communication-to-computation ratio is not very high (the difficult case). | [
"scheduling",
"asymptotical optimality",
"multi-round algorithms",
"divisible tasks"
] | [
"P",
"P",
"P",
"R"
] |
-ycoyMG | A connective ethnography of peer knowledge sharing and diffusion in a tween virtual world | Prior studies have shown how knowledge diffusion occurs in classrooms and structured small groups around assigned tasks, yet they have not begun to account for widespread knowledge sharing in more native, unstructured group settings found in online games and virtual worlds. In this paper, we describe and analyze how an insider gaming practice spread across a group of tween players ages 9-12 years in an after-school gaming club that simultaneously participated in a virtual world called Whyville.net. In order to understand how this practice proliferated, we followed the club members as they interacted with each other and with members of the virtual world at large. Employing connective ethnography to trace the movements in learning and teaching this practice, we coordinated data records from videos, tracking data, field notes, and interviews. We found that club members took advantage of the different spaces, people, and times available to them across Whyville, the club, and even home and classroom spaces. By using an insider gaming practice, namely teleporting, rather than the more traditional individual person as our analytical lens, we were able to examine knowledge sharing and diffusion across the gaming spaces, including events in local small groups as well as encounters in the virtual world. In the discussion, we address methodological issues and design implications of our findings. | [
"connective ethnography",
"knowledge sharing",
"virtual worlds",
"knowledge diffusion",
"peer pedagogy"
] | [
"P",
"P",
"P",
"P",
"M"
] |
24GWAF4 | Unsupervised classification of SAR images using normalized gamma process mixtures | We propose an image prior for the model-based nonparametric classification of synthetic aperture radar (SAR) images that allows working with an infinite number of mixture components. In order to capture the spatial interactions of the pixel labels, the prior is derived by incorporating a conditional multinomial auto-logistic random field into the Normalized Gamma Process prior. In this way, we obtain an image classification prior that is free from any limitation on the number of classes and includes the smoothing constraint in the classification problem. In this model, we introduce a hyper-parameter that can control the preservation of the important classes and the extinction of the weak ones. The recall rates reported on synthetic and real TerraSAR-X images show that the proposed model is capable of accurately classifying the pixels. Unlike existing methods, it applies a simple iterative update scheme without performing a hierarchical clustering strategy. We demonstrate that the estimation accuracy of the proposed method in the number of classes outperforms conventional finite mixture models. | [
"sar images",
"normalized gamma process mixtures",
"image classification",
"nonparametric bayesian"
] | [
"P",
"P",
"P",
"M"
] |
-esx77z | Two Couple-Resolution Blocking Protocols on Adaptive Query Splitting for RFID Tag Identification | How to accelerate tag identification is an important issue in Radio Frequency Identification (RFID) systems. In some cases, the RFID reader repeatedly identifies the same tags since these tags always stay in its communication range. An anticollision protocol, called the adaptive query splitting protocol (AQS), was proposed to handle these cases. This protocol reserves information obtained from the last process of tag identification so that the reader can quickly identify these staying tags again. This paper proposes two blocking protocols, a couple-resolution blocking protocol (CRB) and an enhanced couple-resolution blocking protocol (ECRB), based on AQS. CRB and ECRB not only have the above-mentioned capability of AQS but also use the blocking technique, which prohibits unrecognized tags from colliding with staying tags, to reduce the number of collisions. Moreover, CRB adopts a couple-resolution technique to couple staying tags by simultaneously transmitting two ID prefixes from the reader, while ECRB allows the reader to send only one ID prefix to interrogate a couple of staying tags. Thus, they need only half the time to identify staying tags. We formally analyze the identification delay of CRB and ECRB in the worst and average cases. Our analytic and simulation results show that they clearly outperform AQS, and that ECRB needs fewer transmitted bits than CRB. | [
"couple-resolution",
"blocking protocol",
"rfid",
"tag identification",
"anticollision"
] | [
"P",
"P",
"P",
"P",
"P"
] |
1DFTkbU | an active measurement system for shared environments | Testbeds composed of end hosts deployed across the Internet enable researchers to simultaneously conduct a wide variety of experiments. Active measurement studies of Internet path properties that require precisely crafted probe streams can be problematic in these environments. The reason is that load on the host systems from concurrently executing experiments (as is typical in PlanetLab) can significantly alter probe stream timings. In this paper we measure and characterize how packet streams from our local PlanetLab nodes are affected by experimental concurrency. We find that the effects can be extreme. We then set up a simple PlanetLab deployment in a laboratory testbed to evaluate these effects in a controlled fashion. We find that even relatively low load levels can cause serious problems in probe streams. Based on these results, we develop a novel system called MAD that can operate as a Linux kernel module or as a stand-alone daemon to support real-time scheduling of probe streams. MAD coordinates probe packet emission for all active measurement experiments on a node. We demonstrate the capabilities of MAD, showing that it performs effectively even under very high levels of multiplexing and host system load. | [
"active measurement",
"mad"
] | [
"P",
"P"
] |
1qNLmVb | Policy-based inconsistency management in relational databases | We define inconsistency management policies (IMPs) for real-world applications. We show how IMPs relate to belief revision postulates, CQA, and relational algebra operators. We present several approaches to efficiently implement an IMP-based framework. | [
"inconsistency management",
"relational databases"
] | [
"P",
"P"
] |
1PFewvU | A new delay-dependent stability criterion for linear neutral systems with norm-bounded uncertainties in all system matrices | This paper deals with the problem of robust stability for a class of uncertain linear neutral systems. The uncertainties under consideration are of norm-bounded type and appear in all system matrices. A new delay-dependent stability criterion is obtained and formulated in the form of linear matrix inequalities (LMIs). Neither model transformation nor a bounding technique for cross terms is involved in the derivation of the stability criterion. Numerical examples show that the results obtained in this paper significantly improve the estimate of the stability limit over some existing results in the literature. | [
"stability",
"neutral systems",
"uncertainty",
"linear matrix inequality",
"linear systems",
"time delay"
] | [
"P",
"P",
"P",
"P",
"R",
"U"
] |
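Criteria of this kind are checked numerically as LMI feasibility problems. As a much-simplified stand-in (the delay-free Lyapunov inequality rather than the paper's delay-dependent criterion), the CVXPY sketch below shows the machinery: declare a symmetric matrix variable, impose definiteness constraints, and ask a solver for feasibility.

```python
# Much-simplified stand-in: verify the delay-free Lyapunov LMI
# A'P + PA < 0, P > 0 with CVXPY; the paper's delay-dependent criterion
# adds more matrix variables but is checked the same way.
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 0.5],
              [0.1, -1.5]])                # illustrative stable test matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
M = P @ A                                  # P symmetric => A'P + PA = M.T + M
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), M + M.T << -eps * np.eye(n)])
prob.solve()
print(prob.status)                         # "optimal" => the LMI is feasible
```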
38hSv-T | NEUTRALIZATION: NEW INSIGHTS INTO THE PROBLEM OF EMPLOYEE INFORMATION SYSTEMS SECURITY POLICY VIOLATIONS | Employees' failure to comply with information systems security policies is a major concern for information technology security managers. In efforts to understand this problem, IS security researchers have traditionally viewed violations of IS security policies through the lens of deterrence theory. In this article, we show that neutralization theory, a theory prominent in criminology but not yet applied in the context of IS, provides a compelling explanation for IS security policy violations and offers new insight into how employees rationalize this behavior. In doing so, we propose a theoretical model in which the effects of neutralization techniques are tested alongside those of sanctions described by deterrence theory. Our empirical results highlight neutralization as an important factor to take into account with regard to developing and implementing organizational security policies and practices. | [
"is security",
"is security policies",
"deterrence theory",
"neutralization theory",
"compliance"
] | [
"P",
"P",
"P",
"P",
"U"
] |
vDvGAYv | Simplifying complex environments using incremental textured depth meshes | We present an incremental algorithm to compute image-based simplifications of a large environment. We use an optimization-based approach to generate samples based on scene visibility, and from each viewpoint create textured depth meshes (TDMs) using sampled range panoramas of the environment. The optimization function minimizes artifacts such as skins and cracks in the reconstruction. We also present an encoding scheme for multiple TDMs that exploits spatial coherence among different viewpoints. The resulting simplifications, incremental textured depth meshes (ITDMs), reduce preprocessing, storage, rendering costs and visible artifacts. Our algorithm has been applied to large, complex synthetic environments comprising millions of primitives. It is able to render them at 20-40 frames per second on a PC with little loss in visual fidelity. | [
"simplification",
"textured depth meshes",
"interactive display",
"spatial encoding",
"walkthrough"
] | [
"P",
"P",
"U",
"R",
"U"
] |
2VJ67BS | A Neural Approach to the Underdetermined-Order Recursive Least-Squares Adaptive Filtering | The incorporation of the neural architectures in adaptive filtering applications has been addressed in detail. In particular, the Underdetermined-Order Recursive Least-Squares (URLS) algorithm, which lies between the well-known Normalized Least Mean Square and Recursive Least Squares algorithms, is reformulated via a neural architecture. The response of the neural network is seen to be identical to that of the algorithmic approach. Together with the advantage of simple circuit realization, this neural network avoids the drawbacks of digital computation such as error propagation and matrix inversion, which is ill-conditioned in most cases. It is numerically attractive because the quadratic optimization problem performs an implicit matrix inversion. Also, the neural network offers the flexibility of easy alteration of the prediction order of the URLS algorithm which may be crucial in some applications. It is rather difficult to achieve in the digital implementation, as one would have to use Levinson recursions. The neural network can easily be integrated into a digital system through appropriate digital-to-analog and analog-to-digital converters. | [
"adaptive filtering",
"neural networks",
"underdetermined recursive least squares",
"analog adaptive filter"
] | [
"P",
"P",
"M",
"M"
] |
-TefUZu | Bottleneck flows in unit capacity networks | The bottleneck network flow problem (BNFP) is a generalization of several well-studied bottleneck problems such as the bottleneck transportation problem (BTP), bottleneck assignment problem (BAP), bottleneck path problem (BPP), and so on. The BNFP can easily be solved as a sequence of O(log n) maximum flow problems on almost-unit-capacity networks. We observe that this algorithm runs in O(min{m^(3/2), n^(2/3) m} log n) time by showing that the maximum flow problem on an almost-unit-capacity graph can be solved in O(min{m^(3/2), n^(2/3) m}) time. We then propose a faster algorithm that solves the unit capacity BNFP in O(min{m (n log n)^(2/3), m^(3/2) sqrt(log n)}) time, an improvement by a factor of at least (log n)^(1/3). For dense graphs, the improvement is by a factor of sqrt(log n). On unit capacity simple graphs, we show that the BNFP can be solved in O(m sqrt(n log n)) time, an improvement by a factor of sqrt(log n). As a consequence we have an O(m sqrt(n log n)) algorithm for the BTP with unit arc capacities. | [
"unit capacity",
"network flows",
"algorithms",
"graphs",
"combinatorial problems",
"minimum cost flow"
] | [
"P",
"P",
"P",
"P",
"M",
"M"
] |
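The "sequence of O(log n) maximum flow problems" has a compact generic form: binary-search the bottleneck value over the sorted arc weights, testing each candidate with a max-flow feasibility check. A sketch with networkx follows; the arc data are illustrative, and the demand is assumed feasible on the full network.

```python
# Generic form of the O(log n)-max-flow scheme: binary-search the
# bottleneck value over sorted arc weights, testing candidates with a
# max-flow computation (networkx).
import networkx as nx

def bottleneck_value(edges, s, t, demand):
    weights = sorted({w for _, _, w in edges})
    lo, hi = 0, len(weights) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        g = nx.DiGraph()
        g.add_edges_from((u, v, {"capacity": 1})          # unit capacities
                         for u, v, w in edges if w <= weights[mid])
        if s in g and t in g and nx.maximum_flow_value(g, s, t) >= demand:
            hi = mid                                      # candidate suffices
        else:
            lo = mid + 1                                  # need heavier arcs
    return weights[lo]

edges = [("s", "a", 3), ("a", "t", 5), ("s", "b", 2), ("b", "t", 4)]
print(bottleneck_value(edges, "s", "t", demand=2))        # -> 5
```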
51&uFbi | Taylor's decomposition on four points for solving third-order linear time-varying systems | In the present paper, the use of three-step difference schemes generated by Taylor's decomposition on four points for the numerical solutions of third-order time-varying linear dynamical systems is presented. The method is illustrated for the numerical analysis of an up-converter used in communication systems. | [
"taylor's decomposition on four points",
"three-step difference schemes",
"third-order differential equation",
"approximation order",
"periodically time-varying systems"
] | [
"P",
"P",
"M",
"U",
"M"
] |
BYi6VKN | BEM formulation for von Kármán plates | This work deals with nonlinear geometric plates in the context of von Kármán's theory. The formulation is written such that only the boundary in-plane displacement and deflection integral equations for boundary collocations are required. At internal points, only out-of-plane rotation, curvature and in-plane internal force representations are used. Thus, only integral representations of these values are derived. The nonlinear system of equations is derived by approximating all densities in the domain integrals as single values, which therefore reduces the computational effort needed to evaluate the domain value influences. Hyper-singular equations are avoided by approximating the domain values using only internal nodes. The solution is obtained using a Newton scheme for which a consistent tangent operator was derived. | [
"bending plates",
"geometrical nonlinearities"
] | [
"M",
"R"
] |
4hBiuik | On X-Variable Filling and Flipping for Capture-Power Reduction in Linear Decompressor-Based Test Compression Environment | Excessive test power consumption and growing test data volume are both serious concerns for the semiconductor industry. Various low-power X-filling techniques and test data compression schemes were developed accordingly to address the above problems. These methods, however, often exploit the very same "don't-care" bits in the test cubes to achieve different objectives and hence may contradict each other. In this paper, we propose novel techniques to reduce scan capture power in linear decompressor-based test compression environment, by employing algorithmic solutions to fill and flip X-variables supplied to the linear decompressor. Experimental results on benchmark circuits demonstrate that our proposed techniques significantly outperform existing solutions. | [
"capture-power reduction",
"linear decompressor-based test compression",
"x-filling"
] | [
"P",
"P",
"P"
] |
-WwsP1R | WWW-based access to object-oriented clinical databases: the KHOSPAD project | KHOSPAD is a project aiming at improving the quality of the process of patient care concerning general practitioner-patient-hospital relationships, using current information and networking technologies. The studied application field is a cardiology division, with a hemodynamic laboratory and the population of PTCA patients. Data related to PTCA patients are managed by ARCADIA, an object-oriented database management system developed for the considered clinical setting. We defined a remotely accessible view of the ARCADIA medical record, suitable for general practitioners (GPs) caring for patients after PTCA during the follow-up period. Using a PC, a modem and the Internet, an authorized GP can remotely consult the medical records of his PTCA patients. The main features of the application are related to the management and display of complex data, specifically characterized by multimedia and temporal features, based on an object-oriented temporal data model. | [
"object-oriented clinical databases",
"internet",
"temporal databases",
"www",
"java",
"software architecture",
"temporal data visualization"
] | [
"P",
"P",
"R",
"U",
"U",
"U",
"M"
] |
1UCndJH | Fuzzy R-subgroups with thresholds of near-rings and implication operators | Using the belongs-to relation (∈) and the quasi-coincidence-with relation (q) between fuzzy points and fuzzy sets, the concept of an (α, β)-fuzzy R-subgroup of a near-ring, where α, β are any two of {∈, q, ∈∧q, ∈∨q} with α ≠ ∈∧q, is introduced and related properties are investigated. We also introduce the notion of a fuzzy R-subgroup with thresholds, which is a generalization of an ordinary fuzzy R-subgroup and of an (∈, ∈∨q)-fuzzy R-subgroup. Finally, we give the definition of an implication-based fuzzy R-subgroup. | [
"fuzzy r-subgroup",
"near-ring",
"fuzzy point",
"fuzzy set",
"(epsilon, epsilon boolean or q)-fuzzy r-subgroup",
"level set"
] | [
"P",
"P",
"P",
"P",
"R",
"M"
] |
36ZhQYY | A time accurate pseudo-wavelet scheme for two-dimensional turbulence | In this paper, we propose a wavelet-Taylor-Galerkin method for solving the two-dimensional Navier-Stokes equations. The discretization in time is performed before the spatial discretization by introducing a second-order generalization of the standard time-stepping schemes with the help of a Taylor series expansion in the time step. Wavelet-Taylor-Galerkin schemes taking advantage of the ability of wavelet bases to compress both functions and operators are presented. Results for two-dimensional turbulence are shown. | [
"wavelets",
"turbulence",
"navier-stokes equations",
"taylor-galerkin method"
] | [
"P",
"P",
"P",
"M"
] |
1hq4:n5 | Audio-augmented paper for therapy and educational intervention for children with autistic spectrum disorder | Physical tokens are artifacts which sustain cooperation between the children and therapists. Therapists anchor and control children's attention through physical tokens. The environment gives therapists control over the flow of the therapeutic activity. The environment also provides a good means to stimulate fun and consequently to help sustain children's attention in listening tasks. | [
"audio-augmented paper",
"autism spectrum disorder",
"social competence",
"social story",
"interaction design",
"tangible user interface"
] | [
"P",
"M",
"U",
"U",
"U",
"U"
] |
43ScaLu | TREATING EPILEPSY VIA ADAPTIVE NEUROSTIMULATION: A REINFORCEMENT LEARNING APPROACH | This paper presents a new methodology for automatically learning an optimal neurostimulation strategy for the treatment of epilepsy. The technical challenge is to automatically modulate neurostimulation parameters, as a function of the observed EEG signal, so as to minimize the frequency and duration of seizures. The methodology leverages recent techniques from the machine learning literature, in particular the reinforcement learning paradigm, to formalize this optimization problem. We present an algorithm which is able to automatically learn an adaptive neurostimulation strategy directly from labeled training data acquired from animal brain tissues. Our results suggest that this methodology can be used to automatically find a stimulation strategy which effectively reduces the incidence of seizures, while also minimizing the amount of stimulation applied. This work highlights the crucial role that modern machine learning techniques can play in the optimization of treatment strategies for patients with chronic disorders such as epilepsy. | [
"epilepsy",
"neurostimulation",
"reinforcement learning"
] | [
"P",
"P",
"P"
] |
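The formulation maps naturally onto tabular Q-learning: states are EEG-derived labels, actions are stimulation settings, and the reward penalizes both seizures and applied stimulation. The update below is a generic stand-in for the paper's algorithm, with all names and constants invented for illustration.

```python
# Generic tabular Q-learning stand-in for the paper's algorithm;
# states, actions, rewards and constants are invented placeholders.
import random
from collections import defaultdict

Q = defaultdict(float)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
ACTIONS = ["no_stim", "low_freq", "high_freq"]

def reward(seizure_observed, action):
    # penalize seizures heavily, stimulation mildly
    return -10.0 * seizure_observed - 0.1 * (action != "no_stim")

def choose(state):
    if random.random() < EPS:                      # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, r, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```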
2mzk&XX | Load-Balanced Parallel Streamline Generation on Large Scale Vector Fields | Because of the ever increasing size of output data from scientific simulations, supercomputers are increasingly relied upon to generate visualizations. One use of supercomputers is to generate field lines from large scale flow fields. When generating field lines in parallel, the vector field is generally decomposed into blocks, which are then assigned to processors. Since various regions of the vector field can have different flow complexity, processors will require varying amounts of computation time to trace their particles, causing load imbalance, and thus limiting the performance speedup. To achieve load-balanced streamline generation, we propose a workload-aware partitioning algorithm to decompose the vector field into partitions with near equal workloads. Since actual workloads are unknown beforehand, we propose a workload estimation algorithm to predict the workload in the local vector field. A graph-based representation of the vector field is employed to generate these estimates. Once the workloads have been estimated, our partitioning algorithm is hierarchically applied to distribute the workload to all partitions. We examine the performance of our workload estimation and workload-aware partitioning algorithm in several timings studies, which demonstrates that by employing these methods, better scalability can be achieved with little overhead. | [
"streamlines",
"flow visualization",
"parallel processing",
"3d vector field visualization"
] | [
"P",
"R",
"M",
"M"
] |
34s&TkZ | An integrated research tool for X-ray imaging simulation | This paper presents a software simulation package of the entire X-ray projection radiography process including beam generation, absorber structure and composition, irradiation set up, radiation transport through the absorbing medium, image formation and dose calculation. Phantoms are created as composite objects from geometrical or voxelized primitives and can be subjected to simulated irradiation process. The acquired projection images represent the two-dimensional spatial distribution of the energy absorbed in the detector and are formed at any geometry, taking into account energy spectrum, beam geometry and detector response. This software tool is the evolution of a previously presented system, with new functionalities, user interface and an expanded range of applications. This has been achieved mainly by the use of combinatorial geometry for phantom design and the implementation of a Monte Carlo code for the simulation of the radiation interaction at the absorber and the detector. | [
"imaging",
"simulation",
"projection radiography",
"monte carlo"
] | [
"P",
"P",
"P",
"P"
] |
53qMrWp | Requirements and solutions to software encapsulation and engineering in next generation manufacturing systems: OOONEIDA approach | This paper addresses the solutions enabling agile development, deployment and reconfiguration of software-intensive automation systems both in discrete manufacturing and process technologies. As the key enabler for reaching the required level of flexibility of such systems, the paper discusses the issues of encapsulation, integration and re-use of the automation intellectual property (IP). The goals can be fulfilled by the use of a vendor-independent concept of a reusable portable and scalable software module (function block), as well as by a vendor-independent automation device model. This paper also discusses the requirements of the methodology for the application of such modules in the time- and cost-effective specification, design, validation, realization and deployment of intelligent mechatronic components in distributed industrial automation and control systems. A new global initiative OOONEIDA is presented, that targets these goals through the development of the automation object concept based on the recognized industrial standards IEC61131, IEC61499, IEC61804 and unified modelling language (UML); and through the creation of the technological infrastructure for a new, open-knowledge economy for automation components and automated industrial products. In particular, a web-based repository for standardized automation solutions will be developed to serve as an electronic-commerce facility in industrial automation businesses. | [
"industrial automation",
"intelligent manufacturing systems"
] | [
"P",
"R"
] |
2TuCMNK | Robust camera pose and scene structure analysis for service robotics | Successful path planning and object manipulation in service robotics applications rely both on a good estimation of the robot's position and orientation (pose) in the environment, as well as on a reliable understanding of the visualized scene. In this paper a robust real-time camera pose and a scene structure estimation system is proposed. First, the pose of the camera is estimated through the analysis of the so-called tracks. The tracks include key features from the imaged scene and geometric constraints which are used to solve the pose estimation problem. Second, based on the calculated pose of the camera, i.e. robot, the scene is analyzed via a robust depth segmentation and object classification approach. In order to reliably segment the object's depth, a feedback control technique at an image processing level has been used with the purpose of improving the robustness of the robotic vision system with respect to external influences, such as cluttered scenes and variable illumination conditions. The control strategy detailed in this paper is based on the traditional open-loop mathematical model of the depth estimation process. In order to control a robotic system, the obtained visual information is classified into objects of interest and obstacles. The proposed scene analysis architecture is evaluated through experimental results within a robotic collision avoidance system. | [
"robustness",
"feedback control",
"robot vision systems",
"stereo vision",
"3d reconstruction"
] | [
"P",
"P",
"P",
"M",
"U"
] |
CD7kS-L | NML, a schematic extension of F. Esteva and L. Godo's logic MTL | A schematic extension NML of F. Esteva and L. Godo's logic MTL is introduced in this paper. Based on a new left-continuous but discontinuous t-norm, which was proposed by S. Jenei and can be regarded as a kind of distorted nilpotent minimum, the semantics of NML is interpreted and the standard completeness theorem of NML is proved. We discover that the maximum and the minimum are definable from the negation and implication in NML and NM, which also leads to a modification of the NM axiom system. | [
"non-classical logics",
"left-continuous t-norm",
"mtl system",
"nm system",
"lukasiewicz system",
"nml system"
] | [
"M",
"M",
"R",
"R",
"M",
"R"
] |
2sy1F37 | verifying safety properties of concurrent java programs using 3-valued logic | We provide a parametric framework for verifying safety properties of concurrent Java programs. The framework combines thread-scheduling information with information about the shape of the heap. This leads to error-detection algorithms that are more precise than existing techniques. The framework also provides the most precise shape-analysis algorithm for concurrent programs. In contrast to existing verification techniques, we do not put a bound on the number of allocated objects. The framework even produces interesting results when analyzing Java programs with an unbounded number of threads. The framework is applied to successfully verify the following properties of a concurrent program: concurrent manipulation of a linked-list-based ADT preserves the ADT datatype invariant [19]; the program does not perform inconsistent updates due to interference; the program does not reach a deadlock; the program does not produce run-time errors due to illegal thread interactions. We also find bugs in erroneous versions of such implementations. A prototype of our framework has been implemented. | [
"verification",
"concurrency",
"program",
"logic",
"parametric",
"thread",
"informal",
"shape",
"errors",
"algorithm",
"precise",
"concurrent program",
"object",
"manipulation",
"invariance",
"update",
"interference",
"deadlock",
"interaction",
"bugs",
"version",
"implementation",
"prototype",
" framework ",
"scheduling",
"timing",
"error detection",
"shape analysis"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"M",
"U",
"U",
"M",
"M"
] |
3JamDgg | Automatic discovery of theorems in elementary geometry | We present here a further development of the well-known approach to automatic theorem proving in elementary geometry via algorithmic commutative algebra and algebraic geometry. Rather than confirming/refuting geometric statements (automatic proving) or finding geometric formulae holding among prescribed geometric magnitudes (automatic derivation), in this paper we consider (following Kapur and Mundy) the problem of dealing automatically with arbitrary geometric statements (i.e., theses that do not follow, in general, from the given hypotheses) aiming to find complementary hypotheses for the statements to become true. First we introduce some standard algebraic geometry notions in automatic proving, both for self-containment and in order to focus our own contribution. Then we present a rather successful but noncomplete method for automatic discovery that, roughly, proceeds adding the given conjectural thesis to the collection of hypotheses and then derives some special consequences from this new set of conditions. Several examples are discussed in detail. | [
"elementary geometry",
"automatic theorem proving",
"grobner basis"
] | [
"P",
"P",
"U"
] |
-WjaEJb | Using support vector machines with a novel hybrid feature selection method for diagnosis of erythemato-squamous diseases | In this paper, we developed a diagnosis model based on support vector machines (SVM) with a novel hybrid feature selection method to diagnose erythemato-squamous diseases. Our proposed hybrid feature selection method, named improved F-score and Sequential Forward Search (IFSFS), combines the advantages of filter and wrapper methods to select the optimal feature subset from the original feature set. In our IFSFS, we improved the original F-score from measuring the discrimination of two sets of real numbers to measuring the discrimination between more than two sets of real numbers. The improved F-score and Sequential Forward Search (SFS) are combined to find the optimal feature subset in the process of feature selection, where the improved F-score is the evaluation criterion of the filter method and SFS is the evaluation system of the wrapper method. The best parameters of the SVM kernel function are found using a grid search technique. Experiments have been conducted on different training-test partitions of the erythemato-squamous diseases dataset taken from the UCI (University of California Irvine) machine learning database. Our experimental results show that the proposed SVM-based model with IFSFS achieves 98.61% classification accuracy using 21 features. With these results, we conclude that our method is very promising compared with previously reported results. | [
"feature selection",
"erythemato-squamous diseases",
"support vector machines (svm)",
"sequential forward search (sfs)"
] | [
"P",
"P",
"P",
"P"
] |
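A sketch of the filter-plus-wrapper combination, assuming scikit-learn: rank features by a two-class F-score, then grow the subset greedily with a cross-validated SVM. The paper's improved F-score generalizes the numerator to more than two classes, and full SFS re-evaluates every remaining feature each round; both refinements are omitted here.

```python
# Filter + wrapper sketch: rank by two-class F-score, then a greedy
# forward pass scored with a cross-validated SVM.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def f_score(X, y):
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / (den + 1e-12)

def select_features(X, y, k):
    ranked = np.argsort(f_score(X, y))[::-1]       # filter step
    chosen, best = [], 0.0
    for f in ranked:                               # wrapper step
        trial = chosen + [int(f)]
        score = cross_val_score(SVC(), X[:, trial], y, cv=3).mean()
        if score > best:
            chosen, best = trial, score
        if len(chosen) == k:
            break
    return chosen, best
```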
-bAnAV8 | Domain-specific languages: From design to implementation application to video device drivers generation | Domain-Specific languages (DSL) have many potential advantages in terms of software engineering ranging from increased productivity to the application of formal methods. Although they have been used in practice for decades, there has been little study of methodology or implementation tools for the DSL approach. In this paper, we present our DSL approach and its application to a realistic domain: the generation of video display device drivers. The presentation focuses on the validation of our proposed framework for domain-specific languages, from design to implementation. The framework leads to a flexible design and structure, and provides automatic generation of efficient implementations of DSL programs. Additionally, we describe an example of a complete DSL for video display adaptors and the benefits of the DSL approach for this application. This demonstrates some of the generally claimed benefits of using DSLs: increased productivity, higher-level abstraction, and easier verification. This DSL has been fully implemented with our approach and is available. Compose project URL: http://www.irisa.fr/compose/gal. | [
"domain-specific language",
"device drivers",
"gal",
"video cards",
"partial evaluation"
] | [
"P",
"P",
"U",
"M",
"U"
] |
4JuKy9y | mapping visual notations to mof compliant models with qvt relations | Model-centric methodologies rely on the definition of domain-specific modeling languages to enable the creation of domain-specific models. With MOF, the OMG adopted a standard which provides the essential constructs for the definition of semantic language constructs (abstract syntax). However, there are no specifications on how to define the notations (concrete syntax) for abstract syntax elements. Usually, the concrete syntax of MOF compliant languages is described informally. We propose to define MOF-based metamodels for abstract syntax and concrete syntax and to connect them by model transformations specified with QVT Relations in a flexible, declarative way. Using a QVT-based transformation engine, one can easily implement a Model View Controller architecture by integrating modeling tools and metadata repositories. | [
"qvt relations",
"model transformation",
"visual languages",
"domain specific languages",
"ocl"
] | [
"P",
"P",
"R",
"M",
"U"
] |
AqbKo9w | Financial early warning system model and data mining application for risk detection | One of the biggest problems of SMEs is their tendency toward financial distress, owing to an insufficient financial background. In this study, an early warning system (EWS) model based on data mining for financial risk detection is presented. The CHAID algorithm has been used for the development of the EWS. The developed EWS can serve as a tailor-made financial advisor in the decision-making processes of firms whose owners have an inadequate financial background, thanks to its automated nature. Besides, an application of the model was implemented, covering 7853 SMEs, based on Turkish Central Bank (TCB) 2007 data. By using the EWS model, 31 risk profiles, 15 risk indicators, 2 early warning signals, and 4 financial road maps have been determined for financial risk mitigation. | [
"early warning systems",
"data mining",
"smes",
"financial distress",
"financial risk",
"chaid"
] | [
"P",
"P",
"P",
"P",
"P",
"P"
] |
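CHAID has no scikit-learn implementation, so the sketch below loudly substitutes a CART decision tree just to show the shape of a rule-based EWS: financial ratios in, a distress warning out, with readable split rules. All features, values and labels are invented placeholders.

```python
# CHAID stand-in: scikit-learn has no CHAID, so a CART tree illustrates
# the shape of a rule-based early warning model. All data are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

X = np.array([[0.2, 1.8], [0.9, 0.6], [0.3, 2.5],
              [0.8, 0.9], [0.1, 3.0], [0.7, 1.1]])   # [equity_ratio, debt_ratio]
y = np.array([1, 0, 1, 0, 1, 0])                     # 1 = distress warning

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["equity_ratio", "debt_ratio"]))
```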
aK-6cye | A new wavelet algorithm to enhance and detect microcalcifications | We have proposed a new thresholding technique applied to wavelet coefficients for mammogram enhancement. We have utilized Shannon entropy to find the best threshold t in the wavelet domain. We have utilized Tsallis entropy to find the best threshold t in the wavelet domain. The proposed technique achieves a better FROC performance, with 96.5% true positives and 0.36 false positives. | [
"shannon entropy",
"tsallis entropy",
"wavelet transform",
"otsu",
"microcalcifications and mammograms"
] | [
"P",
"P",
"M",
"U",
"R"
] |
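A hedged sketch of entropy-guided threshold selection in the wavelet domain, assuming PyWavelets: for each candidate t, score the below/above-t split of detail-coefficient magnitudes with Shannon entropy and keep the best t. The two-class split criterion is a deliberate simplification of the paper's method, and the Tsallis variant would only swap the entropy formula.

```python
# Simplified entropy-guided threshold search on wavelet detail
# coefficients (PyWavelets); parameters are illustrative.
import numpy as np
import pywt

def shannon(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def best_threshold(coeffs, candidates):
    mags = np.abs(coeffs).ravel()
    scores = [shannon(np.array([(mags <= t).mean(), (mags > t).mean()]))
              for t in candidates]
    return candidates[int(np.argmax(scores))]      # maximize split entropy

image = np.random.rand(64, 64)                     # stand-in for a mammogram
_, (lh, hl, hh) = pywt.dwt2(image, "db2")
t = best_threshold(hh, np.linspace(0.01, 0.5, 50))
enhanced_hh = np.where(np.abs(hh) > t, hh * 2.0, 0.0)   # boost detail above t
```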