Explicitly correlated coupled-cluster singles and doubles method based on complete diagrammatic equations. The explicitly correlated coupled-cluster singles and doubles method (CCSD-R12) and related methods (its linearized approximation, CCSD(R12), and the explicitly correlated second-order Møller-Plesset perturbation method) have been implemented in efficient computer codes that take point-group symmetry into account. The implementation has been largely automated by the computerized symbolic algebra SMITH, which can handle the complex index permutation symmetry of intermediate tensors that occur in the explicitly correlated methods. Unlike prior implementations that invoke the standard approximation or the generalized or extended Brillouin condition, our CCSD-R12 implementation is based on the nontruncated formalisms [T. Shiozaki et al., Phys. Chem. Chem. Phys. 10, 3358 (2008)] in which every diagrammatic term that arises from the modified Ansatz 2 is evaluated either analytically or by resolution-of-the-identity insertion with the complementary auxiliary basis set. The CCSD-R12 correlation energies presented here for selected systems using the Slater-type correlation function can therefore serve as benchmarks for rigorous assessment of other approximate CC-R12 methods. Two recently introduced methods, CCSD(R12) and CCSD(2)(R12), are shown to be remarkably accurate approximations to CCSD-R12.
{ "pile_set_name": "PubMed Abstracts" }
Q: A non-constant polynomial with odd integer coefficients and of even degree has no rational root? Let $f(x)$ be a non-constant polynomial in $\mathbb Z[x]$ with odd integer coefficients and even degree; then is it true that $f$ has no rational root? A: Let $f(x) = a_{2n}x^{2n} + \cdots + a_1x + a_0$ be a polynomial with odd integer coefficients. By the rational root test, any rational root of $f(x)$ is of the form $c/d$ (in lowest terms) where $c$ divides $a_0$ and $d$ divides $a_{2n}$. Since $a_0$ and $a_{2n}$ are odd, so are $c$ and $d$. We have \begin{equation*} \begin{aligned} &\mathrel{\phantom{=}} a_{2n}\left(\frac{c}{d}\right)^{2n} + a_{2n-1}\left(\frac{c}{d}\right)^{2n-1} + \cdots + a_1\left(\frac{c}{d}\right) + a_0 \\ &=\frac{a_{2n}c^{2n} + a_{2n-1}c^{2n-1}d + \cdots + a_1cd^{2n-1} + a_0d^{2n}}{d^{2n}} \\ &=0, \end{aligned} \end{equation*} hence $$a_{2n}c^{2n} + a_{2n-1}c^{2n-1}d + \cdots + a_1cd^{2n-1} + a_0d^{2n} = 0.$$ Now since each $a_i$ and $c$ and $d$ are odd, so is each of the $2n+1$ integers in the above sum. But a sum of an odd number of odd integers is itself odd, hence nonzero, which is a contradiction.
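The argument can be checked numerically: enumerate the candidate rational roots given by the rational root test and evaluate the polynomial with exact arithmetic. The polynomial $3x^2 + x + 1$ below is an illustrative choice (even degree, all coefficients odd), not one from the question.

```python
from fractions import Fraction

def rational_root_candidates(coeffs):
    """coeffs = [a_0, a_1, ..., a_n]; candidates are ±c/d with c | a_0, d | a_n."""
    a0, an = abs(coeffs[0]), abs(coeffs[-1])
    divisors = lambda n: [k for k in range(1, n + 1) if n % k == 0]
    return {Fraction(s * c, d)
            for c in divisors(a0) for d in divisors(an) for s in (1, -1)}

def evaluate(coeffs, x):
    """Horner evaluation of sum_i a_i x^i with exact rationals."""
    result = Fraction(0)
    for a in reversed(coeffs):
        result = result * x + a
    return result

# f(x) = 3x^2 + x + 1: even degree, all coefficients odd
coeffs = [1, 1, 3]
# No candidate is a root, as the parity argument predicts
assert all(evaluate(coeffs, r) != 0 for r in rational_root_candidates(coeffs))
```

Every candidate evaluates to an odd-numerator fraction over an odd denominator, so none can be zero, matching the proof.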
{ "pile_set_name": "StackExchange" }
Q: How to manipulate a smoke simulation to look like a realistic candle flame I'm trying to figure out how to adjust my flame. It's a calm flame with the Noise Method's Strength set to zero and all Vorticity settings set to zero. My flame currently tapers flat like a cone, somewhat rounded and wavy at the tip. I'd like it to go straight up for about two thirds and then taper into a point, but not a flat taper; I'd like the taper to be a bit rounded. What settings do I need to play with to get this effect? The effect I'm looking for would be like a candle or lighter flame. Also, I'd like to know what settings to play with to change the curve of the color transitions in the flame. Currently I can only either have the two colors blend in a flat gradient, or get a hard transition in which the top color curves like an arc downward into the next color. I'd like to reverse that curvature upward, as in a real candle flame, where all the arc curvatures in the color transitions point upward. A: You can control the shape of the flames quite well by shaping the flow object. Here I've used a transparent cone as the flow object. I added some loop cuts to its sides to be able to make it slightly rounded, but I don't think this shape is exactly what you want. You will have to model the shape to fit your needs. The flow object's material: The domain material: The flame texture: The colour ramp is simply the default one from Quick Smoke, with just changed alpha levels. The domain settings: The flow object settings: And the result: If you use high transparency on the flames, the flow object may become visible although it's fully transparent. This is because you see where the flames are emitted from. To get around this, simply hide the flow object inside the lighter and make the lighter a collision object, to prevent the flames from seeping through its faces.
Depending on whether you want to move the lighter around or not, you will need to change the Collision type. Animated = manual animation, Rigid = automatic animation by rigid body physics, Static = not moving. And an example .blend file As I said, you will probably need to model the shape further, and also tweak the settings to your liking. This is simply meant to be a pointer in the right direction. Furthermore, you have your flames rising far too fast, and they reach the upper boundary of the domain object before they come together into a singular point. If I were you, I'd delete the entire sim, keeping only the lighter object, and start over with the flow and domain objects.
{ "pile_set_name": "StackExchange" }
142nd meridian east

The 142nd meridian east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Pacific Ocean, Australasia, the Indian Ocean, the Southern Ocean, and Antarctica to the South Pole.

The 142nd meridian east is the estimated location of the boundary between Spain and Portugal as per the Treaty of Zaragoza, signed on 22 April 1529. Consequently, at Possession Island (142°24'E), just before sunset on Wednesday 22 August 1770, Captain Cook declared the coast to be British territory in the name of King George III. The coast to the west was already Dutch territory.

The 142nd meridian east forms a great circle with the 38th meridian west.

From Pole to Pole

Starting at the North Pole and heading south to the South Pole, the 142nd meridian east passes through:

{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sakha Republic — Kotelny Island, New Siberian Islands
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | East Siberian Sea
| style="background:#b0e0e6;" | Sannikov Strait
|-
|
! scope="row" |
| Sakha Republic — Great Lyakhovsky Island, New Siberian Islands
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | East Siberian Sea
| style="background:#b0e0e6;" | Laptev Strait
|- valign="top"
|
! scope="row" |
| Sakha Republic Khabarovsk Krai — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Sea of Okhotsk
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sakhalin Oblast — island of Sakhalin
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Strait of Tartary
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sakhalin Oblast — island of Sakhalin
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Strait of Tartary
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sakhalin Oblast — island of Sakhalin
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Strait of Tartary
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sakhalin Oblast — island of Sakhalin
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | La Pérouse Strait
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Hokkaidō Prefecture — island of Hokkaidō
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Iwate Prefecture — island of Honshū
|- valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" | Passing just west of the island of Chichi-jima, Tokyo Prefecture, (at ) Passing just west of the island of Haha-jima, Tokyo Prefecture, (at )
|-
|
! scope="row" |
|
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arafura Sea
| style="background:#b0e0e6;" | Passing just west of the Torres Strait Islands, (at )
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Gulf of Carpentaria
| style="background:#b0e0e6;" |
|- valign="top"
|
! scope="row" |
| Queensland New South Wales — from Victoria — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Indian Ocean
| style="background:#b0e0e6;" | Australian authorities consider this to be part of the Southern Ocean
|-
|
! scope="row" |
| Victoria — Lady Julia Percy Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Indian Ocean
| style="background:#b0e0e6;" | Australian authorities consider this to be part of the Southern Ocean
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Southern Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" | Antarctica
| Adélie Land, claimed by
|-
|}

See also

141st meridian east
143rd meridian east

References
{ "pile_set_name": "Wikipedia (en)" }
<!DOCTYPE html> <html> <head> <title>Issue 6282: CSS panel fails on interpreting @page</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/> <link href="../../_common/testcase.css" type="text/css" rel="stylesheet"/> <style type="text/css"> #teststyle1 { background-color: green; } #teststyle2 { background-color: red; } @page { margin: 0.5cm; } </style> </head> <body> <header> <h1><a href="http://code.google.com/p/fbug/issues/detail?id=6282">Issue 6282</a>: CSS panel fails on interpreting @page</h1> </header> <div> <section id="description"> <h3>Steps to reproduce</h3> <ol> <li>Open Firebug</li> <li>Switch to the <em>CSS</em> panel</li> <li>In the Location Menu switch to <em>issue6282.html</em></li> </ol> <h3>Expected result</h3> <ul> <li>The <em>CSS</em> panel should show the contents of <em>issue6282.html</em>.</li> </ul> </section> <footer>Sebastian Zartner, [email protected]</footer> </div> </body> </html>
{ "pile_set_name": "Github" }
Tinnitus Related Conditions Tinnitus is associated with a range of comorbid conditions, including vestibular disorders, audiological problems, and behavioral health issues. Tinnitus is a symptom of a wide range of underlying health issues. It is also a condition that often exists comorbidly (concurrently, at the same time) with other health maladies. Below is a list of the health issues most frequently associated with tinnitus, and most often reported as comorbid conditions by tinnitus patients. The causal relationship between tinnitus and each comorbid condition is variable and complex. In some instances the comorbid condition is itself the primary cause of tinnitus. This is certainly true of hearing loss and Ménière’s Disease, in which tinnitus is one of several symptoms caused by the parent disorder. In some situations tinnitus may exacerbate the comorbid condition, as is the case with hyperacusis. And, in other instances, tinnitus and the comorbid condition have shared causality. This appears to be the case with behavioral health issues, which can be the product of burdensome tinnitus, but also a cause of it. The following health conditions are commonly associated with tinnitus: Hearing Loss Hearing loss is the primary catalyst for tinnitus symptoms; it is common for patients to experience both conditions simultaneously. Estimates of the prevalence of comorbid hearing loss and tinnitus vary widely. One large research project found that 56% of all tinnitus patients reported some hearing loss. In a 2014 survey of ATA’s membership, 39% of respondents said they experienced hearing loss. However, many researchers and clinicians believe that subjective tinnitus cannot exist without some prior loss of hearing — even if such loss is not discernible by the patient. If true, the prevalence of hearing loss in conjunction with tinnitus is severely underreported.
Ménière’s Disease Sometimes called endolymphatic hydrops, Ménière’s Disease is a vestibular disorder of the inner ear that can affect hearing and balance. Patients with Ménière’s often experience bouts of mild-to-severe vertigo, along with sporadic tinnitus. It is estimated that approximately 0.2% of the U.S. population (615,000 individuals) has Ménière’s.1 3% of ATA’s membership reported being diagnosed with the condition. Hyperacusis Hyperacusis is an abnormal, extreme sensitivity to noise, including ordinary environmental sounds presented at a normal volume. Patients with hyperacusis feel physical pain (as opposed to emotional annoyance) when exposed to sound. Estimates of the prevalence of hyperacusis range from 7.7% to 15% of the population. 12% of ATA’s members report having hyperacusis symptoms. Misophonia Also known as selective sound sensitivity, misophonia is an abnormal negative emotional reaction to specific sounds. Patients with misophonia feel extreme anger, disgust, or fear toward select noises. They may also have similar reactions to particular visual stimuli. The prevalence of misophonia at the population level is unknown, but it is estimated that 4-5% of tinnitus patients experience some form of the condition. Less than 1% of ATA’s members self-identified as having misophonia. Phonophobia Phonophobia is a fearful emotional reaction specific to loud sounds. The prevalence of phonophobia, both within the general population and the tinnitus population, is unknown. Depression and Anxiety Psychiatric issues can be both a contributing factor to burdensome tinnitus and a consequence of it. Tinnitus symptoms often generate feelings of despair and anxiety in many patients. Current estimates suggest that 48-78% of patients with severe tinnitus also experience depression, anxiety, or some other behavioral disorder. 13% of ATA’s membership self-identified as having been diagnosed with a mental health issue.
At the same time, pre-existing behavioral conditions may make it more likely that a patient will experience tinnitus as a burdensome condition. For example, one large population study posits that people with generalized anxiety disorder are nearly 7 times more likely to experience chronic, burdensome tinnitus. Other Vestibular Conditions The vestibular system, which manages balance and spatial orientation, is closely connected with the auditory system, which controls hearing functions. Several structures in the inner ear play key roles in both sensory systems. As such, damage to the auditory system (as evidenced by tinnitus) is often mirrored by a correlated vestibular condition.
{ "pile_set_name": "Pile-CC" }
Team Cleveland Flag $35 Now you can show your love for this city and your favorite teams with some of GV's best Cleveland design now available as a limited edition Flag. No need to change up your flag for each event. This Team Cleveland Flag represents the whole city! This Team Cleveland Flag measures at 62" X 36". This product and its graphic design is not endorsed or licensed in any way by any team, or organization.
{ "pile_set_name": "Pile-CC" }
NEW YORK—In Vancouver, they had bracelets. They looked like watches, simple ones, all black except for a blue line along the wristband and the red numbers on the face. They monitored wrist movement, and therefore how much players slept, when they slept — on the bus? On the plane? — and how well. The Canucks all wore them. That helped, right? “Yeah, I couldn’t go out for a beer after the game,” says Willie Mitchell, the Los Angeles Kings defenceman who was in Vancouver when they experimented with sleep science. “They’re actually biofeedback devices on your wrist, so we sent one of the rookies back to the hotel and had him wear 25 of ’em.” If you are playing hockey right now, you are tired. The Los Angeles Kings became the first team in the modern era to win three seven-game series and make the Stanley Cup final; the New York Rangers, slackers that they are, played one fewer on the way here. No team had ever gone the full seven in the first two rounds and won the Cup. This year, someone will. This may explain the sloppy work by the Kings in Games 1 and 2, and for that matter, some looser-than-normal play from the Rangers. Sure, the ice in L.A. was a mess. And Game 1 went to overtime, before double overtime in Game 2. It’s hard to get this far. “You can get IVs, those work best,” said Kings defenceman Drew Doughty before Game 3 of the Stanley Cup final. “And obviously just crushing waters and Gatorades and stuff like that. A lot of it is just getting rest, too. I’m the best couch-sitter in the world. I can sit there all day, so I make sure to do that a lot.” Mitchell tweeted out a picture of the team plane, where between seats he had set up a makeshift bed, full of white blankets and pillows. Apparently most Kings slept, a lot. As defenceman Alec Martinez put it, “We’re not that highbrow.
We can sleep on the floor.” “When you switch west coast to east coast, and you guys do that all the time too, you know that your sleep schedule kind of gets messed up,” says Mitchell. “So I’m a big believer in taking it when you can get it, because sometimes you get here and you can’t sleep at night. And if you can get two, three hours on the flight out east, you grab it then. And that was kind of the case. I’m sure the Rangers were doing the same thing.” The Rangers, though, are one of the two least-travelled teams in the league, along with the Islanders, while the Kings rank in the top five for mileage. That being said, at this time of year, there are those who eschew the notion that the way NHL players travel is too exhausting. “On a plane I’m sitting in a chair 36,000 feet in the air — I think Louis C.K. talks about that,” said Martinez. “I mean, if you’re complaining about that, you’ve got problems.” Comedian Louis C.K. did the famous bit about how we take air travel for granted, which included this passage: “People like to say there’s delays on flights. Delays? Really? New York to California in five hours. That used to take 30 years to do that and a bunch of you would die on the way there and have a baby. You’d be with a whole different group of people by the time you got there.” That being said, a significant group of NHL media took a Delta flight from LAX Sunday that was supposed to take off at 9 a.m., and after a soul-destroying voyage, the plane door wouldn’t open when they finally landed at JFK at 3 a.m. Courage. The Kings and Rangers haven’t faced that, yet. But the need to ramp up the emotional engine again and again, more than any team in history, takes its toll. Montreal had no emotion to start Game 1 against the Rangers after topping the Bruins in seven. The Kings were slow to start Game 1 in this series after Game 7 against Chicago, but Kings coach Darryl Sutter says they were better, scoreboard aside, in Game 2. 
It’s a tournament of the fittest, in every way. “I think the longer series go, the longer the playoffs go, (it’s about) courage, determination, extra effort,” Sutter said. “You’re never going to feel fresh. You’re never going to feel as good as you did in November. That’s the way it works. That’s for sure. They’re people.” Doughty put it another way, though: “The heart doesn’t get tired.” “I think there’s teams that can, and teams that can’t,” says Mitchell. “And I think the great teams are the ones that manage their game and find a way when you’re in that situation, and fortunately for us we’ve been able to do that over the course of the past few years. “Look at the great teams. You look at Detroit — when they had all their success, you’re telling me they had their A game every night? No. It’s those good teams that can find a way to win when you don’t have your A game every night, and that’s the difference-maker. We’ll look for our A game tonight, right?” You, and the other guys. Good luck.
{ "pile_set_name": "Pile-CC" }
1. Introduction {#sec1} =============== In today\'s fiercely competitive market, competition among similar enterprises has come to focus on new product development, which is a powerful guarantee for an enterprise that wants to enhance its core competitiveness and gain superiority. However, new product development is not only expensive; the cost of failed development is even higher. Successful new product development is not a purely technical problem; it must take various influencing factors into account. To improve the success rate, developers must comprehensively integrate these factors and then correctly evaluate and choose the better development. New product development is a dynamic process that includes raising, analyzing, and solving problems. The associated decision making, which is affected by a wide range of factors such as the product\'s complex structure, is semistructured or even nonstructured. With the maturation of information processing technology and software auxiliary tools, good decision making contributes not only to reducing costs and increasing efficiency, but also to improving business strategy \[[@B1]\]. Developers are beginning to make full use of software auxiliary tools to achieve new product development better and more easily. In recent years, many models for new product development have been proposed. However, these highlight influencing factors such as the product and the market and seldom consider customer needs. Only Chan et al. overcome this shortcoming, proposing a model that integrates three recognized influencing factors of new product development: product attributes specified by designers, customer requirements and satisfaction, and marketing competence. Using system dynamics, the process of a new product launching into the market is simulated, and the customer lifetime value (CLV) can be obtained.
Relative to previous research, Chan\'s model has incomparable superiority, whether in evaluation perspective and method or in the influencing factors considered. But judging from "the basic goal of software engineering" \[[@B2]\], namely, effectively developing software with systematic construction and engineered management, it is obvious that developing systematic software can better meet user needs. From this perspective, this study identifies three major deficiencies through further analysis of Chan\'s model. (1) To evaluate the effectiveness of each influencing factor, the experts\' workload is too heavy and the work time is too long. Meanwhile, the evaluation results carry obvious subjectivity deviations. (2) The effectiveness of each influencing factor for different levels of customers is not distinguished. (3) All levels of customers share the same transition probability, which is modeled as static although in reality it is dynamic. To overcome the above three deficiencies in Chan\'s model, this study proposes a CBR-based and MAHP-based customer value prediction model for new product development (C&M-CVPM), whose advantages are as follows. (1) Case-based reasoning (CBR) can reduce the experts\' evaluation workload and work time, which makes up for deficiency one. (2) Multiplicative analytic hierarchy process (MAHP) uses the actual effectiveness of each influencing factor rather than the overall average in the simulation, which makes up for deficiency two. (3) The dynamic customer transition probability is closer to reality, which makes up for deficiency three. This section has given the general background to the study. [Section 2](#sec2){ref-type="sec"} discusses the literature on new product development. The methodology for the proposed C&M-CVPM is presented in [Section 3](#sec3){ref-type="sec"}. [Section 4](#sec4){ref-type="sec"} introduces the model\'s framework and its three modules. A simulation experiment is conducted in [Section 5](#sec5){ref-type="sec"}.
Some concluding remarks are offered in [Section 6](#sec6){ref-type="sec"}. 2. Literature Review {#sec2} ==================== 2.1. The Present State of the Research {#sec2.1} -------------------------------------- In recent years, many conventional and mature models for new product development have been proposed. Xu et al. use fuzzy set and utility theory to evaluate the typical phases of a new product\'s whole lifecycle \[[@B3]\]; Besharati et al. measure new product development by the size of customer expected utility, that is, market demand based on customers\' preferences, designers\' experience preferences, and the uncertainty of product design characteristics \[[@B4]\]; Alexouda applies algorithmic and enumeration methods to new product development, assembling the optimal product from the perspective of customer utility maximization and in turn maximizing market share \[[@B5]\]; Matsatsinis and Siskos discuss market share maximization using historical data (economic or customer survey data) \[[@B6]\]. Hu and Bidanda utilize a Markov approach to build a sustainable green product lifecycle evolution system and then make decisions for new product development by maximizing the present discounted value over the whole product lifecycle \[[@B7]\]. The following three areas, which cover various influencing factors, have been identified as significant and requisite in new product development: (a) product attributes specified by designers \[[@B3], [@B8]--[@B10]\], (b) customer requirements and satisfaction \[[@B11]--[@B14]\], and (c) marketing competence \[[@B15]--[@B18]\]. However, none of the currently available models considers all of these areas concurrently. Only Chan and Ip \[[@B19]\] integrate the three recognized factors and take the time value of money into consideration, proposing a new, comprehensive model for new product development.
Within Chan\'s model, the involved influencing factors are classified into five parts: overall product attractiveness (OPA), overall customer satisfaction (OCS), word of mouth (WOM), marketing approach (MA), and remarketing approach (RA). Based on purchasing frequency, new product customers are divided into five levels: potential customers, first-time customers, regular customers, frequent customers, and loyal customers. While the system is running, the overall average effectiveness of each of the above five influencing factors is evaluated by experts or from historical data. Then, the customer purchasing model is established by system dynamics, and the customer lifetime value (CLV) can be predicted by a Markov model. Finally, this model can evaluate the new product development. 2.2. Analysis of the Research Status {#sec2.2} ------------------------------------ Relative to previous research, Chan\'s model has incomparable superiority, whether in evaluation perspective and method or in the influencing factors considered. But, in order to better meet user needs, further analysis of the evaluation methods used in Chan\'s model reveals three specific deficiencies that urgently need to be solved. *Deficiency (1).* The experts\' workload must be reduced by auxiliary software. To be specific, when Chan\'s model predicts the customer lifetime value of a new product, the effectiveness of parameters such as production cost is basically obtained through the concerted efforts of experts. However, there is no available data for a product that has not yet entered the market. Therefore, it is hardly feasible to investigate and evaluate those parameters one by one. This is because, firstly, doing so imposes a large workload on experts and takes a long time for each parameter. Secondly, system developers do not necessarily have adequate ability to accurately evaluate each parameter. In addition, the subjectivity of experts can easily lead to certain deviations.
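The CLV prediction step described above, with customer levels evolving under a Markov model and per-level margins discounted over time, can be sketched as follows. All numbers (the transition matrix, the margins, and the discount factor) are invented for illustration and are not taken from Chan's model:

```python
import numpy as np

# Levels: potential, first-time, regular, frequent, loyal (illustrative values).
# Row i gives the probability of moving from level i to each level next period.
P = np.array([
    [0.70, 0.30, 0.00, 0.00, 0.00],  # potential: stay 0.70, become first-time 0.30
    [0.40, 0.00, 0.60, 0.00, 0.00],  # first-time: lapse 0.40, become regular 0.60
    [0.20, 0.00, 0.30, 0.50, 0.00],  # regular: lapse, stay, or become frequent
    [0.10, 0.00, 0.00, 0.40, 0.50],  # frequent: lapse, stay, or become loyal
    [0.05, 0.00, 0.00, 0.00, 0.95],  # loyal: mostly retained
])
margin = np.array([0.0, 5.0, 12.0, 20.0, 30.0])  # expected profit per period per level
discount = 0.9                                    # per-period discount factor

def clv(P, margin, discount, periods=50):
    """Expected discounted value of one customer who starts as 'potential'."""
    state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # level distribution at t = 0
    total = 0.0
    for t in range(periods):
        total += (discount ** t) * (state @ margin)
        state = state @ P                          # advance one period
    return total

print(round(clv(P, margin, discount), 2))
```

Chan's static transition probabilities correspond to `P` being fixed; the dynamic variant criticized for its absence would let the entries of `P` vary within a range from period to period.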
*Deficiency (2).* To improve the accuracy of evaluation, it is necessary to distinguish the effectiveness of the influencing factors for different levels of customers. Chan assumes that different levels of customers experience the same effectiveness of a given influencing factor, equal to the overall average effectiveness across all levels of customers. However, in real life, high-frequency purchasing customers often have a relatively high recognition of the product during their purchasing. When the product\'s overall average effectiveness reaches a certain value, it may not attract the low-frequency purchasing customers, but it can attract the high-frequency ones. *Deficiency (3).* A dynamic customer transition probability is much more realistic. When simulating purchasing, Chan uses the original nonzero probability as the transition probability for all levels of customers. Namely, the level-*i* customers\' transition probability equals the number of retained customers at time (*i* − 1) divided by the number of customers at time (*i* − 2), and it is obvious that this probability is irrelevant to time. So, Chan assumes that different levels of customers have the same transition probability, treating as static what is in fact dynamic. However, the real system is filled with dynamics, complexity, and product periodicity. As time passes, each level of customers\' transition probability changes within a certain range. This study takes advantage of CBR and MAHP and introduces a dynamic transition probability. Based on the discussion above, C&M-CVPM is proposed. In response to the three deficiencies of Chan\'s model, this study puts forward three solutions, which are simultaneously its main contributions and innovations. *Solution (1)*. CBR in C&M-CVPM helps decision makers match the new product with the most similar preexisting products in the enterprise\'s historical data. The preexisting products\' parameters are then used for the new product.
If and only if historical data is lacking is the experts\' evaluation required. *Solution (2)*. MAHP in C&M-CVPM distinguishes the effectiveness of each influencing factor for different levels of customers by calculating the influencing weights from the experts\' preference matrices. *Solution (3)*. The dynamic customer transition probability in C&M-CVPM means that the probability varies within a certain range, so the model is more realistic. 3. Study Approach {#sec3} ================= 3.1. CBR (Case Based Reasoning) {#sec3.1} ------------------------------- ### 3.1.1. Overview of CBR {#sec3.1.1} CBR was first proposed by Roger Schank in 1982 \[[@B20]\] and has become an important machine-learning method in the field of artificial intelligence \[[@B21]\]. CBR obtains solutions to new problems by adjusting the solutions of old problems similar to the new ones \[[@B22]\]. Cognitive science is the logical origin of CBR\'s theoretical system \[[@B23]\]. Cognitive science suggests that humans pass perceived information on to their brains \[[@B24]\], which store and remember the information. If the brain later encounters a problem that shares the same or similar stored information, it can quickly supply that information to solve the problem. As is well known, this form of human cognition is a common way of solving problems in reality, and its development has greatly inspired and enlightened artificial intelligence experts and scholars. Based on the above way of human cognition, CBR uses the knowledge and experience accumulated in past cases to provide reference for similar cases. Using CBR to deal with a present case does not mean starting entirely from scratch; rather, past cases are matched with the present case and the existing methods are modified to make them more suitable for it.
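The case-matching just described, finding the stored case most similar to the present one, can be sketched as weighted nearest-neighbour retrieval over case attributes. The attribute vectors, the `production_cost` parameter, and the similarity function below are all hypothetical choices for illustration, not the paper's actual encoding:

```python
import math

# Illustrative case base: each past product has an attribute vector and
# the parameter values that were established for it.
case_base = [
    {"attrs": (0.8, 0.3, 0.5), "params": {"production_cost": 12.0}},
    {"attrs": (0.2, 0.9, 0.4), "params": {"production_cost": 20.0}},
    {"attrs": (0.7, 0.4, 0.6), "params": {"production_cost": 14.0}},
]

def similarity(a, b, weights=(1.0, 1.0, 1.0)):
    """Similarity in (0, 1]: 1 / (1 + weighted Euclidean distance)."""
    d = math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))
    return 1.0 / (1.0 + d)

def retrieve(case_base, new_attrs):
    """Retrieve step of the 4R cycle: return the most similar stored case."""
    return max(case_base, key=lambda c: similarity(c["attrs"], new_attrs))

best = retrieve(case_base, (0.78, 0.32, 0.52))
print(best["params"])  # parameters of the closest preexisting product
```

The retrieved parameters would then be reused (and, if needed, revised by experts) for the new product, and the confirmed case retained in the case base.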
Therefore, CBR is most appropriate for domains with imperfect knowledge but relatively rich experience. CBR differs from traditional reasoning methods in that it can reuse the methods that resolved past problems the same as or similar to the present one, whereas traditional reasoning requires complete domain knowledge. CBR is generally divided into four steps, usually called 4R \[[@B25]\]: retrieve, reuse, revise, and retain (see [Figure 1](#fig1){ref-type="fig"}). For a new case, CBR retrieves the same or most similar case from the case-base, reuses that case, and correspondingly revises the suggested solution to obtain a confirmed solution for the new case. Simultaneously, the amended case is retained as a new case in the case-base for subsequent use. ### 3.1.2. Scope of CBR {#sec3.1.2} Nowadays, CBR plays an important role in practical applications because of its advantages, such as online services applications \[[@B26]\], scheduling and process planning \[[@B27], [@B28]\], hydraulic mechanical design \[[@B29]\], architectural design \[[@B30]\], customer relationship management \[[@B31]\], troubleshooting \[[@B32], [@B33]\], design and implementation of knowledge management \[[@B34], [@B35]\], the prediction of information systems outsourcing success \[[@B36]\], and customer and market plans \[[@B37], [@B38]\]. CBR has solved many problems successfully. Analyzing and summarizing these findings, CBR is mainly applied in the following cases. The large amount of tacit knowledge stored in experts\' brains is not easily extracted; therefore, when it is not convenient to extract knowledge from vast areas quickly and easily, CBR can directly extract the valuable tacit knowledge from experts\' brains to solve problems. When rule-based reasoning is used, the amount of tacit rules and professional knowledge stored in experts\' brains is huge. 
So, it easily causes a combinatorial explosion between rules and fields, eventually leading to reasoning faults and low efficiency; CBR, however, can avoid such combinatorial explosion problems. When the domain knowledge and rules must be updated quickly, CBR\'s incremental learning keeps people from being overwhelmed by exponentially growing knowledge and information, and, to a certain extent, it can also reduce the problems caused by outdated or incomplete previous experience and knowledge. When facing incomplete past experience, knowledge, and rules, or problems that are difficult to model and structure, such as effective plans for suddenly occurring events, CBR can effectively avoid some of the problems caused by complex social management or economic systems. CBR can also be applied in the field of knowledge management, which attaches great importance to tacit knowledge; it is of far-reaching significance for effectively mining tacit knowledge and making it explicit. The knowledge or information carried in a CBR case is a relatively complete fragment, and it includes the tacit knowledge to be mined in knowledge management. ### 3.1.3. Application of CBR in This Study {#sec3.1.3} With aggravated global competition, in order to gain greater competitive advantages, enterprises have to develop new products alongside their present products \[[@B24]\]. Belecheanu et al. point out that CBR is more suitable than KBS (knowledge-based systems) for solving the complex and uncertain problems of new product development \[[@B40]\]. In fact, CBR is already widely used in product development; therefore, it is feasible to apply CBR in this study. 3.2. MAHP {#sec3.2} --------- MAHP (multiplicative analytic hierarchy process) is an improvement and extension of AHP (analytic hierarchy process), so it is necessary to give an overview of AHP first. ### 3.2.1. 
Overview of AHP {#sec3.2.1} In the 1970s, Saaty, a famous American operations research expert, proposed the analytic hierarchy process (AHP), which divides multicriteria and multiobjective decision making into objects, criteria, and projects. AHP performs quantitative and qualitative analysis on a hierarchy. It can deeply analyze the nature of complex decision problems, their factors, and their intrinsic relationships, use a small amount of quantitative information to mathematize the thinking process, and thus provide an easy way to handle complex decisions that mix qualitative and quantitative problems. AHP divides the relevant elements into goals, criteria, and programs and makes qualitative and quantitative analysis decisions on that basis \[[@B41]\]. AHP belongs to the subjective weighting methods: the decision makers compare pairwise the importance of multiple criteria according to their previous knowledge and experience and then determine the final weight of each criterion from the comparison matrix through the relevant mathematical model. ### 3.2.2. Overview of MAHP {#sec3.2.2} Although AHP has been widely used in multiobjective decision making, it still has many shortcomings. For example, an important step when using this method is to check whether the comparison matrix meets the consistency condition, that is, to revise the comparison matrix to ensure its consistency. However, inconsistency always exists in the real world, and this is the most fundamental and fatal flaw of AHP. In addition, the incompatibility, incomplete information, and rank reversal caused by the inconsistency of the judgment matrix are also flaws of AHP. To solve these problems, in the 1990s, Lootsma first proposed a method to improve the original AHP, known as the multiplicative analytic hierarchy process (MAHP) \[[@B42]\]. MAHP makes up for three deficiencies in AHP: weight synthesizing, order preserving, and weight scaling. 
MAHP can improve the determination of weights under the various criteria of decision making \[[@B43]\]. ### 3.2.3. Application of MAHP in This Study {#sec3.2.3} Therefore, this study uses MAHP in order to better determine the weights of the influencing factors for different levels of customers \[[@B1], [@B44]\]. 4. C&M-CVPM {#sec4} =========== 4.1. The System Framework of C&M-CVPM {#sec4.1} ------------------------------------- The main function of C&M-CVPM is to provide the customer value for new product development through changing-trend diagrams of the net customer lifetime value (NCLV) and the customers\' number. This function is implemented in three modules: the CBR, MAHP, and NCLV prediction modules. Each module runs with data or related technical support from the model server (MS), knowledge server (KS), database server (DBS), data warehouse server (DWS), and online analytical processing and data mining server (ODS) (see [Figure 2](#fig2){ref-type="fig"}). While C&M-CVPM is running, the CBR module works out the decision-set attributes of the preexisting product most similar to the new product and then inputs these parameters as the new product\'s corresponding parameters (*E* ~1~, *E* ~2~, *E* ~4~, and *E* ~5~) to the MAHP module. The decision-set attributes include each parameter and the development experience. The developer can adjust, reevaluate, and modify inappropriate attributes to gain more satisfying results. The development experience comprises the lessons learned from the preexisting product\'s development, marketing, or remarketing. In addition, if the matched similarity is lower than the threshold set by experts while the CBR module is running, C&M-CVPM will ask for experts\' assistance to provide the values of the parameters to be evaluated, through experts\' investigation or information provided by the relevant servers. 
C&M-CVPM uses an integrated construction on the network \[[@B46]\]; each server uses the C/S (client/server) mode, in which resources can be shared and services can be provided to other servers or different clients simultaneously. C&M-CVPM\'s users include the product developer, who is the actual operator of the system; the domain experts, who are responsible for establishing and maintaining the shared resources on all kinds of servers, such as the knowledge in KS and the models in MS; and the system administrators, who are duty-bound to maintain the whole system from the information technology perspective. The clients correspond to the integrated components of a traditional decision support system, the system for problem synthesis and interaction. Firstly, the new product development task is input into the clients, and then the relevant system control program is generated; at last, all types of servers are called through the network to evaluate the new product development. Meanwhile, the developer can also query the shared resources on all types of servers to meet their own needs. MS, KS, DBS, DWS, and ODS are responsible for the related data and technical support. The database server (DBS) extends a single user\'s database and database management system with network protocols, communication protocols, concurrency control, security mechanisms, and other server functions; it mainly stores all the source data of the enterprise\'s production, sales, research, and so on. DBS provides support for DWS. The data warehouse server (DWS) is composed of the warehouse management system (WMS), the data warehouse (DW), and analysis tools. To better centralize and simplify part of DWS\'s work, and to achieve its function more efficiently, the analysis tools here do not include online analytical processing and data mining. The data warehouse mainly stores the data extracted, transformed, and loaded from DBS. 
The data set is subject-oriented, integrated, stable, and time-variant, including the historical, current, and integrated data of all the enterprise\'s products. DWS offers support for ODS. The online analytical processing and data mining server (ODS) is independent of DWS. During its runtime, a large amount of data is extracted from DWS to ODS to build a temporary warehouse; then ODS can find deeper levels of assistant decision information. ODS is mainly responsible for updating and maintaining the preexisting products\' information and for finding the relationship between the loss rate of customers (LR) and overall customer satisfaction (OCS) and the relationship between word of mouth (WM) and OCS. ODS offers support for MS. The model server (MS), based on a single user\'s model base and model-base management system, adds network protocols, communication protocols, security mechanisms, and other server functions. MS mainly stores all the mathematical models and data processing models necessary for C&M-CVPM\'s three modules; the mathematical models include all the algorithms and equations, such as the similarity calculation equations in CBR, while the data processing models are used for selecting, sorting, and summarizing the data. The models are implemented during MS runtime; meanwhile, the data, information, and knowledge on other servers can also be called up as required. The knowledge server (KS) upgrades a single user\'s knowledge base management system, knowledge base, and inference engine and furthermore adds network protocols, communication protocols, concurrency control, security mechanisms, and so forth. KS is generally responsible for storing a large number of production rules and fact knowledge, such as the matched-similarity threshold in CBR. 4.2. 
The CBR Module {#sec4.2} ------------------- As discussed in the introduction, CBR in C&M-CVPM helps decision makers match out, from the enterprise\'s historical data, the preexisting product case most similar to the new product case \[[@B47]\]. The preexisting product\'s parameters can then be used for the new product. In Chan\'s model, these parameters can be acquired only by experts\' evaluation; they include CS (the cost of the goods), MC (the marketing cost), RC (the remarketing cost), PC (the number of potential customers), RP (the retail price of the product), *E* ~1~ (overall product attractiveness), *E* ~2~ (overall customer satisfaction), *E* ~4~ (marketing effectiveness), and *E* ~5~ (remarketing effectiveness). The key techniques in CBR are the indexation for case representation and the similarity evaluation \[[@B48]\], which can efficiently match the most similar preexisting case in the case-base with the new product. Therefore, from these two aspects, the section below illustrates how CBR helps C&M-CVPM obtain the above parameters. ### 4.2.1. Case Representation {#sec4.2.1} There are many approaches to case representation among the knowledge expression methods in artificial intelligence, and an overwhelming majority of cases in intelligent case-bases use the frame representation. In reality, different products can be identified by their structural features (including specific functions, appearance, quality, and implementation techniques), so the frame representation clearly suits a product\'s characteristics. Because the frame representation expresses the same thoughts and ideas as CBR, this study uses it to express the product case in the CBR module (see [Figure 3](#fig3){ref-type="fig"}). The dotted boxes represent the condition-set attributes, and the solid boxes represent the decision-set attributes (i.e., the parameters to be predicted). 
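As a concrete illustration of the frame representation just described, the following Python sketch stores a product case with condition-set and decision-set attributes and retrieves the most similar preexisting case. The attribute names, the simplified similarity measure, and the 0.7 threshold are illustrative assumptions only; they do not reproduce the FRAWO algorithm that the module actually adopts.

```python
from dataclasses import dataclass

@dataclass
class ProductCase:
    """Frame-style product case: condition-set attributes identify the
    product; decision-set attributes are the parameters to be reused
    (CS, MC, RC, PC, RP, E1, E2, E4, E5)."""
    conditions: dict
    decisions: dict

def similarity(case, target, weights):
    """Weighted attribute-by-attribute similarity in [0, 1]: numeric
    attributes are compared by a normalized distance, symbolic ones by
    exact match (a deliberate simplification, not FRAWO)."""
    total = 0.0
    for name, w in weights.items():
        a, b = case.conditions[name], target[name]
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            denom = max(abs(a), abs(b), 1e-9)   # guard against zero values
            total += w * (1.0 - abs(a - b) / denom)
        else:
            total += w * (1.0 if a == b else 0.0)
    return total / sum(weights.values())

def retrieve(case_base, target, weights, threshold=0.7):
    """Return the most similar preexisting case, or None when even the
    best match falls below the expert-set threshold (which triggers the
    experts'-evaluation fallback described in Section 4.1)."""
    best = max(case_base, key=lambda c: similarity(c, target, weights))
    return best if similarity(best, target, weights) >= threshold else None
```

When a match is found, its `decisions` dictionary plays the role of the decision-set attributes handed on to the MAHP module.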
CBR automatically works out the similarity of the condition-set attributes between the new and preexisting products and then outputs the decision-set attributes of the case with the maximum similarity as the corresponding parameters for the new product. ### 4.2.2. Similarity Evaluation {#sec4.2.2} There are many algorithms for evaluating the similarity of the product\'s condition-set attributes. Gu et al. \[[@B49]\] propose FRAWO, which has obvious advantages over the traditional approaches in retrieval efficiency and the quality of retrieval results. Making use of FRAWO, C&M-CVPM realizes the similarity evaluation between products. The specific process is as follows. Preexisting product cases are stored in the product case-base. Assuming that the product case-base has *m* cases, case (*i*)---*X* ~*i*~ has *n* attribute values *X* ~*i*1~, *X* ~*i*2~,..., *X* ~*in*~, and the values of the *n* condition-set attributes of the target new product case *G* are *G* ~1~, *G* ~2~,..., *G* ~*n*~; then the condition-set attributes matrix *D*(*X* ~*ij*~) can be obtained by ([1](#EEq1){ref-type="disp-formula"}). 
The average of each column of *D*(*X* ~*ij*~) is calculated by ([2](#EEq2){ref-type="disp-formula"}), with the intermediate variable being ([3](#EEq3){ref-type="disp-formula"}): $$\begin{matrix} {D\left( X_{ij} \right)} \\ {\quad = \begin{bmatrix} X \\ G \\ \end{bmatrix}} \\ {\quad = \begin{matrix} \begin{matrix} \overset{n{\,\,}\text{condition}\, - \,\text{set}{\,\,}\text{attributes}}{\overset{︷}{\begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1n} \\ X_{21} & X_{22} & \cdots & X_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ X_{m1} & X_{m2} & \cdots & X_{mn} \\ G_{1} & G_{2} & \cdots & G_{n} \\ \end{bmatrix}}} & \begin{matrix} {m{\,\,}\text{cases}{\,\,}\text{in}{\,\,}\text{case}{\,\,}\text{base}} \\ {  + \,\text{target}{\,\,}\text{case}{\,\,}G} \\ \end{matrix} \\ \end{matrix} \\ \end{matrix}} \\ \end{matrix}$$ $$\begin{matrix} {\overset{¯}{C_{j}} = \frac{\sum_{i = 1}^{m + 1}X_{ij}}{m + 1},} \\ \end{matrix}$$ $$\begin{matrix} {M_{ij} = \frac{X_{ij} - \overset{¯}{C_{j}}}{\overset{¯}{C_{j}}}.} \\ \end{matrix}$$ The normalized effectiveness matrix *D*(*X* ~*ij*~′) is expressed in ([4](#EEq4){ref-type="disp-formula"}), and *X* ~*ij*~′ is stated in ([5](#EEq5){ref-type="disp-formula"}): $$\begin{matrix} {D\left( X_{ij}^{\prime} \right) = \begin{bmatrix} X_{i}^{\prime} \\ G^{\prime} \\ \end{bmatrix},} \\ \end{matrix}$$ $$\begin{matrix} {X_{ij}^{\prime} = \frac{1 - \exp\left( - M_{ij} \right)}{1 + \exp\left( - M_{ij} \right)},\quad X_{ij}^{\prime} \in \left\lbrack {- 1,1} \right\rbrack.} \\ \end{matrix}$$ The similarity between the preexisting product case (*i*) and the target new product case is determined by ([6](#EEq6){ref-type="disp-formula"}):$$\begin{matrix} \begin{matrix} {\text{sim}\left( X_{i},G \right) = \text{sim}\left( X_{i}^{\prime},G^{\prime} \right) = 1 - \text{dis}\left( X_{i}^{\prime},G^{\prime} \right) = 1 - \sqrt{\sum\limits_{j = 1}^{n}{w_{j} \cdot d\left( X_{ij}^{\prime},G_{j}^{\prime} \right)}}} \\ {d\left( X_{ij}^{\prime},G_{j}^{\prime} \right) = 1 - \text{sim}\left( X_{ij}^{\prime},G_{j}^{\prime} \right)} \\ {\text{sim}\left( X_{ij}^{\prime},G_{j}^{\prime} \right) = \begin{cases} \begin{cases} {1,} & {X_{ij}^{\prime} = G_{j}^{\prime}} \\ {0,} & {X_{ij}^{\prime} \neq G_{j}^{\prime},} \\ \end{cases} & {\text{when}{\,\,}X_{ij}^{\prime}{\,\,}\text{and}{\,\,}G_{j}^{\prime}{\,\,}\text{are}{\,\,}\text{symbol}{\,\,}\text{properties}} \\ {\exp\left( \frac{- \left| {X_{ij}^{\prime} - G_{j}^{\prime}} \right|}{\max\left( i \right) - \min\left( i \right)} \right),} & {\text{when}{\,\,}X_{ij}^{\prime}{\,\,}\text{and}{\,\,}G_{j}^{\prime}{\,\,}\text{are}{\,\,}\text{crisp}{\,\,}\text{numeric}{\,\,}\text{properties}} \\ {\frac{a\left( X_{ij}^{\prime} \cap G_{j}^{\prime} \right)}{a\left( X_{ij}^{\prime} \right) + a\left( G_{j}^{\prime} \right) - a\left( X_{ij}^{\prime} \cap G_{j}^{\prime} \right)},} & {\text{when}{\,\,}X_{ij}^{\prime}{\,\,}\text{and}{\,\,}G_{j}^{\prime}{\,\,}\text{are}{\,\,}\text{fuzzy}{\,\,}\text{properties}.} \\ \end{cases}} \\ \end{matrix} \\ \end{matrix}$$ *a* stands for the area of the corresponding membership functions, and *a*(*X* ~*ij*~′∩*G* ~*j*~′) stands for the area of the intersection of the two fuzzy sets. *w* ~*j*~ is the matched weight of the similarity for the condition-set attributes, and its initial value is set by experts based on their experience. The strategy for adjusting the weights when the corresponding parameters are used for the new product is as follows: among those preexisting product cases whose similarity exceeds the threshold set by experts, if C&M-CVPM\'s predictive result is correct, the system increases the weights of the condition-set attributes that have the same values as the target new product case, and each increment is Δ*i*/*k* ~*c*~. *k* ~*c*~ stands for the number of times the preexisting product case has been properly matched, while Δ*i* is the adjustment range of the weights set by experts\' experience; of course, Δ*i* can also be adjusted according to actual needs. 4.3. 
MAHP Module {#sec4.3} ---------------- As mentioned earlier, MAHP in C&M-CVPM calculates the weight of each influencing factor for different levels of customers through the experts\' preference matrices, works out the gap between the effectiveness of the influencing factor and the overall average effectiveness, and finally obtains the effectiveness of the influencing factor for each level of customers. The specific procedure is as follows. The overall average effectiveness of the five influencing factors for the new product is denoted *E* ~1~ (overall product attractiveness, OPA), *E* ~2~ (overall customer satisfaction, OCS), *E* ~3~ (word of mouth, WOM), *E* ~4~ (marketing effectiveness, MA), and *E* ~5~ (remarketing effectiveness, RA). The weight of the influencing factor (*c*) for level (*i*) customers is expressed as *w* ~*ic*~, where *i* = 1,2,..., 5 represents potential, first-time, regular, frequent, and loyal customers and *c* = 1,2,..., 5 represents the five influencing factors OPA, OCS, WOM, MA, and RA. In total, *k* experts participate in the evaluation; according to [Table 1](#tab1){ref-type="table"}, the expert (*p*)  (*p* = 1, 2 ..., *k*) determines the influencing gap *δ* ~*ij*~ ^(*p*)^ between level (*i*) and level (*j*) customers for the influencing factor (*c*). The preference matrix of the expert (*p*) is expressed in ([7](#EEq9){ref-type="disp-formula"}), and the relationship between *δ* ~*ij*~ ^(*p*)^ and the weights *w* ~*ic*~ ^(*p*)^ and *w* ~*jc*~ ^(*p*)^ is stated in ([8](#EEq10){ref-type="disp-formula"}). The constraint is determined by ([9](#EEq11){ref-type="disp-formula"}). 
Consider $$\begin{matrix} {D_{c}^{(p)} = \left( \delta_{ij}^{(p)} \right)_{c},} \\ \end{matrix}$$ $$\begin{matrix} {\frac{w_{ic}^{(p)}}{w_{jc}^{(p)}} = e^{\ln\sqrt{2}\delta_{ij}^{(p)}},} \\ \end{matrix}$$ $$\begin{matrix} {{\prod\limits_{i = 1}^{5}w_{ic}^{(p)}} = 1.} \\ \end{matrix}$$ Using a logarithmic least squares estimation, the influencing weight from the expert (*p*) for level (*i*) customers can be predicted by ([10](#EEq12){ref-type="disp-formula"}). The influencing weight of all *k* experts for level (*i*) customers is expressed mathematically by ([11](#EEq13){ref-type="disp-formula"}). By normalization, the weight of the influencing factor (*c*) for level (*i*) customers is determined by ([12](#EEq14){ref-type="disp-formula"}). Consider $$\begin{matrix} {w_{ic}^{(p)} = \exp\left( {\frac{\ln\sqrt{2}}{5}{\sum\limits_{j = 1}^{5}\delta_{ij}^{(p)}}} \right),} \\ \end{matrix}$$ $$\begin{matrix} {w_{ic}^{\ast} = \exp\left( {\frac{\ln\sqrt{2}}{5k}{\sum\limits_{p = 1}^{k}{\,{\sum\limits_{j = 1}^{5}\delta_{ij}^{(p)}}}}} \right),} \\ \end{matrix}$$ $$\begin{matrix} {w_{ic} = \frac{w_{ic}^{\ast}}{\sum_{j = 1}^{5}w_{jc}^{\ast}},\quad i = 1,2,\ldots,5.} \\ \end{matrix}$$ In Chan\'s model, different levels of customers have the same effectiveness of the influencing factor, so every level\'s effectiveness of the influencing factor (*c*) is *E* ~*c*~/5. After dividing by MAHP, the allocated effectiveness of the influencing factor (*c*) for level (*i*) customers is *w* ~*ic*~ · *E* ~*c*~. Therefore, the gap between the effectiveness of the influencing factor (*c*) and the overall average for level (*i*) customers is stated in ([13](#EEq15){ref-type="disp-formula"}). If Δ*E* ~*ic*~ \< 0, the effectiveness does not reach the overall average; if Δ*E* ~*ic*~ \> 0, it exceeds the overall average; if Δ*E* ~*ic*~ = 0, it exactly equals the overall average. 
*D*(*E* ~*ic*~) stands for the effectiveness matrix of each influencing factor for the different levels of customers, and *E* ~*ic*~ is then obtained by ([14](#EEq16){ref-type="disp-formula"}): $$\begin{matrix} {\Delta E_{ic} = w_{ic} \cdot E_{c} - \frac{E_{c}}{5},} \\ \end{matrix}$$ $$\begin{matrix} {E_{ic} = E_{c} + \Delta E_{ic}.} \\ \end{matrix}$$ 4.4. The Predictive Module for NCLV {#sec4.4} ----------------------------------- The main function of this module is to predict the new product\'s customer value by simulating customer purchasing with the input parameters that are the outputs of the CBR and MAHP modules (see [Figure 4](#fig4){ref-type="fig"}). The parameters fed into the simulation include CS, MC, RC, PC, RP, and *D*(*E* ~*ic*~). The simulation of customer purchasing used in the predictive module for NCLV (net customer lifetime value) follows the same basic idea as Chan\'s system dynamics model, but, when CLV (customer lifetime value) is calculated by the Markov model, a randomly dynamic customers\' transition probability replaces the static probability. Correspondingly, the relationships between some nodes are adjusted appropriately for better logicality, understandability, and practicability. The following is the specific process of the simulation. Following Chan\'s study, NCLV is defined as the sum of the current lifetime values of all customers, where customer lifetime value refers to the present value of the future profit from a customer. As mentioned earlier, the new product\'s customers are divided into five levels: potential, first-time, regular, frequent, and loyal customers. In Chan\'s model, the second to fourth levels of customers are regarded as a whole, that is, active customers, and the customers\' increase is derived from the second level. 
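Before the simulation process is described further, the MAHP computation of the previous section (equations (10)-(14)) can be sketched in Python. This is a minimal illustration: the expert gap matrices are hypothetical inputs, and the function names are our own.

```python
import math

LN_SQRT2 = math.log(math.sqrt(2))  # the ln(sqrt(2)) scaling constant of (10)-(11)

def mahp_weights(delta_matrices):
    """delta_matrices: one 5x5 gap matrix per expert for a single
    influencing factor c, where delta[i][j] is the expert's judged gap
    between customer levels i and j.  Returns the normalized weights
    w_ic, following equations (10)-(12)."""
    k = len(delta_matrices)
    # w*_ic = exp( (ln sqrt(2) / 5k) * sum over experts p and levels j of delta_ij^(p) )
    w_star = [math.exp(LN_SQRT2 / (5 * k)
                       * sum(d[i][j] for d in delta_matrices for j in range(5)))
              for i in range(5)]
    s = sum(w_star)
    return [w / s for w in w_star]          # normalization, equation (12)

def allocated_effectiveness(weights, e_c):
    """E_ic = E_c + (w_ic * E_c - E_c / 5), equations (13)-(14)."""
    return [e_c + (w * e_c - e_c / 5) for w in weights]
```

With all-zero gap matrices the weights reduce to 1/5 each, so every level keeps the overall average effectiveness, exactly as in Chan's model.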
In Chan\'s formula, the customers\' retention, which refers to the increment from the second level to the third, fourth, and fifth levels, uses the numbers of the third, fourth, and fifth levels from the input port. But apparently, for one specific level, the increase in purchasing frequency and the quantity transiting to the next level must be determined by the level itself rather than by the next level. Hence, on one hand, this study follows the calculation equations in Chan\'s model to randomly obtain the "retention rate" (RR) and the "acquisition rate" (AR) (see ([16](#EEq19){ref-type="disp-formula"})); on the other hand, the relationships and customer numbers are adjusted appropriately. Obviously, the influencing factors are not the same for the different customer levels. The potential customers of the product make purchases on the basis of OPA, ME, and WOM. The first-time and regular customers are influenced by OPA, WOM, RE, and OCS. The frequent customers are affected by OPA, RE, and OCS. However, the loyal customers trust only the enterprise\'s certain product because of their complete allegiance and will not look for alternatives, so in general the loyal customers are not affected by any factors (see [Figure 5](#fig5){ref-type="fig"}). In C&M-CVPM, the customer purchasing behavior for the new product is divided into three statuses. The first status is that the customers will no longer buy this new product because of dissatisfaction; these lost customers become potential customers again. The second status is that the customers, who are very satisfied with the new product, will increase their purchasing frequency next time and thus move into the next level. The third status is that the customers, who neither like nor dislike the new product, will keep their former purchasing frequency. 
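The level structure and the three statuses just described can be sketched as a simple discounted simulation loop. This Python sketch is only illustrative: the acquisition-rate bounds and loss rates are hypothetical stand-ins for the values that the model derives from *D*(*E* ~*ic*~) and the mined OCS relationships (formalized in equations (15)-(18) below), and the function signature is our own.

```python
import random

def simulate_nclv(pc, rp, cs, mc, rc, ar_ranges, lr, discount, periods, seed=0):
    """Minimal sketch of the level-transition simulation.
    pc: number of potential customers; rp/cs/mc/rc: retail price, cost of
    goods, marketing cost, remarketing cost; ar_ranges: four (low, high)
    bounds for the randomly dynamic acquisition rates of levels 1-4;
    lr: loss rates of levels 2-5.  Returns (NCLV, final level counts)."""
    rng = random.Random(seed)
    # indices 0..4 = potential, first-time, regular, frequent, loyal
    c = [float(pc), 0.0, 0.0, 0.0, 0.0]
    nclv = 0.0
    for t in range(periods):
        # discounted profit of this period (cf. equation (18))
        nclv += (c[1] * (rp - mc - cs)
                 + (c[2] + c[3]) * (rp - rc - cs)
                 + c[4] * (rp - cs)) / (1 + discount) ** t
        ar = [rng.uniform(lo, hi) for lo, hi in ar_ranges]  # dynamic AR, cf. (16)
        a = [c[s] * ar[s] for s in range(4)]                # upward transitions, cf. (15)
        losses = [c[s] * lr[s - 1] for s in range(1, 5)]    # lost customers, cf. (15)
        # level updates, cf. equation (17); lost customers return to potential
        c[0] += sum(losses) - a[0]
        c[1] += a[0] - losses[0] - a[1]
        c[2] += a[1] - losses[1] - a[2]
        c[3] += a[2] - losses[2] - a[3]
        c[4] += a[3] - losses[3]
    return nclv, c
```

Because every lost customer flows back to the potential level, the total number of customers is conserved across periods, mirroring the closed loop of Figure 6.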
Most notably, the fact that the lost customers are not satisfied with this new product does not mean that they will never buy it in the future; they may be affected by various other factors and purchase this new product again. The relationships between the different levels are shown in [Figure 6](#fig6){ref-type="fig"}. *C* ~*s*,*t*~ denotes the number of customers in level (*s*) at time (*t*); AR~*s*,*t*~ means the acquisition rate of the customers who transit to the next level in level (*s*) at time (*t*); *A* ~*s*,*t*~ indicates the number of customers transiting in level (*s*) at time (*t*); LR~*s*,*t*~ implies the loss rate of customers in level (*s*) at time (*t*); *L* ~*s*,*t*~ describes the number of lost customers in level (*s*) at time (*t*). The relationships between *C* ~*s*,*t*~, AR~*s*,*t*~, *A* ~*s*,*t*~, LR~*s*,*t*~, and *L* ~*s*,*t*~ are expressed in ([15](#EEq17){ref-type="disp-formula"}): $$\begin{matrix} {C_{s,t} \times \text{A}\text{R}_{s,t} = A_{s,t},} \\ {C_{s,t} \times \text{L}\text{R}_{s,t} = L_{s,t}.} \\ \end{matrix}$$ The relationships between the five influencing factors, the "acquisition rate" (AR), and the "loss rate" (LR) can be observed from [Figure 7](#fig7){ref-type="fig"}. It is easy to see that word of mouth (*E* ~3~) and LR are mainly decided by overall customer satisfaction (*E* ~2~); therefore, the relationships between *E* ~2~ and *E* ~3~ and between *E* ~2~ and LR can be acquired through data mining of historical products. So, once the value of *E* ~2~ is worked out by CBR similarity matching, the values of *E* ~3~ and LR can be generated automatically. The relationships between the effectiveness matrix *D*(*E* ~*ic*~) and the acquisition rate (AR) are expressed in ([16](#EEq19){ref-type="disp-formula"}): $$\begin{matrix} {\text{A}\text{R}_{1,t} = \text{random}\left\{ {\min\left( {E_{11},E_{13},E_{14}} \right),} \right.} \\ {\left. 
{\max\left( {E_{11},E_{13},E_{14}} \right)} \right\},} \\ {\text{A}\text{R}_{2,t} = \text{random}\left\{ {\min\left( E_{21},E_{22},E_{23},E_{25} \right),} \right.} \\ {\quad\quad\quad\quad\left. {{\max}\left( {E_{21},E_{22},E_{23},E_{25}} \right)} \right\},} \\ {\text{A}\text{R}_{3,t} = \text{random}\left\{ {\min\left( E_{31},E_{32},E_{33},E_{35} \right),} \right.} \\ {\left. {{\max}\left( {E_{31},E_{32},E_{33},E_{35}} \right)} \right\},} \\ {\text{A}\text{R}_{4,t} = \text{random}\left\{ {\min\left( E_{41},E_{42},E_{43},E_{45} \right),} \right.} \\ {\left. {{\max}\left( {E_{41},E_{42},E_{43},E_{45}} \right)} \right\}.} \\ \end{matrix}$$ Two assumptions are made. First, customers buy the new product under a noncontractual relationship; that is to say, purchasing is entirely based on the customers\' own preferences. Second, all customers make purchasing decisions at the same time points, and the time interval (*d* ~*t*~) is constant. Hence, *C* ~*s*,*t*~, the number of customers in level (*s*) at time (*t*), is determined by ([17](#EEq23){ref-type="disp-formula"}): $$\begin{matrix} {C_{1,t} = \begin{cases} {C_{1,t - dt} + \left( {L_{2,t - dt} + L_{3,t - dt} + L_{4,t - dt}} \right.} & \\ {\quad\quad\quad\quad\left. 
{+ \, L_{5,t - dt} - A_{1,t - dt}} \right),} & {t \neq 0} \\ {\text{PC},} & {t = 0} \\ \end{cases}} \\ {C_{2,t} = \begin{cases} {C_{2,t - dt} + \left( {A_{1,t - dt} - L_{2,t - dt} - A_{2,t - dt}} \right),} & {t \neq 0} \\ {0,} & {t = 0} \\ \end{cases}} \\ {C_{3,t} = \begin{cases} {C_{3,t - dt} + \left( {A_{2,t - dt} - L_{3,t - dt} - A_{3,t - dt}} \right),} & {t \neq 0} \\ {0,} & {t = 0} \\ \end{cases}} \\ {C_{4,t} = \begin{cases} {C_{4,t - dt} + \left( {A_{3,t - dt} - L_{4,t - dt} - A_{4,t - dt}} \right),} & {t \neq 0} \\ {0,} & {t = 0} \\ \end{cases}} \\ {C_{5,t} = \begin{cases} {C_{5,t - dt} + \left( {A_{4,t - dt} - L_{5,t - dt}} \right),} & {t \neq 0} \\ {0,} & {t = 0.} \\ \end{cases}} \\ \end{matrix}$$ Let *D* be the current discount rate; with the randomly dynamic transition probability, ([18](#EEq28){ref-type="disp-formula"}) is applied to calculate the NCLV: $$\begin{matrix} {\text{NCLV} = \left( {\sum\limits_{t = 0}^{\infty}\left\lbrack {C_{2,t} \times \left( {\text{RP} - \text{MC} - \text{CS}} \right)} \right.} \right.} \\ {\quad\quad   + \left( {C_{3,t} + C_{4,t}} \right) \times \left( {\text{RP} - \text{RC} - \text{CS}} \right)} \\ {\quad\quad  \left. \left. {+ C_{5,t} \times \left( {\text{RP} - \text{CS}} \right)} \right\rbrack \right) \times \left( \left( {1 + D} \right)^{t} \right)^{- 1}.} \\ \end{matrix}$$ 4.5. C&M-CVPM\'s Characteristics and Application Scope {#sec4.5} ------------------------------------------------------ ### 4.5.1. C&M-CVPM\'s Characteristics {#sec4.5.1} To overcome the above three shortcomings in Chan\'s model, this study proposes a CBR-based and MAHP-based customer value prediction model for new product development (C&M-CVPM). The following are the new model\'s characteristics. C&M-CVPM evaluates the quality of new product development by using the net customer lifetime value (NCLV), which is in line with the customer-oriented market tendency. 
Therefore, it has an advantage over former studies in its prediction perspective. C&M-CVPM considers the time value of the profits brought by customers; hence, it works out the net present customer value of the new product development. C&M-CVPM simulates the customer purchasing behavior while fully considering all three recognized influencing factors for new product development: product, customer, and market. C&M-CVPM reduces the experts\' workload by using CBR, removing the unnecessary evaluation of each influencing factor for each product. C&M-CVPM shortens the time needed to predict the customer value for new product development, simultaneously improving the evaluation\'s agility. C&M-CVPM, by using MAHP, distinguishes the effectiveness of each influencing factor for different levels of customers. C&M-CVPM uses a dynamic customers\' transition probability to simulate customer purchasing behavior, which makes the model more realistic and authentic. C&M-CVPM, by using CBR, avoids repeating mistakes in the development process as far as possible, and it can also promote the retention and inheritance of the enterprise\'s tacit knowledge to prevent the possible losses caused when mature technical staff leave the company. C&M-CVPM can also provide references for new product development, which is conducive to the maintenance and improvement of the development level. C&M-CVPM takes the marketing cost into account when evaluating the customer value, which is also helpful for knowledge exchange and sharing between the enterprise\'s design and marketing departments. C&M-CVPM expands the scope of system users so that an ordinary, nonexpert developer can also predict the new product\'s customer value to better satisfy user needs. C&M-CVPM, by using CBR, can solve new problems in the manner of human thinking with the aid of formerly accumulated knowledge and experience. 
Basically, it can eliminate the retrieval bottlenecks caused by the growth of knowledge and information indexes; furthermore, previous cases are relatively easy to collect.

### 4.5.2. The Application Scope of C&M-CVPM {#sec4.5.2}

Although C&M-CVPM has many advantages over former studies, there are still some limitations to its application scope. There is a premise for applying CBR to C&M-CVPM, namely that similar problems have similar solutions. So, the new product development to be predicted must have a certain similarity with the cases in the case-base; for example, food products in the case-base cannot be used to predict commodity products. It may not make sense even when the case to be predicted and the case in the case-base belong to the same type: for example, predicting an electric toothbrush from an ordinary toothbrush may not give ideal results, and vice versa. The new product customers are divided into five levels: potential, first-time, regular, frequent, and loyal customers. However, this division may not apply to some products, such as household appliances, which the customer will not buy again in the short term; for such products the customers might instead be divided into two or three levels. At the same time, products with five levels of customers usually belong to FMCG (fast-moving consumer goods). If the case-base has insufficient product cases, the predictive results may not be accurate enough, so C&M-CVPM is more suitable for enterprises that already have many mature products. From the foregoing, sufficient mature cases provide a strong guarantee of good predictive results. Consequently, building the original case-base is a huge project that consumes considerable human, material, and financial resources, and an enterprise's decision on whether to use C&M-CVPM depends on its own situation. CBR, based on incremental learning, has an automatic learning mechanism built on formerly accumulated knowledge and experience.
Seen another way, however, because of this unconditionally passive learning, retaining every case in the case-base may easily lead to an unmanageable state; correspondingly, the system will run with low efficiency and the retrieval cost will rise. In conclusion, CBR should try active learning in the face of a large number of samples. The Markov model applied to calculate customer lifetime value in C&M-CVPM is the most flexible model in the present studies, but it still has some limitations; for example, each interval between transactions of customers with the enterprise is assumed to be the same and fixed.

5. Simulation Experiment and Analysis {#sec5}
=====================================

In this section, to verify C&M-CVPM's validity, a simulation experiment is conducted to predict the NCLV for a regular toothbrush. The computer auxiliary software is MATLAB R2010a. First, this section describes the reasons why a regular toothbrush is chosen as the simulation case. Second, it gives the experiment's general design, including the purposes, preparations, and procedures. Finally, it analyzes the experiment's results in detail.

5.1. The Simulation Case {#sec5.1}
------------------------

To make the experiment more intuitive and understandable, and in accordance with C&M-CVPM's assumptions, a new regular toothbrush is chosen as the experimental case. The reasons are as follows: (1) to some extent, the toothbrush has become a necessity of life; (2) the toothbrush belongs to FMCG (fast-moving consumer goods) because of its regular replacement; (3) there are two main types on the market, regular and electric toothbrushes, and the regular toothbrush has a relatively larger market share owing to price and people's habits.

5.2. The Experiment's General Design {#sec5.2}
-------------------------------------

### 5.2.1. The Experiment's Purpose {#sec5.2.1}

C&M-CVPM's first major advantage lies in reducing the experts' workload during new product development.
With the introduction of CBR, the parameters of the running system are decided automatically from the most similar historical product's data provided by the system, rather than depending entirely on experts' evaluation. Only when the similarity is less than the experts' preset threshold do the experts perform the evaluation. This advantage, combined with the latter two advantages, improves the accuracy of the system. Thus, verifying the validity of C&M-CVPM amounts to verifying the reduction of the experts' workload and the increase in system accuracy compared with Chan's model. On the other hand, the accuracy of the system proposed by Chan is strongly subjective because it relies heavily on the experts' ability; in reality, the accuracy of the running system may differ when one expert predicts different products or when different experts predict one product. Therefore, one or a few experiments are far from enough and cannot distinguish good development from bad. By contrast, when running C&M-CVPM, the system accuracy can be verified well by comparing the predictive value and the actual value of the product's customer value. The purpose of this experiment is to test (a) that C&M-CVPM can reduce the experts' workload when predicting the new product customer value for a regular toothbrush and (b) that the deviation of the predictive value from the actual value lies within an acceptable range.

### 5.2.2. The Experiment's Preparations {#sec5.2.2}

*Experiment Tool*. This study uses MATLAB, which has strong processing power and ease of use, as the computer auxiliary tool to verify C&M-CVPM's validity.

*Case Representation*. According to the structural features of the regular toothbrush, the frame representation can be expressed as in [Figure 8](#fig8){ref-type="fig"}.

*Data Preparation*.
CBR is an automatic machine-learning technology; it can obtain the overall averages of a new product's influencing factors and simulate the other running models' parameters by analyzing the inherent rules in historical products' data, so the experimental results seldom rely on the authenticity of the product data. According to the product's structural features and their inherent relationships, this experiment randomly generates 70 toothbrush cases as the classic case-base. To enhance the results' trustworthiness, the study uses a strict validity design: besides randomly generating the experimental data, the data for automatic learning are separated from the data for testing C&M-CVPM's running accuracy. Of the 70 classic cases, 60 serve as the case-base (the enterprise's preexisting products) and the other 10 serve as test cases (new products). With 10 cases per batch, the case-base is constructed by inputting six batches, and each batch undergoes one validity verification after its input is complete. For the sake of objectivity, this study invites three professors and two doctors from the Business School of Central South University as the expert team. They compare the importance of each influencing factor for all levels of customers and provide 25 importance comparison tables, from which the effectiveness matrix *D*(*E* ~*ic*~) is obtained. Things to note are as follows. The initial value of the matching weight for each condition-set attribute's similarity is set to 1/*n*, where *n* is the number of the product cases' condition-set attributes; that is, each condition-set attribute contributes equally to the similarity evaluation at the beginning. To adjust the weights toward their best values as soon as possible, the weight adjustment Δ*i* is set to 0.02 for the first three batches and 0.01 for the last three batches. The cases increase as the experiments proceed, and the weights of each decision-set value are constantly adjusted.
The longer C&M-CVPM runs, the larger the similarity matched by the CBR module becomes. To increase the validity of the comparisons between experiments, the detection threshold is set to 0.4 for the first three experiments and 0.85 for the last three. The accuracy of C&M-CVPM depends on the NCLV deviation rate of the predictive result for a new product test case; because C&M-CVPM uses the most similar case in the case-base for the test case, the NCLV deviation rate measures how far the predictive NCLV value is from the actual value for the test case. The experience-based criterion for judging the consistency of the similarity-matched results is that the NCLV deviation rate is at most 3%. *d* ~*t*~ is set to 3 months, because dentists advise that a toothbrush should be changed every three months, and the number of potential customers is set to 8 million. One unit of experts' workload equals the workload required of the experts to predict the parameters CS, MC, RC, PC, RP, *E* ~1~, *E* ~2~, *E* ~4~, and *E* ~5~ when running Chan's model once.

5.3. The Experiment's Procedures {#sec5.3}
---------------------------------

The 10 test cases are numbered sequentially from 1 to 10, and the six batches for constructing the case-base are numbered sequentially from 1 to 6. The initial value of the matching weight for each condition-set attribute's similarity in the CBR module is set to 1/*n*, where *n* is the number of the product cases' condition-set attributes. The expert team compares the importance of each influencing factor for all levels of customers and provides their preference matrices; finally the effectiveness matrix *D*(*E* ~*ic*~) is calculated by ([8](#EEq10){ref-type="disp-formula"})--([10](#EEq12){ref-type="disp-formula"}). The first batch of cases is inputted into the case-base while doing the first experiment.
For each inputted case, the matching weight of similarity for each condition-set attribute in the CBR module is adjusted once, where Δ*i* is set to 0.2. After inputting the 10 test cases, the following data are recorded for each case: (a) the highest similarity after the test case is matched against all the cases, (b) the predictive NCLV deviation rate for the case, (c) the number of new product developments predicted by C&M-CVPM with the deviation rate limited to at most 5%, and (d) the experts' workload units in C&M-CVPM when the cases in the case-base are not sufficient. The reduction in the experts' workload is expressed as a percentage: assuming that the experts' workload in Chan's model is 10 units for the 10 test cases, the reduced percentage = (10 − the experts' workload units in one C&M-CVPM experiment)/10. When inputting the next batch, the matching weight of similarity for each condition-set attribute in the CBR module is adjusted once, where Δ*i* is set to 0.2 for the former three experiments and 0.1 for the latter ones. Steps (4)--(6) are repeated to finish all six experiments.

5.4. The Experiment's Results {#sec5.4}
------------------------------

### 5.4.1. The Maximum Similarity {#sec5.4.1}

The similarity comparison results of the six tests are shown in [Figure 9](#fig9){ref-type="fig"}. The abscissa represents the experiment sequence, and the ordinate represents the value of the similarity. The points on the dotted lines are the maximum similarity after each test case is matched against all the cases, and the points on the solid lines are the average of all the maximum similarities. As the cases increase, the matching weight of similarity for each condition-set attribute in the CBR module is constantly adjusted and clearly keeps rising. When Δ*i* is fixed, the increase progressively diminishes within both the former three experiments and the latter three.
More specifically, the increase for the third and the sixth experiments is inconspicuous; this result is further evidence of the validity of adjusting Δ*i* from 0.2 to 0.1. The maximum similarity in the sixth experiment approaches 0.9, illustrating that a sufficient number of cases has been included in the case-base: on one hand, each case can retrieve a case very similar to itself; on the other hand, the 60 cases selected for the case-base are feasible.

### 5.4.2. The NCLV Deviation Rate {#sec5.4.2}

In [Figure 10](#fig10){ref-type="fig"}, the abscissa represents the experiment sequence and the ordinate represents the NCLV deviation rate. The points on the dotted lines are the ratio of the NCLV actual value to the predictive value for each test case in every C&M-CVPM experiment, and the points on the solid lines are the average of the overall NCLV deviation rates. The cases increase as the experiments proceed, but the NCLV deviation rate clearly keeps declining, and the average of the overall NCLV deviation rates falls slowly toward 0. For one thing, this demonstrates that the six experiments are feasible; for another, C&M-CVPM's capability to correctly predict the test cases' NCLV is progressively improving. Hence, C&M-CVPM can best play its unique advantage in predicting new product development for enterprises that already own plenty of mature products.

### 5.4.3. The C&M-CVPM's Validity {#sec5.4.3}

From a qualitative angle (see Figures [9](#fig9){ref-type="fig"} and [10](#fig10){ref-type="fig"}), C&M-CVPM's capability to match the most similar and accurate case is gradually strengthened as the cases increase and the similarity's matching weights are adjusted. From a quantitative angle, [Table 2](#tab2){ref-type="table"} illustrates this advantage with the related data.
As the number of cases grows and the weights are adjusted, and even though the detection thresholds for the latter three experiments are set markedly higher than for the former three, C&M-CVPM's ability to effectively reduce the experts' workload and accurately predict customer value keeps improving (see [Table 2](#tab2){ref-type="table"}). This also suggests that C&M-CVPM always plays its unique advantage in predicting new product development for enterprises that own plenty of mature products. To further clarify the accuracy with which C&M-CVPM can predict new product customer value, from all six experiments this study chooses the 4th test case, with the maximum similarity, and the 10th test case, with the minimum similarity, and then compares their NCLVs' predictive and actual values (see Figures [11](#fig11){ref-type="fig"} and [12](#fig12){ref-type="fig"}). The dotted lines represent the predictive number of customers at all levels and the solid lines the actual number. This study compares the NCLV predictive value with the actual value only for the first 24 time points (with 3 months as a time point, 24 points equal 6 years); the comparison over 24 time points is more intuitive, and there is no need to focus on the NCLV over the whole product life cycle. From Figures [11](#fig11){ref-type="fig"} and [12](#fig12){ref-type="fig"}, it can be seen that, although there is a small deviation in predicting the customers' numbers, the prediction of the NCLV remains accurate. That is, C&M-CVPM's ability to predict the actual customer value of new product development can be well trusted.

### 5.4.4. Other Results {#sec5.4.4}

Because of the limited samples and simulated experiments, the accuracy of predicting the customers' numbers in C&M-CVPM is not obvious at present.
However, further analysis of [Figure 13](#fig13){ref-type="fig"} shows that the potential customers are active and the first-time customers increase rapidly during the first year of selling the new product, and both start falling one year later. Consequently, enterprises must lay stress on marketing competence for the new product, which can obtain much more customer value; more specifically, it is suitable for enterprises to run advertisements, sales exhibitions, internet marketing, and so on. One year later, the first-time customers start to decline while the active customers begin to increase; enterprise strategy should then focus more on retaining customers and provide more remarketing activities such as affiliate campaigns. Besides, [Figure 13](#fig13){ref-type="fig"} also makes clear that the active customers' number reaches its maximum at the 15th time point (around the 3.75th year). This indicates that the target market is already saturated: customers will not increase their purchase frequency and are no longer attracted, and therefore few new potential customers will come in. In such a situation, enterprises should develop more new products to attract and retain customers; this is also why constantly developing new products is the key to an enterprise's core competence.

5.5. The Experiment's Summary {#sec5.5}
------------------------------

Admittedly, if this study could obtain a large amount of specific enterprises' historical data and compare the accuracy with Chan's model many times, the validity of C&M-CVPM could be verified more thoroughly. But given limited human, material, and financial resources, verifying C&M-CVPM's validity with randomly generated data is in fact feasible, and this study can be treated as periodic fundamental work under the current conditions.
There are two reasons: (a) the system's running results seldom rely on the data's authenticity, because CBR can automatically obtain the values of the influencing factors from historical data, and (b) to enhance the results' trustworthiness, the study uses a strict validity design, not only randomly generating the experimental data according to the product features but also separating the data for automatic learning from the data used for testing C&M-CVPM's running accuracy.

6. Conclusions {#sec6}
==============

To sum up, compared with Chan's model, C&M-CVPM distinguishes the effectiveness of the influencing factors for different levels of customers, accounts for random variation within a certain range, and meanwhile reduces the experts' workload in investigating and evaluating the required parameters when historical data on the new product's customer value are lacking. Besides, compared with Chan's model, C&M-CVPM has the following four advantages: (a) it expands the scope of system users so that ordinary, nonexpert developers can also evaluate a new product's customer value, while the same mistakes can be prevented from recurring so as to improve the accuracy of problem solving; (b) it promotes the retention and inheritance of the enterprise's tacit knowledge; (c) it outputs the experience of similar product developments for the decision-maker's reference, which helps to enhance the developers' skill and also gathers the enterprise marketing team's knowledge to promote information sharing; (d) it eliminates bottlenecks in knowledge retrieval. In conclusion, superior to Chan's model, C&M-CVPM can better satisfy user needs and make the decision process more scientific, procedural, and automated. The purpose of C&M-CVPM as periodic fundamental work has been achieved. Future research could gather larger and more complex practical samples.
Meanwhile, more work is needed to obtain specific enterprises' historical data to test the validity of C&M-CVPM and to compare it repeatedly with Chan's model. However, the division into five levels of customers in C&M-CVPM may not apply to products other than FMCG (fast-moving consumer goods). The Markov model in C&M-CVPM used to calculate customer lifetime value, although the most flexible among the present models, still has some limitations, such as each interval being the same and fixed.

The authors would like to express their sincere thanks to Central South University for its financial support of this research work under a project supported by the Innovation Group Project of the National Natural Science Foundation of China (Grant no. 71221061).

Conflict of Interests
=====================

The authors declare that there is no conflict of interests regarding the publication of this paper.

![The flowchart of CBR.](TSWJ2014-459765.001){#fig1}

![The system framework of C&M-CVPM.](TSWJ2014-459765.002){#fig2}

![The frame representation results for the product case.](TSWJ2014-459765.003){#fig3}

![The simulation figure of customer purchasing.](TSWJ2014-459765.004){#fig4}

![The relationships between customers' levels and influencing factors.](TSWJ2014-459765.005){#fig5}

![The relationship between different levels.](TSWJ2014-459765.006){#fig6}

![The relationship between influencing factors and AR or LR.](TSWJ2014-459765.007){#fig7}

![The frame representation of a regular toothbrush.](TSWJ2014-459765.008){#fig8}

![The similarity comparison results of six tests.](TSWJ2014-459765.009){#fig9}

![The comparison results of the NCLV deviation rate of six tests.](TSWJ2014-459765.010){#fig10}

![The comparison results for the test case with maximum similarity.](TSWJ2014-459765.011){#fig11}

![The comparison results for the test case with minimum similarity.](TSWJ2014-459765.012){#fig12}

![The tendency figure of the actual and predictive customers number of the
test case (4).](TSWJ2014-459765.013){#fig13}

###### The influencing gap *δ* ~*ij*~ ^(*p*)^ on different levels of customers.

  *δ* ~*ij*~ ^(*p*)^   Definition
  -------------------- ------------------------------------------------------------------------------------
  0                    Influence of factor (*c*) on customer (*i*) is the same as customer (*j*)
  2                    Influence of factor (*c*) on customer (*i*) is a little bigger than customer (*j*)
  4                    Influence of factor (*c*) on customer (*i*) is a bit bigger than customer (*j*)
  6                    Influence of factor (*c*) on customer (*i*) is much bigger than customer (*j*)
  8                    Influence of factor (*c*) on customer (*i*) is rather bigger than customer (*j*)

###### The verified results of the validity of C&M-CVPM.

  Tests   The threshold   The products number   Experts' evaluation time   The reduction for experts' workload (%)   The average of overall NCLV deviation rate (%)
  ------- --------------- --------------------- -------------------------- ----------------------------------------- ------------------------------------------------
  1       0.4             2                     8                          20                                        42.32
  2       0.4             6                     4                          60                                        27.82
  3       0.4             10                    0                          100                                       26.52
  4       0.85            6                     4                          60                                        18.15
  5       0.85            8                     2                          80                                        15.69
  6       0.85            10                    0                          100                                       1.21

[^1]: Academic Editor: Juan M. Corchado
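To make the customer-level recurrences and the discounted NCLV sum of Section 4.4 concrete, here is a minimal sketch in code. The paper's experiments used MATLAB; this is a hypothetical Python translation in which the advancement/leaving rates and the price and cost parameters are illustrative placeholders, not the paper's data, and it additionally assumes that customers who leave any level return to the potential pool (the exhibited fragment of the C_1 recurrence only shows the L_5 inflow).

```python
# Hypothetical sketch of the C&M-CVPM customer-level recurrences and the
# discounted NCLV sum. All rates/prices below are illustrative, not the
# paper's data.

def simulate_nclv(pc, periods, adv, leave, rp, mc, rc, cs, d):
    """pc: initial number of potential customers (C1 at t = 0);
    adv[i] / leave[i]: per-period advancement / leaving fractions for
    levels 1..5 (potential, first-time, regular, frequent, loyal);
    rp: retail price, mc: marketing cost, rc: retention cost,
    cs: cost of sales, all per customer per period;
    d: per-period discount rate D."""
    c = [float(pc), 0.0, 0.0, 0.0, 0.0]   # C1..C5 at t = 0
    nclv = 0.0
    for t in range(periods):
        # Profit contribution of period t per equation (18),
        # discounted by (1 + D)^(-t).
        profit = (c[1] * (rp - mc - cs)
                  + (c[2] + c[3]) * (rp - rc - cs)
                  + c[4] * (rp - cs))
        nclv += profit / (1.0 + d) ** t
        a = [adv[i] * c[i] for i in range(5)]    # A_i: advance to level i+1
        l = [leave[i] * c[i] for i in range(5)]  # L_i: leave level i
        a[4] = 0.0   # loyal customers have no higher level to advance to
        new = [0.0] * 5
        # Assumption of this sketch: leavers from levels 2..5 return to the
        # potential pool; potential customers themselves do not leave.
        new[0] = c[0] - a[0] + sum(l[1:])
        for i in range(1, 5):
            new[i] = c[i] + a[i - 1] - l[i] - a[i]
        c = new
    return nclv
```

The update conserves the total number of customers, mirroring the structure of the recurrences: each A term appears once as an outflow and once as an inflow, and each L term leaving a level re-enters the potential pool.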
Q: How can I convert a variable from one collation to another? I have a script that switches between two databases: master & 'B'. Database master has collation 'SQL_Latin1_General_CP1_CI_AS'. Database B has collation 'Latin1_General_CI_AS'. I have tried using the COLLATE and CAST commands but to no avail so far.

    USE B
    DECLARE @ProductsUserName varchar(200)
    SET @ProductsUserName = 'SomeValue'

    USE master
    DECLARE @UserNameMaster varchar(200) = @ProductsUserName COLLATE SQL_Latin1_General_CP1_CI_AS
    DECLARE @GrantViewServerStatement varchar(200) = 'GRANT VIEW SERVER STATE TO ' + @UserNameMaster

The query will blow up on the last line and give the error:

    Implicit conversion of varchar value to varchar cannot be performed because the collation of the value is unresolved due to a collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in add operator.

Obviously there is some kind of problem using the '+' operator between two varchars that have different collations. But I am not sure how to "cast" the variable into the new collation!

A: Perform the collation as part of the string concatenation, i.e.:

    USE B
    DECLARE @ProductsUserName varchar(200)
    SET @ProductsUserName = 'SomeValue'

    USE master
    DECLARE @UserNameMaster varchar(200) = @ProductsUserName
    DECLARE @GrantViewServerStatement varchar(200) = 'GRANT VIEW SERVER STATE TO ' + (@UserNameMaster COLLATE SQL_Latin1_General_CP1_CI_AS)
Monday, April 22, 2013

View of Mt. McKinley from newly plowed section of Denali Highway

I have driven the Denali Highway several times, but in the past always west to east. The state DOT started plowing the Denali Highway this past week and we drove in as far as the plowing had progressed (about 17 miles). It was only when we came back out I realized what a great view of Mt. McKinley you get from the Denali Highway. The hills rise up above the floor of Broad Pass enough to give an excellent panorama.

About Me

I am a free-lance artist specializing in pen and ink drawings, mainly of old equipment, historic structures, and other culturally significant sites. My home is in Fairbanks, the Golden Heart of Alaska. For 30 years I have been tramping the back roads of Interior Alaska, documenting its mining camps, homesteads, cemeteries and other sites before time, vandals and development erase them from the landscape. Recently I have also been writing a column in my local newspaper about historic sites around Interior Alaska. On the pages of my blog you will find images of my art, writings on an artist's life in Fairbanks, and some of my columns. I hope you enjoy my ramblings.
Free Download: LGBT Valentines Day Card I created this card for all my LGBT friends out there that don't have the best options when it comes to giving a sweet Valentines Day card. Either all the options are too hetero-normative, badly designed, or even worse the cards come off more offensive than heartfelt. Just because Valentines Day is a mostly made up holiday, doesn't mean an entire group of people should be alienated from it. I plan on making a series of LGBT cards throughout the year to put up in my store. But for now, you can grab this card completely free! Just download this file, print it out, and write something inside to give to your partner. Don't have a printer? That's ok! I turned this card into a fun gif for you to share with your loved ones that you can grab on Giphy.
Something to Tell You

Something to Tell You is the second studio album by American pop rock band Haim. It was released on July 7, 2017, by Columbia Records. The album's lead single, "Want You Back", was released on May 3, 2017, followed by the release of the promotional single "Right Now". On May 10, the album cover was revealed, along with the preorder announcement. "Little of Your Love" was then announced as the second single on June 18, 2017, via Twitter. "Nothing's Wrong" was released as the third single on August 21, 2017. "Walking Away" was released as the fourth single on December 8, 2017.

Background and recording

Haim toured for two years to support their previous release, Days Are Gone, the three sisters' 2013 debut album that was met with a great deal of critical and commercial success. With the conclusion of their tour came the beginning of the process of crafting Something to Tell You: "All we knew for two years was wake up, soundcheck, play the show, go to sleep and fit in a slice of pizza at some point. We needed to turn our brains from touring brains back to writing brains. When we came home, we literally got off the bus, took a nap and went right into the studio." The initial sessions for the album were unfruitful; the band questioned the quality of the songs, wondering if they were on par with the debut album. However, a breakthrough came after the producers of the 2015 Judd Apatow-directed romantic comedy Trainwreck asked the band to write a song for the film's soundtrack. "Little of Your Love", the album's second single, was produced in under a week at the film producers' request, and although it was ultimately not selected for the soundtrack, completing the song gave the band the confidence they needed to write new material for the album. In the following years, the band developed the album, taking breaks and continuing to perform at various shows and festivals, much of which would further inspire the album.
The band switched between Valentine Studios, an infrequently used 1970s production facility in Valley Village (a neighborhood in Los Angeles, California), and producer Ariel Rechtshaid's home studio.

Promotion

"Want You Back" was released as the album's first single on May 3, 2017, and was followed a week later by the promotional single, "Right Now". On May 13, the band performed "Want You Back" along with "Little of Your Love" on Saturday Night Live. Director Paul Thomas Anderson filmed a documentary about the making of the album. Titled Valentine, the film was first screened in July 2017, before being officially uploaded to the internet that September. Anderson would go on to direct music videos for three of the album's tracks: "Right Now", "Little of Your Love", and a live version of "Night So Long". Anderson became interested in the group after learning that the sisters were the daughters of one of his former art teachers. They made their first UK appearance at BBC Radio 1's Big Weekend on May 27, performing "Want You Back" and "Right Now" as part of their set. The trio also embarked on their second headlining tour, the Sister Sister Sister Tour, which began on April 3, 2018.

Critical reception

Something to Tell You has received a generally positive reception from music critics. At Metacritic, which assigns a normalized rating out of 100 to reviews from mainstream critics, the album has an average score of 69 out of 100 based on 30 reviews, which indicates "generally favorable" reception. Writing in Pitchfork, Jenn Pelly said, "No other rock band in popular music (an anomalous statement already) has mixed styles so seamlessly—rattling and gliding from one hook to another...Haim's latticed arrangements and heavily percussive melodies make their music fly."
Pelly, as well as several other reviewers, stressed the influence of Stevie Nicks and other 1970s and '80s rock; in the Los Angeles Times, Mikael Wood said the album "makes you believe that rock might have a future," bearing "the polished sound of vintage Fleetwood Mac and the Eagles, and here the sisters continue to rely on guitars and the like at a moment when many of their peers have little use for them." In Rolling Stone, Jon Dolan said, "You can hear the studied sense of craft all over Something to Tell You...These songs don't always explode with the sunny ebullience of the first LP, but the melodies, beats and ideas are layered and piled high." In The Guardian, Kitty Empire writes, "Haim really know what they are doing. There are digressions to kill for here, what you might once have called middle eights, indefatigable melodies, and weird little noises – a horse neigh and a seagull coda on Want You Back, a fax machine on Found It in Silence, the gasping on Nothing's Wrong – to keep you clamping your headphones to your ears in delight."

Commercial performance

Something to Tell You debuted at number seven on the US Billboard 200 with 32,000 album-equivalent units, of which 26,000 were pure album sales. It also debuted at number two on the UK Albums Chart, selling 18,319 copies in its first week.

Track listing

Notes

signifies an additional producer

Personnel

Credits adapted from AllMusic and album's liner notes.
Haim

Alana Haim – vocals, guitar, percussion, keyboards
Danielle Haim – vocals, guitar, drums, percussion, hi-hat, synthesizer, glass bottle
Este Haim – vocals, bass guitar, percussion

Musicians

Rostam Batmanglij – acoustic guitar, drum programming, electric guitar, harmonizer, Moog bass, piano, rhythm guitar, synthesizer
Matt Bauder – saxophone
Andrew Bulbrook – violin
Lenny Castro – congas
Devonté Hynes – DX7 electric piano
Jim-E Stack – keyboards
Tommy King – bass synth, CP70, organ, piano, synthesizer
Greg Leisz – guitar, pedal steel guitar, slide guitar
George Lewis Jr. – guitar
Roger Manning – synthesizer
Serena McKinney – violin
David Moyer – saxophone
Nico Muhly – prepared piano and strings
Mike Olsen – cello
Owen Pallett – viola
Ariel Rechtshaid – acoustic guitar, background vocals, celeste, guitar, keyboards, marimba, organ, percussion, piano, rhythm guitar, synthesizer, vocoder
Buddy Ross – dulcimer, Fender Rhodes, keyboards, percussion, synthesizer
Gus Seyffert – percussion
Ruud Wiener – Simmons Silicon Mallet

Technical personnel

Chris Allgood – assistant mastering engineer
Rostam Batmanglij – additional production, engineer, producer, string arrangements
BloodPop – additional production
Martin Cooke – assistant engineer
Rich Costey – mixing
Laura Coulson – photography
John DeBold – engineer
Scott Desmarais – assistant mixing engineer
Robin Florent – assistant mixing engineer
Nicolas Fournier – assistant engineer
Dave Fridmann – mixing
Chris Galland – mixing engineer
Michael Harris – engineer
Chris Kasych – engineer
Emily Lazar – mastering
George Lewis Jr. – producer
Ted Lovett – creative director
Manny Marroquin – mixing
Serena McKinney – string arrangements
Rob Orton – mixing
Ariel Rechtshaid – drum programming, engineer, producer, programming, string arrangements
Nick Rowe – engineer
David Schiffman – engineer
Gus Seyffert – engineer

Charts

Weekly charts

Year-end charts

Certifications

Release history

References

Category:2017 albums
Category:Haim (band) albums
Category:Columbia Records albums
Category:Albums produced by Ariel Rechtshaid
Category:Albums produced by Rostam Batmanglij
Adopting the Metric system

Metric’s Joshua Winstead talks to Oisín Kealy ahead of his band’s sold-out show at Oran Mor.

You self-released Fantasies. What was it that motivated you to do that with this release? Had you experienced much friction with labels in the past?

Well, we never really received the label attention and the deals we wanted. We were always too underground for the mainstream, and too mainstream for the underground, and people were always trying to give deals that don’t really work for the artist anymore, like three-sixty deals that take from touring, because they are trying to survive. One of the things we realised was that with the deals they were giving us, we could make our own label and maintain our artistry and be in control of it in the way we want it, because who knows how much they are spending on publicity or whatever? As well, a lot of times you don’t get to choose the crew you’re working with. We like to work with people we enjoy working with and make sure we are all heading towards the same thing. It seemed like, not the easiest, but the smartest way at this moment in time was to do it ourselves.

How did you react to the leak of the record, inevitable as it was?

At first you’re a little bummed, because when you work on something, especially art, you are doing it for yourself and to present to other people. When it gets leaked and it isn’t the master copy it’s kind of like a present but…it’s not finished yet, it’s not what you want them to hear. That was a little sad because we like to present something beautiful. On the other side, we were working with Nigel Godrich on putting this song out with an Edgar Wright movie and he gave us some advice. He was like “The only time it’s a problem is when the record is bad”, and we thought, well, you’re right. It’s a little easier to say if you are Radiohead because they’ve already succeeded in many, many ways that are unimaginable to other bands, but he is right in a sense.
That ability for people to reach you on many different levels, for them to get the music in any form, kind of helps because there is so much competition and there’s not much money flying around. We were still proud of the album but it wasn’t quite what we wanted yet. It hasn’t really affected very much; people seem to still be interested in getting the finished product.

It’s been almost four years since the last record, but you’ve all spent the time with side projects like Bang Lime and obviously Emily with the Soft Skeleton. Do you think that was required to stop the band going stale?

Maybe for ourselves, but not within Metric; that really had nothing to do with Metric. It was more a thing that we had time off, but we all continue to do music all the time. It wasn’t a thing like “Oh I’ve been doing so much music I’m going to stop”. The business side is finally starting to work, so that’s a business, but even when I’m home not doing something else… I mean, here, I’ve been sitting around playing piano just because that’s what I like to do. I think it was a natural progression. A lot of people were asking if that was leading to the break-up of Metric but that’s not even close. Everyone takes Metric as the main focus, but coming back after doing other things gives you insight into how to become a stronger musician.

Road testing was very important for this record. I followed it on YouTube and Myspace and a lot of the songs changed hugely. Gimme Sympathy is a different song altogether.

Yeah, changed dramatically. There are many songs that changed a lot, which is great. It’s a luxury we never had in the past. Whether they became better or worse is arguable on musical taste, and because of that some songs didn’t make the record, but yeah, they changed and it was great.

Would you have based the changes on you playing them every night, or the audience’s reaction?

Both, if we were bored or other people were bored.
Sometimes you’re playing a song and you think this is going to be the best, and playing it three nights in a row you’re seeing that nobody is reacting to this one part that we really thought would be something. You can’t ignore that. As well it’s like, you might just think it’s just not hitting me the right way.

In terms of composition, the chord structures of ‘Fantasies’ are a lot more straightforward than before; there used to be very surprising progressions. As much as I love the sound of Old World Underground, I think it has really worked for this record. Was that a conscious move?

It was and it wasn’t. It was more like relaxing and being ourselves, and understanding that simplicity sometimes creates that magic within music. When you’re trying to stand out sometimes you do those crazy chord changes, but I think it was an understanding and belief in ourselves that we can still build up amazing music around pretty natural chord progressions. There’s a song called Roscoe by Midlake which is just three chords, and he’s got like six melodies that go over it and you don’t get bored of those chords, it’s amazing.

Front Row deals with the relationship between fans and artists. This might be a bit of a tired question because everyone is talking about it, but the band has a Twitter, and that has become integral in the past few months in closing the distance between bands and their fan base. Do you think it is a good thing closing that distance, or do you think it ruins the mystery? Like if Fleet Foxes write about not being able to find their socks or something similar.

Those are not the things we twitter about so I can’t tell you for everybody if it’s better, but we still maintain an air of mystery around us because that’s who we are.
We don’t really want people to know if we can’t find our socks. That might be what some bands do where they’re trying to be all “Oh come in, you can see our whole life!”, but we still understand that there is a part of entertainment and a mystery value. If you give us a curtain on stage, we will use it, and it will be dramatic and fun. I remember we were travelling to Japan one time and on my visa it didn’t say musician, it said “entertainer”, and I thought, you know what, that’s kinda true, that’s kinda right. Part of my job is to take you away from the reality of life a little bit. When you come to a show you want to be taken away, you don’t want to be thinking about what you did at work that day – or that you couldn’t find your socks either. Of course some days you can’t find your socks, I’m still looking for a T-shirt on the bus, but those are things we don’t really reveal about ourselves.
** BASIC USABILITY

* The way bracketing syntaxes are handled is wrong. Bracketing syntaxes are treated as inherited attributes but syntax inheritance goes the other way--from children to parent. The bracketing syntax should be passed down like the precedence is.

* Parsing problem:
    class A {
      int a[];
      syntax('!' a*);
    }
  We want something like
    A : {...}
      | '!' TOK_INTEGER {...}
      | A '!' TOK_INTEGER {...}

* Write tests for error cases, and improve error messages.

* More comprehensive tests for non-error cases.

* DONE (dgudeman) Get alternation working

* Currently there is no storage management of the parse tree. Add ownership of pointers and freeing them when possible.

* Get namespaces working. See ParserBase::namespace_tag.

* Figure out the story for lexical rules. How can you specify things like whitespace definition and case-insensitivity in one place to apply to all rules? How do you then write lexical rules that ignore these specifications? See (http://tutor.rascal-mpl.org/Rascal/Concepts/Concepts.html) for another system that does this.

* Inheritance matching: name an inherited type instead of an attribute and it expands the inherited syntax in place.

* Array matching is currently restricted to (<attribute> * <string>). Generalize to (<re> <attribute> <re> * <re>) where <re> matches any pattern that does not have an attribute. Also allow (<re> <alt> <re> * <re>) where <alt> is an alternation. For example: ((extern_flag | static_flag | register_flag)*) to parse a sequence of flags.

* DONE (dgudeman) Implement assignment patterns: <attr> '{' (<re> '->' <value>*|) '}'

* We need to deal with passing a string as the value of a production. We could either pass pointers to string and deal with allocation, or we could define our own data types to enable it.

* Make it work with only standard tools (not dependent on google3).

* Get the output compiling and executing. Produce a simple API for the user. See ParserBase::directory.

* Separate class defs into .h and .cc files.
* DONE (dgudeman) Get inheritance working:
    class A {
      int x;
      syntax('(' $0 ')');  // parentheses syntax
    }
    class B extends A {
      int y;
      syntax( 'first' x ', second' y );
    }
  1. DONE Every class needs to know what its child classes are.
  2. DONE The rule for a class A with children is an alternation of the syntax for A and the nonterminals for all of its children. The actual class type produced is usually a child type such as B rather than A.
  3. DONE Generalize this alternation concept so that a single class can have multiple class syntaxes and they become alternations.
  4. DONE A '$$' in a class syntax causes a recursive match of the class, but does not create a new object. For example, if you match the parentheses syntax for A then it recursively matches A, but only one A object is created.

* Right now it is an unchecked error to have more than one regular (that is, not '$$') syntax for a class.

* DONE (dgudeman) Precedence and associativity. These are associated with rules, not symbols, so they have to be implemented by structuring the grammar rules instead of using directives like %left. For example
    class Operation extends Expression {
      class Expression args[];
    }
    class Addition extends Operation {
      syntax (args[1] '+' args[2], left 6);
    }
    class Multiplication extends Operation {
      syntax (args[1] '*' args[2], left 4);
    }

** FEATURES AND PERFORMANCE

* Multiple syntaxes. Write a syntax for language A and one for language B, and you automatically get an A->B translator and a B->A translator. For example:
    class A {
      syntax sql("...") c("...");
    }
  Multiple syntaxes may also be needed for a single language. Suppose there is a class C that is parsed entirely differently in two different contexts A and B, but it still makes sense to make it a single class in the AST. Then we can do it like this:
    class C { ... syntax A_context {...} syntax B_context {...} }
    class A { ... C c syntax(... A_context:c ...) }
    class B { ... C c syntax(... B_context:c ...) }

* Cut operator: (<matcher> ! <re>).
  If <matcher> ends in an iterator (+ or *) then it is normally greedy, but this will cut it off before any expression that matches <re>.

* Cooperation operator: (<matcher1> & <matcher2>). Matches only if both <matcher1> and <matcher2> match.

* Output for other parser generators and languages. The parser generator has to handle left recursion, so not JavaCC. I also don't know if we are going to need other features that can't be parsed by LR or LL parsers, so it would be best to look into general parsers (GLR, GLL, Earley, etc.) like Elkhound. Note that bison has a GLR mode if we need it.

* Generate our own GLL parser and eliminate the requirement for a separate parser generator.

* Expand double-quotes into parse expressions. E.g.:
    class Addition {
      Expression args[];
      syntax("$1 + $2");  // equivalent to syntax ( args[1] '+' args[2])
    }

* Implement syntax elements attached to attributes. The semantics are as follows:
    class A {
      int x syntax(... $0 ...);  // attribute syntax for x
      syntax(... x ...);         // class syntax
    }
  When you see x in the class syntax or in any other attribute syntax, it is replaced by the attribute syntax for x. The $0 in the attribute syntax for x is replaced by a parser for x.

* Extend array syntax so that you can parse syntax('prefix' array 'suffix'*'separator);

* If there are two or more places where an array of T is parsed with the same prefix, suffix, and separator, then only generate one production for all of them.

** DOCUMENTATION

* internal web site
* user manual

** SELF PARSING

We should get to a self-parsing situation as soon as possible so we don't have to maintain the bison code. Still needed:

* precedence and associativity
* enums
* An alternation that includes array attributes. The array attribute may appear multiple times, adding to the appropriate array.
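As a sanity check on the rule-structured precedence scheme described above (Addition at level 6, Multiplication at level 4, lower levels binding tighter), here is a minimal precedence-climbing sketch in Python. This is an illustrative model of the intended semantics, not project code: the operator table is a hypothetical stand-in for the (op, precedence, associativity) information declared in the class syntaxes, and the tokenizer only handles single-digit operands.

```python
# Operator table: op -> (precedence, associativity).
# Lower precedence number = tighter binding, matching the
# "left 4" (multiplication) vs "left 6" (addition) example.
OPS = {'+': (6, 'left'), '*': (4, 'left')}

def tokenize(s):
    # Single-character tokens only; digits and operators.
    return [c for c in s if not c.isspace()]

def parse(s):
    toks = tokenize(s)
    pos = 0

    def primary():
        nonlocal pos
        tok = toks[pos]
        pos += 1
        return int(tok)

    def expr(max_prec):
        # Parse an expression whose top-level operator has
        # precedence <= max_prec.
        nonlocal pos
        left = primary()
        while pos < len(toks) and toks[pos] in OPS:
            op = toks[pos]
            prec, assoc = OPS[op]
            if prec > max_prec:
                break
            pos += 1
            # For a left-associative op, the right operand may only
            # contain strictly tighter-binding operators, which is
            # exactly what the stratified grammar rules encode.
            right = expr(prec - 1 if assoc == 'left' else prec)
            left = (op, left, right)
        return left

    return expr(max(p for p, _ in OPS.values()))
```

With this model, parse("1+2*3") nests the multiplication inside the addition, and parse("1+2+3") groups to the left, which is the behavior the stratified grammar productions are meant to generate.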
"Friendly & Efficient"

Orbis are always helpful. When we need something doing urgently, Orbis' teams always do their best to oblige. We have worked with them for many years, so have a good relationship.

Jo (London Borough of Barking & Dagenham), Local Authority

Security for Local Authorities

Orbis has been delivering our tailored solutions successfully to Councils and Local Authorities for over 37 years, as evidenced by 94% of our customers saying they would recommend us to others. We are the vacant property services experts, delivering a one-stop-shop solution to reduce turnaround times, ensuring our clients meet their occupancy targets in the most cost-effective way and allowing them to focus valuable time and resources on their core business. Our understanding of this sector has been achieved by building local relationships and working with our clients, getting to know their aims and needs so that we can help them deliver for their tenants. Orbis is also a proud member of several national framework agreements, such as Efficiency East Midlands, LHC, Fusion 21, ESPO and PFH. We employ around 500 people from the local communities in which we work. We ensure our workforce are expertly trained and qualified to work in roles from depot-based administrators to multi-skilled operatives. This commitment to our employees' personal development has led to most of our workforce staying with us for several years. We are a fully accredited nationwide vacant property security company with a local focus.
<?xml version="1.0"?> <!DOCTYPE profile> <profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns"> <add-on> <add_on_products config:type="list"> <listentry> <media_url><![CDATA[http://download.opensuse.org/update/11.4/]]></media_url> <product>openSUSE-updates</product> <product_dir>/</product_dir> </listentry> <listentry> <media_url><![CDATA[http://download.opensuse.org/distribution/11.4/repo/oss/]]></media_url> <product>openSUSE-oss</product> <product_dir>/</product_dir> </listentry> <listentry> <media_url><![CDATA[http://download.opensuse.org/distribution/11.4/repo/non-oss/]]></media_url> <product>openSUSE-non-oss</product> <product_dir>/</product_dir> </listentry> </add_on_products> </add-on> <bootloader> <device_map config:type="list"> <device_map_entry> <firmware>hd0</firmware> <linux>/dev/sda</linux> </device_map_entry> </device_map> <global> <activate>true</activate> <default>openSUSE 11.4 - 2.6.37.1-1.2</default> <generic_mbr>true</generic_mbr> <lines_cache_id>2</lines_cache_id> <timeout config:type="integer">8</timeout> </global> <initrd_modules config:type="list"> <initrd_module> <module>ahci</module> </initrd_module> <initrd_module> <module>ata_piix</module> </initrd_module> <initrd_module> <module>ata_generic</module> </initrd_module> <initrd_module> <module>thermal</module> </initrd_module> <initrd_module> <module>processor</module> </initrd_module> <initrd_module> <module>fan</module> </initrd_module> </initrd_modules> <loader_type>grub</loader_type> <sections config:type="list"> <section> <append>resume=/dev/sda1 splash=silent quiet showopts</append> <image>(hd0,1)/boot/vmlinuz-2.6.37.1-1.2-default</image> <initial>1</initial> <initrd>(hd0,1)/boot/initrd-2.6.37.1-1.2-default</initrd> <lines_cache_id>0</lines_cache_id> <name>openSUSE 11.4 - 2.6.37.1-1.2</name> <original_name>linux</original_name> <root>/dev/sda2</root> <type>image</type> </section> <section> <append>showopts apm=off noresume nosmp maxcpus=0 
edd=off powersaved=off nohz=off highres=off processor.max_cstate=1 nomodeset x11failsafe</append> <image>(hd0,1)/boot/vmlinuz-2.6.37.1-1.2-default</image> <initrd>(hd0,1)/boot/initrd-2.6.37.1-1.2-default</initrd> <lines_cache_id>1</lines_cache_id> <name>Failsafe -- openSUSE 11.4 - 2.6.37.1-1.2</name> <original_name>failsafe</original_name> <root>/dev/sda2</root> <type>image</type> </section> </sections> </bootloader> <deploy_image> <image_installation config:type="boolean">false</image_installation> </deploy_image> <firewall> <FW_ALLOW_FW_BROADCAST_DMZ>no</FW_ALLOW_FW_BROADCAST_DMZ> <FW_ALLOW_FW_BROADCAST_EXT>no</FW_ALLOW_FW_BROADCAST_EXT> <FW_ALLOW_FW_BROADCAST_INT>no</FW_ALLOW_FW_BROADCAST_INT> <FW_CONFIGURATIONS_DMZ>sshd</FW_CONFIGURATIONS_DMZ> <FW_CONFIGURATIONS_EXT>sshd</FW_CONFIGURATIONS_EXT> <FW_CONFIGURATIONS_INT>sshd</FW_CONFIGURATIONS_INT> <FW_DEV_DMZ></FW_DEV_DMZ> <FW_DEV_EXT></FW_DEV_EXT> <FW_DEV_INT></FW_DEV_INT> <FW_FORWARD_ALWAYS_INOUT_DEV></FW_FORWARD_ALWAYS_INOUT_DEV> <FW_FORWARD_MASQ></FW_FORWARD_MASQ> <FW_IGNORE_FW_BROADCAST_DMZ>no</FW_IGNORE_FW_BROADCAST_DMZ> <FW_IGNORE_FW_BROADCAST_EXT>yes</FW_IGNORE_FW_BROADCAST_EXT> <FW_IGNORE_FW_BROADCAST_INT>no</FW_IGNORE_FW_BROADCAST_INT> <FW_IPSEC_TRUST>no</FW_IPSEC_TRUST> <FW_LOAD_MODULES>nf_conntrack_netbios_ns</FW_LOAD_MODULES> <FW_LOG_ACCEPT_ALL>no</FW_LOG_ACCEPT_ALL> <FW_LOG_ACCEPT_CRIT>yes</FW_LOG_ACCEPT_CRIT> <FW_LOG_DROP_ALL>no</FW_LOG_DROP_ALL> <FW_LOG_DROP_CRIT>yes</FW_LOG_DROP_CRIT> <FW_MASQUERADE>no</FW_MASQUERADE> <FW_PROTECT_FROM_INT>no</FW_PROTECT_FROM_INT> <FW_ROUTE>no</FW_ROUTE> <FW_SERVICES_ACCEPT_DMZ></FW_SERVICES_ACCEPT_DMZ> <FW_SERVICES_ACCEPT_EXT></FW_SERVICES_ACCEPT_EXT> <FW_SERVICES_ACCEPT_INT></FW_SERVICES_ACCEPT_INT> <FW_SERVICES_ACCEPT_RELATED_DMZ></FW_SERVICES_ACCEPT_RELATED_DMZ> <FW_SERVICES_ACCEPT_RELATED_EXT></FW_SERVICES_ACCEPT_RELATED_EXT> <FW_SERVICES_ACCEPT_RELATED_INT></FW_SERVICES_ACCEPT_RELATED_INT> <FW_SERVICES_DMZ_IP></FW_SERVICES_DMZ_IP> 
<FW_SERVICES_DMZ_RPC></FW_SERVICES_DMZ_RPC> <FW_SERVICES_DMZ_TCP></FW_SERVICES_DMZ_TCP> <FW_SERVICES_DMZ_UDP></FW_SERVICES_DMZ_UDP> <FW_SERVICES_EXT_IP></FW_SERVICES_EXT_IP> <FW_SERVICES_EXT_RPC></FW_SERVICES_EXT_RPC> <FW_SERVICES_EXT_TCP></FW_SERVICES_EXT_TCP> <FW_SERVICES_EXT_UDP></FW_SERVICES_EXT_UDP> <FW_SERVICES_INT_IP></FW_SERVICES_INT_IP> <FW_SERVICES_INT_RPC></FW_SERVICES_INT_RPC> <FW_SERVICES_INT_TCP></FW_SERVICES_INT_TCP> <FW_SERVICES_INT_UDP></FW_SERVICES_INT_UDP> <enable_firewall config:type="boolean">false</enable_firewall> <start_firewall config:type="boolean">false</start_firewall> </firewall> <general> <ask-list config:type="list"/> <mode> <confirm config:type="boolean">false</confirm> </mode> <mouse> <id>none</id> </mouse> <proposals config:type="list"/> <signature-handling> <accept_file_without_checksum config:type="boolean">true</accept_file_without_checksum> <accept_non_trusted_gpg_key config:type="boolean">true</accept_non_trusted_gpg_key> <accept_unknown_gpg_key config:type="boolean">true</accept_unknown_gpg_key> <accept_unsigned_file config:type="boolean">true</accept_unsigned_file> <accept_verification_failed config:type="boolean">false</accept_verification_failed> <import_gpg_key config:type="boolean">true</import_gpg_key> </signature-handling> <storage/> </general> <groups config:type="list"> <group> <gid>100</gid> <group_password>x</group_password> <groupname>users</groupname> <userlist></userlist> </group> <group> <gid>19</gid> <group_password>x</group_password> <groupname>floppy</groupname> <userlist></userlist> </group> <group> <gid>1</gid> <group_password>x</group_password> <groupname>bin</groupname> <userlist>daemon</userlist> </group> <group> <gid>54</gid> <group_password>x</group_password> <groupname>lock</groupname> <userlist></userlist> </group> <group> <gid>41</gid> <group_password>x</group_password> <groupname>xok</groupname> <userlist></userlist> </group> <group> <gid>65533</gid> <group_password>x</group_password> 
<groupname>nobody</groupname> <userlist></userlist> </group> <group> <gid>43</gid> <group_password>x</group_password> <groupname>modem</groupname> <userlist></userlist> </group> <group> <gid>5</gid> <group_password>x</group_password> <groupname>tty</groupname> <userlist></userlist> </group> <group> <gid>7</gid> <group_password>x</group_password> <groupname>lp</groupname> <userlist></userlist> </group> <group> <gid>51</gid> <group_password>!</group_password> <groupname>postfix</groupname> <userlist></userlist> </group> <group> <gid>65534</gid> <group_password>x</group_password> <groupname>nogroup</groupname> <userlist>nobody</userlist> </group> <group> <gid>101</gid> <group_password>!</group_password> <groupname>messagebus</groupname> <userlist></userlist> </group> <group> <gid>59</gid> <group_password>!</group_password> <groupname>maildrop</groupname> <userlist></userlist> </group> <group> <gid>33</gid> <group_password>x</group_password> <groupname>video</groupname> <userlist>vagrant</userlist> </group> <group> <gid>3</gid> <group_password>x</group_password> <groupname>sys</groupname> <userlist></userlist> </group> <group> <gid>15</gid> <group_password>x</group_password> <groupname>shadow</groupname> <userlist></userlist> </group> <group> <gid>20</gid> <group_password>x</group_password> <groupname>cdrom</groupname> <userlist></userlist> </group> <group> <gid>21</gid> <group_password>x</group_password> <groupname>console</groupname> <userlist></userlist> </group> <group> <gid>42</gid> <group_password>x</group_password> <groupname>trusted</groupname> <userlist></userlist> </group> <group> <gid>16</gid> <group_password>x</group_password> <groupname>dialout</groupname> <userlist></userlist> </group> <group> <gid>10</gid> <group_password>x</group_password> <groupname>wheel</groupname> <userlist></userlist> </group> <group> <gid>8</gid> <group_password>x</group_password> <groupname>www</groupname> <userlist></userlist> </group> <group> <gid>40</gid> 
<group_password>x</group_password> <groupname>games</groupname> <userlist></userlist> </group> <group> <gid>6</gid> <group_password>x</group_password> <groupname>disk</groupname> <userlist></userlist> </group> <group> <gid>17</gid> <group_password>x</group_password> <groupname>audio</groupname> <userlist></userlist> </group> <group> <gid>49</gid> <group_password>x</group_password> <groupname>ftp</groupname> <userlist></userlist> </group> <group> <gid>9</gid> <group_password>x</group_password> <groupname>kmem</groupname> <userlist></userlist> </group> <group> <gid>32</gid> <group_password>x</group_password> <groupname>public</groupname> <userlist></userlist> </group> <group> <gid>103</gid> <group_password>!</group_password> <groupname>tape</groupname> <userlist></userlist> </group> <group> <gid>12</gid> <group_password>x</group_password> <groupname>mail</groupname> <userlist></userlist> </group> <group> <gid>0</gid> <group_password>x</group_password> <groupname>root</groupname> <userlist></userlist> </group> <group> <gid>2</gid> <group_password>x</group_password> <groupname>daemon</groupname> <userlist></userlist> </group> <group> <gid>104</gid> <group_password>!</group_password> <groupname>ntp</groupname> <userlist></userlist> </group> <group> <gid>14</gid> <group_password>x</group_password> <groupname>uucp</groupname> <userlist></userlist> </group> <group> <gid>62</gid> <group_password>x</group_password> <groupname>man</groupname> <userlist></userlist> </group> <group> <gid>22</gid> <group_password>x</group_password> <groupname>utmp</groupname> <userlist></userlist> </group> <group> <gid>13</gid> <group_password>x</group_password> <groupname>news</groupname> <userlist></userlist> </group> <group> <gid>102</gid> <group_password>!</group_password> <groupname>sshd</groupname> <userlist></userlist> </group> </groups> <host> <hosts config:type="list"> <hosts_entry> <host_address>127.0.0.1</host_address> <names config:type="list"> <name>localhost</name> </names> 
</hosts_entry> <hosts_entry> <host_address>127.0.0.2</host_address> <names config:type="list"> <name>vagrant-opensuse-114-x32.vagrantup.com vagrant-opensuse-114-x32</name> </names> </hosts_entry> <hosts_entry> <host_address>::1</host_address> <names config:type="list"> <name>localhost ipv6-localhost ipv6-loopback</name> </names> </hosts_entry> <hosts_entry> <host_address>fe00::0</host_address> <names config:type="list"> <name>ipv6-localnet</name> </names> </hosts_entry> <hosts_entry> <host_address>ff00::0</host_address> <names config:type="list"> <name>ipv6-mcastprefix</name> </names> </hosts_entry> <hosts_entry> <host_address>ff02::1</host_address> <names config:type="list"> <name>ipv6-allnodes</name> </names> </hosts_entry> <hosts_entry> <host_address>ff02::2</host_address> <names config:type="list"> <name>ipv6-allrouters</name> </names> </hosts_entry> <hosts_entry> <host_address>ff02::3</host_address> <names config:type="list"> <name>ipv6-allhosts</name> </names> </hosts_entry> </hosts> </host> <keyboard> <keymap>english-us</keymap> </keyboard> <language> <language>en_US</language> <languages>en_US,de_DE</languages> </language> <ldap> <base_config_dn></base_config_dn> <bind_dn></bind_dn> <create_ldap config:type="boolean">false</create_ldap> <file_server config:type="boolean">false</file_server> <ldap_domain>dc=example,dc=com</ldap_domain> <ldap_server>127.0.0.1</ldap_server> <ldap_tls config:type="boolean">true</ldap_tls> <ldap_v2 config:type="boolean">false</ldap_v2> <login_enabled config:type="boolean">true</login_enabled> <member_attribute>member</member_attribute> <mkhomedir config:type="boolean">false</mkhomedir> <pam_password>exop</pam_password> <sssd config:type="boolean">true</sssd> <start_autofs config:type="boolean">false</start_autofs> <start_ldap config:type="boolean">false</start_ldap> </ldap> <login_settings/> <networking> <dhcp_options> <dhclient_client_id></dhclient_client_id> <dhclient_hostname_option>AUTO</dhclient_hostname_option> 
</dhcp_options> <dns> <dhcp_hostname config:type="boolean">true</dhcp_hostname> <domain>vagrantup.com</domain> <hostname>vagrant-opensuse-114-x32</hostname> <resolv_conf_policy>auto</resolv_conf_policy> <searchlist config:type="list"> <search>vagrantup.com</search> </searchlist> <write_hostname config:type="boolean">true</write_hostname> </dns> <interfaces config:type="list"> <interface> <bootproto>dhcp</bootproto> <device>eth0</device> <name>82540EM Gigabit Ethernet Controller</name> <startmode>nfsroot</startmode> <usercontrol>no</usercontrol> </interface> </interfaces> <ipv6 config:type="boolean">true</ipv6> <managed config:type="boolean">false</managed> <net-udev config:type="list"> <rule> <name>eth0</name> <rule>KERNELS</rule> <value>0000:00:03.0</value> </rule> </net-udev> <routing> <ip_forward config:type="boolean">false</ip_forward> </routing> </networking> <nis> <netconfig_policy>auto</netconfig_policy> <nis_broadcast config:type="boolean">false</nis_broadcast> <nis_broken_server config:type="boolean">false</nis_broken_server> <nis_domain></nis_domain> <nis_local_only config:type="boolean">false</nis_local_only> <nis_options></nis_options> <nis_other_domains config:type="list"/> <nis_servers config:type="list"/> <slp_domain/> <start_autofs config:type="boolean">false</start_autofs> <start_nis config:type="boolean">false</start_nis> </nis> <partitioning config:type="list"> <drive> <device>/dev/sda</device> <initialize config:type="boolean">true</initialize> <partitions config:type="list"> <partition> <create config:type="boolean">true</create> <crypt_fs config:type="boolean">false</crypt_fs> <filesystem config:type="symbol">swap</filesystem> <format config:type="boolean">true</format> <fstopt>defaults</fstopt> <loop_fs config:type="boolean">false</loop_fs> <mount>swap</mount> <mountby config:type="symbol">device</mountby> <partition_id config:type="integer">130</partition_id> <partition_nr config:type="integer">1</partition_nr> <raid_options/> <resize 
config:type="boolean">false</resize> <size>525M</size> </partition> <partition> <create config:type="boolean">true</create> <crypt_fs config:type="boolean">false</crypt_fs> <filesystem config:type="symbol">ext4</filesystem> <format config:type="boolean">true</format> <fstopt>acl,user_xattr</fstopt> <loop_fs config:type="boolean">false</loop_fs> <mount>/</mount> <mountby config:type="symbol">device</mountby> <partition_id config:type="integer">131</partition_id> <partition_nr config:type="integer">2</partition_nr> <raid_options/> <resize config:type="boolean">false</resize> <size>max</size> </partition> </partitions> <pesize></pesize> <type config:type="symbol">CT_DISK</type> <use>all</use> </drive> </partitioning> <printer> <client_conf_content> <file_contents><![CDATA[]]></file_contents> </client_conf_content> <cupsd_conf_content> <file_contents><![CDATA[]]></file_contents> </cupsd_conf_content> </printer> <proxy> <enabled config:type="boolean">false</enabled> <ftp_proxy></ftp_proxy> <http_proxy></http_proxy> <https_proxy></https_proxy> <no_proxy>localhost, 127.0.0.1</no_proxy> <proxy_password></proxy_password> <proxy_user></proxy_user> </proxy> <report> <errors> <log config:type="boolean">true</log> <show config:type="boolean">true</show> <timeout config:type="integer">0</timeout> </errors> <messages> <log config:type="boolean">true</log> <show config:type="boolean">true</show> <timeout config:type="integer">0</timeout> </messages> <warnings> <log config:type="boolean">true</log> <show config:type="boolean">true</show> <timeout config:type="integer">0</timeout> </warnings> <yesno_messages> <log config:type="boolean">true</log> <show config:type="boolean">true</show> <timeout config:type="integer">0</timeout> </yesno_messages> </report> <runlevel> <default>3</default> <services config:type="list"> <service> <service_name>sshd</service_name> <service_start>3 5</service_start> </service> </services> </runlevel> <software> <do_online_update 
config:type="boolean">true</do_online_update> <image/> <instsource></instsource> <kernel>kernel-default</kernel> <packages config:type="list"> <package>autoyast2</package> <package>binutils</package> <package>binutils-gold</package> <package>crda</package> <package>gcc</package> <package>gcc45</package> <package>glibc-devel</package> <package>hicolor-icon-theme-branding-openSUSE</package> <package>kernel-default-devel</package> <package>kexec-tools</package> <package>less</package> <package>libgomp45</package> <package>libnl</package> <package>linux-glibc-devel</package> <package>make</package> <package>ruby</package> <package>ruby-devel</package> <package>rubygems</package> <package>sudo</package> <package>vim</package> <package>vim-data</package> <package>wget</package> <package>wireless-regdb</package> <package>yast2-schema</package> <package>yast2-trans-de</package> <package>yast2-trans-en_US</package> </packages> <patterns config:type="list"> <pattern>base</pattern> <pattern>sw_management</pattern> <pattern>yast2_basis</pattern> <pattern>yast2_install_wf</pattern> </patterns> <remove-packages config:type="list"> <package>FastCGI</package> <package>Mesa</package> <package>PackageKit</package> <package>PackageKit-branding-openSUSE</package> <package>PackageKit-branding-upstream</package> <package>PackageKit-lang</package> <package>PolicyKit</package> <package>a2ps</package> <package>apache2</package> <package>apache2-itk</package> <package>apache2-prefork</package> <package>apache2-utils</package> <package>apache2-worker</package> <package>bash-lang</package> <package>bea-stax-api</package> <package>bundle-lang-common-ar</package> <package>bundle-lang-common-ca</package> <package>bundle-lang-common-cs</package> <package>bundle-lang-common-da</package> <package>bundle-lang-common-es</package> <package>bundle-lang-common-fi</package> <package>bundle-lang-common-fr</package> <package>bundle-lang-common-hu</package> <package>bundle-lang-common-it</package> 
<package>bundle-lang-common-ja</package> <package>bundle-lang-common-ko</package> <package>bundle-lang-common-nb</package> <package>bundle-lang-common-nl</package> <package>bundle-lang-common-pl</package> <package>bundle-lang-common-pt</package> <package>bundle-lang-common-ru</package> <package>bundle-lang-common-sv</package> <package>bundle-lang-common-zh</package> <package>bundle-lang-gnome-ar</package> <package>bundle-lang-gnome-ca</package> <package>bundle-lang-gnome-cs</package> <package>bundle-lang-gnome-da</package> <package>bundle-lang-gnome-de</package> <package>bundle-lang-gnome-en</package> <package>bundle-lang-gnome-es</package> <package>bundle-lang-gnome-fi</package> <package>bundle-lang-gnome-fr</package> <package>bundle-lang-gnome-hu</package> <package>bundle-lang-gnome-it</package> <package>bundle-lang-gnome-ja</package> <package>bundle-lang-gnome-ko</package> <package>bundle-lang-gnome-nb</package> <package>bundle-lang-gnome-nl</package> <package>bundle-lang-gnome-pl</package> <package>bundle-lang-gnome-pt</package> <package>bundle-lang-gnome-ru</package> <package>bundle-lang-gnome-sv</package> <package>bundle-lang-gnome-zh</package> <package>cifs-utils</package> <package>coreutils-lang</package> <package>cpio-lang</package> <package>cracklib-dict-small</package> <package>cups</package> <package>cups-client</package> <package>cups-libs</package> <package>dbus-1-python</package> <package>dbus-1-x11</package> <package>exim</package> <package>fam</package> <package>fonts-config</package> <package>foomatic-filters</package> <package>freeglut</package> <package>ft2demos</package> <package>fuse</package> <package>gconf2</package> <package>gconf2-lang</package> <package>gd</package> <package>gdk-pixbuf-lang</package> <package>gdk-pixbuf-query-loaders</package> <package>ghostscript-fonts-other</package> <package>ghostscript-fonts-std</package> <package>ghostscript-library</package> <package>ghostscript-x11</package> <package>giflib</package> 
<package>glib2-branding-upstream</package> <package>glib2-lang</package> <package>gnome-icon-theme</package> <package>gnome-keyring</package> <package>gnome-keyring-lang</package> <package>gnome-keyring-pam</package> <package>gpg2-lang</package> <package>graphviz</package> <package>graphviz-gd</package> <package>graphviz-gnome</package> <package>gtk2-branding-openSUSE</package> <package>gtk2-branding-upstream</package> <package>gtk2-data</package> <package>gtk2-engine-murrine</package> <package>gtk2-immodule-amharic</package> <package>gtk2-immodule-inuktitut</package> <package>gtk2-immodule-thai</package> <package>gtk2-immodule-vietnamese</package> <package>gtk2-lang</package> <package>gtk2-metatheme-sonar</package> <package>gtk2-tools</package> <package>gutenprint</package> <package>gvfs</package> <package>gvfs-backend-afc</package> <package>gvfs-backends</package> <package>gvfs-fuse</package> <package>gvfs-lang</package> <package>hplip</package> <package>hplip-hpijs</package> <package>icedtea-web</package> <package>java-1_6_0-openjdk</package> <package>java-1_6_0-openjdk-plugin</package> <package>java-1_6_0-sun</package> <package>java-ca-certificates</package> <package>jline</package> <package>jpackage-utils</package> <package>keyutils</package> <package>krb5-mini</package> <package>lcms</package> <package>libFLAC8</package> <package>libIDL-2-0</package> <package>libQtWebKit4</package> <package>libXi6</package> <package>libapr-util1</package> <package>libapr1</package> <package>libarchive2</package> <package>libasound2</package> <package>libatasmart4</package> <package>libatk-1_0-0</package> <package>libavahi-client3</package> <package>libavahi-common3</package> <package>libavahi-glib1</package> <package>libbluetooth3</package> <package>libcairo2</package> <package>libcdio12</package> <package>libcdio12-mini</package> <package>libcdio_cdda0</package> <package>libcdio_paranoia0</package> <package>libdrm</package> <package>libebl1</package> 
<package>libevtlog0</package> <package>libexif12</package> <package>libffi45</package> <package>libfreebl3</package> <package>libfuse2</package> <package>libgcj45</package> <package>libgcj45-jar</package> <package>libgcr0</package> <package>libgdk_pixbuf-2_0-0</package> <package>libgdu-lang</package> <package>libgdu0</package> <package>libgimpprint</package> <package>libgirepository-1_0-1</package> <package>libgnome-keyring-lang</package> <package>libgnome-keyring0</package> <package>libgp11-0</package> <package>libgp11-modules</package> <package>libgphoto2</package> <package>libgphoto2-lang</package> <package>libgtk-2_0-0</package> <package>libgvfscommon0</package> <package>libieee1284</package> <package>libjasper1</package> <package>libjpeg62</package> <package>liblcms1</package> <package>libldb0</package> <package>liblockdev1</package> <package>libltdl7</package> <package>libmng</package> <package>libmysqlclient_r16</package> <package>libnet1</package> <package>libnih</package> <package>libogg0</package> <package>libopenobex1</package> <package>libpackagekit-glib2-14</package> <package>libpango-1_0-0</package> <package>libpciaccess0</package> <package>libpixman-1-0</package> <package>libpng14-14</package> <package>libpoppler7</package> <package>libpq5</package> <package>libpulse0</package> <package>libpython2_7-1_0</package> <package>libqdialogsolver1</package> <package>libqt4</package> <package>libqt4-qt3support</package> <package>libqt4-sql</package> <package>libqt4-sql-mysql</package> <package>libqt4-sql-postgresql</package> <package>libqt4-sql-sqlite</package> <package>libqt4-sql-unixODBC</package> <package>libqt4-x11</package> <package>libsensors4</package> <package>libsmbclient0</package> <package>libsndfile</package> <package>libsnmp25</package> <package>libsoftokn3</package> <package>libsoup-2_4-1</package> <package>libsqlite3-0</package> <package>libtalloc2</package> <package>libtdb1</package> <package>libtevent0</package> <package>libtiff3</package> 
<package>libvorbis0</package> <package>libvorbisenc2</package> <package>libwbclient0</package> <package>libxml2-python</package> <package>lockdev</package> <package>login-lang</package> <package>m4</package> <package>mawk</package> <package>metamail</package> <package>metatheme-sonar-common</package> <package>mozilla-nspr</package> <package>mozilla-nss</package> <package>mozilla-nss-certs</package> <package>nginx-0.8</package> <package>obex-data-server</package> <package>openSUSE-dynamic-wallpaper</package> <package>openSUSE-release-ftp</package> <package>openSUSE-release-livetree</package> <package>orbit2</package> <package>pango-tools</package> <package>poppler-data</package> <package>poppler-tools</package> <package>procmail</package> <package>python</package> <package>python-base</package> <package>python-gobject</package> <package>python-qt4</package> <package>python-sip</package> <package>python-xml</package> <package>readline-doc</package> <package>rhino</package> <package>ruby-fcgi</package> <package>rubygem-actionmailer-2_3</package> <package>rubygem-actionpack-2_3</package> <package>rubygem-activerecord</package> <package>rubygem-activerecord-2_3</package> <package>rubygem-activeresource-2_3</package> <package>rubygem-activesupport-2_3</package> <package>rubygem-daemon_controller</package> <package>rubygem-fastthread</package> <package>rubygem-file-tail</package> <package>rubygem-gettext</package> <package>rubygem-gettext_activerecord</package> <package>rubygem-gettext_rails</package> <package>rubygem-http_accept_language</package> <package>rubygem-locale</package> <package>rubygem-locale_rails</package> <package>rubygem-passenger</package> <package>rubygem-passenger-nginx</package> <package>rubygem-polkit</package> <package>rubygem-rack</package> <package>rubygem-rails</package> <package>rubygem-rails-2_3</package> <package>rubygem-rake</package> <package>rubygem-rpam</package> <package>rubygem-ruby-dbus</package> <package>rubygem-spruz</package> 
<package>rubygem-sqlite3</package> <package>rubygem-webyast-rake-tasks</package> <package>samba</package> <package>samba-client</package> <package>sane-backends</package> <package>sendmail</package> <package>sg3_utils</package> <package>sharutils</package> <package>snmp-mibs</package> <package>sonar-icon-theme</package> <package>sqlite3</package> <package>syslog-ng</package> <package>syslogd</package> <package>systemd</package> <package>systemd-sysvinit</package> <package>systemtap</package> <package>systemtap-runtime</package> <package>tar-lang</package> <package>timezone-java</package> <package>udisks</package> <package>unixODBC</package> <package>upower-lang</package> <package>upstart</package> <package>util-linux-lang</package> <package>wdiff</package> <package>wdiff-lang</package> <package>webyast-base-ws</package> <package>xaw3d</package> <package>xaw3dd</package> <package>xdmbgrd</package> <package>xkeyboard-config</package> <package>xmlbeans</package> <package>xmlbeans-mini</package> <package>xorg-x11</package> <package>xorg-x11-driver-input</package> <package>xorg-x11-fonts</package> <package>xorg-x11-fonts-core</package> <package>xorg-x11-libXdmcp</package> <package>xorg-x11-libXv</package> <package>xorg-x11-server</package> <package>xorg-x11-xauth</package> <package>xtermset</package> <package>yast2-dbus-server</package> <package>yast2-gtk</package> <package>yast2-qt</package> <package>yast2-qt-pkg</package> <package>yast2-theme-openSUSE-Crystal</package> <package>yast2-theme-openSUSE-Oxygen</package> </remove-packages> </software> <timezone> <hwclock>UTC</hwclock> <timezone>Europe/Berlin</timezone> </timezone> <user_defaults> <expire></expire> <group>100</group> <groups>video</groups> <home>/home</home> <inactive>-1</inactive> <shell>/bin/bash</shell> <skel>/etc/skel</skel> <umask>022</umask> </user_defaults> <users config:type="list"> <user> <encrypted config:type="boolean">true</encrypted> <fullname>vagrant</fullname> <gid>100</gid> 
<home>/home/vagrant</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max>99999</max> <min>0</min> <warn>7</warn> </password_settings> <shell>/bin/bash</shell> <uid>1000</uid> <user_password>$2a$05$RZAhNZ/UJnRgCUmDBH26deOF3ADehICISxvVz/Z8ypCnGRbJhceIW</user_password> <username>vagrant</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>Games account</fullname> <gid>100</gid> <home>/var/games</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/bash</shell> <uid>12</uid> <user_password>*</user_password> <username>games</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>bin</fullname> <gid>1</gid> <home>/bin</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/bash</shell> <uid>1</uid> <user_password>*</user_password> <username>bin</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>nobody</fullname> <gid>65533</gid> <home>/var/lib/nobody</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/bash</shell> <uid>65534</uid> <user_password>*</user_password> <username>nobody</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>Printing daemon</fullname> <gid>7</gid> <home>/var/spool/lpd</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/bash</shell> <uid>4</uid> <user_password>*</user_password> <username>lp</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>Postfix Daemon</fullname> <gid>51</gid> <home>/var/spool/postfix</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max>99999</max> 
<min>0</min> <warn>7</warn> </password_settings> <shell>/bin/false</shell> <uid>51</uid> <user_password>*</user_password> <username>postfix</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>FTP account</fullname> <gid>49</gid> <home>/srv/ftp</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/bash</shell> <uid>40</uid> <user_password>*</user_password> <username>ftp</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>NFS statd daemon</fullname> <gid>65534</gid> <home>/var/lib/nfs</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max>99999</max> <min>0</min> <warn>7</warn> </password_settings> <shell>/sbin/nologin</shell> <uid>102</uid> <user_password>*</user_password> <username>statd</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>root</fullname> <gid>0</gid> <home>/root</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/bash</shell> <uid>0</uid> <user_password>$2a$05$D0Dz9H1YnauSPfWgxe5/y.bWSL9gAO8DlBD/S1XFobWHafUFETWSe</user_password> <username>root</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>Mailer daemon</fullname> <gid>12</gid> <home>/var/spool/clientmqueue</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/false</shell> <uid>8</uid> <user_password>*</user_password> <username>mail</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>Daemon</fullname> <gid>2</gid> <home>/sbin</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/bash</shell> <uid>2</uid> <user_password>*</user_password> 
<username>daemon</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>NTP daemon</fullname> <gid>104</gid> <home>/var/lib/ntp</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max>99999</max> <min>0</min> <warn>7</warn> </password_settings> <shell>/bin/false</shell> <uid>74</uid> <user_password>*</user_password> <username>ntp</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>User for D-Bus</fullname> <gid>101</gid> <home>/var/run/dbus</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min>0</min> <warn>7</warn> </password_settings> <shell>/bin/false</shell> <uid>100</uid> <user_password>*</user_password> <username>messagebus</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>Unix-to-Unix CoPy system</fullname> <gid>14</gid> <home>/etc/uucp</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/bash</shell> <uid>10</uid> <user_password>*</user_password> <username>uucp</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>WWW daemon apache</fullname> <gid>8</gid> <home>/var/lib/wwwrun</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/false</shell> <uid>30</uid> <user_password>*</user_password> <username>wwwrun</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>Manual pages viewer</fullname> <gid>62</gid> <home>/var/cache/man</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/bash</shell> <uid>13</uid> <user_password>*</user_password> <username>man</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>News system</fullname> 
<gid>13</gid> <home>/etc/news</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min></min> <warn></warn> </password_settings> <shell>/bin/bash</shell> <uid>9</uid> <user_password>*</user_password> <username>news</username> </user> <user> <encrypted config:type="boolean">true</encrypted> <fullname>SSH daemon</fullname> <gid>102</gid> <home>/var/lib/sshd</home> <password_settings> <expire></expire> <flag></flag> <inact></inact> <max></max> <min>0</min> <warn>7</warn> </password_settings> <shell>/bin/false</shell> <uid>101</uid> <user_password>*</user_password> <username>sshd</username> </user> </users> </profile>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>CSS Test: Border-top-width using millimeters with a minimum value with a plus sign, +0mm</title> <link rel="author" title="Microsoft" href="http://www.microsoft.com/" /> <link rel="reviewer" title="Gérard Talbot" href="http://www.gtalbot.org/BrowserBugsSection/css21testsuite/" /> <!-- 2012-06-24 --> <link rel="help" href="http://www.w3.org/TR/CSS21/box.html#propdef-border-top-width" /> <link rel="help" href="http://www.w3.org/TR/CSS21/box.html#border-width-properties" /> <link rel="match" href="../reference/ref-if-there-is-no-red.xht" /> <meta name="flags" content="" /> <meta name="assert" content="The 'border-top-width' property supports a minimum length value in millimeters that has a plus sign before it." /> <style type="text/css"> div { border-top-style: solid; border-top-width: +0mm; border-top-color: red; height: 1in; } </style> </head> <body> <p>Test passes if there is <strong>no red</strong>.</p> <div></div> </body> </html>
Sir, A one-day-old, 1600 g preterm baby with type C tracheoesophageal fistula (TEF) was scheduled for emergency TEF repair. In the operating room, pulse oximetry, electrocardiography, noninvasive blood pressure and temperature monitoring was established. Anesthesia was induced using 100% oxygen and sevoflurane titrated to effect. Intravenous fluid consisted of balanced crystalloids (Isolyte P; 4 ml/kg/hr); injection Atracurium (0.8 mg) and fentanyl 4 mcg were administered intravenously. The trachea was intubated with a 3.0 mm ID uncuffed polyvinyl chloride endotracheal tube (ETT). Manual positive-pressure ventilation was commenced. Correct ETT placement was confirmed by end-tidal CO~2~ (EtCO~2~) and bilaterally equal air entry without undue distention of the stomach on auscultation. The ETT was fixed at 9.0 cm. The extra length of ETT was cut to minimize dead space and prevent kinking. The neonate was placed in the postero-lateral thoracotomy position and put on pressure-controlled ventilation (peak airway pressure 15--20 cmH~2~O; respiratory rate, 25/min; inspiratory:expiratory ratio \[1:1.5\]). Using a retropleural approach, the surgeon identified and exposed a low-lying fistula. Soon thereafter, a sudden fall in airway pressures and EtCO~2~ occurred. Manual ventilation did not improve respiratory parameters. The surgeon noticed an accidental nick in the main stem trachea near the right tracheo-bronchial junction. The tip of the ETT was visible through the severed carina, but it could not be advanced as the ETT had been cut at the oral end and then fixed. The surgeon\'s attempt to ventilate with the thumb over the rent failed. The neonate desaturated rapidly: SpO~2~ 30%; heart rate 40/min. At this moment, another 3.0 mm ETT was given to the surgeon for intrathoracic endobronchial insertion via the rent, and manual ventilation of the right lung was initiated through this ETT \[[Figure 1](#F1){ref-type="fig"}\]. SpO~2~ and heart rate improved. Three-fourths of the rent was repaired around the intrathoracic ETT.
Thereafter, the oral ETT was replaced in the lateral position and advanced just above the carina. The surgeon removed the intrathoracic ETT and directed the oral ETT into the bronchial lumen to complete the repair. Subsequently, the oral ETT was withdrawn back to the carina and the breathing circuit again connected at the oral end. In the postoperative period, mechanical ventilation and supportive therapy continued in the Intensive Care Unit (ICU). The rest of the ICU stay was uneventful. ![Intra-operative image showing severed end of right tracheobronchial junction at carina and right bronchus with endotracheal tube *in situ*](JOACP-32-532-g001){#F1} Limited data are available on iatrogenic tracheal injuries in neonates. Previously, trauma during delivery, endotracheal intubation, or foreign body removal has been reported.\[[@ref1]\] Di Gaetano *et al*.\[[@ref2]\] reported cautery-induced carinal perforation in an adult, managed using selective bilateral main stem bronchial intubation. There is a case report of a left main stem bronchial tear with EtCO~2~ upsurge during thoracoscopic TEF repair.\[[@ref3]\] Dango *et al*.\[[@ref4]\] reported subtotal posttraumatic rupture of the distal tracheobronchial tree in a toddler; the patient required repair with a pericardial patch. In the present neonate, a large accidental nick led to complete loss of ventilation. Prompt intrathoracic endobronchial ETT placement restored hemodynamics and acted as a bridge for surgical repair. Therefore, it is suggested that the ETT should not be cut at the oral end, to allow easy manipulation during surgery. De Gabriele *et al*.\[[@ref5]\] recommend use of a fiberoptic bronchoscope to correctly identify the TEF before surgical incision. To conclude, early recognition and prompt intervention tailored to the situation averted peri-operative mortality in neonatal TEF surgery.
23andMe Suspends Sale of Health Tests By Bio-IT World Staff December 6, 2013 | It appears the first chapter in the recent saga of 23andMe and FDA is now coming to a close, as 23andMe at last caves to regulatory pressure and suspends all sales of its health-related genetic tests. An open letter from CEO Anne Wojcicki, posted last night on the 23andMe blog, reports that "23andMe will comply with FDA’s directive and stop offering new consumers access to health-related genetic tests while the company moves forward with the agency’s regulatory review processes." A similar message now greets customers on the front page of 23andMe's website, along with an assurance that older customers will still have access to their full health results. This action follows two weeks of intense speculation in the wake of a November 22 letter from FDA to 23andMe, ordering that the company "must immediately discontinue marketing the PGS [Personal Genome Service] until such time as it receives FDA marketing authorization for the device." 23andMe's commercial strategy was always courting regulatory intervention, as no clear standards were in place in 2006, when the company was founded, outlining whether the federal government would have a role in vetting direct-to-consumer (DTC) genetic tests for susceptibility to health conditions. One camp has held that an individual's personal genetic information should be freely accessible, that genetic testing is fundamentally distinct from the tests "intended for use in the diagnosis of disease or other conditions" that FDA has traditionally covered, and that the 23andMe consumers belong to a unique class of genetic enthusiasts who know enough to take their health reports with a grain of salt. 23andMe has seemed to hold with this train of thought, operating out of a CLIA-approved lab to offer some regulatory assurance of the validity of its raw genetic reads, but declining to submit its gene-disease association data for validation. 
Another camp has insisted that federal agencies have a vital role to play in these DTC tests, because the average consumer has no way to independently verify any health reports made on the basis of her genetic information, and the go-between companies that deliver these interpretations must be held to some medical standards. This is clearly the role that FDA sees itself filling in the future of personal genetic testing. As the controversy reached a boiling point in the wake of the cease-and-desist letter, one aspect of the disagreement that provoked a great deal of comment was 23andMe's apparent open defiance of FDA. The agency's letter asserted that "FDA has not received any communication from 23andMe since May," despite repeated requests for information on how the company produces and validates its health reports. Even defenders of 23andMe's business strategy and right to operate have seemed at a loss to account for this behavior. Now it appears that the channels have been reopened. Ms. Wojcicki's open letter to her customers references "discussion with officials from the Food and Drug Administration today," and the removal of health testing from 23andMe's services could be seen as a tacit admission that 23andMe does not see its position as strong enough to challenge the agency's directive in the courts. Until the announcement yesterday, 23andMe has tried to thread the needle of literal compliance with the "discontinue marketing" directive by withdrawing its advertising, while sales of the PGS remained open online. Just in time for the 15-day deadline set by FDA, however, the company has dropped this distinction. 
With its health tests withdrawn from the market, 23andMe can only offer raw genetic reads and ancestry reports, removing its crucial distinction from other services like Ancestry.com or National Geographic's Genographic Project, all of which either buckled earlier to the regulatory hazards of offering genetic health information, or declined to enter the arena altogether. Customers who received health reports from 23andMe before the date of the FDA letter will still have access to those files, but all new customers from November 22 onward will be kept in the dark about what their genetic data says about their health, at least until such time as 23andMe receives FDA clearance for some or all of its health reports. Any customers who ordered the PGS between November 22 and last night's announcement will be offered a full refund if they desire one. On the other hand, Wojcicki reaffirmed 23andMe's commitment to leveraging the massive database it has collected – including genetic information from over half a million individuals – toward new health discoveries. "We will continue our Parkinson’s, sarcoma, MPN and African American research projects," she writes, "and plan to launch more communities in 2014." The imbroglio now appears set to enter a tamer, more drawn-out phase, as FDA and 23andMe work out the precise boundaries of regulatory oversight for DTC genetic testing. Proponents of open access to genetic health data must hope that the bar is set somewhere below the level of accuracy required for standard clinical tests, as genetic data simply cannot support clinical levels of validation for most conditions for the foreseeable future. A possible compromise that admits FDA oversight of 23andMe's data, but allows some flexibility in the degree of validation required in return for strong disclaimers about the quality of genetic health information, may be the fastest and most realistic path toward the return of a renewed PGS with at least some health reporting intact.
Q: About how much does it cost to leave 12 LED bulbs (60-watt replacement) burning for 24 hours? I changed to LED bulbs, and I wonder if it pays to buy a switch timer or if it is cheaper to leave it on at all times? A: a "60W replacement" LED is usually around 10W actual. 10W * 12 bulbs * 24 hours = 2880 Watt-hours 2880 Watt-hours = 2.88 kilowatt-hours Your electric bill shows the price / kilowatt-hour. For me, with all applicable taxes and stuff, it's about $0.145 / kilowatt-hour (I just paid my bill, so I have it right here)... yeah, that's 14.5 CENTS. So every 24 hours those lights are on would cost me: 2.88 * 0.145 = $0.42 So if you were to use a timer to run them only 12 hours / day, you'd save $0.21 / day. If the timer costs $20, it would pay for itself after 100 days. Cheers, CList A: You will probably save money, but you have omitted several key pieces of information: What is the actual wattage of the bulbs? It's probably around 8-12 watts per bulb for a newer LED but it is easy to verify. How many hours will you save by using a timer? How many timers would you need to buy / how much do they cost? How much do you pay for electricity? In the USA that is measured in $ per kWh and is typically in the range of $0.08 to $0.25. As you can see there is a HUGE amount of variation even within the USA. Also note that rates can be different during day/night, and can also vary seasonally. You really need to find out what you pay to have any semblance of an accurate calculation... do not rely on the "national average".
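The arithmetic in the first answer can be sketched as a short script. All of the figures (10 W actual draw per bulb, 12 bulbs, $0.145/kWh, a $20 timer) are just the answer's example numbers, not universal values:

```python
# Cost of running LED bulbs, and payback time for a timer.
# Figures are the first answer's examples: a "60 W replacement" LED
# drawing ~10 W actual, 12 bulbs, $0.145/kWh, a hypothetical $20 timer.

def daily_cost(watts_per_bulb, bulbs, hours_on, rate_per_kwh):
    """Cost in dollars of running `bulbs` lamps for `hours_on` hours."""
    kwh = watts_per_bulb * bulbs * hours_on / 1000  # Wh -> kWh
    return kwh * rate_per_kwh

always_on = daily_cost(10, 12, 24, 0.145)  # lights never switched off
on_timer  = daily_cost(10, 12, 12, 0.145)  # timer halves the run time
savings   = always_on - on_timer

print(f"24 h/day: ${always_on:.2f}")   # about $0.42
print(f"12 h/day: ${on_timer:.2f}")    # about $0.21
# ~96 days, which the answer rounds up to "100 days"
print(f"A $20 timer pays for itself in {20 / savings:.0f} days")
```

Plug in your own wattage, electricity rate, and timer price; as the second answer stresses, rates vary widely by region, season, and time of day.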
Jochen Liedtke Jochen Liedtke (26 May 1953 – 10 June 2001) was a German computer scientist, noted for his work on microkernels, especially the creation of the L4 microkernel family. Vita Education In the mid-1970s Liedtke studied for a diploma degree in mathematics at Bielefeld University. His thesis project was to build a compiler for the ELAN programming language, which had been launched for teaching programming in German schools; the compiler was written in ELAN itself. Post grad After his graduation in 1977, he remained at Bielefeld and worked on an ELAN environment for the Zilog Z80 microprocessor. This required a run-time environment, which he called Eumel ("Extendable Multiuser Microprocessor ELAN-System", but also a colloquial north-German term for a likeable fool). Eumel grew into a complete multi-tasking, multi-user operating system supporting orthogonal persistence, which started shipping in 1980 and was later ported to Zilog Z8000, Motorola 68000 and Intel 8086 processors. As these processors lacked memory protection, Eumel implemented a virtual machine which added the features missing from the hardware. More than 2000 Eumel systems shipped, mostly to schools but also to legal practices as a text-processing platform. In 1984, he joined the GMD (the German National Research Center for Computer Science, which is now a part of the Fraunhofer Society), where he continued his work on Eumel. In 1987, when microprocessors supporting virtual memory became widely available in the form of the Intel 80386, Liedtke started to design a new operating system to succeed Eumel, which he called L3 ("Liedtke's 3rd system", after Eumel and the Algol 60 interpreter he had written in high school). L3 was designed to achieve better performance by using the latest hardware features, and was implemented from scratch. It was mostly backward-compatible with Eumel, thus benefiting from the existing Eumel ecosystem.
L3 started to ship in 1989, with total deployment of at least 500. Both Eumel and L3 were microkernel systems, a popular design in the 1980s. However, by the early 1990s, microkernels had received a bad reputation, as systems built on top were performing poorly, culminating in the billion-dollar failure of the IBM Workplace OS. The reason was claimed to be inherent in the operating-system structure imposed by microkernels. Liedtke, however, observed that the message-passing operation (IPC), which is fundamentally important for microkernel performance, was slow in all existing microkernels, including his own L3 system. His conclusion was that radical re-design was required. He did this by re-implementing L3 from scratch, dramatically simplifying the kernel, resulting in an order-of-magnitude decrease in IPC cost. The resulting kernel was later renamed "L4". Conceptually, the main novelty of L4 was its complete reliance on external pagers (page fault handlers), and the recursive construction of address spaces. This led to a complete family of microkernels, with many independent implementations of the same principles. Liedtke also worked on computer architecture, inventing guarded page tables as a means of implementing a sparsely-mapped 64-bit address space. In 1996, Liedtke completed a PhD on guarded page tables at the Technical University of Berlin. In the same year he joined the Thomas J. Watson Research Center, where he continued to work on L4 (for political reasons called the "Lava Nucleus", or "LN" for short, as microkernels were not fashionable at IBM after the Workplace OS disaster). The main project during his IBM time was the Saw Mill project, which attempted to turn Linux into an L4-based multi-server OS. In April 1999 he took up the System Architecture Chair at the University of Karlsruhe. In Karlsruhe he continued to collaborate with IBM on Saw Mill, but at the same time worked on a new generation of L4 ("Version 4").
Several experimental kernels were developed during that time, including Hazelnut, the first L4 kernel that was ported (as opposed to re-implemented) to a different architecture (from x86 to ARM). Work on the new version was completed after his death by Liedtke's students Volkmar Uhlig, Uwe Dannowski and Espen Skoglund. It was released under the name "Pistachio" in 2002. References External links In Memoriam Jochen Liedtke (1953 - 2001) List of Liedtke's publications related to microkernels
{ "pile_set_name": "Wikipedia (en)" }
<?php

namespace Oro\Bundle\WebCatalogBundle\Form\Type;

use Oro\Bundle\FormBundle\Form\Type\CheckboxType;
use Oro\Bundle\LocaleBundle\Form\Type\LocalizedFallbackValueCollectionType;
use Oro\Bundle\RedirectBundle\Form\Type\LocalizedSlugWithRedirectType;
use Oro\Bundle\ScopeBundle\Form\Type\ScopeCollectionType;
use Oro\Bundle\WebCatalogBundle\Entity\ContentNode;
use Oro\Bundle\WebCatalogBundle\Entity\ContentVariant;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilderInterface;
use Symfony\Component\Form\FormEvent;
use Symfony\Component\Form\FormEvents;
use Symfony\Component\OptionsResolver\OptionsResolver;
use Symfony\Component\Routing\RouterInterface;
use Symfony\Component\Validator\Constraints\NotBlank;

/**
 * ContentNode form type
 */
class ContentNodeType extends AbstractType
{
    const NAME = 'oro_web_catalog_content_node';

    /**
     * @var RouterInterface
     */
    private $router;

    /**
     * @param RouterInterface $router
     */
    public function __construct(RouterInterface $router)
    {
        $this->router = $router;
    }

    /**
     * @param FormBuilderInterface $builder
     * @param array $options
     */
    public function buildForm(FormBuilderInterface $builder, array $options)
    {
        /** @var ContentNode $contentNode */
        $contentNode = array_key_exists('data', $options) ? $options['data'] : null;

        $builder
            ->add(
                'titles',
                LocalizedFallbackValueCollectionType::class,
                [
                    'label' => 'oro.webcatalog.contentnode.titles.label',
                    'required' => true,
                    'entry_options' => ['constraints' => [new NotBlank()]]
                ]
            )
            ->add(
                'scopes',
                ScopeCollectionType::class,
                [
                    'entry_options' => [
                        'scope_type' => 'web_content',
                        'web_catalog' => $contentNode ? $contentNode->getWebCatalog() : null
                    ],
                ]
            )
            ->add(
                'rewriteVariantTitle',
                CheckboxType::class,
                [
                    'label' => 'oro.webcatalog.contentnode.rewrite_variant_title.label',
                    'required' => false
                ]
            )
            ->add(
                'contentVariants',
                ContentVariantCollectionType::class,
                [
                    'label' => 'oro.webcatalog.contentvariant.entity_plural_label',
                    'entry_options' => [
                        'web_catalog' => $contentNode ? $contentNode->getWebCatalog() : null
                    ]
                ]
            );

        $builder->addEventListener(FormEvents::PRE_SET_DATA, [$this, 'preSetData']);
        $builder->addEventListener(FormEvents::SUBMIT, [$this, 'onSubmit']);
        $builder->addEventListener(FormEvents::PRE_SUBMIT, [$this, 'onPreSubmit']);
    }

    /**
     * @param FormEvent $event
     */
    public function preSetData(FormEvent $event)
    {
        $data = $event->getData();
        if ($data instanceof ContentNode) {
            $form = $event->getForm();
            if ($data->getParentNode() instanceof ContentNode) {
                $url = null;
                if ($data->getId()) {
                    $url = $this->router->generate('oro_content_node_get_changed_urls', ['id' => $data->getId()]);
                }

                $form->add(
                    'slugPrototypesWithRedirect',
                    LocalizedSlugWithRedirectType::class,
                    [
                        'label' => 'oro.webcatalog.contentnode.slug_prototypes.label',
                        'required' => true,
                        'source_field' => 'titles',
                        'get_changed_slugs_url' => $url
                    ]
                );
                $form->add(
                    'parentScopeUsed',
                    CheckboxType::class,
                    [
                        'label' => 'oro.webcatalog.contentnode.parent_scope_used.label',
                        'required' => false
                    ]
                );
            }
        }
    }

    /**
     * @param FormEvent $event
     */
    public function onSubmit(FormEvent $event)
    {
        /** @var ContentNode $contentNode */
        $data = $event->getData();
        if ($data instanceof ContentNode) {
            if ($data->getParentNode()) {
                if ($data->isParentScopeUsed()) {
                    $data->resetScopes();
                }
            } else {
                $data->setParentScopeUsed(false);
            }
            if (!$data->getContentVariants()->isEmpty()) {
                $data->getContentVariants()->map(
                    function (ContentVariant $contentVariant) use ($data) {
                        if (!$contentVariant->getNode()) {
                            $contentVariant->setNode($data);
                        }
                    }
                );
            }
        }
    }

    /**
     * @param FormEvent $event
     */
    public function onPreSubmit(FormEvent $event)
    {
        $data = $event->getData();
        if (!\is_array($data)) {
            return;
        }

        if (!isset($data['scopes'])) {
            $data['scopes'] = [];
        }

        $event->setData($data);
    }

    /**
     * {@inheritdoc}
     */
    public function configureOptions(OptionsResolver $resolver)
    {
        $resolver->setDefaults(
            [
                'data_class' => ContentNode::class,
            ]
        );
    }

    /**
     * @return string
     */
    public function getName()
    {
        return $this->getBlockPrefix();
    }

    /**
     * {@inheritdoc}
     */
    public function getBlockPrefix()
    {
        return self::NAME;
    }
}
{ "pile_set_name": "Github" }
Coyote predates man for claims on North America Carol Reese, Special to The Sun Published 9:00 a.m. CT Dec. 18, 2017 In this photo provided by the New York City Police Department’s Special Operations Division, a female coyote lay in an animal carrier after being captured by Special Operations officers on Manhattan’s west side, Saturday, April 25, 2015, in New York. The police released the coyote to the custody of the American Society for Prevention of Cruelty to Animals, where she is being cared for. A string of recent sightings in Manhattan has drawn new attention to the wily critters that have been spotted periodically in New York since the 1990s. (New York City Police Department Special Operations Division via AP) (Photo: AP) I was walking on a field road into a blustery wind on the family farm in Mississippi. A coyote slid from the rippling tawny broomsedge, and trotted ahead of me for precious minutes, close enough to see the longer black-tipped hairs blow, revealing softer golden fur beneath. The wind shifted and it wheeled to look at me before leaping back into the grasses. Many people would have shot this lithe animal as it went about doing what coyotes do, hunting rodents and rabbits, and digging for grubs. While they might have robbed that coyote of its life and its pack of a family member, the killing would have done little to diminish coyote populations. Coyotes have demonstrated that they flourish in spite of having no legal protection. They can be shot, trapped, or snared any time of year. There are no bag limits, and the use of poison is still an option in 16 states. Thousands of deaths attributed to these poisons include bear, eagle, vulture, raven, cougar, fox, pet dogs, and many more examples of “collateral damage”. A 14-year-old boy also came into contact with an unmarked device that spewed cyanide in his face. He rinsed as much as he could with snow and survived. His yellow lab, Casey, did not. 
In spite of these efforts (some 500,000 are killed each year), it is the coyote that laughs last. Astonishingly, their numbers continue to grow. The decline of other large predators helped them spread into parts of the country where they had been uncommon. Coyotes produce larger litters when populations are reduced, so they rebound quickly. Young adults move into new territories to start their own families. Shooting one coyote, or killing all of a litter in the den, simply makes room for a new pack, and it follows that killing local coyotes that are not causing problems simply makes room for a group that may not be as well-mannered. Left alone, coyote populations tend to stabilize and learn to be wary. Those who believe our wild areas are healthier eco-systems with the presence of a large predator recommend methods of non-lethal livestock protection. Cows close to calving and small calves can be kept close and protected with electric fence. Guard dogs, llamas, mules and donkeys can be effective. Studies show that livestock predation is reduced when farmers change old methods of disposal of dead livestock. Dragging carcasses to a farm “boneyard” simply teaches coyotes to like the taste of livestock. I’ll support the shooting of a nuisance coyote, but if a person hunts them for “sport”, I beg them to reconsider. The spine-tingling call of the coyote has welcomed the night on this continent for a million years, much longer than we have been here. I’d tell them it’s an honor to hear the ancient evening song and to have the rare moment of exchange with those golden eyes.
{ "pile_set_name": "Pile-CC" }
<?php
/**
 * @author Adam Charron <[email protected]>
 * @copyright 2009-2019 Vanilla Forums Inc.
 * @license GPL-2.0-only
 */

namespace Vanilla\Contracts\Site;

/**
 * Provider for site sections.
 *
 * This is called a "provider" because it does not contain any methods for creating/modifying sections.
 * Some implementations may contain this behaviour but it is not strictly defined for this interface.
 */
interface SiteSectionProviderInterface {

    /**
     * Returns all sections of the site.
     *
     * @return SiteSectionInterface[]
     */
    public function getAll(): array;

    /**
     * Get the current site section for the request automatically if possible.
     *
     * @return SiteSectionInterface|null
     */
    public function getCurrentSiteSection(): ?SiteSectionInterface;
}
{ "pile_set_name": "Github" }
package App::Ack::Filter::FirstLineMatch;

=head1 NAME

App::Ack::Filter::FirstLineMatch

=head1 DESCRIPTION

The class that implements filtering files by their first line.

=cut

use strict;
use warnings;

use parent 'App::Ack::Filter';

sub new {
    my ( $class, $re ) = @_;

    $re =~ s{^/|/$}{}g; # XXX validate?
    $re = qr{$re}i;

    return bless {
        regex => $re,
    }, $class;
}

# This test reads the first 250 characters of a file, then just uses the
# first line found in that.  This prevents reading something like an entire
# .min.js file (which might be only one "line" long) into memory.

sub filter {
    my ( $self, $file ) = @_;

    return $file->firstliney =~ /$self->{regex}/;
}

sub inspect {
    my ( $self ) = @_;

    return ref($self) . ' - ' . $self->{regex};
}

sub to_string {
    my ( $self ) = @_;

    (my $re = $self->{regex}) =~ s{\([^:]*:(.*)\)$}{$1};

    return "First line matches /$re/";
}

BEGIN {
    App::Ack::Filter->register_filter(firstlinematch => __PACKAGE__);
}

1;
{ "pile_set_name": "Github" }
--- abstract: 'Constraint propagation algorithms implement logical inference. For efficiency, it is essential to control whether and in what order basic inference steps are taken. We provide a high-level framework that clearly differentiates between information needed for controlling propagation versus that needed for the logical semantics of complex constraints composed from primitive ones. We argue for the appropriateness of our *controlled propagation* framework by showing that it captures the underlying principles of manually designed propagation algorithms, such as literal watching for unit clause propagation and the lexicographic ordering constraint. We provide an implementation and benchmark results that demonstrate the practicality and efficiency of our framework.' author: - Sebastian Brand - 'Roland H.C. Yap' title: 'Towards “Propagation = Logic + Control”' --- Introduction ============ Constraint programming solves combinatorial problems by combining search and logical inference. The latter, constraint propagation, aims at reducing the search space. Its applicability and usefulness relies on the availability of efficiently executable propagation algorithms. It is well understood how primitive constraints, [e.g.]{} indexical constraints, and also their reified versions, are best propagated. We also call such primitive constraints *pre-defined*, because efficient, special-purpose propagation algorithms exist for them and many constraint solving systems provide implementations. However, when modelling problems, one often wants to make use of more complex constraints whose semantics can best be described as a combination of pre-defined constraints using logical operators ([i.e.]{} conjunction, disjunction, negation). Examples are constraints for breaking symmetries [@frisch:2002:global] and channelling constraints [@cheng:1999:increasing]. Complex constraints are beneficial in two aspects. 
Firstly, from a reasoning perspective, complex constraints give a more direct and understandable high-level problem model. Secondly, from a propagation perspective, the more global scope of such constraints can allow stronger inference. While elaborate special-purpose propagation algorithms are known for many specific complex constraints (the classic example is the ${\mathsf{alldifferent}}$ constraint discussed in [@regin:1994:filtering]), the diversity of combinatorial problems tackled with constraint programming in practice implies that more diverse and rich constraint languages are needed. Complex constraints which are defined by logical combinations of primitive constraints can always be decomposed into their primitive constituents and Boolean constraints, for which propagation methods exist. However, decomposing in this way may ([A]{}) cause redundant propagation, as well as ([B]{}) limit possible propagation. This is due to the loss of a global view: information between the constituents of a decomposition is only exchanged via shared constrained variables. As an example, consider the implication constraint $x = 5 {\rightarrow}y \neq 8$ during constructive search. First, once the domain of $x$ does not contain $5$ any more, the conclusion $y \neq 8$ is irrelevant for the remainder of the search. Second, only an instantiation of $y$ is relevant as non-instantiating reductions of the domain of $y$ do not allow any conclusions on $x$. These properties are lost if the implication is decomposed into the reified constraints $(x = 5) \equiv b_1$, $(y \neq 8) \equiv b_2$ and the Boolean constraints ${{\mathsf{not}}}(b_1, b_1')$, ${{\mathsf{or}}}(b_1', b_2)$. Our focus is point ([A]{}). We show how [***shared control information***]{} allows a constraint to signal others what sort of information is relevant to its propagation or that any future propagation on their part has become irrelevant to it. 
We address ([B]{}) to an extent by considering implied constraints in the decomposition. Such constraints may be logically redundant but not operationally so. Control flags connecting them to their respective antecedents allow us to keep track of the special status of implied constraints, so as to avoid redundant propagation steps. Our proposed control framework is naturally applicable not only to the usual tree-structure decomposition but also to those with a more complex DAG structure, which permits stronger propagation. Our objective is to capture the *essence* of manually designed propagation algorithms, which implicitly merge the separate aspects of logic and control. We summarise this by *Propagation = Logic + Control* in the spirit of [@kowalski:1979:algorithm]. The ultimate goal of our approach is a fully automated treatment of arbitrary complex constraints specified in a logic-based constraint definition language. We envisage such a language to be analogous to CLP but focused on propagation. Our framework would allow users lacking the expertise in or the time for the development of specialised propagation to rapidly prototype and refine propagation algorithms for complex constraints. Preliminaries {#preliminaries .unnumbered} ------------- Consider a finite sequence of different variables $X = {{x_1, \ldots ,x_m}}$ with respective domains $D(x_1), \ldots, D(x_m)$. A [***constraint***]{} $C$ on $X$ is a pair ${\langle S, X \rangle}$. The set $S$ is an $m$-ary relation and a subset of the Cartesian product of the domains, that is, $S \subseteq D(x_1) \times \ldots \times D(x_m)$. The elements of $S$ are the [***solutions***]{} of the constraint, and $m$ is its *arity*. We assume $m {\geqslant}1$. We sometimes write $C(X)$ for the constraint and often identify $C$ with $S$. 
We distinguish pre-defined, [***primitive***]{} constraints, such as $x = y, x {\leqslant}y$, and [***complex***]{} constraints, constructed from the primitive constraints and the logical operators $\lor, \land, \lnot$ etc. For each logical operator there is a corresponding [***Boolean***]{} constraint. For example, the satisfying assignments of $x \lor y = z$ are the solutions of the constraint ${{\mathsf{or}}}(x,y,z)$. The [***reified***]{} version of a constraint $C(X)$ is a constraint on $X$ and an additional Boolean variable $b$ reflecting the truth of $C(X)$; we write it as ${C(X) \equiv b}$. Complex constraints can be [***decomposed***]{} into a set of reified primitive constraints and Boolean constraints, whereby new Boolean variables are introduced. For example, the first step in decomposing $C_1 \lor C_2$ may result in the three constraints ${C_1 \equiv b_1}$, ${C_2 \equiv b_2}$, and ${{\mathsf{or}}}(b_1, b_2, 1)$. Constraint [***propagation***]{} aims at inferring new constraints from given constraints. In its most common form, a single constraint is considered, and the domains of its variables are reduced without eliminating any solution of the constraint. If every domain is maximally reduced and none is empty, the constraint is said to be [***domain-consistent***]{} (DC). For instance, $x < y$ with $D(x) = \{1,2\}$, $D(y) = \{1,2,3\}$ can be made domain-consistent by inferring the constraint $y \neq 1$, leading to the smaller domain $D(y) = \{2,3\}$. Decomposing a complex constraint may hinder propagation. For example, DC-establishing propagation is guaranteed to result in the same domain reductions on a constraint and its decomposition only if the constraint graph of the decomposition is a tree [@freuder:1982:sufficient]. For instance, the constraints of the decomposition of the constraint $(x > y) \land (x < y)$ considered in isolation do not indicate its inconsistency. 
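The domain-consistency example above ($x < y$ with $D(x) = \{1,2\}$, $D(y) = \{1,2,3\}$) can be reproduced by a small value-based pruning sketch. The representation of domains as plain Python sets and the function name are ours, not the paper's; practical solvers would use bounds reasoning for an ordering constraint rather than value-by-value support checks.

```python
# Value-based pruning for x < y (illustrative sketch, our own encoding).
def prune_less_than(dx, dy):
    """Make x < y domain-consistent: a value survives only if it has a
    support (a compatible value) in the other variable's domain."""
    new_dx = {a for a in dx if any(a < b for b in dy)}
    new_dy = {b for b in dy if any(a < b for a in new_dx)}
    return new_dx, new_dy

# With D(x) = {1,2} and D(y) = {1,2,3} this infers y != 1:
# prune_less_than({1, 2}, {1, 2, 3}) == ({1, 2}, {2, 3})
```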
Logic and Control Information ============================= A complex constraint expressed as a logical combination of primitive constraints can be decomposed into its primitive parts. However, such a naive decomposition has the disadvantage that it assigns equal relevance to every constraint. This may cause redundant reasoning to take place for the individual primitive constraints and connecting Boolean constraints. We prevent this by maintaining fine-grained control information on whether the *truth* or *falsity* of individual constraints matters. We say that a truth status of a constraint is [***relevant***]{} if it entails the truth status of some other constraint. We focus on the disjunction operator first. \[prop:disj\] Suppose $C$ is the disjunctive constraint $C_1 \lor C_2$. Consider the truth status of $C$ in terms of the respective truth statuses of the individual constraints $C_1$, $C_2$. - If the falsity of $C$ is asserted then the falsity of $C_1$ and $C_2$ can be asserted. - If the truth of $C$ is asserted then the falsity of $C_1$ and $C_2$ is relevant, but not their truth. - If the truth of $C$ is queried then the truth of $C_1$ and $C_2$ is relevant, but not their falsity. - If the falsity of $C$ is queried then the falsity of only one of $C_1$ or $C_2$ is relevant, but not their truth. Let the reified version of $C$ be ${(C_1 \lor C_2) \equiv b}$ and its partial decomposition be ${C_1 \equiv b_1}$, ${C_2 \equiv b_2}$, ${{\mathsf{or}}}(b_1, b_2, b)$. The following cases can occur when asserting or querying $C$. Case $b=0$. : Then $C_1$ and $C_2$ must both be asserted to be false. Case $b=1$. : - Suppose $C_1$ is found to be true. This means that both the truth and the falsity of $C_2$, hence $C_2$ itself, have become irrelevant for the remainder of the current search. Although this simplifies the representation of $C$ to $C_1$, it does not lead to any inference on it. In this sense, the truth of $C_1$ is useless information. 
The case of $C_2$ being true is analogous. - Suppose $C_1$ is found to be false. This is useful information as we now must assert the truth of $C_2$, which may cause further inference in $C_2$. The case of $C_2$ being false is analogous. Only falsity of $C_1$ or $C_2$ is information that may cause propagation. Their truth is irrelevant in this respect. Case $b$ unknown. : We now assume that we know what aspect of the truth status of $C$ is relevant: its truth or its falsity. If neither is relevant then we need not consider $C$, [i.e.]{} $C_1$ and $C_2$, at all. If both the truth and falsity of $C$ are relevant, the union of the individual cases applies. Truth of $C$ is queried: : - Suppose $C_1$ or $C_2$ is found to be true. This means that $C$ is true, and knowing either case is therefore useful information. - Suppose $C_1$ is found to be false. Then the truth of $C$ depends on the truth of $C_2$. The reasoning for $C_2$ being false is analogous. The truth of both $C_1$ and $C_2$ matters, but not their falsity. Falsity of $C$ is queried: : - Suppose $C_1$ or $C_2$ is found to be true. While this means that $C$ is true, this is not relevant since its falsity is queried. - Suppose $C_1$ is found to be false. Then the falsity of $C$ depends on the falsity of $C_2$. Now suppose otherwise that $C_1$ is queried for falsity but *not* found to be false. If $C_1$ is not false then $C$ cannot be false. It is important to realise that this reasoning is independent of $C_2$. The reasoning for $C_2$ being false is symmetric. In summary, to determine the falsity of $C$, it suffices to query the falsity of *just one* of $C_1$ or $C_2$. Fig. \[fig:control-flow-or\] shows the flow of control information through a disjunction. There, and throughout the rest of this paper, we denote a truth query by ${{\mbox{{}\textsl{chk-true}}}}$ and a falsity query by ${{\mbox{{}\textsl{chk-false}}}}$. Analogous studies on control flow can be conducted for all other Boolean operators. 
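The single-watcher argument for a falsity query on a disjunction can be sketched in code. The encoding is ours, not the paper's: each disjunct is modelled as a thunk reporting its falsity status, with `True` meaning known false, `False` meaning known not false, and `None` meaning undecided.

```python
# Sketch of "watch one disjunct" for falsity queries on C1 \/ ... \/ Cn
# (our own illustrative encoding; the paper expresses this as control rules).
def query_falsity(disjuncts):
    """Return True if the disjunction is known false, False if it cannot
    be false, None if undecided.  Disjuncts are inspected lazily: as soon
    as one is not known false, no further disjunct is queried."""
    for falsity_of in disjuncts:
        status = falsity_of()
        if status is not True:
            # The watched disjunct is not (known) false, hence the whole
            # disjunction is not (known) false; stop querying here.
            return None if status is None else False
    return True  # every disjunct is false, so the disjunction is false
```

This mirrors the watched-literal idea: as long as the watched disjunct is not false, the other disjuncts need not be examined at all.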
The case of a negated constraint is straightforward: truth and falsity swap their roles. Conjunction is entirely symmetric to disjunction due to De Morgan’s law. For example, a query for falsity of the conjunction propagates to both conjuncts while a query for truth need only be propagated to one conjunct. We remark that one can apply such an analysis to other kinds of operators including non-logical ones. Thus, the ${\mathsf{cardinality}}$ constraint [@hentenryck:1991:cardinality] can be handled within this framework. Controlled Propagation ---------------------- Irrelevant inference can be prevented by distinguishing whether the truth or the falsity of a constraint matters. This control information arises from truth information and is propagated similarly. By [***controlled propagation***]{} we mean constraint propagation that (1) conducts inference according to truth and falsity information and (2) propagates such information. We now characterise controlled propagation for a complex constraint in decomposition. We are interested in the *effective* propagation, [i.e.]{} newly inferred constraints (such as smaller domains) on the original variables rather than on auxiliary Boolean variables. We assume that only individual constraints are propagated[^1]. This is the usual case in practice. Controlled and uncontrolled propagation of the constraints of the decomposition of a constraint $C$ are equivalent with respect to the variables of $C$ if only single constraints are propagated. Proposition \[prop:disj\] and analogous propositions for the other Boolean operators. In the following, we explain a formal framework for maintaining and reacting to control information. ### Control store. Constraints communicate truth information by shared Boolean variables. Similarly, we think of control information being communicated between constraints by shared sets of control flags. 
As control flags we consider the truth status queries ${{\mbox{{}\textsl{chk-true}}}}$, ${{\mbox{{}\textsl{chk-false}}}}$ and the additional flag ${{\mbox{{}\textsl{irrelevant}}}}$ signalling [***permanent irrelevance***]{}. In this context, ‘permanently’ refers to subsidiary parts of the search, that is, until the next back-tracking. Note that the temporary absence of truth and falsity queries on a constraint is not the same as its irrelevance. We write $${C \text{ with } \mathcal{FS}}$$ to mean that the constraint $C$ can read and update the sequence of control flag sets $\mathcal{FS}$. One difference between logic and control information communication is that control flows only one way, from a producer to a consumer. ### Propagating control. A set of control flags ${\mathcal{F}}$ is updated by adding or deleting flags. We abbreviate the adding operation ${\mathcal{F}}:= {\mathcal{F}}{\cup}\{{\mbox{{}\textsl{f}}}\}$ as ${\mathcal{F}}{\mathbin{{\cup}{=}}}{\mbox{{}\textsl{f}}}$. We denote by ${\mathcal{F}}_1 {\rightsquigarrow}{\mathcal{F}}_2$ that from now on permanently changes to the control flags in ${\mathcal{F}}_1$ are reflected in corresponding changes to ${\mathcal{F}}_2$; [e.g.]{} an addition of ${\mbox{{}\textsl{f}}}$ to ${\mathcal{F}}_1$ leads to an addition of ${\mbox{{}\textsl{f}}}$ to ${\mathcal{F}}_2$. We employ rules to specify how control information is attached to the constituents of a decomposed complex constraint, and how it propagates. The rule $A {\Rightarrow}B$ denotes that the conditions in $A$, consisting of constraints and associated control information, entail the constraints and the updates of control information specified in $B$. We use [***delete***]{} statements in the conclusion to explicitly remove a constraint from the constraint store once it is solved or became permanently irrelevant. ### Relevance. 
At the core of controlled propagation is the principle that reasoning effort should be made only if it is relevant to do so, that is, if the truth or falsity of the constraint at hand is asserted or queried. We reflect this condition in the predicate $$\begin{aligned} \tag{is\_rel} \label{eq:relevance-condition} \begin{split} {\mathit{is\_relevant}}(b, {\mathcal{F}}) \quad := \qquad &b = 1 {\quad\text{or}\quad} {{\mbox{{}\textsl{chk-true}}}}\in {\mathcal{F}}\quad\text{or}\\ &b = 0 {\quad\text{or}\quad} {{\mbox{{}\textsl{chk-false}}}}\in {\mathcal{F}}. \end{split}\end{aligned}$$ It applies to constraints in the form ${C \equiv b \text{ with } {\mathcal{F}}}$. We show later that this principle can be applied to primitive constraints. Boolean Constraints ------------------- We again focus on disjunctive constraints. The following rule decomposes the constraint ${(C_1 \lor C_2) \equiv b}$ only if the relevance test is passed. In this case the shared control sets are initialised. $$\begin{aligned} \tag{${{\mathsf{or}}}_{\text{dec}}$} \label{rule:or-decomposition} \begin{split} \begin{array}[b]{@{}l} {\mathit{is\_relevant}}(b, {\mathcal{F}}) \end{array} {{\quad\Rightarrow\quad}}& {{{\mathsf{or}}}(b, b_1, b_2) \text{ with } {\langle {\mathcal{F}}, {\mathcal{F}}_1, {\mathcal{F}}_2 \rangle}}, \\& {C_1 \equiv b_1 \text{ with } {\mathcal{F}}_1}, {\mathcal{F}}_1 := {\varnothing}, \\& {C_2 \equiv b_2 \text{ with } {\mathcal{F}}_2}, {\mathcal{F}}_2 := {\varnothing}. 
\end{split}\end{aligned}$$ The following rules specify how control information propagates through this disjunctive constraint in accordance with Proposition \[prop:disj\]: $$\begin{aligned} b = 1 &{{\quad\Rightarrow\quad}}{\mathcal{F}}_1 {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{chk-false}}}}, {\mathcal{F}}_2 {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{chk-false}}}}; \nonumber \\ b_1 = 0 &{{\quad\Rightarrow\quad}}{\mathcal{F}}{\rightsquigarrow}{\mathcal{F}}_2, {\text{{}delete}}\ {{\mathsf{or}}}(b, b_1, b_2); \nonumber \\ b_2 = 0 &{{\quad\Rightarrow\quad}}{\mathcal{F}}{\rightsquigarrow}{\mathcal{F}}_1, {\text{{}delete}}\ {{\mathsf{or}}}(b, b_1, b_2); \nonumber \\ b_1 = 1 &{{\quad\Rightarrow\quad}}{\mathcal{F}}_2 {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{irrelevant}}}}, {\text{{}delete}}\ {{\mathsf{or}}}(b, b_1, b_2); \nonumber \\ b_2 = 1 &{{\quad\Rightarrow\quad}}{\mathcal{F}}_1 {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{irrelevant}}}}, {\text{{}delete}}\ {{\mathsf{or}}}(b, b_1, b_2); \nonumber \\ {{\mbox{{}\textsl{chk-false}}}}\in {\mathcal{F}}&{{\quad\Rightarrow\quad}}{\mathcal{F}}_1 {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{chk-false}}}}\tag{${{\mathsf{or}}}_{\text{cf}}$} \label{rule:or-control-checkfalse}; \\ {{\mbox{{}\textsl{chk-true}}}}\in {\mathcal{F}}&{{\quad\Rightarrow\quad}}{\mathcal{F}}_1 {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{chk-true}}}}, {\mathcal{F}}_2 {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{chk-true}}}}; \nonumber \\ {{\mbox{{}\textsl{irrelevant}}}}\in {\mathcal{F}}&{{\quad\Rightarrow\quad}}{\mathcal{F}}_1 {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{irrelevant}}}}, {\mathcal{F}}_2 {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{irrelevant}}}}, {\text{{}delete}}\ {{\mathsf{or}}}(b, b_1, b_2). \nonumber\end{aligned}$$ In rule , we arbitrarily select the first disjunct to receive ${{\mbox{{}\textsl{chk-false}}}}$. 
For comparison and completeness, here are the rules propagating truth information: $$\begin{aligned} b_1 = 0 &{{\quad\Rightarrow\quad}}b = b_2; & b_1 = 1 &{{\quad\Rightarrow\quad}}b = 1; \\ b_2 = 0 &{{\quad\Rightarrow\quad}}b = b_1; & b_2 = 1 &{{\quad\Rightarrow\quad}}b = 1; \\ b = 0 &{{\quad\Rightarrow\quad}}b_1 = 0, b_2 = 0.\end{aligned}$$ Control propagation for the negation constraint ${{{\mathsf{not}}}(b, b_{{N}}) \text{ with } {\langle {\mathcal{F}}, {\mathcal{F}}_{{N}} \rangle}}$ is straightforward: $$\begin{aligned} b = 1 \text{ or } b = 0 \text{ or } b_{{N}} = 1 \text{ or } b_{{N}} = 0 &{{\quad\Rightarrow\quad}}{\text{{}delete}}\ {{\mathsf{not}}}(b, b_{{N}}); \\ {{\mbox{{}\textsl{chk-false}}}}\in {\mathcal{F}}&{{\quad\Rightarrow\quad}}{\mathcal{F}}_{{N}} {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{chk-true}}}}; \\ {{\mbox{{}\textsl{chk-true}}}}\in {\mathcal{F}}&{{\quad\Rightarrow\quad}}{\mathcal{F}}_{{N}} {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{chk-false}}}}; \\ {{\mbox{{}\textsl{irrelevant}}}}\in {\mathcal{F}}&{{\quad\Rightarrow\quad}}{\mathcal{F}}_{{N}} {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{irrelevant}}}}.\end{aligned}$$ The rules for other Boolean operators are analogous. Note that a move from binary to $n$-ary conjunctions or disjunctions does not affect the control flow in principle, in the same way that the logic is unaffected. Both ${{\mbox{{}\textsl{chk-true}}}}$ and ${{\mbox{{}\textsl{chk-false}}}}$ can be in the control set of a constraint at the same time, as it might be in a both positive and negative context. An example is the condition of an if-then-else. On the other hand, if for instance a constraint is not in a negated context, ${{\mbox{{}\textsl{chk-false}}}}$ cannot arise. Primitive Constraints --------------------- Asserting and querying other primitive constraints can be controlled similarly to Boolean constraints. In particular, the relevance condition  must be satisfied before inspecting a constraint. 
We furthermore deal with ${{\mbox{{}\textsl{irrelevant}}}}\in {\mathcal{F}}$ as expected, by not asserting the primitive constraint or by deleting it from the set of currently queried or asserted constraints. When a query on a primitive constraint is inconclusive, it is re-evaluated whenever useful. This can be when elements from a variable domain are removed or when a bound changes. We rely on the constraint solving environment to signal such changes. Deciding the truth or the falsity of a constraint in general is an expensive operation that requires the evaluation of every variable domain. A primitive $C(X)$ is guaranteed to be true if and only if $C(X) \subseteq D(X)$ and $C(X)$ is non-empty. $C$ is guaranteed to be false if and only if $C(X) {\cap}D(X) = {\varnothing}$, where $X = {{x_1, \ldots ,x_n}}$ and $D(X) = D(x_1) \times \ldots \times D(x_n)$. For some primitive constraints we can give complete but simpler evaluation criteria, similarly to indexicals [@codognet:1996:compiling]; see Tab. \[tab:primitive-constraint-queries\]. $$\begin{array}{@{}l@{\quad}||@{\quad}l|@{\quad}l} \text{Constraint } & \text{true if} & \text{false if} \\[1ex]\hline&&\\[-1.5ex] x \in S & D(x) \subseteq S & D(x) {\cap}S = {\varnothing}\\[0.5ex] x = a & |D(x)| = 1,\ D(x) = \{a\} & a \notin D(x) \\[0.5ex] x = y & |D(x)| = |D(y)| = 1,\ D(x) = D(y)\quad & D(x) {\cap}D(y) = {\varnothing}\\[0.5ex] x {\leqslant}y & \max(D(x)) {\leqslant}\min(D(y)) & \min(D(x)) > \max(D(y)) \end{array}$$ Practical constraint solving systems usually maintain domain bounds explicitly. This makes answering the truth query for equality constraints and the queries for ordering constraints very efficient. Furthermore, the re-evaluation of a query can be better controlled: only changes of the respective bounds are an event that makes a re-evaluation worthwhile. 
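The bounds-based queries for $x {\leqslant}y$ in the table above translate directly into code. Representing domains as plain sets and returning a three-valued status are our own sketch conventions, not part of the paper.

```python
# Bounds-based truth/falsity queries for the primitive constraint x <= y
# (illustrative sketch following the evaluation criteria in the table).
def query_leq(dx, dy):
    """Return True if x <= y is entailed by the current domains, False if
    it is dis-entailed, and None if the query is inconclusive."""
    if max(dx) <= min(dy):
        return True   # holds for every remaining pair of values
    if min(dx) > max(dy):
        return False  # fails for every remaining pair of values
    return None       # re-evaluate when the relevant bounds change
```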
Implied Constraints {#sec:implied-constraints} =================== Appropriate handling of implied constraints fits naturally into the control propagation framework. Suppose the disjunctive constraint $C_1 \lor C_2$ implies $C_{{{\vartriangleright}}}$; that is, $(C_1 \lor C_2) {\rightarrow}C_{{{\vartriangleright}}}$ is always true. Logically, $C_{{{\vartriangleright}}}$ is redundant. In terms of constraint propagation, it may not be, however. Consider the disjunction $(x = y) \lor (x < y)$, which implies $x {\leqslant}y$. Assume the domains are $D(x) = \{4,5\}$, $D(y) = \{3,4,5\}$. Since the individual disjuncts are not false, there is no propagation from the decomposition. In order to conclude $x {\leqslant}y$ and thus $D(y) = \{4,5\}$ we associate the constraint with its implied constraint. We write a disjunctive constraint annotated with an implied constraint as $$C_1 \lor C_2 {\vartriangleright}C_{{{\vartriangleright}}}.$$ To benefit from the propagation of $C_{{{\vartriangleright}}}$, we could represent this constraint as $(C_1 \lor C_2) \land C_{{{\vartriangleright}}}$. However, this representation has the shortcoming that it leads to redundant propagation in some circumstances. Once one disjunct, say, $C_1$, is known to be false, the other disjunct, $C_2$, can be imposed. The propagation of $C_{{{\vartriangleright}}}$ is then still executed, however, while it is subsumed by that of $C_2$. It is desirable to recognise that $C_{{{\vartriangleright}}}$ is operationally redundant at this point. 
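The running example can be replayed concretely. In this sketch (our illustration; set-based domains and the helper names are assumptions), neither disjunct of $(x = y) \lor (x < y)$ is disentailed, so the bare decomposition prunes nothing, while the implied $x {\leqslant}y$ does:

```python
# Replaying (x = y) or (x < y)  |>  x <= y  on D(x) = {4,5}, D(y) = {3,4,5}.

def false_eq(Dx, Dy):
    """Is x = y disentailed?"""
    return not (Dx & Dy)

def false_lt(Dx, Dy):
    """Is x < y disentailed?"""
    return min(Dx) >= max(Dy)

def prune_leq(Dx, Dy):
    """Propagate the implied constraint x <= y on the domains."""
    return ({v for v in Dx if v <= max(Dy)},
            {v for v in Dy if v >= min(Dx)})

Dx, Dy = {4, 5}, {3, 4, 5}
# Neither disjunct is false, so the bare decomposition cannot prune:
no_pruning = not false_eq(Dx, Dy) and not false_lt(Dx, Dy)
# The implied constraint, however, removes 3 from D(y):
Dx, Dy = prune_leq(Dx, Dy)
```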
We capture this situation by enhancing the decomposition rule  as follows: $$\begin{aligned} \begin{split} {(C_1 \lor C_2 {\vartriangleright}C_{{{\vartriangleright}}}) \equiv b \text{ with } {\mathcal{F}}} {{\quad\Rightarrow\quad}}& {{{\mathsf{or}}_{{{\vartriangleright}}}}(b, b_1, b_2, b_{{{\vartriangleright}}}) \text{ with } {\langle {\mathcal{F}}, {\mathcal{F}}_1, {\mathcal{F}}_2, {\mathcal{F}}_{{{\vartriangleright}}} \rangle}}, \\& {C_1 \equiv b_1 \text{ with } {\mathcal{F}}_1}, {\mathcal{F}}_1 := {\varnothing}, \\& {C_2 \equiv b_2 \text{ with } {\mathcal{F}}_2}, {\mathcal{F}}_2 := {\varnothing}, \\& {C_{{{\vartriangleright}}} \equiv b_{{{\vartriangleright}}} \text{ with } {\mathcal{F}}_{{{\vartriangleright}}}}, {\mathcal{F}}_{{{\vartriangleright}}} := {\varnothing}. \end{split}\end{aligned}$$ Additionally to the control rules for regular disjunctive constraints shown earlier, we now also use the following four rules: $$\begin{aligned} \!\!b_{{{\vartriangleright}}} = 0 &{\ {\Rightarrow}\ }b = 0; \hspace{1.8em} & b_1 = 0 &{\ {\Rightarrow}\ }{\mathcal{F}}_{{{\vartriangleright}}} {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{irrelevant}}}}, {\text{{}delete}}\ {{\mathsf{or}}_{{{\vartriangleright}}}}(b, b_1, b_2, b_{{{\vartriangleright}}});\\ b = 1 &{\ {\Rightarrow}\ }b_{{{\vartriangleright}}} = 1; & b_2 = 0 &{\ {\Rightarrow}\ }{\mathcal{F}}_{{{\vartriangleright}}} {\mathbin{{\cup}{=}}}{{\mbox{{}\textsl{irrelevant}}}}, {\text{{}delete}}\ {{\mathsf{or}}_{{{\vartriangleright}}}}(b, b_1, b_2, b_{{{\vartriangleright}}}).\end{aligned}$$ We envisage the automated discovery of implied constraints, but for now we assume manual annotation. Subconstraint Sharing: From Trees to DAGs ========================================= The straightforward decomposition of complex constraints can contain unnecessary copies of the same subconstraint in different contexts. 
The dual constraint graph (whose vertices are the constraints and whose edges are the variables) is a tree, while often a directed acyclic graph (DAG) gives a logically equivalent but more compact representation. See, for example, CDDs [@cheng:2005:constrained]. We can apply controlled propagation to complex constraints represented in DAG form. We need to account for the multiplicity of a constraint when handling queries on it: the set of control flags now becomes a *multiset*, and in effect, we maintain *reference counters* for subconstraints. Control flags need to be properly subtracted from the control set of a constraint. For the sake of a simple example, consider the constraint $(C \lor C_1) \land (C \lor C_2)$. Fig. \[fig:subconstraint-sharing\] shows a decomposition of it. Another example is the condition in an if-then-else constraint. Opportunities for shared structures arise frequently when constraints are defined in terms of subconstraints that in turn are constructed by recursive definitions. Case Studies {#sec:case-studies} ============ We examine several constraints studied in the literature and show that their decomposition benefits from controlled propagation. [**Literal Watching.**]{} The DPLL procedure for solving the SAT problem uses a combination of search and inference and can be viewed as a special case of constraint programming. Many SAT solvers based on DPLL employ unit propagation with *2-literal watching*, [e.g.]{} Chaff [@moskewicz:2001:chaff]. At any time, only changes to two literals per clause are tracked, and consideration of other literals is postponed. Let us view a propositional clause as a Boolean constraint. We define $${{\mathsf{clause}}}({{x_1, \ldots ,x_n}}) \quad := \qquad x_1 = 1 \ \lor\ {{\mathsf{clause}}}({{x_2, \ldots ,x_n}})$$ and show in Fig. 
\[fig:clause-constraint\] the decomposition of ${{\mathsf{clause}}}({{x_1, \ldots ,x_n}})$ as a graph for controlled and uncontrolled propagation (where $D(x_i) = \{0,1\}$ for all $x_i$). Both propagation variants enforce domain-consistency if the primitive equality constraints do and the variables are pairwise different. This corresponds to unit propagation. Uncontrolled decomposition expands fully into $n-1$ Boolean ${{\mathsf{or}}}$ constraints and $n$ primitive constraints $x_i = 1$. Controlled decomposition only expands into two ${{\mathsf{or}}}$ constraints and the first two primitive constraints $x_1=1$, $x_2=1$. The leaf node marked ${{\mathsf{clause}}}({{x_3, \ldots ,x_n}})$ is initially not expanded as neither assertion nor query information is passed to it. The essence is that the first ${{\mathsf{or}}}$ constraint results in two ${{\mbox{{}\textsl{chk-false}}}}$ queries to the subordinate ${{\mathsf{or}}}$ constraint which passes this query on to just one disjunct. This structure is maintained with respect to new information such as variable instantiations. No more than two primitive equality constraints are ever queried at a time. A reduction of inference effort as well as of space usage results. Controlled propagation here corresponds precisely to 2-literal watching. [**Disequality of Tuples.**]{} Finite domain constraint programming generally focuses on variables over the integers. Sometimes, higher-structured variable types, such as sets of integers, are more appropriate for modelling. Many complex constraints studied in the constraint community are on a sequence of variables and can thus naturally be viewed as constraining a variable whose type is tuple-of-integers. The recent study [@quimper:2005:beyond] examines how some known constraint propagation algorithms for integer variables can be lifted to higher-structured variables. 
One of the constraints examined is ${\mathsf{alldifferent}}$ on tuples, which requires a sequence of variables of type tuple-of-integers to be pairwise different. Its straightforward definition is $${{\mathsf{alldifferent\_tp}}}({\langle {{X_1, \ldots ,X_n}} \rangle}) \ := \ {\bigwedge}_{i,j \in 1, \ldots, n,\ i < j} {{\mathsf{different\_tp}}}(X_i, X_j),$$ where $${{\mathsf{different\_tp}}}({\langle {{x_1, \ldots ,x_m}} \rangle}, {\langle {{y_1, \ldots ,y_m}} \rangle}) \ := \ {\bigvee}_{i \in 1, \ldots, m} x_i \neq y_i.$$ Let us examine these constraints with respect to controlled propagation. The ${{\mathsf{different\_tp}}}$ constraint is a large disjunction, and it behaves thus like the ${{\mathsf{clause}}}$ constraint studied in the previous section – at most two disjuncts $x_i \neq y_i$ are queried for falsity at any time. Deciding the falsity of a disequality constraint is particularly efficient when the primitive constraints in Tab. \[tab:primitive-constraint-queries\] are used, [i.e.]{}falsity of disequality when the domains are singletons. If the domains are not singletons, re-evaluation of the query is only necessary once that is the case. In contrast, a truth query for a disequality is (more) expensive as the domains must be intersected, and, if inconclusive, should be re-evaluated whenever any domain change occurred. The ${{\mathsf{alldifferent\_tp}}}$ constraint is a conjunction of $\binom n2$ ${{\mathsf{different\_tp}}}$ constraints. Therefore, controlled propagation queries at most $n(n-1)$ disequality constraints for falsity at a time. Uncontrolled propagation asserts all $n(n-1)m/2$ reified disequality constraints and in essence queries truth and falsity of each. Using controlled rather than uncontrolled decomposition-based propagation for ${{\mathsf{alldifferent\_tp}}}$ saves substantial effort without loss of effective propagation. 
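The watched-disjunct behaviour shared by ${{\mathsf{clause}}}$ and ${{\mathsf{different\_tp}}}$ can be sketched as follows (a simplified abstraction of ours, not the paper's data structures): a disjunction keeps two watched disjunct indices that are not known false and only restores them on demand.

```python
# Watching two disjuncts of a large disjunction.  A disjunct is an
# arbitrary object; is_false is a falsity query over it.

def restore_watches(disjuncts, is_false, watches):
    """Re-establish two watched indices that are not known false.
    Returns ('ok', [i, j]), ('unit', i) if only disjunct i survives,
    or ('fail', None) if every disjunct is false."""
    alive = [i for i in watches if not is_false(disjuncts[i])]
    for i in range(len(disjuncts)):
        if len(alive) == 2:
            break
        if i not in alive and not is_false(disjuncts[i]):
            alive.append(i)
    if len(alive) == 2:
        return "ok", alive
    if len(alive) == 1:
        return "unit", alive[0]   # impose the surviving disjunct
    return "fail", None

# Disjuncts x_i != y_i, false exactly when both domains are equal singletons:
diseq_false = lambda d: len(d[0]) == 1 and d[0] == d[1]
```

Only the two watched disequalities are ever queried for falsity; the rest of the disjunction stays dormant until a watch must move.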
We remark that a specialised, stronger but non-trivial propagation algorithm for this case has been studied in [@quimper:2005:beyond]. The controlled propagation framework is then useful when specialised algorithms are not readily available, for example due to a lack of expertise or resources in the design and implementation of propagation algorithms. [**Lexicographic Ordering Constraint.**]{} It is often desirable to prevent symmetries in constraint problems. One way is to add symmetry-breaking constraints such as the lexicographic ordering constraint [@frisch:2002:global]. A straightforward definition is as follows: $${{\mathsf{lex}}}({\langle {{x_1, \ldots ,x_n}} \rangle}, {\langle {{y_1, \ldots ,y_n}} \rangle}) \ := \ \begin{array}[t]{@{}l} x_1 < y_1 \\ \lor\\ x_1 = y_1 \land {{\mathsf{lex}}}({\langle {{x_2, \ldots ,x_n}} \rangle}, {\langle {{y_2, \ldots ,y_n}} \rangle})\\ \lor\\ n = 0 \end{array}$$ With this definition, propagation of the decomposition does not always enforce domain-consistency. Consider ${{\mathsf{lex}}}({\langle x_1, x_2 \rangle}, {\langle y_1, y_2 \rangle})$ with the domains $D(x_1) = D(x_2) = D(y_2) = \{3..5\}$ and $D(y_1) = \{0..5\}$. Controlled decomposition results in the reified versions of $x_1 < y_1$, $x_1 = y_1$, $x_2 < y_2$ connected by Boolean constraints. None of these primitive constraints is true or false. Yet we should be able to conclude $x_1 {\leqslant}y_1$, hence $D(y_1) = \{3..5\}$, from the definition of ${{\mathsf{lex}}}$. The difficulty is that the naive decomposition is weaker than the logical definition because it only reasons on the individual primitive constraints. 
However, it is easy to see that $x_1 {\leqslant}y_1$ is an implied constraint in the sense of Section \[sec:implied-constraints\], and we can annotate the definition of ${{\mathsf{lex}}}$ accordingly: $${{\mathsf{lex}}}({\langle {{x_1, \ldots ,x_n}} \rangle}, {\langle {{y_1, \ldots ,y_n}} \rangle}) \ := \quad \begin{array}[t]{@{}l} x_1 < y_1 \\ \lor\\ x_1 = y_1 \land {{\mathsf{lex}}}({\langle {{x_2, \ldots ,x_n}} \rangle}, {\langle {{y_2, \ldots ,y_n}} \rangle})\\ {\vartriangleright}x_1 {\leqslant}y_1\\ \lor\\ n = 0 \end{array}$$ We state without proof that propagation of the constraints of the decomposition enforces domain-consistency on ${{\mathsf{lex}}}$ if the annotated definition is used. Tab. \[tab:lex-derivation\] represents a trace of ${{\mathsf{lex}}}$ on the example used in [@frisch:2002:global], showing the lazy decomposing due to controlled propagation. We collapse several atomic inference steps and omit the Boolean constraints, and we write $v_{i..j}$ to abbreviate ${{v_i, \ldots ,v_j}}$. Observe how the implied constraints $x_i {\leqslant}y_i$ are asserted, made irrelevant and then deleted. The derivation ends with no constraints other than $x_3 < y_3$ queried or asserted. Implementation and Benchmarks ============================= We implemented a prototype of the controlled propagation framework in the CLP system [@wallace:1997:eclipse], using its predicate suspension features and attributed variables to handle control information. The implementation provides controlled propagation for the basic Boolean and primitive constraints, and it handles implied constraints. Structure-sharing by a DAG-structured decomposition is not supported. We conducted several simple benchmarks to compare controlled and uncontrolled propagation on constraint decompositions, using the ${{\mathsf{clause}}}$, ${{\mathsf{different\_tp}}}$, ${{\mathsf{alldifferent\_tp}}}$ and ${{\mathsf{lex}}}$ constraints. A benchmark consisted of finding a solution to a single constraint. 
For the uncontrolled propagation benchmark, the constraint was simply decomposed into built-in Boolean and primitive constraints of the underlying CLP system, and implied constraints (in ${{\mathsf{lex}}}$) were conjunctively added to their respective premise. The number of variables in the respective tuple(s) was varied between five and 50. For the ${{\mathsf{alldifferent\_tp}}}$ benchmark, we chose $20$ tuples. The variables ranged over the interval $\{1..10\}$ (except for ${{\mathsf{clause}}}$). Solutions to the constraints were searched for by randomly selecting a variable and a value in its domain. This value was either assigned or excluded from the domain; this choice was also random. To obtain meaningful averages, every individual solution search was run a sufficient number of times (typically a few tens of thousands) so that the total computation time was roughly 15s. Each of these runs used a new initialisation of the pseudo-random number generator, resulting in a possibly different solution, while the two benchmark versions (controlled vs. uncontrolled propagation) used the same initial value to obtain identical search trees. Every experiment was repeated five times. In Tab. \[tab:benchmarks\], we give the relative solving time with controlled propagation, with the corresponding uncontrolled propagation benchmark taken to be 100%. $$\begin{array}{@{}l||rrrr|rrrr|rrrr|rrrr@{}} & \multicolumn{4}{c|}{{{\mathsf{clause}}}} & \multicolumn{4}{c|}{{{\mathsf{different\_tp}}}} & \multicolumn{4}{c|}{{{\mathsf{alldifferent\_tp}}}} & \multicolumn{4}{c}{{{\mathsf{lex}}}} \\\hline \text{nb. of variables} & 5 & 10 & 20 & 50 & 5 & 10 & 20 & 50 & 5 & 10 & 20 & 50 & 5 & 10 & 20 & 50 \\ \text{runtime (\%)} & 100 & 69 & 50 & 38 & 88 & 84 & 67 & 62 & 66 & 38 & 23 & 11 & 138 & 92 & 69 & 54 \end{array}$$ The benchmarks show that controlling propagation can reduce the propagation time. The reduction is especially substantial for high-arity constraints. For low-arity constraints, the extra cost of maintaining control information in our implementation can outweigh the saving due to less propagation.
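For reference, the randomised search described above can be sketched as follows. This is our reconstruction from the description only; the propagation hook and the absence of backtracking are assumptions, not details of the actual benchmark harness.

```python
import random

def random_search(domains, consistent, rng=random):
    """domains: dict var -> set of values; consistent: propagation hook
    returning False on failure.  Returns an assignment or None."""
    domains = {v: set(d) for v, d in domains.items()}
    while any(len(d) > 1 for d in domains.values()):
        # Pick a random undetermined variable and a random value ...
        var = rng.choice([v for v, d in domains.items() if len(d) > 1])
        val = rng.choice(sorted(domains[var]))
        # ... and randomly either assign it or exclude it.
        if rng.random() < 0.5:
            domains[var] = {val}
        else:
            domains[var].discard(val)
        if not consistent(domains):
            return None               # no backtracking in this sketch
    return {v: next(iter(d)) for v, d in domains.items()}
```

Seeding `rng` with the same value for both benchmark versions reproduces the identical search trees mentioned in the text.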
While we have not measured the space usage of the two propagation approaches, it follows from the analyses in Section \[sec:case-studies\] that using controlled propagation for the considered constraints often also requires less space, since constraints are decomposed only when required. We remark that efficiency was a minor concern in our high-level, proof-of-concept implementation; consequently we expect that it can be improved considerably. For example, for constraints that are in negation normal form (all constraints in our benchmark), the control flag ${{\mbox{{}\textsl{chk-true}}}}$ is never created. A simpler subset of the control propagation rules can then be used. Final Remarks ============= ### Related Work. {#related-work. .unnumbered} In terms of foundations, the controlled propagation framework can be described as a refined instance of the CLP scheme (see [@jaffar:1994:constraint]), by a subdivision of the set of active constraints according to their associated truth and falsity queries. Concurrent constraint programming (CCP) [@saraswat:1993:ccp], based on asserting and querying constraints, is closely related; our propagation framework can be viewed as an extension in which control is explicitly addressed and dealt with in a fine-grained way. A practical CCP-based language such as CHR [@fruehwirth:1998:theory] would lend itself well to an implementation. For example, control propagation rules with ${\text{{}delete}}$ statements can be implemented as simplification rules. A number of approaches address the issue of propagation of complex constraints. The proposal of [@bacchus:2005:propagating] is to view a constraint as an expression from which sets of inconsistent or valid variable assignments (in extension) can be computed. It focuses more on the complexity issues of achieving certain kinds of local consistencies. 
The work [@beldiceanu:2004:deriving] studies semi-automatic construction of propagation mechanisms for constraints defined by extended finite automata. An automaton is captured by signature (automaton input) constraints and state transition constraints. Signature constraints represent groups of reified primitive constraints and are considered pre-defined. They communicate with state transition constraints via constrained variables, which correspond to tuples of Boolean variables of the reified constraints in the signature constraints. Similarly to propagating the constraint in decomposition, all automata constraints are propagated independently of each other. Controlled propagation is similar to techniques used in <span style="font-variant:small-caps;">NoClause</span>, a SAT solver for propositional non-CNF formulas [@thiffault:2004:solving], which in turn lifts techniques such as 2-literal watching from CNF to non-CNF solvers. We describe here these techniques in a formal, abstract framework and integrate non-Boolean primitive constraints and implied constraints, thus making them usable for constraint propagation. ### Conclusion. {#conclusion. .unnumbered} We have proposed a new framework for propagating arbitrary complex constraints. It is characterised by viewing logic and control as separate concerns. We have shown that the controlled propagation framework explains and generalises some of the principles on which efficient manually devised propagation algorithms for complex constraints are based. By discussing an implementation and benchmarks, we have demonstrated feasibility and efficiency. The practical benefits of the controlled propagation framework are that it provides *automatic* constraint propagation for *arbitrary* logical combinations of primitive constraints. Depending on the constraint, controlling the propagation can result in substantially reduced usage of time as well as space. 
Our focus in this paper has been on reducing unnecessary inference steps. The complementary task of automatically identifying and enabling useful inference steps in our framework deserves to be addressed. It would be interesting to investigate whether automated reasoning methods can be used to strengthen constraint definitions, for instance by automatically deriving implied constraints.

Acknowledgements {#acknowledgements .unnumbered}
================

We thank the anonymous reviewers for their comments. This paper was written while Roland Yap was visiting the Swedish Institute of Computer Science, whose support and hospitality are gratefully acknowledged. The research here is supported by a NUS ARF grant.

N. Beldiceanu, M. Carlsson, and T. Petit. Deriving filtering algorithms from constraint checkers. In Wallace [@cp04], pages 107–122.

F. Bacchus and T. Walsh. Propagating logical combinations of constraints. In L. P. Kaelbling and A. Saffiotti, editors, [*Proc. of International Joint Conference on Artificial Intelligence ([IJCAI]{}’05)*]{}, pages 35–40, 2005.

B. M. W. Cheng, K. M. F. Choi, J. H.-M. Lee, and J. C. K. Wu. Increasing constraint propagation by redundant modeling: An experience report. *Constraints*, 4(2):167–192, 1999.

P. Codognet and D. Diaz. Compiling constraints in [clp(FD)]{}. *Journal of Logic Programming*, 27(3):185–226, 1996.

K. C. K. Cheng and R. H. C. Yap. Constrained decision diagrams. In M. M. Veloso and S. Kambhampati, editors, [*Proc. of 20th National Conference on Artificial Intelligence ([AAAI]{}’05)*]{}, pages 366–371. AAAI Press, 2005.

A. M. Frisch, B. Hnich, Z. Kiziltan, I. Miguel, and T. Walsh. Global constraints for lexicographic orderings. In P. Van Hentenryck, editor, [*Proc. of 8th International Conference on Principles and Practice of Constraint Programming ([CP]{}’02)*]{}, volume 2470 of [*LNCS*]{}, pages 93–108. Springer, 2002.

E. C. Freuder. A sufficient condition for backtrack-free search. *Journal of the ACM*, 29(1):24–32, 1982.

T. Frühwirth. Theory and practice of Constraint Handling Rules. *Journal of Logic Programming*, 37(1–3):95–138, 1998.

P. Van Hentenryck and Y. Deville. The Cardinality operator: A new logical connective for constraint logic programming. In K. Furukawa, editor, [*Proc. of 8th International Conference on Logic Programming ([ICLP]{}’91)*]{}, pages 745–759. MIT Press, 1991.

J. Jaffar and M. J. Maher. Constraint logic programming: A survey. *Journal of Logic Programming*, 19 & 20:503–582, 1994.

R. A. Kowalski. Algorithm = Logic + Control. *Communications of the ACM*, 22(7):424–436, 1979.

M. W. Moskewicz, C. F. Madigan, Y. Zhao, L. Zhang, and S. Malik. Chaff: Engineering an efficient SAT solver. In [*Proc. of 38th Design Automation Conference ([DAC]{}’01)*]{}, 2001.

C.-G. Quimper and T. Walsh. Beyond finite domains: The All Different and Global Cardinality constraints. In P. van Beek, editor, [*Proc. of 11th International Conference on Principles and Practice of Constraint Programming ([CP]{}’05)*]{}, volume 3709 of [*LNCS*]{}, pages 812–816. Springer, 2005.

J.-C. Régin. A filtering algorithm for constraints of difference in CSPs. In [*Proc. of 12th National Conference on Artificial Intelligence ([AAAI]{}’94)*]{}, pages 362–367. AAAI Press, 1994.

V. A. Saraswat. *Concurrent Constraint Programming*. MIT Press, 1993.

Chr. Thiffault, F. Bacchus, and T. Walsh. Solving non-clausal formulas with [DPLL]{} search. In Wallace [@cp04], pages 663–678.

M. Wallace, editor. [*Proc. of 10th International Conference on Principles and Practice of Constraint Programming ([CP]{}’04)*]{}, volume 3258 of [*LNCS*]{}. Springer, 2004.

M. G. Wallace, S. Novello, and J. Schimpf. ECLiPSe: A platform for constraint logic programming. *ICL Systems Journal*, 12(1):159–200, 1997.

[^1]: E.g., path-consistency enforcing propagation considers two constraints at a time.
Shanti Pereira storms to 200m SEA Games gold, Singapore’s first sprint triumph in 42 years

Not since 1973 has Singapore witnessed a local athlete win a sprint event in the SEA Games. Eng Chiew Guay and Glory Barnabas’ triumphs in the 100m and 200m respectively on home soil in the then-SEAP Games were followed by a long drought - but that ended on Wednesday evening. Shanti Pereira stormed to victory in the women’s 200m and into history to end a 42-year wait for gold, clocking 23.60s to set a new national record. The time, 0.3s off the Games record, was also the second time she rewrote her own national mark, having broken it in the morning’s heats.

Momentous win

Singapore’s newest sprint queen, who also won a 100m bronze the day before, was at a loss for words to describe her triumph as she radiated sheer delight. Gold medallist Shanti Pereira at the 28th SEA Games with her parents. Photo: Nick Tan. “This feels so amazing, I'm so happy!” said Pereira, still a little breathless from her momentous run. “I didn’t know I was going to win, but I did. I can’t describe how I feel right now. “My goal was a medal. I knew who I was racing against and I had a better chance in the 200m than the 100m… [but] I did not expect a gold, I definitely did not!” It was the sort of victory that could galvanise a nation’s sprint scene. Up against the Philippines’ Kayla Anise Richardson, the fastest qualifier in 23.67s, Pereira pulled away after the second bend and held off her closest competitor. “My game-plan was just to have a really good start and to not go so fast on the first curve… keep up with the person next to me and hopefully have more energy at the end,” she recalled. “I was a bit scared because the Malaysia girl (Zaitadul Zulkifli) seemed to be getting a little ahead of me, so I got a bit scared and I started to chiong (run faster) and it really worked!” Singapore's Shanti Veronica Pereira crosses the line first to win the 200m at the 28th SEA Games.
Photo: SINGSOC/Action Images via Reuters

Historic run for Dipna in hurdles

Fellow runner Dipna Lim-Prasad also created history when she won Singapore’s second-ever medal in the women’s 400m hurdles at the Games. The lithe hurdler, who turned 24 on Sunday, followed up on her 2013 bronze by clocking a new national record of 59.24s to nab silver, despite not being in top form. “I’ve been injured for two-and-a-half months so in the past two and a half months, I’ve only hurdled six times, including this race,” she explained. “Naturally that was something not very good coming into a major meet and my preparation wasn’t ideal, but this is once in a lifetime running on home ground and at the National Stadium, so I just gave it my all and I am glad it paid off.” Lim-Prasad was in third place going into the bend, but produced a strong finish to come in ahead of defending champion Wassana Winatho of Thailand. Vietnam’s Nguyen Thi Huyen took gold in a Games-record time of 56.15s.

For Zaki

She also dedicated her medal to all the athletes who tread the “more unorthodox” path in track and field, including former promising national hurdler Zaki Sapari, who was involved in a fatal accident last year. “Naturally, it was a bit stressful being on home ground and expecting to do well not only for our country, but for everyone out there including Zaki, our 400m hurdler who really wanted to run here at the SEA Games, but unfortunately passed away,” she reflected. “So I think it was sort of a run for everyone who wants to walk a path that is not very common and a bit more unorthodox and even though the odds aren’t really in your favour, you just want to run your best.”

Shot put gold and personal bests

Zhang Guirong won Singapore’s third athletics gold this Games in the women’s shot put, retaining her title with a season-best 14.60m.
While other local athletes did not make the podium in the rest of the day’s events, Eugenia Tan, 19, shattered her own women’s long jump record by 0.24m with a new mark of 6.18m.
[Occurrence of Giardia and Cryptosporidium in wastewater, surface water and ground water samples in Palermo (Sicily)]. This paper reports the investigation carried out on the occurrence of Cryptosporidium and Giardia (oo)cysts in water samples of two municipal treatment plants, and in surface water and ground water wells of the province of Palermo. The wastewater samples taken before and after the treatment process were assayed over the course of one year. Giardia cysts were detected in all samples throughout the year at higher concentration levels than Cryptosporidium oocysts, with a peak observed in spring. The overall removal efficiency of (oo)cysts in the treatment plants was about 90%. The presence of the parasites was also investigated in surface waters (three artificial lakes and one river); (oo)cysts were detected in one lake at very low concentration; in contrast, both parasites were found at high concentration levels in all the samples collected throughout one year from the water of the river. The pattern of occurrence of both parasites appears temporally related to rainfall levels. Cryptosporidium and Giardia were also found in ground water wells; their presence occurred only in waters taken from wells shallower than 31 meters, with concomitant presence of faecal bacteria. These results may provide further insight into the possible sources of Cryptosporidium and Giardia in the natural environment and stress the potential risk associated with the use of these waters for recreational and agricultural purposes.
Von Madd Laboratories is informing you that updates have been applied to your Von Madd Vibrator. There is no need to connect your Von Madd Vibrator to an internet connection. Updates have been applied using the new Quantum Matter Updater Satellite.

WARNING! If your vibrator was touching a paddle at the time of the update, that paddle may now be a pink 38DD bra.

Updates include

*Added new vibration setting: “Fuck yeah!”
*Added audio instructions for first time users.
*Added audio instructions for dirty minded users.
*Added secret compartment for lubrication storage.
*Added alternate grip for left handed users.
*Reduced noise by 10%.
*Improved battery lifespan by 40%.
*Improved adjustable penetration depth by 34%.
*Improved night illumination glow by 78%.
*Fixed an issue where some users would experience radiation based mutations resulting in super powers.
*Fixed an issue where some users would see God.
*Fixed an issue where some users inexplicably had a craving for a chili dog after using the Von Madd Vibrator.

3 Responses to “Von Madd Laboratories Vibrator Update”

but I bought the vibrator specifically for the radiation-based superpowers! (Well, that and the clit-to-jelly function, *sigh*) Please un-modify your upgrades this instant. God told me you would if I asked nicely.

When I purchased this vibrator at the pharmacy last month, I was promised that it would help my wife’s sore shoulders (she works as a clerk for the city). Soon after I gave it to her, her outlook improved greatly. She was more cheerful throughout the day, did not mind at all when I had to stay late at work (which happens about 4 times a week), and was a genuinely happier person. I had assumed that this was due to the efficacy of your product, and that her shoulders were no longer giving her trouble. However, this month’s water bill came in the mail yesterday, and I see that our usage has tripled since last month.
When I asked my wife, she smiled oddly and told me she was taking longer showers to help her aching shoulders. Clearly your product is not working. Please send me an address where I may return it.

Medeline – Radiation Based Superpowers unfortunately came with 7 different forms of cancer.

Frank Miller – We are sorry to hear your wife’s shoulders are still sore. May we suggest she try out our new ‘Fuck Yeah!’ setting and then ask her if she wants to return it. We recommend that you ask her this at a considerable distance from her.

Galactic Erotica Encyclopedia Entry for Shon Richards.com

Shon Richards is a writer of fine erotica since 1996. This blog is full of dirty stories, forays into erotic madness, explorations of sensual pulp, deep dives into domination and submission, reviews of all forms of erotic medium and the occasional erotic experiment gone mad. Caution is advised as genitals may be aroused. All content is copyright Shon Richards, 2006-2016

My Latest Book

My next interactive erotica novel is available for purchase on Amazon. Banging Your Sister-In-Law lets you play an unhappily married man who receives unexpected filthy invitations from his sister-in-law. The man is planning to divorce his wife anyway, so he takes advantage of the family reunion to indulge in some forbidden carnal desires. What he doesn’t know is that his wife is up to something as well and his mother-in-law has very inappropriate plans of her own. All sorts of wrong choices are available for the reader to make. This is the third book in my Amazon series of interactive erotica. After spending two years writing about the aliens in the last book, I decided to go with something more realistic like adultery, sexual rivalry, cuckolding, public sex, domination, dirty talk, unexpected threesomes, deviant mother figures and an appreciation for crab fritters.
‘Emo’ killings in Iraq create different reactions among religious clerics

Iraqi religious clerics have reacted differently to reports of the killing of dozens of “Emo” teenagers in the country. Recently, activists raised the alarm over the killing of dozens of teenagers by religious police for donning “Emo” hairstyles. “Emo” is a popular culture among some teenagers in many parts of the world; the name comes from the English word “emotional.” “Emos” use their appearance and choice of accessories to express their emotions and to embody their outlook on life. Their way of dressing and use of certain accessories such as piercings is not acceptable to some conservative sections of Iraqi society; some even brand them as a cult of “devil worshipers.” The total death toll among “Emo” youth is not clear, but reports of their killings have created a big uproar in Iraq. Hana al-Bayaty of Brussels Tribunal, an NGO dealing with Iraqi issues, said the current figure ranges “between 90 and 100.”

Various reactions

Extremist groups have included the names of “Emo” teenagers in two lists, warning that they will be killed if they do not abandon their “Emo” ways. The lists were circulated in public places in Baghdad. (Courtesy of Sumaria News TV)

But on the other end of the spectrum, one of the most revered Shiite sheikhs in Iraq, Ayatollah Ali al-Sistani, said on Thursday that targeting “Emo” youth is an act of “terrorism” and a “bad phenomenon for the peaceful co-existence project.” Another revered Iraqi cleric said in a statement on Friday that reports of the killing of “Emo” teenagers in the country were exaggerated and fabricated to serve certain anti-religion, anti-government agendas. The revered Ayatollah Mohammed al-Yakoubi was reported by Al Sumaria News as saying that it should be everyone’s religious duty to advise “Emo” youth.
The cleric said that those who exaggerated the alleged “Emo” killings have a political agenda aimed at tarnishing the image of those who are religious and have problems with the current government. “Media outlets have published some news on the killing of ‘Emo’ teenagers in Baghdad and other provinces but did not confirm the authenticity or the correctness of either the news or the numbers mentioned.” While the Egypt-based Al Akhbar newspaper cited statements released by the Iraqi interior ministry saying that it had given the religious or moral police permission to go after “Emo” youth, the ministry denied on Thursday that any “Emo” killings had taken place and accused the media of fabricating reports. “The ‘Emo phenomenon’ or devil worshiping is being probed by the moral police who have the approval to eliminate it as soon as possible since it’s detrimentally affecting the society and becoming a danger,” Al Akhbar quoted the Iraqi Interior Ministry as saying on Friday. Emerging evidence Iraqi police sources and witnesses said that there were “mysterious” operations targeting “Emo” teenagers. The teenagers were assaulted with concrete blocks that battered their heads, coinciding with reports coming from activists and the Brussels NGO. Activists have also posted pictures online of dead teenagers with their heads bashed in. Sumaria News reported police sources as saying that last week, around five “Emos” were killed in Baghdad. Three of those killed on Monday were in the northern part of the capital, and two killings took place in the more upscale area of Karada. In other Iraqi provinces, police talked of “mysterious” suicides taking place, all of which involved “emo” teenagers. An official from Babil province’s police department told Sumaria News that “seven suicide attempts took place in the province, between November 2011 and January 2012, two by girls and five by boys.” Investigations showed that the seven were “emos” who listened to rock music.
“Religion is innocent of such crimes if they are proven true, and whoever kills a human being outside the legal framework is guilty of committing a crime against God,” Yacoubi said, emphasizing that the behavior of teenagers who are socially deviant should be treated with care. He reiterated that those behind the news have agendas to estrange people from religion and said the whole uproar is a “made up phobia.” He also accused the media of plotting against these teenagers by portraying them as “Mossad” agents that intend to spread vice and perversion in society. Meanwhile, activists took snapshots of lists that include the names of “emo” teenagers, warning them that if they do not abandon their “emo” ways, they will be murdered. It is not the first time that nonconformists in Iraqi society have been targeted; homosexuals have been brutally murdered before. One man, called Abu Sajat, told Sumaria News that a lot of homosexuals were warned before their killings, adding “there is no doubt that there are serious groups that are intending to kill the “emos”.”
{ "pile_set_name": "Pile-CC" }
Let v(i) = -6*i + 17. Let n(b) = 3*b - 9. Let y(r) = -7*n(r) - 4*v(r). Let u be y(4). What is the second derivative of u*p + 0*p - 5*p**2 - 2*p**2 - p wrt p? -14 What is the second derivative of -32 + 31*d**3 - 3*d + 4*d - 2*d**2 + 68*d**3 - 223*d**3 - 187*d**3 wrt d? -1866*d - 4 Let g(h) be the second derivative of -213*h**5/20 - h**4/6 - 8*h**2 + 251*h. What is the third derivative of g(b) wrt b? -1278 Suppose 8*n = 4*n + 20. Suppose n*v - 2 = 8. What is the second derivative of 3*u - v*u**4 - u**4 - 8*u + 2*u**4 wrt u? -12*u**2 Suppose 4*q - 1170 = -5*q. Find the second derivative of -59*j - 55*j + j**3 + q*j wrt j. 6*j Let j(m) be the second derivative of -m**6/15 + 5*m**5/2 - 43*m**4/12 + 10*m + 4. What is the third derivative of j(s) wrt s? -48*s + 300 Let g(b) be the third derivative of -b**6/30 + b**4/24 + 11*b**3/2 + 18*b**2. Find the first derivative of g(a) wrt a. -12*a**2 + 1 What is the second derivative of 71*v**2 - 12*v - 2*v - 34*v wrt v? 142 Differentiate -90 + 167 + 106*f**3 + 21*f**3 wrt f. 381*f**2 Let r(b) be the third derivative of b**7/42 + 7*b**5/60 - 155*b**3/6 - 2*b**2 + 53*b. What is the first derivative of r(d) wrt d? 20*d**3 + 14*d Find the third derivative of -127*z**4 - 124*z**4 + 31*z**2 + 547 - 546 wrt z. -6024*z Let p(o) = -94*o**2 - 126*o + 3. Let w(q) = -187*q**2 - 251*q + 5. Let f(h) = -5*p(h) + 3*w(h). Find the second derivative of f(c) wrt c. -182 Find the first derivative of -16219 + 16170 + 21*s - 80*s wrt s. -59 Let q(d) = 3*d - 2. Let c be q(3). What is the derivative of -34*m - 28*m + c + 65*m wrt m? 3 Let v(u) = u**3 - 13*u**2 + 11*u + 14. Let w be v(12). Find the third derivative of 136*i**4 + 4*i**w + 0*i**2 - 146*i**4 wrt i. -240*i Let j(d) = -17*d**3 - 143*d**2 + 220*d. Let c(q) = 6*q**3 + 48*q**2 - 73*q. Let h(r) = -11*c(r) - 4*j(r). Find the second derivative of h(a) wrt a. 12*a + 88 Let m(c) = -6 - 1 - 3 + c + 2. Let s be m(10). Find the second derivative of 6*f + 5*f**2 - 4*f**2 - f**s - 3*f**2 wrt f. 
-6 Let y = -41 - -41. Find the second derivative of 0*z**3 + 16*z - 31*z**4 + 54*z**4 + y*z**3 wrt z. 276*z**2 Find the third derivative of 1050*z**2 + 68*z + 29*z**3 - 213*z + 145*z wrt z. 174 Let k = 1142 - 1142. Let g(y) be the second derivative of 1/4*y**5 + k*y**2 + 0*y**3 - 1/6*y**4 + 0 - 6*y. Find the third derivative of g(w) wrt w. 30 Suppose -5*r - 1 = 2*m + m, 0 = -m + 5*r + 13. Find the second derivative of -4*g + 24*g + g**2 + m*g - 24*g**2 wrt g. -46 Let h be (9 - (3 + 1))/(-1). Let p(x) = -3*x - 11. Let c be p(h). Find the first derivative of -12 - 14 + c*r + 18 wrt r. 4 Let u(r) be the first derivative of 0*r**5 + 4 + 2/3*r**3 + 0*r + 0*r**4 + 0*r**2 + 4*r**6. Find the third derivative of u(g) wrt g. 1440*g**2 Let c be (-19 + 15)*(-15)/4. Let x be (20/25)/(2/c). What is the third derivative of 7*m**2 + 0*m**2 + x*m**2 + m**2 + 6*m**6 wrt m? 720*m**3 Let z(j) be the second derivative of 0*j**2 - 5/2*j**4 + 0 + 0*j**3 - 23/20*j**5 - j. Find the third derivative of z(h) wrt h. -138 Let x be 96/(-20)*20/(-3). Let m(j) be the first derivative of x*j**6 - 31*j**6 + j**2 + 3 + 2. Find the second derivative of m(k) wrt k. 120*k**3 Suppose -17 = -a + 2. Suppose -1 = 2*m - a. Find the second derivative of 2*d - d + 9 + 4*d**3 - m wrt d. 24*d Differentiate -c**2 - 64*c**4 - 366 - 57*c**4 + c**2 + 24*c**4 with respect to c. -388*c**3 Let t(x) = 19*x**2 + 4*x. Let h(n) = n. Let u(l) = 30*h(l) + 5*t(l). What is the second derivative of u(b) wrt b? 190 Let v(r) = -9*r**3 - 11*r**3 + 15*r**3 + r - 9*r. Let l(u) = -u**4 - 6*u**3 - 8*u. Let x(s) = 5*l(s) - 6*v(s). What is the second derivative of x(y) wrt y? -60*y**2 Let i(w) be the second derivative of -133*w**5/20 + 143*w**4/12 - 3*w + 2. Find the third derivative of i(a) wrt a. -798 Let t(u) = -11*u**3 - 8*u**2 + 4. Let o(k) = 15*k**2 + 3 + 33*k**3 - 13 + 3 - 10*k**3. Let x(l) = -4*o(l) - 7*t(l). Find the third derivative of x(h) wrt h. -90 Let d(l) = -18*l**2 + 49*l + 4. Let k(o) = -17*o**2 + 49*o + 5. 
Let i(w) = -5*d(w) + 4*k(w). What is the second derivative of i(g) wrt g? 44 Let w(q) = 1 + 5 + 0 - 19 + 7 - 7*q. Suppose t - 2*t = 6. Let o(p) = 15*p + 12. Let b(n) = t*o(n) - 13*w(n). What is the first derivative of b(a) wrt a? 1 Let b(w) = -3*w**5 + 15*w**4 + 102*w - 2. Let v(z) = -z**5 - z**4. Let j(r) = b(r) + v(r). Find the second derivative of j(x) wrt x. -80*x**3 + 168*x**2 Let j be (-2)/(-8) + (-60)/(-16). What is the third derivative of 30*b**2 - 15*b**4 + 2*b**j + 0*b**4 wrt b? -312*b What is the second derivative of 2*j + 150*j**2 - 58*j - j - 44*j wrt j? 300 Suppose p + 0*p - 4 = 0. Find the second derivative of 37*g + 0*g + 25*g**p - 4*g - 37*g**4 wrt g. -144*g**2 What is the third derivative of 38*u + 82*u**6 - 2*u**2 - 39*u**6 + 6*u**4 - 37*u**6 + 29*u wrt u? 720*u**3 + 144*u Let w(a) be the second derivative of -14*a**4/3 + a**3/3 - 93*a**2/2 + 11*a. What is the first derivative of w(b) wrt b? -112*b + 2 Let f(a) = 106*a - 33. Let h(b) = -35*b + 11. Let g(n) = -2*f(n) - 7*h(n). Let d be g(7). What is the derivative of -2 + 220*j - 5*j**3 - d*j wrt j? -15*j**2 Let i be (-1 - 0) + -6 + 8. Suppose -3*n = -2*a - 8, 3*a = -4 + i. Find the second derivative of -h - h**2 - h**2 + n*h wrt h. -4 Find the second derivative of n + n**2 - 11*n**2 + 44 + 6*n - 8*n wrt n. -20 Let y(s) be the first derivative of 78*s**2 + 56*s - 21. What is the first derivative of y(x) wrt x? 156 Let t(d) be the third derivative of -23*d**6/12 - 13*d**5/60 - d**3/3 - d**2 + 61. Find the third derivative of t(w) wrt w. -1380 Let p = -38 - -41. Find the third derivative of 23*i**2 - 7*i**p - 5*i**3 + 12*i**3 - 19*i**5 wrt i. -1140*i**2 Let h(i) = i**2 + 4*i - 1. Let d be h(-5). Find the first derivative of -2 - 2*r**3 + 2 - 11*r**3 + d wrt r. -39*r**2 Let a(h) = h + 35. Let n be a(-25). What is the third derivative of 38*w**3 + n - 8 - 2 + 40*w**2 wrt w? 228 Let j be 8/(-3)*(-162)/4. Let t be 4/14 + j/14. What is the third derivative of 6*p**2 - 253*p + 253*p + t*p**3 wrt p? 
48 Let i be 4/6*18*(-3 + 4). Differentiate 18*c - 10 - i + 1 with respect to c. 18 What is the second derivative of 9 + 7*f**4 - 77*f**3 - 15*f + 10*f - 10*f**4 + f**4 - 5*f wrt f? -24*f**2 - 462*f What is the third derivative of -512*g**2 + 458*g**2 - 100*g**4 + 511*g**4 wrt g? 9864*g Let b(r) = -22*r - 37. Let k(i) be the second derivative of i**3/6 - i**2/2 - 26*i. Let t(v) = b(v) - 4*k(v). What is the first derivative of t(z) wrt z? -26 What is the second derivative of -101*n + 229*n**3 - 69*n**3 + 17*n - 11*n wrt n? 960*n Let p(x) be the third derivative of -1/20*x**6 + 1/24*x**4 + 0*x**5 + x**2 - 2*x**3 + 0*x + 5. What is the second derivative of p(w) wrt w? -36*w Let i(t) = 2*t - 12. Let p be i(7). Suppose -4*w - p*l = -6, -w + 4*l = 12 - 36. What is the third derivative of -15*r**2 + w*r**2 + r**4 + r**4 + 4*r**2 wrt r? 48*r Let z(s) = s**3 + 4*s. Let m(w) = 393*w**3 + 18*w**2 - 11*w. Let j(l) = -m(l) - 3*z(l). Find the third derivative of j(p) wrt p. -2376 Let n(t) be the second derivative of -1/6*t**3 + 2 + 21*t**2 - 3*t + t**5 + 0*t**4. Find the second derivative of n(k) wrt k. 120*k Let k be (-4)/(-6) - 4/(-3). Suppose -12 = k*o - 5*o. Find the second derivative of 0*f**4 + 3*f**4 - 6*f + 0*f - 2*f**o wrt f. 12*f**2 Let r(g) be the second derivative of -2*g**6/15 - g**4/12 - 2*g**3/3 - 4*g**2 - 15*g. Find the second derivative of r(k) wrt k. -48*k**2 - 2 Let y(a) = a**2 + 6*a + 6. Let n be y(-6). Let c be (-3)/n*6*-1. Differentiate 3 + 3*w**4 - 42*w**c + 42*w**3 with respect to w. 12*w**3 Let j(d) = 112*d**4 - 3*d**3 + 23*d**2 - 6. Let k(g) = -g**4 + g**3 + 2. Let r(a) = j(a) + 3*k(a). Find the third derivative of r(u) wrt u. 2616*u Let j(p) = -253*p**3 - 304*p**2 + 5*p - 10. Let h(n) = 126*n**3 + 151*n**2 - 2*n + 4. Let b(y) = -5*h(y) - 2*j(y). What is the third derivative of b(r) wrt r? -744 Let w(v) = 23*v**3 + 396*v + 12. Let s(o) = -46*o**3 - 790*o - 22. Let r(x) = -6*s(x) - 11*w(x). Find the second derivative of r(z) wrt z. 
138*z Let c(s) be the second derivative of -59*s**6/30 - s**5/10 - 107*s**4/12 + s + 15. What is the third derivative of c(d) wrt d? -1416*d - 12 Let p(t) be the second derivative of -14*t + 0*t**5 + 0*t**3 + 3/7*t**7 - 1/2*t**4 + 0*t**2 + 0*t**6 + 0
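The exercises above all reduce to mechanical polynomial differentiation, which can be checked programmatically. A minimal sketch (the coefficient-list representation is my own choice, not part of the exercises):

```python
# A polynomial is a list of coefficients [c0, c1, c2, ...],
# representing c0 + c1*x + c2*x**2 + ...
def derivative(coeffs):
    """Coefficient list of the first derivative: d/dx ci*x^i = i*ci*x^(i-1)."""
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

def nth_derivative(coeffs, n):
    """Apply derivative() n times."""
    for _ in range(n):
        coeffs = derivative(coeffs)
    return coeffs

# First exercise: with u = 7, the polynomial 7*p - 7*p**2 - p has
# coefficients [0, 6, -7]; its second derivative is the constant -14.
print(nth_derivative([0, 6, -7], 2))          # -> [-14]

# Second exercise, after collecting terms: -32 + d - 2*d**2 - 311*d**3.
print(nth_derivative([-32, 1, -2, -311], 2))  # -> [-4, -1866], i.e. -1866*d - 4
```

Both outputs agree with the printed answers (-14 and -1866*d - 4).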
{ "pile_set_name": "DM Mathematics" }
Samsung’s Galaxy S8 Active accidentally confirmed by AT&T A week back we reported on Samsung Galaxy S8 Active leaks. Now AT&T has accidentally confirmed the Galaxy S8 Active’s existence. The FAQ section of a currently running AT&T and Samsung TV promotion referenced the upcoming phone. AT&T has since taken it down, but the reference had already confirmed the Active model. Some information from our previous posts on the Active leaks, covering changes compared to the Samsung Galaxy S8 – Its display is not curved like the S8’s. Unlike previous generations, this one has on-screen keys instead of rugged physical keys.
{ "pile_set_name": "Pile-CC" }
1872 Tamworth by-election The Tamworth by-election of 1872 was fought on 16 April 1872. The by-election was held due to the death of the incumbent Liberal MP, John Peel. It was won by the Conservative candidate Robert William Hanbury. References Category:1872 in England Category:Politics of Tamworth, Staffordshire Category:1872 elections in the United Kingdom Category:By-elections to the Parliament of the United Kingdom in Staffordshire constituencies Category:19th century in Staffordshire
{ "pile_set_name": "Wikipedia (en)" }
Q: Page curl with active content I've got a magazine app. In it, I implemented AFK Page Flipper for the page curl. But now I have no idea how to create active content. I mean, for example, when I've got an image on a page I want to tap on it and have it enlarge in a modal view, like in the Flipboard app. Any ideas? Thx for reply! A: I got it... Flipboard turns pages on swipe, not on tap. So when I want to show an image, I tap. When I want to turn over a page, I swipe...
{ "pile_set_name": "StackExchange" }
Q: Limit results of each group

I want to limit the records in each group, so that when I aggregate them into a JSON object in the select-statement, it only takes the N conversations with the highest count. Any ideas?

My query:

select dt.id as app_id,
       json_build_object(
         'rows',
         array_agg(
           json_build_object('url', dt.started_at_url, 'count', dt.count)
         )
       ) as data
from (
  select a.id, c.started_at_url, count(c.id)
  from apps a
  left join conversations c on c.app_id = a.id
  where started_at_url is not null
    and c.started_at::date > (current_date - (7 || ' days')::interval)::date
  group by a.id, c.started_at_url
  order by count desc
) as dt
where dt.id = 'ASnYW1-RgCl0I'
group by dt.id

A: Your problem is similar to a groupwise-max problem, and there are many solutions to this.

Filtering with the row_number window function

A simple one is to use the row_number() window function and keep only the rows where it is <= N (using 5 as an example):

select dt.id as app_id,
       json_build_object(
         'rows',
         array_agg(
           json_build_object('url', dt.started_at_url, 'count', dt.count)
         )
       ) as data
from (
  select a.id, c.started_at_url, count(c.id) as count,
         row_number() over (partition by a.id order by count(c.id) desc) as rn
  from apps a
  left join conversations c on c.app_id = a.id
  where started_at_url is not null
    and c.started_at > (current_date - (7 || ' days')::interval)::date
  group by a.id, c.started_at_url
  order by count desc
) as dt
where dt.id = 'ASnYW1-RgCl0I'
  and dt.rn <= 5 /* get top 5 only */
group by dt.id

Using LATERAL

Another option is to use LATERAL and LIMIT to bring back only the results you are interested in:

select a.id as app_id,
       json_build_object(
         'rows',
         array_agg(
           json_build_object('url', dt.started_at_url, 'count', dt.count)
         )
       ) as data
from apps a,
lateral (
  select c.started_at_url, count(*) as count
  from conversations c
  where c.app_id = a.id /* here is why lateral is necessary */
    and c.started_at_url is not null
    and c.started_at > (current_date - (7 || ' days')::interval)::date
  group by c.started_at_url
  order by count(*) desc
  limit 5 /* get top 5 only */
) as dt
where a.id = 'ASnYW1-RgCl0I'
group by a.id

OBS: I haven't tried those, so there may be a typo. You can provide sample data sets if you wish some testing.

OBS 2: If you are really filtering by app_id on your final query, then you don't even need that GROUP BY clause.
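To see the row_number() filtering idea in action without a Postgres instance, here is a small self-contained demonstration using SQLite's window functions (this assumes a SQLite build with window-function support, 3.25+; the JSON aggregation is omitted and the table contents are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE conversations (app_id TEXT, started_at_url TEXT);
    INSERT INTO conversations VALUES
        ('A', '/home'), ('A', '/home'), ('A', '/home'),
        ('A', '/pricing'), ('A', '/pricing'),
        ('A', '/about');
""")

# Keep only the top-2 urls per app, ranked by conversation count.
rows = conn.execute("""
    SELECT app_id, started_at_url, cnt FROM (
        SELECT app_id, started_at_url, COUNT(*) AS cnt,
               ROW_NUMBER() OVER (
                   PARTITION BY app_id
                   ORDER BY COUNT(*) DESC
               ) AS rn
        FROM conversations
        GROUP BY app_id, started_at_url
    )
    WHERE rn <= 2
""").fetchall()
print(sorted(rows))  # [('A', '/home', 3), ('A', '/pricing', 2)]
```

The '/about' row (count 1, row number 3) is filtered out, which is exactly what the rn <= N predicate does in the Postgres query above.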
{ "pile_set_name": "StackExchange" }
Cortical venous infarcts and acute limb ischaemia in acute carbon monoxide poisoning: A rare case report. A case of carbon monoxide poisoning is presented with unusual complications, some of which have not been reported previously. A 48-year-old Asian male presented to the emergency department with dyspnoea, an altered state of consciousness and pale discolouration of the skin after being locked inside a factory room with burning coal. The patient was in acute respiratory distress. Arterial blood gas analysis showed respiratory acidosis with hypoxaemia. On the 3rd day, the patient developed dark-coloured urine and right upper limb ischaemia. Acute renal failure was diagnosed. A Doppler ultrasound showed stenosis of the radial and ulnar arteries. On the 8th day, the patient regained consciousness and complained of loss of vision. An MRI of the brain revealed bilateral occipital venous infarcts. Cortical venous infarcts and arterial stenosis are rare complications of acute carbon monoxide poisoning.
{ "pile_set_name": "PubMed Abstracts" }
version: 1
dn: m-oid=2.16.840.1.113719.1.203.4.7,ou=attributeTypes,cn=dhcp,ou=schema
creatorsname: uid=admin,ou=system
objectclass: metaAttributeType
objectclass: metaTop
objectclass: top
m-equality: caseIgnoreIA5Match
m-syntax: 1.3.6.1.4.1.1466.115.121.1.40
m-singlevalue: FALSE
m-collective: FALSE
m-nousermodification: FALSE
m-usage: USER_APPLICATIONS
m-oid: 2.16.840.1.113719.1.203.4.7
m-name: dhcpOption
m-description: Encoded option values to be sent to clients. Each value represents a single option and contains (OptionTag, Length, OptionValue) encoded in the format used by DHCP.
m-obsolete: FALSE
entryUUID: bcc51eca-c877-4f6d-a5d9-b3234b0159c6
entryCSN: 20130919081858.846000Z#000000#000#000000
entryParentId: e4750b4b-8dc1-4318-b657-bba5670b4f0b
createTimestamp: 20130919081910.823Z
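The (OptionTag, Length, OptionValue) wire format referred to in m-description is the standard DHCP option encoding of RFC 2132: one tag octet, one length octet, then the value bytes. A sketch of an encoder for a single option (the helper name is illustrative, not part of any schema tooling):

```python
def encode_option(tag: int, value: bytes) -> bytes:
    """Encode one DHCP option as tag, length, value octets (RFC 2132 style)."""
    if not 0 < tag < 255:
        raise ValueError("tags 1-254 carry data; 0 is PAD and 255 is END")
    if len(value) > 255:
        raise ValueError("value must fit in a one-octet length field")
    return bytes([tag, len(value)]) + value

# Option 51 (IP address lease time) carries a 4-byte big-endian seconds value:
lease = encode_option(51, (86400).to_bytes(4, "big"))
print(lease.hex())  # -> 330400015180
```

A dhcpOption attribute value would hold such an encoded triple; the one-byte length field is why each option's payload is capped at 255 octets.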
{ "pile_set_name": "Github" }
PRESENT EMPLOYMENT 1. Physician: I am currently working as a Physician seeing patients referred to me by General Practitioners and other doctors within my areas of expertise, primarily general medicine, sexual health and affective disorders and clinical pharmacology. 2. Medico-legal Expert Witness: providing opinions for the Court and advising the legal profession in the fields of prescription medicines, illegal drugs, general internal medicine and sexual health. Instructions have been received from solicitors, barristers, Crown Prosecution Services, Police Forces, the Armed Forces, the Police Federation and legal consultancies. 3. Founder and Director of Positive Under Pressure™: Positive Under Pressure™ is a pressure management company which helps individuals, groups, organisations and Government agencies to identify and modify their responses to pressure, so that they gain the benefits of enhanced performance without the negative impact of stress and addictive coping strategies. The techniques used in Positive Under Pressure™ have now proven useful to over seventy-five groups of General Practitioners who use the techniques to modify their own stress levels as well as teaching the techniques to their patients. Seminars have been performed for The Royal College of General Practitioners, The Royal Society of Medicine, The Royal Defence Medical College and the Royal Navy. Training programmes have also been run at The Royal College of Psychiatrists. The book Positive Under Pressure is published by Thorsons. 4. Pharmaceutical Consultancy: I offer advice to the Pharmaceutical, Biotechnology and Healthcare industries. I advise on both business development and registration of pharmaceutical products and medical devices. The clinical development programmes I have been responsible for are attached. RECENT EMPLOYMENT 1. 
Psychiatric Clinical Investigator: I worked with Dr Cosmo Hallström, Consultant Psychiatrist, at the Capio Nightingale Clinic Chelsea, performing psychiatric clinical studies. These included studies in depression, anxiety and panic. 2. Director of Same Day Doctor: A private medical clinic comprising medical, sexual and psychological clinics in Harley Street, employing three doctors, two nurses, two practice managers, three administrators and additional staff as needed. 4. Parents Under Pressure: I was responsible for the parent support groups at the Capio Nightingale Adolescent Mental Health Unit at Radnor Walk, Chelsea, supporting parents through the stress of having their children admitted to this hospital and having mental health problems. 5. Honorary Clinical Assistant: In Renal Medicine to Professor John Cunningham, The Middlesex and University College London. PRE-REGISTRATION EXPERIENCE Senior House Officer in General Medicine, The London Hospital, Whitechapel Senior House Officer in Accident & Emergency Senior House Officer in Venereology to Dr Ambrose King Senior House Officer in General Medicine & Gastroenterology to Dr Wright Senior House Officer in General Medicine and Respiratory Medicine to Drs Hughes and Hickman HIGHER PROFESSIONAL TRAINING Lecturer in Medicine, the London Hospital, Whitechapel on the Medical Unit, Director, Professor Ledingham. 1976 - 1980 Honorary Registrar in General Medicine to Prof Ledingham and sexual health to Drs Dunlop and Rodin. All medical unit positions gave experience as the most senior doctor on the unit during 'admission takes' for the firm. The unit was on 'take' 1/5 and 1 weekend in 5. When the NHS position was as an Honorary Registrar, there was no senior registrar on call. Cover was provided to all other Junior staff at the hospital. Clinics were two general medical clinics a week for Prof Ledingham and Prof. Floyer and either an Endocrinology clinic for Prof Cohen or a nephrology clinic for Dr Goodwin or Dr Marsh. 
A clinic in sexual health was continued and patients were managed on the wards. Jan 1976 – Jun 1976 Honorary Registrar in Cardiology to Dr Wallace-Brigden and Alistair MacDonald. Experience involved training in left and right ventricular catheterisation, M-mode and later 2D echocardiography, pacemaker implantation, both permanent and temporary, and arrhythmia assessment. There were three cardiology clinics a week and a general medical clinic was continued. Additional experience was obtained in being responsible to the consultant for the running of the coronary care unit and as cardiac doctor on call on a 1 in 4 rota. Jul 1976 – Dec 1976 Honorary Registrar in Medicine to Prof Mike Floyer. Experience as before. In addition I was the prime mover in establishing the London Hospital Hypertension Clinic under the guidance of Professor J M Ledingham. My responsibilities included staffing and funding the Unit, and I was involved in prioritising research within it. The clinic had both a service and a research commitment. I advised general physicians on the management and investigation of hypertensive patients, as well as having referrals direct from general practice. The research carried out in the clinic forms the basis of many of my publications. A screening programme was established in general practice to identify patients with high blood pressure. Advice and guidance were obtained from Prof. Colin Dollery (Hammersmith Hospital) and Professor Robertson (MRC Blood Pressure Unit, Glasgow) in the establishment of such programmes as well as the management of specialist hypertension units. The London Hospital Clinic was one of the first in the UK. 1977 Honorary Senior Registrar in General Medicine to Profs Ledingham, Floyer and Cohen. There was continuous research and teaching experience. During this period there was a rotation of six months of general medicine and six months of additional experience. 
I continued to manage and run the London Hospital Hypertension Clinic during this time. It consisted of a technician, a research sister and a secretary, all self-funded from research grants I obtained. Other Lecturers in Medicine used the clinic for their experience and research. I remained responsible to Prof Ledingham for the clinical care of patients. I had three periods on the wards being responsible to the consultant for the clinical care of patients (experience as before). I had two further periods of attachment to the Cardiology Department (experience as before). There was an additional six-month attachment to the Renal Unit, being responsible to Drs Goodwin and Marsh, where my experience included renal in-patient and out-patient care, pre- and post-dialysis care, as well as the care of pre- and post-transplantation patients. Further endocrinology experience was obtained with Professor Cohen. 1977 - 1980 Honorary Lecturer in Medicine, The London Hospital. On leaving the Medical Unit, I joined MSD (see later). I continued with my responsibility to fund and manage the Hypertension Clinic, responsible to Prof Cohen, in the position of Honorary Lecturer in Medicine on the retirement of Prof Ledingham. The management of the unit was taken over by Dr Goodwin when I left MSD. Publications resulting from my work at the London Hospital can be found throughout my publications on the PubMed List attached. They are numbers 35, 52, 56-57 and 59-71. My research on the unit included: 1. Effects of beta-blocking drugs on the peripheral circulation 2. Treatment of resistant hypertension 3. Converting enzyme inhibitors and inhibition in the treatment of hypertension and the physiological responses to stress 1981 - 1985 Honorary Lecturer in Clinical Pharmacology to Prof Paul Turner, Professor of Clinical Pharmacology, St Bartholomew's Hospital. Experience included an out-patient clinic in general medicine and clinical pharmacology as well as a research clinic. 
I studied the effects of converting enzyme inhibitors on plasma neuropeptides in conjunction with Professor Lesley Rees, Professor of Chemical Pathology, St Bartholomew's Hospital, in patients with hypertension and heart failure and in patients on non-steroidal anti-inflammatory agents. These papers can be found in my full Publications List and in the PubMed list as paper numbers 45 and 50. 1981-87 Director and Co-ordinator of Cardiovascular Research, Old Church Hospital, Romford, Essex. In conjunction with Dr Stephens, I established a research charity in this District General Hospital, undertaking studies in cardiovascular disease. There were five clinics a week, one each in Hypertension, Angina, Heart Failure, Arrhythmias and Emergency Referrals. I also attended two NHS clinics in General Cardiology. The unit undertook ½ the NHS workload at the hospital. The publications resulting from this work in the unit are throughout my Publications Lists. 1985 - 1995 Director and Trustee of Romford Cardiovascular Research. I have been a Director and Trustee of the Unit which was handed over to the NHS in June 2003. The research funds were donated to the NHS trust in November 2004. COLLEGE AND N.H.S. ADMINISTRATIVE RESPONSIBILITY Throughout employment at the London Hospital and Merck Sharp & Dohme, I was formally involved in the teaching of both medical students and junior hospital doctors studying for their MRCP. I organised the London Hospital MRCP Tutorial Scheme. I organised the MRCP examinations on three occasions. I was a member of: * The London Hospital Medical College Education Committee. * London Hospital Division of Medicine Working Party on Postgraduate Education. * Chairman of the Tower Hamlets Junior Hospital Staff Committee. 
* Junior Hospital Staff Representative - Division of Medicine and Final Medical Committee PHARMACEUTICAL INDUSTRY EXPERIENCE Director of Clinical Research, Merck Sharp & Dohme I was responsible for all MSD's clinical research in the United Kingdom, which involved a clinical commitment to cardiovascular disease, rheumatological disease, gastroenterological disease, antibiotic therapy and many other therapeutic classes. While directing this research programme, members of my department had a total of over 100 publications and presentations. The research programme resulted in over three times this number of communications of research. I was involved in approximately 600 clinical research studies spanning the complete spectrum of pharmacological research from Phase I to Phase IV. I developed a strategy that all research performed by the Company should be of a sufficiently high standard to stand peer review, and even the Company's multi-centre GP studies were presented at major UK scientific societies. I was responsible for the UK development of Enalapril, a converting enzyme inhibitor. I was responsible for the Phase III development of Norfloxacin, a urinary antibiotic, Imipenem, a broad-spectrum antibiotic, Indocid Gel, a topical preparation of Indomethacin, Famotidine, an H2 antagonist, and Simvastatin, a lipid-lowering agent. In addition, research programmes included those for cardiovascular drugs - Aldomet, Moduretic, Blocadren, Amiloride and Moducren, ophthalmic drugs, particularly Timoptol, anti-rheumatic drugs, such as Indomethacin, Indocid, Clinoril and Osmosin, and anti-infectives. I also developed with Merck Sharp & Dohme the concept of sponsorship for academic research which had a corporate interest.
The most successful of these projects was the programme to investigate the relationship between a lowered serum potassium and the occurrence of arrhythmias. This work is often quoted and has played a major part in developing the hypothesis outlined above. Staff: five physicians, seven CRAs and secretaries. PREVIOUS EMPLOYMENT Chief Executive Officer, ROSTRUM Group Limited I was Chairman and founder of MCRC Group Limited, which changed its name to ROSTRUM in 1994, the international contract research organisation with offices in the UK, USA, France, Germany, Italy, South Africa and Australia. I established MCRC as a leading European contract research and consultancy organisation and have established research units at many hospitals. The Company, under my direction, has successfully handled research, registration and statistical projects for most of the major American and European pharmaceutical companies. I have personally supervised clinical expert reports, which have led to the registration of many drugs in the areas of hypertension, heart failure, angina, arrhythmias, antibiotics, urology, oncology and respiratory medicine, as well as many others. I also founded and developed ROSTRUM, an organisation which trains in research and regulatory skills, runs personal development programmes, and organises many international conferences. I have lectured widely and taught for major innovative pharmaceutical companies on GCP and drug development throughout Europe as well as in the Near East, Far East and South American countries. I consistently presented original research at major international and national scientific meetings, as well as publishing articles on many subjects at the rate of twenty per year. I co-authored "GCP for Investigators" which has sold over 40,000 copies in English, Spanish, German, French, Italian and Russian. 1986 - 1995 President and CEO, IBRD-ROSTRUM GLOBAL, a multi-national clinical research organisation serving the pharmaceutical industry. 
It employed 500 people, with major offices in California, Philadelphia, London and across Europe. The turnover was appropriate for a company of this size within the CRO sector. My direct reports were the Senior Vice President, Finance, the Senior Vice Presidents for Operations in the USA and Europe, and the Senior Vice Presidents for Sales and Medical Affairs. I was responsible to the Board of Kuraya, a Japanese company, for the scientific performance of the global company. I continued with scientific input and consultancy activities. My most notable scientific achievement was completing the filing of a complete NDA for a Japanese client for an anti-epileptic. I have successfully completed Phase III programmes for anti-hypertensives and anti-infectives, Phase I programmes for insulin sensitisers and disease-modifying drugs in rheumatoid arthritis, and Phase II programmes in heart failure and insulin-sensitising drugs. I was responsible for the registration of four medicines worldwide: an antiarrhythmic, a hypnotic, an anti-epileptic and an antibiotic. I registered 30 other medicines in the UK and Europe including an anaesthetic, an anti-cancer agent, an anti-emetic, erythropoietin and urological products, as well as many other cardiovascular and anti-infective compounds. I conducted pharmacoepidemiological studies on non-steroidal anti-inflammatory drugs and the gastrointestinal system, H2 antagonists and malignancy, the cardiac safety of drugs for erectile dysfunction, asthma and many other classes of compounds. KEY SKILLS Clinical Investigator * Good knowledge of the clinical trial process and GCP. * Friendly, reassuring attitude to patients to build good relationships and confidence. * Ability to network with General Practitioners to reassure them of their patients' safety and to gain their cooperation in the clinical trial process. * An in-depth knowledge of pharmaceutical medicine helps put safety concerns into a clinical perspective. 
* Founded the London Hospital Hypertension Clinic in 1975 and performed many Phase I, II and III studies on anti-hypertensive compounds.
* Honorary Physician, Department of Clinical Pharmacology at St Bartholomew's Hospital, 1980 – 1984. Performed Phase I, II and III studies on anti-hypertensive compounds.
* From 1984 to 2003, responsible for clinical research within the Department of Cardiology at Oldchurch Hospital; performed over 60 studies in all aspects of cardiovascular disease.

Pharmaceutical Physician

* Broad range of therapeutic knowledge and contact with opinion leaders
* Able to collect and critically analyse data and information from the company literature, searches and academics, to construct the best critical path for product development and registration
* Co-ordinating author of 30 clinical expert reports in cardiovascular, CNS, antibiotic and many other therapeutic areas.
* Leader of crisis management team in support of four products.
* Networking and influencing abilities enabling the distillation of divergent opinions from regulators and consultants across national boundaries to develop plans in line with the company's strategy which are endorsed by these critical groups
* Sound track record of developing practical clinical operation plans with critical go/no-go decision points, especially fast-track 2A studies
* An ability to get research presented and published for key meetings and journals. I have over 75 peer-reviewed articles and a larger number of presentations, many in conjunction with leading experts in the fields of my interest.
Business

Below, I list a few of the comments made about me by the Human Resource Director of one of my major clients:

* 'Excellent communications skills'
* 'Strategic thinker – not entrenched in the now or the past'
* 'Proven commercial acumen and ability'
* 'Commercial experience in environments of rapid growth and change'
* 'Intellectual agility and stamina'
* 'Visionary – not necessarily detail conscious, but able to view the wider picture and associated issues'
* 'Technical knowledge'
* 'The ability to distil issues, quickly and concisely, resulting in identification of pertinent points and required action'
* 'Outcome focused'
* 'Influencing abilities'
* 'Networker'
* 'A contributor of ideas – an active participant in meetings'
* 'Willingness and ability to challenge the status quo, leading to positive outcome/solution'
* 'Persuasive'
* 'Financially aware/competent'

* The Royal Society of Medicine
* The Royal College of General Practitioners
* The Royal College of Psychiatrists
* Royal Defence Medical College
* The Royal Navy
* HM Treasury
* University of Wales
* Association of Accounting Technicians
* Pfizer
* Pharmacia
* International Business Communications
* 60 seminars have been performed for local GP Tutors as part of refresher courses, study days or weekly meetings
* 10 seminars have been performed for Human Resource Departments in the Healthcare sector
* 5 seminars have been performed for NHS Trusts
* University of Portsmouth

* Coping with Stress – BMA News, March 1999
* I want to Dive into Life Now, Not Push it Away – Zest, May 1999
* Tackle Stress Before it Tackles You – The Times, June 1999, p3
* Overworked, Over Stressed, Over Here – The Independent on Sunday, August 1999, p9
* Health Report – The Working Weak – The Observer Magazine, February 2000, p17
* Expert Tips for Time Management – The Sunday Times, March 2000, p4
* Positive under pressure – Sunday Express, June 2000
* How your body reacts when you get dumped – The Mirror, July 2000
* Off work again, eh? – Real Business, July/August 2000
* From stress to serenity – Here's Health, July 2000, p44
* How are you coping with pressure? – New Impact, August 2000, Issue 6 Vol 6
* Health Report – Observer Magazine, September 2000
* Take control of your health – Home & Life, October 2000
* Ready to relax – Accounting Technician, October 2000
* Are you hooked on being busy? – Woman's Weekly, November 2000, p22
* How to pick up early signs of stress – The Observer, 12 November 2000
* How to avoid suffering from stress – The Observer, 19 November 2000
* How to deal with stressed-out colleagues – The Observer, 26 November 2000
* Your Body – The Mirror, November 2000
* Occupational Hazards – Shape, January/February 2001
* Hurry Sickness – Daily Express, August 2001
* The Break-up Body – Daily Mail, 17 September 2001, p42-43
* What Type of Eater Are You – Daily Express (ExpressWoman), 20 August 2002, p32-33
* I Juggle Too Much – Woman, 10 February 2003, p29
* Friend or Foe? (Men's Health) – netdoctor.co.uk, January 2000
* Christmas Advice – icommerce.ie, December 2000
* Saying 'No' helps GPs' stress – doctorsworld.com, April 2001
* GPs shut surgeries over workloads – ITN.co.uk/news, June 2001
* Stressbusting's guide to what to read and listen to now – stressbusting.co.uk/help
* Good times, bad times - stress in the good old days – Channel4.com
* Work Pressure – Can you handle it? – topjobs.co.uk – instant expert article
{ "pile_set_name": "Pile-CC" }
Prevent chat window background color change? Prevent chat window background color change? [14.0.8064.206] I don't know if this is an extremely new feature or if everyone I chat with has the same background color, but today for the first time a chat window opened when a friend messaged me and it did not use the same color as my default background, but rather the SENDER's color. DO NOT WANT!!! I looked through Tools | Options and don't see anything that suggests "Always use my personal background color for chat windows." Can this be done? Can I tell Messenger to NEVER use the sender's background colors, only my own? > [14.0.8064.206] > > I don't know if this is an extremely new feature or if everyone I chat with has the same > background color, but today for the first time a chat window opened when a friend messaged > me and it did not use the same color as my default background, but rather the SENDER's > color. > > DO NOT WANT!!! > > I looked through Tools | Options and don't see anything that suggests "Always use my > personal background color for chat windows." Can this be done? Can I tell Messenger to > NEVER use the sender's background colors, only my own? > Re: Prevent chat window background color change? > Greetings Jeff, > > You can do this, however, this feature is a bit hidden. > > Right-click in the scene area (the top where the background colour is, > their display name, etc.) and choose 'Use the default scene'. > > Unfortunately this is per contact, so you'll need to do it for everyone. Egad. I never would have right-clicked there. Thanks! It's working, but you're right: it's too bad this can't be set globally. About Us Windows Vista Forums is an independent web site and has not been authorized, sponsored, or otherwise approved by Microsoft Corporation. "Windows 10" and related materials are trademarks of Microsoft Corp.
{ "pile_set_name": "Pile-CC" }
**Properly conceived, a universal basic income could radically transform society.**

Popular Criticisms Of A Universal Basic Income

A universal basic income is not a new idea. It has backers all across the political and economic spectrum. On the right, many people have rightly seen the potential in eliminating expensive bureaucratic government agencies and processes that exist primarily to determine eligibility for assistance programs. If everyone is eligible, the process is simplified greatly. On the left, people are recognizing the emerging "crisis of capitalism" in which the majority of people lose access to their purchasing power as most jobs are replaced by automation and robots. However, very few proposals actually consider how the UBI should be funded. The reason why many people are opposed to the idea of a UBI is that most plans use income tax as the source of the revenue that would fund it. More often than not, earning an income requires incredibly hard work and sacrifice, and the idea that some people might be given the same monetary reward without having to work at all seems unfair. We begin to think "why should we have to give our lives to a job to earn money while others can live without giving anything?" The truth is the income tax really amounts to extortion – why should anyone else be entitled to the fruits of another's labor? For this reason, funding a UBI through income tax is a terrible idea and will never get anywhere.

**Taxing one's labor is theft, but taxing the use of resources that people did not create themselves is not.**

We All Deserve Access To The Planet That Gave Birth To Us

Instead, to fund the UBI in a way that is fair, we need to create a new narrative based on a different understanding of the principles of private ownership that our economy is founded upon. When the Europeans first came to North America, they wanted to try to buy the land from the Native Americans.
But the native tribes couldn't understand the logic behind owning land. "How could anyone own the land?" they thought, "for they did not create it." The Native Americans saw land as the equivalent of both water and air. These are things that can't belong to people. They believed all beings had a right to access these basic resources by virtue of the simple fact that they were born on this planet. The idea that because you owned land you had the right to exclude other people from occupying it was a radical idea to them. We need to realize that there are certain things that should never be ownable – things like air, water, and land – at least not when ownership gives some people the ability to exclude others from meeting their needs. A properly constructed UBI would be based on the idea that the resources that weren't created by human labor – the land, the airwaves, our public spaces, the skies, and even money itself – belong to everyone. And therefore, if you want exclusive access to any of these resources, either to live or to start a business, you should have to compensate the rest of humanity for their reduced access. These economic "rents" already exist in the form of property taxes, licenses, and interest payments. But instead of sharing this wealth (that no human being created) with everyone, the wealthy elites appropriate it all to themselves. This is what must change.

**What right does anyone have to own that which they did not create?**

Giving Everyone A Base Level Of Access

The UBI ought to be funded from these "rent" payments. Just how much would that be? In the United States, the value of all the land (just the land itself, not the buildings or roads which were made from human labor) is estimated to be $23 trillion. A five percent "exclusivity fee" would generate $1.15 trillion annually. Interest paid on money loaned ought to be another public asset. After all, no individual gives money its value.
The value of money comes from our collective willingness to believe that it has value. Nearly $4 trillion in interest was paid collectively by US households, businesses, governments, and non-profits in 2015. Instead of this money flowing to the shareholders of private banks, it, too, should form the collective pool that the UBI is drawn from. Combining these revenue sources into a $5 trillion yearly fund would yield a net of $18,000 a year for every adult and $9,000 a year for every child in the United States. To save money, some have proposed means-testing the UBI by only giving it to those who make below a certain income. This, too, is a terrible idea. To avoid the disastrous social stigma that has been applied to people on government assistance in the past, it is critical that the UBI be given to all, even those with well-off means. If the wealthy choose to donate their UBI allotment to a cause of their choice, that should be their decision to make. The UBI is not earned the way additional money would be earned by trading one's labor for it. Instead, the UBI should be considered every human being's inheritance from the gifts and toils of all previous generations.

**Everyone deserves the ability to meet their needs without having to be controlled.**

It Is Far Cheaper To Feed The World Than It Is To Bomb It

But the true economic efficiency resulting from this form of a UBI comes much later. Today, enormous amounts of money are spent at both the collective and individual levels on various forms of security. A typical response to the theft of a $30,000 car might be to put the thief in jail for 5 years at a total cost of $250,000. Wouldn't it have been cheaper just to ensure that everyone had access to a car? The implementation of a UBI would likely spawn the creation of an entirely new $5 trillion a year "life services" market that would create unprecedented levels of access for the average person.
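The funding arithmetic above can be sanity-checked with a few lines of code. A rough sketch follows; the land value, fee rate, and interest pool come from the essay, while the population figures (roughly 250 million adults and 75 million children in the US) are illustrative assumptions of mine, not claims from the text:

```python
# Rough check of the essay's UBI funding arithmetic.

LAND_VALUE = 23e12       # $23 trillion total US land value (from the essay)
EXCLUSIVITY_FEE = 0.05   # 5% annual "exclusivity fee" (from the essay)
INTEREST_POOL = 4e12     # ~$4 trillion annual interest payments (from the essay)

land_revenue = LAND_VALUE * EXCLUSIVITY_FEE   # $1.15 trillion, as stated
total_fund = land_revenue + INTEREST_POOL     # ~$5.15 trillion

ADULTS = 250e6     # assumed adult population
CHILDREN = 75e6    # assumed child population; a child receives half an adult share

# Solve for the adult share x such that ADULTS*x + CHILDREN*(x/2) = total_fund
adult_share = total_fund / (ADULTS + CHILDREN / 2)
child_share = adult_share / 2

print(f"Land revenue: ${land_revenue / 1e12:.2f}T")   # $1.15T
print(f"Total fund:   ${total_fund / 1e12:.2f}T")
print(f"Adult share:  ${adult_share:,.0f}/yr")        # close to the essay's $18,000
print(f"Child share:  ${child_share:,.0f}/yr")        # close to the essay's $9,000
```

Under these assumptions the per-person amounts come out within a few hundred dollars of the essay's round figures, so the quoted numbers are at least internally consistent.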
In a society in which everyone has the ability to meet their basic needs, crime would be dramatically reduced. The main cause of crime and international conflict is scarcity. When people (and any animal, really) can meet their needs, they are less likely to harm each other. Today, the US spends $39 billion a year on prisons, $350 billion a year on security, and $1,100 billion a year on the military. It isn't hard to imagine that these expenditures could be drastically reduced in a world where people no longer had conflict over meeting their basic needs. Intuitively, I think we all know that it is far cheaper to feed the world than it is to bomb it. But our stories around private property, money, wealth, ownership, and access must evolve before we will have access to the tools that can solve the economic crises we face today. In our society, money is the fuel of creation. The rich know this very well. Given enough money, you can pay just about anyone to do just about anything. A universal basic income gives everyone the chance to find their place in this world. It will be important to educate people that the UBI is a gift given to them by the previous generations of human beings who toiled and sacrificed their blood, sweat, tears, and lives while they endured the growing pains of industrial civilization. Only by sharing in the fruits of our collective success will we be able to continue evolving human society sustainably far into the future.
{ "pile_set_name": "Pile-CC" }
Let students discuss what it would have been like to send and receive notes through the post office or a telegraph machine instead of today's rapid forms of communication such as text messaging and email. Use students' enthusiasm about Valentine's Day to get them reading by creating a display of books about love and friendship. After all that learning, it will be time for some sweets. For an easy and impressive treat that children can help prepare, smoosh slice-and-bake chocolate chip cookie dough around store-bought sandwich cookies and bake in muffin tins according to cookie dough instructions. You can make these cookie-stuffed cookies with any type of sandwich cookie, but using the limited edition Red Velvet Oreos and some seasonal sprinkles will make these extra festive.

8.13.2014

At the beginning of the school year, it's time to write your long term plan. This document is an ultra brief summary of what you plan to teach and do throughout the entire school year. Before you can map out a curricular plan based on the required objectives or units for your class, it's important to know which days you will actually be at school and when major events, including national holidays and local celebrations, will take place. In some cases, these events will cause interruptions in your normal schedule. Other times, you will be able to use observances, celebrations, and seasons as a theme for your lessons. For example, rather than trying to convince students to stop thinking about the Super Bowl, you can just use their interest in football to build investment in your lessons by letting them practice formulas like speed or investigate geography of past events, history of the sport, biographies of football players and so on. Here are all the calendars you'll need to plan events and thematic lessons in the school year ahead.
District Calendar + Previous Year's Plan

Start by penciling in school holidays, early dismissal days, class parties, and other local events that will affect your planning. If you are a veteran educator, also get out the previous year's long term plan (you had one, right?), so that you can remember when you did things last school year.

Anti-Defamation League Calendar of Observances

The Anti-Defamation League provides dates for international observances and holidays for major world religions. Use these calendars to become aware of religious events that may be important to the students in your community and to introduce students to different cultures' traditions.

Days of the Year

This quirky calendar lists major as well as lesser-known observances throughout the year. If you enjoy planning lessons thematically, this is a great resource to find out about odd celebrations such as Bad Poetry Day and Thank a Mailman Day.

Perma-Bound Author Illustrator Birthday Calendar

This interactive calendar lists birthdays for tons of famous authors and illustrators and provides links to books created by each person. Every month also includes a mini biography & photo of a featured artist or writer. These resources can be used for author studies or to create an easy and informative bulletin board featuring different authors each month.

Library of Congress Today in History

This site features historical events for every date on the calendar. Each date offers an article about a significant past event and includes primary documents related to whatever took place on the day. This resource is one you can come back to throughout the year, even daily, to support social studies lessons. LOC also provides a searchable archive of the articles which you can access by date or keyword.

8.12.2014

"His brain felt larger, roomier. It was as if several doors in the dark room of his self (doors he hadn't even known existed) had suddenly been flung wide.
Everything was shot through with meaning, purpose, light. However, the squirrel was still a squirrel." - Flora & Ulysses by Kate DiCamillo & K.G. Campbell

Kate DiCamillo received her second Newbery Award this year for the amazing novel, Flora and Ulysses: The Illuminated Adventures, a story about a young comic book fan who is navigating her parents' recent divorce and the general awkwardness of growing up when she witnesses the transformation of a squirrel into a poetry-writing superhero. (DiCamillo's first award was earned a decade ago for The Tale of Despereaux.) As a self-proclaimed cynic, Flora is initially skeptical but becomes cautiously hopeful about the squirrel's hidden abilities. As Flora starts to accept and appreciate the squirrel's unique talents, she also begins to view herself and her life through a less cynical lens. In addition to offering an adorably quirky and heartwarming story, Flora and Ulysses features an innovative format that appeals to readers of all ages. Most of the book is presented through traditional blocks of text; however, line drawings and strips of comic action by K.G. Campbell are included to illuminate the written story. This book is wonderful as an independent or small group read, but it also makes a great read-aloud, especially if you are able to show the illustrations using a document camera and projector. If you are sharing this novel with students, check out these resources for further exploring the book and the talents of its creators.

K.G. Campbell, illustrator

K.G. Campbell is responsible for the illuminated qualities of this fantastic book as well as many other popular novels and stories. Visit Campbell's site to discover the beautiful illustrations he has contributed to Flora and Ulysses and his other projects, including a lovably quirky picture book Campbell wrote and illustrated called Lester's Dreadful Sweaters, which is an enjoyable read-aloud for all ages.
7.28.2014

With summer vacation winding down, it's time to begin easing into school-year routines. We know children need to start going to bed and waking earlier so that the back-to-school transition won't be abrupt and unpleasant for them (or their parents and teachers). These final weeks before school begins are likewise the perfect time for adults to develop meaningful morning routines.

Rise Early

During my career, I have gone back and forth in my feelings about the early start of my work day and the related early bedtime, sometimes enjoying the routine and other times feeling like I am squeezing my adult life into a child's schedule. Nevertheless, experience tells me I should embrace the routine that fits my job. To get your day started on the right foot, begin by waking early. Yes, I know that teachers already start early, even if you do hit the snooze button until the last possible moment before jumping into whatever clothes are close and then applying makeup and eating breakfast or drinking coffee at red lights on the way to work. I have definitely employed this approach with moderate success. But, often, this harried routine has left me to get through the workday lunch-less, simply reacting to each problem as it arises. By waking earlier (about an hour earlier than it takes me to get dressed), I can begin my day prepared, confident, purposeful, and with a feeling that I have cared for myself before beginning to serve others.

Stretch

Once your eyes are open and the alarm is turned off, it's time to wake up your body and prepare it for the rigors of the day. You should customize this part of your routine based on what your body will be put through during the school day. Since I have struggled with foot and heel pain caused by taking a zillion steps a day on hard tile floors, I begin flexing and pointing my toes while I'm still in bed, in order to stretch my calves and feet before they take their first steps of the morning.
Then, I focus on stretching the rest of my body. I like to follow an extremely basic yoga routine called sun salutation. Visit Women's Health Magazine for a written and visual guide to this series of basic stretches, or check out Portal Yogi for a slightly extended version of the stretch sequence.

Meditate

Now that your body is awake and alert, it's time to prepare your brain for the day. As humans, we have some self-destructive mental tendencies. In an article about mindfulness meditation, Psychology Today explains, "First, we cause ourselves suffering by trying to get away from pain and attempting to hang on to pleasure...Second, we cause suffering when we try to prop up a false identity usually known as ego." Meditation can help our brains form better, more productive habits. Once you get into this routine, you can customize this part of the morning to fit your goals, but to get started, consider following one of the short, guided meditations provided by UCLA's Mindful Awareness Research Center. After this mindfulness practice, I shift my thoughts briefly toward gratitude. Instead of letting your brain fill with worries first thing in the morning, spend a moment being aware of the people and circumstances that provide you with beauty, pleasure, comfort, and happiness. We can all think of something to be thankful for, even on the groggiest, greyest morning. This process will help you start the day with joy instead of stress. Finally, I move my thoughts to my intentions for the day. This is not the time to make a huge to-do list. Instead, focus on one or two things that will make you feel proud at the end of the day. These goals can vary widely from tasks, such as getting to the gym after work, to intentions like treating others with a generous spirit or avoiding vocalizing mundane complaints throughout the day. This entire brain-preparation routine will only take 7-10 minutes, but it can make an enormous difference in the remainder of the day.
Dress

Now it is time to complete the tasks that were formerly the entire "getting ready" routine. Treat yourself well by using non-toxic cosmetics (check out the safety of your products using the Environmental Working Group's Skin Deep database) and dressing in comfortable, well-fitting clothes. A tight waistband or lack of pockets can lead you to feel annoyed all day long as you sit uncomfortably or lose your keys repeatedly. Why deal with that frustration? Dress yourself in a way that is practical and that makes you feel confident. For me, comfortable shoes are absolutely non-negotiable, due to the aforementioned foot pain. (Read more about my thoughts on teacher shoes.) This process is much simpler if you have pre-selected your clothes the night before. More on this later in a discussion of evening routines. Also, finish getting dressed now instead of planning to apply makeup or file your nails (or whatever) on the way to work or just after arriving, for the sake of your safety and sanity.

Eat

A teacher's job is not for the faint of heart (or soul or body). If you want to perform well, your body is going to need fuel. During my first days working as a teacher, I developed a morning habit of feasting on a Nutrigrain bar and a ginormous diet soda. Although this wasn't even a tasty breakfast, it seemed like the right menu. After all, I didn't feel hungry at sun's-not-even-up-o'clock, and I just wanted to wake myself up. Having learned more about the weirdo chemicals in factory-produced "food," I tried slightly healthier variations of this meal, such as a granola bar and a homemade latte. However, I'm here to tell you, stuffing random food-like substances + caffeine down the hatch is not the most productive approach to your morning meal. Build a habit of eating a nutritious breakfast, even if it's quick and small. Some of my favorites are a bowl of real oatmeal (not the packet full of artificial colors and flavors) or a smoothie.
Both of these are great ways to sneak extra servings of fruits and vegetables and even dietary supplements into your day. For example, a spoonful of ground flax seed, which is jam-packed with Omega-3s and other essential and often under-consumed vitamins, is basically unnoticeable once it's stirred into either of these breakfasts. Also, drink water. It's good for your everything. You know this. Just do it. Consider having your first glass of water just after you get out of bed. I keep a water bottle on my nightstand and finish whatever I didn't drink during the night as soon as I wake up. Have another glass with breakfast. With two down, you just have six more glasses to go during your bathroom-break-less work day.

Pack Up

You're almost ready to leave for work, and just look at what all you've accomplished! Before you walk out the door, though, be sure you have everything you will need. Yes, you are going to feel like a pack-mule as you walk to the car, but it is so worthwhile compared to feeling unprepared all day. Check your teacher bag for your school ID and keys, so you don't have to beg a custodian to let you into your room over and over all day. Fill up your water bottle - the big one. Yes, I know you don't get to go to the restroom. Just bring the water and drink it anyway. Someone will watch your students for the 30 seconds it takes to run to the bathroom as long as you are willing to return the favor. You're going to be doing a lot of talking and moving, and anyway, water's good for your everything, remember? Get your lunch ready. The food that is available for you to purchase at work is not good enough. (It's not good enough for the kids either, but that's a topic for another day.) You are an individual with specific dietary needs and preferences, and a double side of soggy cafeteria fries or an overpriced candy bar is simply not enough - not enough calories, not enough nutrition, not enough enjoyment.
Get out your lunchbox, and fill it with enough nutritious, delicious items to feed yourself a meal and two snacks. I always include nuts and dried fruits for my morning and afternoon snacks, and I usually pack leftovers and fresh fruit for my midday meal. Trying to lose weight? Pack even more fruits, so that you aren't tempted by the box of stale doughnuts in the teachers' lounge. Morning can easily become a rushed and stressful part of the day if you don't pre-plan a routine and then loyally carry it out. Trying to accomplish all the things on this list without a plan would be mentally exhausting. By establishing a routine ahead of time, you can use your morning to care for yourself and prepare for your day instead of getting bogged down with decision fatigue or simply sleeping through this opportunity.

Happy Lunar New Year! Today's celebration creates a perfect opportunity for a multidisciplinary story time. Since many children are only familiar with the Western / Gregorian calendar, begin by introducing the lunar calendar, which is based on moon cycles rather than Earth's movement around the sun. After the stories, encourage students to make text-to-self connections regarding the traditions mentioned in the books. Students will discover that many cultures share similar traditions. Then demonstrate how students can count backward to their birth year and learn about the Chinese zodiac. The zodiac chart is printed in the back of D Is for Dragon Dance, for the letter Z, and you can download and print a beautiful version created and shared by Jan Brett. I've written about this printable before, but kids seriously love it and will practice patterns and arithmetic with this image for as long as you will let them. Finally, let students practice creating similes by comparing the upcoming year to a horse, since 2014 is the year of the horse. Kids can browse non-fiction books about horses to get ideas for adjectives to use in their similes.
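The counting-backward zodiac activity boils down to simple modular arithmetic, and older students could even check their answers with a short program. A minimal sketch follows; the twelve-animal list and the fact that 2020 was a Year of the Rat are standard background facts, not taken from the post, and note that because the lunar year begins in late January or February, children born in January may actually belong to the previous animal's year:

```python
# The Chinese zodiac repeats on a 12-year cycle, so the animal for any year
# can be found with the modulo operator, anchored at 2020 = Year of the Rat.

ANIMALS = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
           "Horse", "Goat", "Monkey", "Rooster", "Dog", "Pig"]

def zodiac_animal(year: int) -> str:
    """Return the zodiac animal for a Gregorian calendar year.

    Python's % always returns a non-negative result here, so this works
    for years before the 2020 anchor as well.
    """
    return ANIMALS[(year - 2020) % 12]

print(zodiac_animal(2014))  # Horse -- matching the "year of the horse" above
print(zodiac_animal(2006))  # Dog
```

A second-grader born in 2006 counts back eight years from 2014 and lands two animals earlier in the cycle, which is exactly what the function computes.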
{ "pile_set_name": "Pile-CC" }
386 So.2d 60 (1980) Ken GURITZ, d/b/a Electrodex, Inc., Appellant, v. AMERICAN MOTIVATE, INC., an Illinois Corporation, Appellee. No. 80-942. District Court of Appeal of Florida, Second District. July 25, 1980. *61 Richard H. Bailey of Harrison, Harllee, Porges & Bailey, Bradenton, for appellant. Thomas M. Gallen of Miller, Gallen, Kaklis & Venable, Bradenton, for appellee. CAMPBELL, Judge. Electrodex appeals from an order granting American Motivate's motion to abate for lack of jurisdiction. We reverse. Electrodex filed a two-count complaint in the lower court, one count being an action on an open account and the other being an action on goods sold and delivered. Electrodex's complaint alleges in part: 3. Defendant, American, is an Illinois corporation with its principal place of business in West Chicago, Illinois. ..... 5. The basis of the sum owed from the defendant to the complainant, is a sale of goods as contemplated by Chapter 671 and Chapter 672, Florida Statutes, which statutes provide that remedy shall be liberally administered and the aggrieved party may be put in as good position as if the other party had fully performed pursuant to Section 671.102, Florida Statutes. ..... 8. Defendant, American, has done acts enumerated in Section 48.193, Florida Statutes, which submit the defendant, American, to the jurisdiction of the courts of this state for a cause of action arising from the doing of said acts, which acts are more particularly described as follows: (a) engages in business in this state; (b) breaches a contract in this state by failing to perform acts required by the contract to be performed in this state, and more particularly failing to make payment for the goods sold and delivered by the plaintiff to the defendant in Bradenton, Manatee County, Florida. 9. Both the making and performance of the contract in question to sell and buy goods occurred in Manatee County, Florida. 10. 
The place for delivery of the goods sold by the plaintiff, Electrodex, to the defendant, American, was the seller's place of business, to wit: 6211 17th Street East, Manatee County, Florida. American Motivate responded with an unsupported special appearance considered by the trial court to be a motion to abate for lack of jurisdiction over the person. Two issues are raised by this appeal: 1) whether Electrodex in its complaint pled facts which clearly justify as a matter of law the application of the long arm statute, and 2) whether American Motivate's unsupported motion was sufficient to shift the burden to Electrodex to prove the allegations in the complaint which justify the application of the long arm statute. *62 We find that the allegations contained in Electrodex's complaint are sufficient. Section 48.193(1)(g), Florida Statutes provides that any person who personally or through an agent "[b]reaches a contract in this state by failing to perform acts required by the contract to be performed in this state" submits himself to the jurisdiction of the courts of this state. Paragraph 8(b) of the complaint sufficiently alleges that American Motivate breached a contract in this state. Cf. Maddax International Corporation v. Belcher Intercontinental Moving Services, Inc., 342 So.2d 1082 (Fla. 2d DCA 1977). (Appellee alleged in its complaint that the parties entered into a contract which had been breached by the appellant's failure to perform acts required by the contract to be performed in this state, to wit: the payment for the services required by the appellee. This court held that where an express promise to pay exists and no place of payment is stipulated, payment is due at the location of the creditor and that is where the breach occurs.) We also find American Motivate's unsupported motion insufficient to transfer the burden to Electrodex to prove the jurisdictional allegations in the complaint. The court in Elmex Corporation v. 
Atlantic Federal Savings & Loan Association of Fort Lauderdale, 325 So.2d 58 (Fla. 4th DCA 1976), discussed the plaintiff's burden in invoking and the defendant's burden in challenging the application of Section 48.193(1)(g), Florida Statutes. The court stated "[i]t may be unnecessary for the defendant to do anything more than file a simple (unsupported) motion where the allegations of the complaint are legally insufficient." (Emphasis added.) Id. at 61-62. The court further stated that in order for a defendant to assert factual matters not appearing on the face of the record he must support the motion with an affidavit, deposition or other proof; if this supported motion contradicts the complaint, the issue becomes one of proof, and the burden shifts to the plaintiff to prove the allegations in the complaint which justify application of the long arm statute. In the instant case, American Motivate filed an unsupported motion challenging jurisdiction over the person. Based on Elmex, supra, this unsupported motion would be sufficient if the allegations in the complaint were legally insufficient. Since this is not the case, the trial court erred by granting the defendant's unsupported motion to abate. Based on the foregoing reasoning, we vacate the order of the trial court and remand for further consistent proceedings. BOARDMAN, Acting C.J., and RYDER, J., concur.
{ "pile_set_name": "FreeLaw" }
Eagle's Wing

Eagle's Wing is a Euro-Western Eastmancolor film made in 1979. It stars Martin Sheen, Sam Waterston and Harvey Keitel. It was directed by Anthony Harvey, with a story by Michael Syson and a screenplay by John Briley. It won the British Society of Cinematographers Best Cinematography Award for 1979.

Plot

The story has three plot strands that run concurrently through the film: a stagecoach carrying a rich widow home to her family's hacienda, a war party of Indians returning to their village, and two fur traders waiting to meet a different group of Indians with whom they trade. The war party attacks the other Indians and kills their leader, who owns a magnificent white stallion. White Bull (Waterston) attempts to capture the horse, but it is too quick and makes off carrying the dead chief. Pike (Sheen) and Henry (Keitel) wait in vain for the Indians they trade with and are then attacked themselves by the war party. Henry is killed, the Indians take the traders' horses, and Pike is left alone with only a mule. Travelling alone, he comes across the funeral of the dead chief. He saves the white stallion from ritual slaughter, abandons his mule, and continues his travels. The Medicine Man conducting the ritual is accidentally killed while Pike is taking the horse. The war party finds the stagecoach, attacks it, kills the driver, guard, and one of the passengers, and then leaves White Bull to ransack the coach and passengers of all valuables. White Bull gathers a hoard of jewels and other valuable items, takes a white girl for himself, and leaves the other survivors standing in the desert. One of the survivors, a priest, takes a coach horse and rides off to alert the hacienda. The story then becomes a four-way chase.
After gaining the white stallion from Pike, White Bull, with the girl, the treasure and the stallion, continues towards his village; Pike goes after the stallion; a posse from the hacienda sets out to recover the coach passengers and the girl; and members of the Medicine Man's tribe seek to avenge his death. After a series of to-and-fro adventures, the film ends as White Bull rides off alone with the stallion while Pike, utterly defeated, stands and watches him go; the girl is still behind Pike, waiting to be rescued.

Cast

Martin Sheen as Pike
Sam Waterston as White Bull
Harvey Keitel as Henry
Stephane Audran as The Widow
John Castle as The Priest
Caroline Langrishe as Judith
Jorge Russek as Gonzalo
Manuel Ojeda as Miguel
Jorge Luke as Red Sky
Pedro Damian as Jose
Claudio Brook as Sanchez
José Carlos Ruiz as Lame Wolf
Farnesio de Bernal as The Monk
Cecilia Camacho as The Young Girl
Enrique Lucero as The Shaman

Production

The film was based on a story by Michael Syson, who worked for the BBC. Director Anthony Harvey said Syson "wrote it as kind of a short story, a series of ideas with a very strong story line but not really a script." The film attracted Harvey "as a chance to break away from the subjects I have done before, really to have complete freedom. It was a film with a very thin script in a way but it had a very strong story. It didn't have much detail and no dialogue at all, except for the first ten minutes between Harvey Keitel and Sam Waterston. It was very much a director's subject." Harvey said "The moment I read Eagle's Wing I knew very clearly the kind of things I wanted visually, and talked to Billy Williams for days about it." Harvey says he and John Briley sat down and wrote a script "for about a month before we went on location, just to make sense of it." The film was shot in nine weeks in and around Durango, Mexico in early 1978, finishing by April. Harvey says this was "short but we had the luxury of a small unit."
Harvey said they would go "three or four hours out of" Durango. "We were looking for desolate landscapes. The movie is about loneliness and people who don't communicate... We didn't want any romantic David Lean sunsets but rather black and threatening skies." He looked for unusual landscapes that appeared "like the surface of the moon". Harvey says that after filming was complete and he was on another project, Players, "the money people came in and... made a number of cuts and put back some things I had cut out."

Reception

The film was released in England in 1979. Harvey said "the reviews couldn't have been better if I'd written them myself." The Observer called it "dazzling" and The Guardian said it was "well worth seeing". However, the film was not a commercial success, and it was a number of years before it was released in the US, to varied reviews. It was one of the last movies financed by the Rank Organisation.

References

External links

Category:1970s Western (genre) films
Category:1979 films
Category:British films
Category:British Western (genre) films
Category:Films directed by Anthony Harvey
Category:Films scored by Marc Wilkinson
{ "pile_set_name": "Wikipedia (en)" }
CONFIG_ARM=y
CONFIG_GIC_V3_ITS=y
CONFIG_TARGET_LX2160AQDS=y
CONFIG_TFABOOT=y
CONFIG_SYS_TEXT_BASE=0x82000000
CONFIG_SYS_MALLOC_F_LEN=0x6000
CONFIG_NR_DRAM_BANKS=3
CONFIG_ENV_SIZE=0x2000
CONFIG_ENV_OFFSET=0x500000
CONFIG_ENV_SECT_SIZE=0x20000
CONFIG_DM_GPIO=y
CONFIG_FSPI_AHB_EN_4BYTE=y
CONFIG_ARMV8_SEC_FIRMWARE_SUPPORT=y
CONFIG_SEC_FIRMWARE_ARMV8_PSCI=y
CONFIG_DEFAULT_DEVICE_TREE="fsl-lx2160a-qds"
CONFIG_AHCI=y
CONFIG_OF_BOARD_FIXUP=y
CONFIG_FIT_VERBOSE=y
CONFIG_OF_BOARD_SETUP=y
CONFIG_OF_STDOUT_VIA_ALIAS=y
CONFIG_BOOTDELAY=10
CONFIG_USE_BOOTARGS=y
CONFIG_BOOTARGS="console=ttyAMA0,115200 root=/dev/ram0 earlycon=pl011,mmio32,0x21c0000 ramdisk_size=0x2000000 default_hugepagesz=1024m hugepagesz=1024m hugepages=2 pci=pcie_bus_perf"
# CONFIG_USE_BOOTCOMMAND is not set
CONFIG_MISC_INIT_R=y
CONFIG_BOARD_EARLY_INIT_R=y
CONFIG_CMD_GREPENV=y
CONFIG_CMD_EEPROM=y
CONFIG_CMD_DM=y
CONFIG_CMD_GPT=y
CONFIG_CMD_I2C=y
CONFIG_CMD_MMC=y
CONFIG_CMD_PCI=y
CONFIG_CMD_USB=y
CONFIG_CMD_WDT=y
CONFIG_CMD_CACHE=y
CONFIG_MP=y
CONFIG_OF_CONTROL=y
CONFIG_OF_LIST="fsl-lx2160a-qds-3-x-x fsl-lx2160a-qds-7-x-x fsl-lx2160a-qds-19-x-x fsl-lx2160a-qds-20-x-x fsl-lx2160a-qds-3-11-x fsl-lx2160a-qds-7-11-x fsl-lx2160a-qds-7-11-x fsl-lx2160a-qds-19-11-x fsl-lx2160a-qds-20-11-x"
CONFIG_MULTI_DTB_FIT=y
CONFIG_ENV_OVERWRITE=y
CONFIG_ENV_IS_IN_MMC=y
CONFIG_ENV_IS_IN_SPI_FLASH=y
CONFIG_ENV_ADDR=0x20500000
CONFIG_NET_RANDOM_ETHADDR=y
CONFIG_DM=y
CONFIG_SATA_CEVA=y
CONFIG_FSL_CAAM=y
CONFIG_DM_I2C=y
CONFIG_I2C_SET_DEFAULT_BUS_NUM=y
CONFIG_I2C_DEFAULT_BUS_NUMBER=0
CONFIG_I2C_MUX=y
CONFIG_I2C_MUX_PCA954x=y
CONFIG_DM_MMC=y
CONFIG_FSL_ESDHC=y
CONFIG_MTD=y
CONFIG_DM_SPI_FLASH=y
CONFIG_SPI_FLASH_EON=y
CONFIG_SPI_FLASH_SPANSION=y
CONFIG_SPI_FLASH_STMICRO=y
CONFIG_SPI_FLASH_SST=y
# CONFIG_SPI_FLASH_USE_4K_SECTORS is not set
CONFIG_PHYLIB=y
CONFIG_PHY_AQUANTIA=y
CONFIG_PHY_CORTINA=y
CONFIG_PHY_REALTEK=y
CONFIG_PHY_VITESSE=y
CONFIG_DM_ETH=y
CONFIG_DM_MDIO=y
CONFIG_DM_MDIO_MUX=y
CONFIG_E1000=y
CONFIG_MDIO_MUX_I2CREG=y
CONFIG_FSL_LS_MDIO=y
CONFIG_PCI=y
CONFIG_DM_PCI=y
CONFIG_DM_PCI_COMPAT=y
CONFIG_PCIE_LAYERSCAPE_RC=y
CONFIG_DM_RTC=y
CONFIG_RTC_PCF2127=y
CONFIG_DM_SCSI=y
CONFIG_DM_SERIAL=y
CONFIG_SPI=y
CONFIG_DM_SPI=y
CONFIG_FSL_DSPI=y
CONFIG_NXP_FSPI=y
CONFIG_USB=y
CONFIG_DM_USB=y
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_XHCI_DWC3=y
CONFIG_WDT=y
CONFIG_WDT_SBSA=y
CONFIG_EFI_LOADER_BOUNCE_BUFFER=y
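For scripting around board configurations like the one above, the defconfig format is simple to parse: each line is either `CONFIG_<NAME>=<value>` or a commented `# CONFIG_<NAME> is not set` marker. A minimal sketch in Python (the helper name is ours, not part of U-Boot):

```python
def parse_defconfig(text):
    """Parse Kconfig-style defconfig lines into a dict.

    `CONFIG_FOO=y` maps to {"CONFIG_FOO": "y"}; a commented-out
    `# CONFIG_FOO is not set` line maps to {"CONFIG_FOO": None}.
    """
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("# CONFIG_") and line.endswith(" is not set"):
            options[line[2:-len(" is not set")]] = None
        elif line.startswith("CONFIG_") and "=" in line:
            name, _, value = line.partition("=")
            options[name] = value.strip('"')  # drop quotes around string values
    return options

sample = """\
CONFIG_ARM=y
CONFIG_ENV_SIZE=0x2000
CONFIG_DEFAULT_DEVICE_TREE="fsl-lx2160a-qds"
# CONFIG_USE_BOOTCOMMAND is not set
"""
opts = parse_defconfig(sample)
print(opts["CONFIG_DEFAULT_DEVICE_TREE"])  # fsl-lx2160a-qds
```

Note that this sketch ignores whatever Kconfig dependency logic the real build system applies; it only reads the literal key/value pairs.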
{ "pile_set_name": "Github" }
Blackstone Valley Wool Blanket - Olive (20 Blankets)

This is a warm, budget-priced wool blanket, affordable and practical for homeless shelters and large-scale disaster relief efforts. This blanket is not recommended for machine washing. Treated with DuPont X-12 for fire retardancy.
{ "pile_set_name": "Pile-CC" }
Pneumococcal disease in a paediatric population in a hospital of central Italy: a clinical and microbiological case series from 1992 to 2006. Streptococcus pneumoniae is frequently isolated from carrier children, but it also causes localized and invasive diseases. The increasing incidence of chemoresistance can affect the efficacy of empiric therapy and motivates interest in primary prophylaxis. The study aims to investigate the clinical and microbiological features of paediatric pneumococcal infections in an Italian province. A retrospective clinical analysis of 640 children, hospitalized from 1992 to 2006 with one culture positive for S. pneumoniae, was performed. Chemosusceptibility tests and serotyping were carried out on the isolates; statistical analysis was applied to compare variables. Overall, 47.8% were carriers, while 49% and 3.2% had, respectively, a localized or invasive disease; S. pneumoniae aetiology accounted for 25% of meningitis and 16% of sepsis cases. Of the total isolates, 10.2% were penicillin non-susceptible and 35.15% were erythromycin resistant, with increasing rates over the years. The prevalent invasive serotypes were 1 (38.1%) and 7F (9.5%). The study confirms the relevance of pneumococcal disease in children, on the strength of 15 years of observation. The long study period may be a limitation, because population characteristics change over time; a selection bias could also be present, because only hospitalized patients were analysed. However, we documented the variable evolution of chemoresistance and a distinctive serotype spread, offering a microbiological basis for an appropriate clinical approach.
{ "pile_set_name": "PubMed Abstracts" }
Magnificently detailed metal work is silhouetted with an etched cream glass to create a skillfully designed work of art. The Nanti family is rated for both indoor and outdoor application without compromising style.

3 Light Large Pendant in Nanti Champagne...
3 Light Semi Flush Mount in Nanti Champagne...
This Post Lantern has a 3 inch fitter...
3 Light Large Pendant in Iron Oxide...
3 Light Large Pendant in Iron Oxide...
1 Light Mini Pendant in Iron Oxide ...
3 Light Semi Flush Mount in Iron Oxide...
{ "pile_set_name": "Pile-CC" }
Heavy rains, including those this past week associated with Tropical Storm Andrea, have completely eliminated all signs of the severe drought that persisted here for years, but they are a mixed blessing for those in timber and agriculture ventures.
{ "pile_set_name": "Pile-CC" }
[Perceptions and practices of people with diabetes regarding diabetes mellitus at the Centre National University Hospital Hubert Maga Koutoucou, Cotonou]. The study was initiated to assess patients' knowledge about diabetes mellitus, to identify food-related claims and, finally, to identify daily practices. The study was cross-sectional and descriptive. The study population consisted of diabetic patients seen as outpatients at the CNHU-HKM in Cotonou. Data were collected with a questionnaire. Deficiencies were noted in patients' knowledge of diabetes mellitus, and several claims were identified: patients think that a diabetic should not eat fruit, rice or corn dough. The practice of physical activity is low. Difficulties were encountered that hinder compliance with the diet, the practice of physical activity and glycemic control. Challenges remain to improve the knowledge, attitudes and practices of patients. Therapeutic education is necessary for all diabetics.
{ "pile_set_name": "PubMed Abstracts" }
Tako Tsubo cardiomyopathy, presenting with cardiogenic shock in a 24-year-old patient with anorexia nervosa. Tako Tsubo cardiomyopathy is a serious condition that is caused by heart failure due to inordinate stress. We here present a case of a young woman with this disorder in association with anorexia nervosa. We postulate a pathophysiological relationship and discuss the management of Tako Tsubo cardiomyopathy.
{ "pile_set_name": "PubMed Abstracts" }
Lower leg and foot injuries in tennis and other racquet sports. Injuries to the lower extremity are common in racquet sports. These can be either acute or chronic. Although acute injuries usually respond to treatment, chronic injuries are often less amenable to treatment. The occurrence of both kinds of injuries, however, can often be prevented by proper training techniques including stretching and strengthening exercises. These exercises are also important components of a proper rehabilitation program.
{ "pile_set_name": "PubMed Abstracts" }
Q: Swift: how to make a function execute after another function has completed

I wrote two Swift functions (for a multiple-choice quiz app):

func createQuestions() // goes to Parse, fetches the question data that will be used to quiz the users, and stores it in local arrays

func newQuestion() // also fetches some other data (for example, some incorrect choices) from Parse, reads the local variables, and finally sets the labels to display correctly to the users

I want viewDidLoad to first execute createQuestions(), and only after it is fully completed to run newQuestion(). Otherwise newQuestion() has issues reading the local variables that were supposed to have been fetched. How can I manage that?

EDIT: I learned to use closures! One more follow-up question. I am using a for loop to create questions. However, the problem is that the for loop does not execute in order. Then my check for repeated questions (vocabTestedIndices) fails, and it can produce two identical questions. I want the for loop to execute one iteration at a time, so the questions created will not overlap.

code image

A: Try:

override func viewDidLoad() {
    super.viewDidLoad()
    self.createQuestions { () -> () in
        self.newQuestion()
    }
}

func createQuestions(handleComplete: (() -> ())) {
    // do something
    handleComplete() // call it when you have finished the work
}

func newQuestion() {
    // do other stuff
}

A: What about Swift's defer, from this post?

func deferExample() {
    defer {
        print("Leaving scope, time to cleanup!")
    }
    print("Performing some operation...")
}

// Prints:
// Performing some operation...
// Leaving scope, time to cleanup!
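The completion-handler pattern in the accepted answer is language-agnostic: the first stage receives a callback and invokes it only once its own work is done, so the second stage never runs against unfetched data. Here is a minimal sketch of the same idea in Python for readers outside Swift (all names are illustrative, not from the original app):

```python
def create_questions(handle_complete):
    """Stage 1: pretend to fetch question data, then signal completion."""
    questions = ["Q1", "Q2", "Q3"]  # stand-in for data fetched from a backend
    handle_complete(questions)       # invoked only after the fetch has finished

def new_question(questions):
    """Stage 2: safe to read the data, because stage 1 has completed."""
    return questions[0]

result = []
create_questions(lambda qs: result.append(new_question(qs)))
print(result)  # ['Q1']
```

Because `new_question` is only ever reached through the callback, the ordering problem in the question cannot occur.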
{ "pile_set_name": "StackExchange" }
[Biological assay of prostaglandin using the rat uterus]. Isometric contraction of the rat uterus with PGF2alpha was more reliable and stable than when a stomach preparation was used. The method of PG bioassay using the rat uterus was as follows: ovariectomized rats were given 10 μg of estrone s.c. 48 hr before decapitation. Uterine strips were suspended in a 2 ml organ bath containing modified Locke-Ringer solution at 25 degrees C, and the isometric contraction was determined. As little as 4 ng/ml of PGF2alpha could be determined quantitatively with this method.
{ "pile_set_name": "PubMed Abstracts" }
1. Field of the Invention

The present invention relates to a lens holder capable of confirming a mounted state of a lens, and to a lens shape measuring apparatus using the lens holder and configured to measure a lens shape, such as the lens shape of an eyeglass frame, the shape of a demo lens, or a template for a lens.

2. Description of the Related Art

There is conventionally known a lens holder used for holding a lens, a template, a demo lens or the like when measuring the shape of the lens and so on, or used for holding an eyeglass lens between a pair of lens rotating shafts of a lens contour processing apparatus when grinding the eyeglass lens. As such a lens holder, there exists a flange-type lens holder in which a flange, for example an elongate egg-shaped flange, is integrally provided on one end of a shaft portion, or a lens absorbing jig (absorbing cup) in which a rubber cup is integrally provided on one end of a shaft portion. As the flange-type lens holder, there are known an adhesive-type lens holder, in which the lens shape is adhered to the flange by a double-faced adhesive tape, and a screw fixing-type lens holder, in which a positioning pin provided on the flange is engaged in a positioning hole of a template and the template is fixed to the flange by means of a fixing screw inserted into a center hole of the template. Meanwhile, with the lens absorbing jig, the lens can be held on the shaft portion by absorbing the lens to the rubber cup. By the way, there is known a lens shape measuring apparatus in which a shaft portion of a lens holding member configured to hold a lens is fitted in a cylinder or engaging tube portion of a lens holder.
The lens holder is set to the lens shape measuring apparatus in such a manner that the lens is located downward, a circumferential edge of the lens is measured by a measuring element from below, and lens shape data (lens shape information) for processing a contour of the eyeglass lens is obtained (for reference, see Japanese Patent No. 3602175, FIG. 10). In addition, there is conventionally known a lens shape measuring apparatus for an eyeglass lens frame wherein, for example, a lens is attached to a lens absorbing jig. A cylinder or engaging tube portion of a lens holder is clamped to a shaft portion of the lens absorbing jig by means of a circular clamping member provided on the lens holder. The lens is located downward and thus the lens holder is set to the apparatus, a circumferential edge of the lens is measured by a measuring element from below, and lens shape information for processing an eyeglass lens is obtained (for reference, see Japanese Patent No. 3989593, FIG. 6). However, in the aforementioned lens shape measuring apparatus, if the shaft portion of the lens holder by which the lens is held is engaged in the cylinder or engaging tube portion without providing a clamping member on the lens holder, since the engaging tube portion of the lens holder is directed downward, in the case that the engaging state of the shaft portion in the engaging tube portion is loose, when measuring the lens shape contour by abutting the measuring element against the circumferential edge of the lens, the lens holding member and the lens come out of place from the engaging tube portion of the lens holder, so there is a problem that the lens shape contour cannot be correctly measured. Further, a lens can be shaped like an elongate crab eye having a short height (crab eye-like lens shape), such as in a recent rimless frame in which holes for mounting metal fittings are formed.
In the case that a lens shape as mentioned above is measured by the lens shape measuring apparatus, if a lens like the elongate crab eye (crab eye-like lens shape) is attached to a normal lens holding member, the positions of the holes are hidden, so that there is a possibility that the positions of the holes cannot be correctly detected. Accordingly, if the positions of the holes cannot be correctly detected, since there are no data concerning the positions of the holes of the lens, the circumferential edge of the eyeglass lens is processed by the lens processing apparatus based only upon the above-mentioned lens shape information. After the finishing process of the thus ground eyeglass lens, when forming the holes in the lens, there has been a possibility that the holes are formed in wrong positions. Furthermore, in the above-mentioned lens shape measuring apparatus, since the clamping member for the lens is circular in shape, if a lens like the elongate crab eye having a short height (crab eye-like lens shape), such as in a recent rimless frame, is attached to a normal lens holding member, the positions of the holes cannot be detected. Since the positions of the holes cannot be correctly recognized, after the finishing process of the thus ground eyeglass lens, when forming holes in the lens, there has been a possibility that the holes would be formed in wrong positions.
{ "pile_set_name": "USPTO Backgrounds" }
Bacterial community characterization in the soils of native and restored rainforest fragments. The Brazilian Atlantic Forest ("Mata Atlântica") has been widely studied for its valuable and unique biodiversity. Unfortunately, this priceless ecosystem has been extensively deforested, and only 10% of its original area remains untouched. Some projects have been successfully implemented to restore its fauna and flora, but there is a lack of information on how soil bacterial communities respond to this process. Thus, our aim was to evaluate the influence of soil attributes and seasonality on the soil bacterial communities of rainforest fragments under restoration. Soil samples from a native site and two ongoing restoration fragments with different times of implementation (10 and 20 years) were collected and assayed using culture-independent approaches. Our findings demonstrate that seasonality barely altered the bacterial distribution, whereas soil chemical attributes and plant species were related to bacterial community structure during the restoration process. Moreover, two bacterial groups, Solibacteriaceae and Verrucomicrobia, increased steadily from the more recently planted site (10 years), through the 20-year-old restoration site, to the native site, which may suggest their use as bioindicators of soil quality and of the recovery of forest fragments being restored.
{ "pile_set_name": "PubMed Abstracts" }
The effects of manipulating expectations through placebo and nocebo administration on gastric tachyarrhythmia and motion-induced nausea. Interest in the role of expectation in the development of nausea and other adverse conditions has existed for decades. The purpose of this study was to examine the effects of manipulating expectations, through the administration of placebos and nocebos, on nausea and gastric tachyarrhythmia provoked by a rotating optokinetic drum. Seventy-five participants were assigned to one of three groups. Positive-expectancy group participants were given placebo pills that would allegedly protect them against the development of nausea and motion sickness. Negative-expectancy group participants were given the same pills as nocebos; they were led to believe the pills tended to make nausea somewhat worse. Placebo-control group participants were told the pills were indeed placebos that would have no effect whatsoever. Subjective symptoms of motion sickness were significantly lower among negative-expectancy group participants than positive-expectancy and placebo-control group participants (p<0.05). Gastric tachyarrhythmia, the abnormal stomach activity that frequently accompanies nausea, was also significantly lower among negative-expectancy group participants than positive-expectancy and placebo-control group participants during drum rotation (p<.05) [corrected] Inducing negative expectations through nocebo administration reduced nausea and gastric dysrhythmia during exposure to provocative motion, whereas positive placebos were ineffective for preventing symptom development. That manipulation of expectation affected gastric physiological responses as well as reports of symptoms suggests that an unspecified psychophysiological mechanism was responsible for the observed group differences.
These results also suggest that patients preparing for difficult medical procedures may benefit most from being provided with detailed information about how unpleasant their condition may become.
{ "pile_set_name": "PubMed Abstracts" }
Round 0.010526528 to two decimal places. 0.01 Round 188.0667112 to the nearest ten. 190 What is -5549546.28 rounded to the nearest 1000000? -6000000 What is -61906.9439 rounded to the nearest 10000? -60000 What is 5.082397014 rounded to 4 dps? 5.0824 Round 3067.14736 to the nearest integer. 3067 Round 0.00080478049 to 4 dps. 0.0008 What is -0.000003907853 rounded to seven decimal places? -0.0000039 Round 52.892351 to the nearest 10. 50 What is 1552386.9 rounded to the nearest ten thousand? 1550000 Round 783570.4 to the nearest ten thousand. 780000 What is 3.2397384404 rounded to 4 dps? 3.2397 Round 1621248.25 to the nearest one thousand. 1621000 Round 1.18243908 to 3 decimal places. 1.182 Round -2969356926 to the nearest one million. -2969000000 Round 2213071.01 to the nearest 100. 2213100 Round -3364.596848 to 0 dps. -3365 Round -27957.5249 to zero dps. -27958 Round -365372852.2 to the nearest 10000. -365370000 Round -9.71303827 to the nearest integer. -10 Round -30559574.5 to the nearest ten thousand. -30560000 Round -1884.768477 to the nearest integer. -1885 Round -0.00207660932 to 7 decimal places. -0.0020766 What is 92359810 rounded to the nearest 10000? 92360000 What is 3860.78601 rounded to one dp? 3860.8 What is 3123476650 rounded to the nearest one hundred thousand? 3123500000 Round 417.801795 to 2 decimal places. 417.8 What is 0.0001525712326 rounded to 7 dps? 0.0001526 What is 4.1079105 rounded to zero decimal places? 4 Round 1589745.3603 to the nearest 10. 1589750 What is 0.00000204091171 rounded to seven decimal places? 0.000002 Round -1241371.27 to the nearest 100000. -1200000 Round -0.00145392777 to 7 dps. -0.0014539 Round -20.1267584 to 1 decimal place. -20.1 What is 72126.366 rounded to the nearest one thousand? 72000 Round 3220581.9 to the nearest one million. 3000000 Round 42.910779 to two dps. 42.91 What is 631.921852 rounded to the nearest ten? 630 What is 0.19446073 rounded to two dps?
0.19 What is -5594409.504 rounded to the nearest one hundred thousand? -5600000 Round -0.00000137732724 to seven dps. -0.0000014 Round -0.0070029193 to 4 decimal places. -0.007 What is 0.001345540127 rounded to seven dps? 0.0013455 What is 2699.47135 rounded to the nearest ten? 2700 Round -0.0321359548 to 5 dps. -0.03214 What is 11.25139221 rounded to the nearest integer? 11 What is 136339780 rounded to the nearest one hundred thousand? 136300000 What is -2036.234263 rounded to the nearest one hundred? -2000 Round -551142048 to the nearest ten thousand. -551140000 What is 0.000108908334 rounded to 4 decimal places? 0.0001 What is 179200.3729 rounded to the nearest 1000? 179000 Round -0.606962633 to 1 dp. -0.6 What is 14.3390492 rounded to the nearest ten? 10 What is -0.03900018085 rounded to 2 decimal places? -0.04 Round 755.12098 to the nearest ten. 760 Round 85.6300616 to the nearest 10. 90 What is -2.7044535 rounded to 2 decimal places? -2.7 Round -41736.52944 to the nearest ten thousand. -40000 Round 2213.41913 to the nearest integer. 2213 Round -61.949621 to zero dps. -62 What is 0.88338974 rounded to three decimal places? 0.883 Round 0.004954054725 to seven decimal places. 0.0049541 What is 950.49271 rounded to the nearest integer? 950 What is 3238380.438 rounded to the nearest 100000? 3200000 Round -0.1702261266 to 2 decimal places. -0.17 Round 770.62784 to 0 decimal places. 771 What is -85.045424 rounded to the nearest 100? -100 What is 0.011450294 rounded to five dps? 0.01145 What is -0.02165342465 rounded to four dps? -0.0217 What is -97709226 rounded to the nearest 100000? -97700000 What is 252.2701133 rounded to two dps? 252.27 Round -0.00287791071 to seven decimal places. -0.0028779 Round 0.3123195 to four dps. 0.3123 What is -0.0576130295 rounded to five dps? -0.05761 What is 92562487 rounded to the nearest ten thousand? 92560000 Round 5083970010 to the nearest 1000000. 5084000000 What is -13701263.5 rounded to the nearest ten thousand? 
-13700000 What is -0.0007103610111 rounded to six dps? -0.00071 What is -0.000002193699659 rounded to 7 dps? -0.0000022 Round -20439.5127 to 0 decimal places. -20440 Round 132.39293452 to the nearest 10. 130 Round -0.0000136547528 to 6 dps. -0.000014 Round -309187972.7 to the nearest one million. -309000000 Round -964614.07 to the nearest 1000. -965000 Round -0.052476505 to 5 decimal places. -0.05248 What is -1017.5977279 rounded to the nearest 10? -1020 Round -432070.03 to the nearest 100. -432100 What is 0.000132385086 rounded to five decimal places? 0.00013 Round 1076487.04 to the nearest 100000. 1100000 Round 0.0000041598919 to seven decimal places. 0.0000042 Round 53854824.8 to the nearest 100000. 53900000 Round 522459.904 to the nearest 10000. 520000 What is -3727311.3 rounded to the nearest 1000? -3727000 What is -153.83491912 rounded to 1 dp? -153.8 Round -6110447.8 to the nearest one thousand. -6110000 Round -3317919000 to the nearest 100000. -3317900000 What is 17416056902 rounded to the nearest one hundred thousand? 17416100000 Round 2.7327232 to the nearest integer. 3 What is -0.353033668 rounded to five decimal places? -0.35303 Round 1633255.26 to the nearest 10000. 1630000 Round 0.00000289345933 to 7 decimal places. 0.0000029 What is -169688.0442 rounded to the nearest 100000? -200000 What is 182885483 rounded to the nearest 1000000? 183000000 What is -1.835104551 rounded to one decimal place? -1.8 What is -0.1276491681 rounded to five decimal places? -0.12765 Round 3106846.12 to the nearest 100000. 3100000 What is -0.015664673 rounded to 2 dps? -0.02 Round 37.8032367 to one dp. 37.8 What is -60398361 rounded to the nearest one million? -60000000 What is 1705811.58 rounded to the nearest 100000? 1700000 Round 2693.78798 to the nearest integer. 2694 What is 0.00262309672 rounded to four dps? 0.0026 What is 0.0015530197 rounded to four decimal places? 0.0016 Round 1645.9192 to the nearest 10. 1650 What is -5.330537612 rounded to 0 dps? 
-5 What is -532396.5653 rounded to the nearest ten? -532400 Round -84.886188 to zero dps. -85 Round 0.000494547857 to six decimal places. 0.000495 What is 0.000761347336 rounded to four dps? 0.0008 Round -29327.591 to the nearest one thousand. -29000 Round -14753835.4 to the nearest 10000. -14750000 What is -2.267531 rounded to 2 dps? -2.27 What is 4.988543966 rounded to 1 dp? 5 Round -5.55590941 to one dp. -5.6 What is 0.000384964413 rounded to five decimal places? 0.00038 What is -0.000323324222 rounded to four dps? -0.0003 Round -0.0336280276 to 4 decimal places. -0.0336 Round -718.0324 to the nearest integer. -718 Round -53.81103084 to 2 dps. -53.81 What is -0.5796290813 rounded to 4 decimal places? -0.5796 What is -0.000008325843562 rounded to 6 decimal places? -0.000008 What is 3139.20051 rounded to the nearest 100? 3100 What is 0.000000748394151 rounded to seven decimal places? 0.0000007 Round 4332.07933 to the nearest 100. 4300 Round 74347578.1 to the nearest one million. 74000000 Round -86.44511 to 2 dps. -86.45 Round -0.0184811679 to 5 decimal places. -0.01848 What is 0.00009886427007 rounded to 5 dps? 0.0001 Round -3.647179724 to the nearest integer. -4 What is 7.86962693 rounded to 1 dp? 7.9 Round -4473811.689 to the nearest one hundred thousand. -4500000 What is 655548.73 rounded to the nearest 10000? 660000 What is 1561777.3 rounded to the nearest one hundred thousand? 1600000 What is -1.042633777 rounded to 2 dps? -1.04 What is 92211.05 rounded to the nearest 100? 92200 Round 808.5816494 to 1 dp. 808.6 Round 714977462 to the nearest 10000. 714980000 Round -0.00000079429541 to 7 dps. -0.0000008 Round 10337334904 to the nearest one million. 10337000000 Round 20723.9285 to the nearest 10. 20720 What is -0.425185043 rounded to two dps? -0.43 What is 0.0008188643945 rounded to 7 decimal places? 0.0008189 What is -0.0532615732 rounded to 3 dps? -0.053 What is 2.6556757 rounded to 3 dps? 2.656 Round 1805433490 to the nearest one hundred thousand. 
1805400000 What is -0.789351225 rounded to two dps? -0.79 What is -213.340888 rounded to the nearest ten? -210 Round -0.00250535376 to seven dps. -0.0025054 Round -0.0000690384858 to five dps. -0.00007 Round 4116.9648 to the nearest 100. 4100 What is 1131.9718 rounded to the nearest 10? 1130 What i
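The exercises above follow two patterns: rounding to a number of decimal places and rounding to the nearest multiple of a power of ten. A minimal sketch of both, assuming ties round away from zero (which matches the answers shown):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_dp(x, places):
    """Round x to `places` decimal places, ties away from zero."""
    q = Decimal(1).scaleb(-places)  # places=6 -> Decimal('0.000001')
    return float(Decimal(str(x)).quantize(q, rounding=ROUND_HALF_UP))

def round_to_nearest(x, unit):
    """Round x to the nearest multiple of `unit` (10, 100, 1000000, ...)."""
    scaled = Decimal(str(x)) / unit
    return int(scaled.quantize(Decimal(1), rounding=ROUND_HALF_UP) * unit)

print(round_dp(-0.0007103610111, 6))       # -0.00071
print(round_to_nearest(132.39293452, 10))  # 130
```

Going through `Decimal(str(x))` rather than raw floats keeps the questions' decimal digits exact instead of inheriting binary floating-point error.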
{ "pile_set_name": "DM Mathematics" }
A children’s picture book on life: A book about being yourself and treating others with respect that comes from within.

**2016 Mom’s Choice Awards – Gold Award Recipient**

The book’s main character is Tooki, a friendly owl who teaches children to love themselves unconditionally, recognize their own uniqueness, and treat others with kindness. Tooki talks to children Dr. Seuss-style, respecting them just the way they are. You will want to read this book over and over with children young and old.

Perfect for a quick bedtime read with young children

Excellent for early and beginning readers (includes a few bigger words you can teach them)
{ "pile_set_name": "Pile-CC" }
Litmus test (politics)

A litmus test is a question asked of a potential candidate for high office, the answer to which would determine whether the nominating official would proceed with the appointment or nomination. The expression is a metaphor based on the litmus test in chemistry, in which one is able to test the general acidity of a substance, but not its exact pH. Those who must approve a nominee, such as a justice of the Supreme Court of the United States, may also be said to apply a litmus test to determine whether the nominee will receive their vote. In these contexts, the phrase comes up most often with respect to nominations to the judiciary.

Usage

The metaphor of a litmus test has been used in American politics since the mid-twentieth century. During United States presidential election campaigns, litmus tests the nominees might use are more fervently discussed when vacancies for the U.S. Supreme Court appear likely. Advocates for various social ideas or policies often wrangle heatedly over what litmus test, if any, the president ought to apply when nominating a new candidate for a spot on the Supreme Court. Support for, or opposition to, abortion is one example of a common decisive factor in single-issue politics; another might be support of strict constructionism. Defenders of litmus tests argue that some issues are so important that they overwhelm other concerns (especially if there are other qualified candidates that pass the test). The political litmus test is often used when appointing judges. However, this test to determine the political attitude of a nominee is not without error. Supreme Court Chief Justice Earl Warren was appointed under the impression that he was conservative, but his tenure was marked by liberal dissents. Today, the litmus test is used along with other methods, such as past voting records, when selecting political candidates.
The Republican Liberty Caucus is opposed to litmus tests for judges, stating in their goals that they "oppose ‘litmus tests’ for judicial nominees who are qualified and recognize that the sole function of the courts is to interpret the Constitution. We oppose judicial amendments or the crafting of new law by any court." Professor Eugene Volokh believes that the legitimacy of such tests is a "tough question", and argues that they may undermine the fairness of the judiciary: Imagine a justice testifies under oath before the Senate about his views on (say) abortion, and later reaches a contrary decision [after carefully examining the arguments]. "Perjury!" partisans on the relevant side will likely cry: They'll assume the statement made with an eye towards confirmation was a lie, rather than that the justice has genuinely changed his mind. Even if no calls for impeachment follow, the rancor and contempt towards the justice would be much greater than if he had simply disappointed his backers' expectations. Faced with that danger, a justice may well feel pressured into deciding the way that he testified, and rejecting attempts at persuasion. Yet that would be a violation of the judge's duty to sincerely consider the parties' arguments.
{ "pile_set_name": "Wikipedia (en)" }
EXCLUSIVE TMW - Bologna-Catania duel for Adejo

Currently about to leave Serie B side Reggina on a permanent basis, Nigerian defensive utility man Daniel Adejo (23), according to the latest rumours gathered exclusively by TuttoMercatoWeb, is tracked by both Bologna and Catania. Currently under contract with Anderlecht until June 2018 and already tracked by AC Milan, Benfica, Swansea City, Borussia Monchengladbach, Borussia Dortmund, Newcastle and Chelsea, Serbian international striker Aleksandar Mitrovic (20), according to latest rumours coming from both UK
{ "pile_set_name": "Pile-CC" }
-2 and 22. -44 51*0.05 2.55 76 * -0.4 -30.4 What is the product of -4 and -476? 1904 -6.812*4 -27.248 0.0457*-8 -0.3656 What is 0.0279 times -2? -0.0558 42*0.29 12.18 What is the product of -971 and -5? 4855 Multiply -0.58 and 0.2. -0.116 Multiply -0.4 and -199. 79.6 What is the product of -4 and 5? -20 Work out -0.1 * 33. -3.3 What is the product of -4 and 899.3? -3597.2 8 * -46 -368 What is the product of -0.049 and 0.7? -0.0343 0 * -330 0 What is 16 times -2.2? -35.2 Product of -257 and -0.3. 77.1 Product of 0.14 and -0.02. -0.0028 Work out 26.44 * -0.09. -2.3796 0.9*14.6 13.14 Product of 3 and -5.65. -16.95 Work out -0.5 * 0.79. -0.395 -1978 * 1 -1978 3 * 197.7 593.1 Calculate 28*-18. -504 Calculate -196.3*-8. 1570.4 Calculate 2*-0.161. -0.322 Product of -9 and 1.19. -10.71 Multiply 2 and 788. 1576 Calculate -0.096*-1. 0.096 -0.02 times 127 -2.54 What is the product of 125 and 4? 500 Multiply 8.4 and -79. -663.6 6*-51 -306 -1.593*-1 1.593 Calculate 111*5. 555 -3*0.26 -0.78 What is the product of 0.06 and 92? 5.52 Product of 0.5 and -163. -81.5 1 * 9.6 9.6 -8449 * -0.3 2534.7 Multiply 1 and -0.07. -0.07 9205 * -0.3 -2761.5 What is 0.755 times -19? -14.345 Work out 0.5 * -376. -188 What is -5912 times 0.6? -3547.2 Calculate -0.2*-264. 52.8 Multiply 0.7 and -250. -175 Work out -260 * -0.03. 7.8 What is -0.3 times 45? -13.5 0 * 1625 0 Product of -0.348 and 3. -1.044 What is the product of -15 and 80? -1200 What is -5 times -54? 270 What is the product of -0.1 and 547? -54.7 Work out 16842 * -2. -33684 What is -1 times 2.7? -2.7 -18 times 92 -1656 1.97 * -2 -3.94 What is the product of 0.08 and 3342? 267.36 What is the product of 0.2 and 3.6? 0.72 Calculate -28*-0.5. 14 Multiply 55.5 and 0.3. 16.65 Multiply 26 and 27. 702 -0.489*6.7 -3.2763 Product of 4 and -365. -1460 What is 0.7 times 3.8? 2.66 Work out -2.14 * -1. 2.14 0.054*0.01 0.00054 Work out -0.21 * -7.1. 1.491 Work out 0.2 * 0.11. 0.022 Calculate -4*0.62. -2.48 Product of 210 and -0.1. 
-21 Multiply 1 and -0.1583. -0.1583 What is the product of 1 and 1? 1 Multiply -154.8 and -5. 774 Work out 0.4 * -0.0479. -0.01916 What is the product of 105 and 0.005? 0.525 1.17 times -3 -3.51 Product of 0.34 and -0.5. -0.17 Calculate -2060*-2. 4120 Work out 0.06 * 3.1. 0.186 0.05 times 0.238 0.0119 What is 0.9 times 13? 11.7 Calculate -2*165.6. -331.2 Calculate 0.1*861. 86.1 0.2 times 0.21 0.042 -84 * -2 168 3 * 1067 3201 Product of 1 and -70. -70 Work out 0.066 * -72. -4.752 What is the product of 56 and -0.06? -3.36 Calculate -147.7*9. -1329.3 -4.954 * 5 -24.77 Multiply 6045 and -0.5. -3022.5 Work out 5 * -0.0658. -0.329 -5332*0.1 -533.2 Multiply 4 and -9.19. -36.76 Work out -28774 * 1. -28774 Multiply -2772 and 4. -11088 Calculate -0.5*-116. 58 -5 * -1649 8245 0.1*30.5 3.05 0.4 * -0.0033 -0.00132 38.22 times -1.1 -42.042 What is 4389 times 4? 17556 Multiply 1 and 2.2. 2.2 What is 0.3 times -0.044? -0.0132 Product of 0.5 and 63. 31.5 Work out -41.5 * 15. -622.5 Multiply -1128 and 0.1. -112.8 Calculate -4*244. -976 Work out -7 * -2.9. 20.3 Work out 0.85 * 5. 4.25 -581*-4 2324 Product of 1261 and -1. -1261 Calculate 113*-25. -2825 0.42 * -2 -0.84 228 * -19 -4332 What is the product of 0.0252 and 0.5? 0.0126 -84*18 -1512 What is the product of 37.9 and -6.3? -238.77 -0.5 times 389 -194.5 Product of -1.5 and 2062. -3093 What is 0.3 times 3.9? 1.17 4*563 2252 4318*-0.1 -431.8 4 * 24 96 Work out -0.0761 * -0.4. 0.03044 Calculate 12*1409. 16908 0.2 times 2651 530.2 -160 times 2 -320 0.5 * 0.115 0.0575 Work out -2078 * -3. 6234 What is the product of 1.15 and 39? 44.85 4*-7 -28 Work out -18 * 1.3. -23.4 Product of -0.01 and 0.5. -0.005 0.5 times 0.029 0.0145 Multiply -0.5 and 77. -38.5 Work out -28.66 * -5. 143.3 Calculate -0.3*-0.21. 0.063 Product of -8 and 6. -48 Product of -0.152 and -0.2. 0.0304 What is -49 times 204? -9996 Product of 5329 and 0.4. 2131.6 -0.12*-0.191 0.02292 705*4 2820 Work out 24 * -53. -1272 Multiply 0.1 and 45. 
4.5 -10*5 -50 Work out 0.497 * 1. 0.497 Calculate -0.28*93. -26.04 Product of -0.36 and -3. 1.08 Calculate 5*0.25. 1.25 Calculate -1061*0.07. -74.27 Calculate -0.057*-0.3. 0.0171 -0.12*-14.9 1.788 0.255 * -1 -0.255 Multiply -202 and 8. -1616 Calculate 0.5*0.58. 0.29 What is -5 times 11.1? -55.5 What is the product of -3 and 0.21? -0.63 Product of 2 and 1.48. 2.96 -0.2 times 63 -12.6 -1.5*0.8 -1.2 Product of 73 and -0.083. -6.059 16 * -1 -16 Calculate 8.54*6. 51.24 -2.5 * 11 -27.5 What is -0.024 times 0.07? -0.00168 What is -11 times 1? -11 What is 209 times -0.4? -83.6 -80 times 1.13 -90.4 Multiply 0.0635 and -7. -0.4445 What is -0.1 times -0.068? 0.0068 -2.8 * -7 19.6 0.703 * -1 -0.703 Calculate -4*-7.02. 28.08 -0.2 * -173 34.6 Calculate 275*-29. -7975 Calculate -39*-0.08. 3.12 Product of -0.244 and 5. -1.22 Multiply 0.32 and 1.4. 0.448 -0.1 * -3.5 0.35 Calculate -11442*-0.3. 3432.6 Calculate -0.52*0.5. -0.26 3.2 * 1 3.2 -84*-1 84 0.65*-0.06 -0.039 Calculate -66*-283. 18678 What is the product of 1.5 and -0.1? -0.15 Work out -0.6 * -2678. 1606.8 35 times 0.3 10.5 Calculate -2010*-5. 10050 Multiply 0.2 and 1147. 229.4 Work out -1 * 0.01227. -0.01227 Multiply -0.19 and -0.04. 0.0076 0*-141 0 Work out -2.615 * 0.3. -0.7845 Calculate 1593.3*-0.4. -637.32 1.7*-1 -1.7 737*1 737 Work out 218 * -13. -2834 Product of 0.1 and 4481. 448.1 Work out 8311 * -1. -8311 -49*15 -735 Calculate -0.6*62. -37.2 1.529*-0.08 -0.12232 24.16 * -0.1 -2.416 What is 3.77 times -0.3? -1.131 What is the product of 0.2 and 2.295? 0.459 Product of -7.2 and 0.12. -0.864 0.040789 times -1 -0.040789 Work out 0.04 * 17.6. 0.704 -2*-2.4 4.8 -0.3 times 2.7 -0.81 Calculate -1589*0.1. -158.9 Multiply 1 and 310. 310 -5 * -103 515 Calculate -1*-11.21. 11.21 Multiply -0.619 and -0.5. 0.3095 -0.21*-1 0.21 -0.5 * -0.16 0.08 3 * 45.6 136.8 Multiply -0.4 and 1039. -415.6 -0.2973 * 0.3 -0.08919 What is the product of -16.3 and -0.48? 7.824 Product of -3.06 and -0.1. 0.306 Multiply 79 and -89. 
-7031 Product of -6 and 698. -4188 Multiply -41.1 and 0.1. -4.11 Product of -5949 and 2. -11898 3.1*0.43 1.333 -6 times -17.2 103.2 Product of -0.095 and -0.07. 0.00665 Product of -11.9 and -0.6. 7.14 Work out -0.4 * -11. 4.4 Work out -5 * -9.5. 47.5 3 * -17 -51 Calculate 0.3*0.08. 0.024 Work out -38 * 49. -1862 What is the product of -3.5 and -8? 28 -0.1 * 0.7836 -0.07836 What is -0.2 times -108.8? 21.76 55 times -7 -385 Multiply 0.3 and 1.5. 0.45 Work out -666.3 * 0.1. -66.63 Calculate 0.4*-20. -8 Multiply 122 and -0.21. -25.62 0.46*0.54 0.2484 What is 0.0263 times -0.051? -0.0013413 Work out 5 * 255. 1275 What is the product of -0.6 and 1480? -888 What is the product of 33.1 and 0? 0 What is the product of 2 and -33? -66 2 times -40 -80 -0.09*6 -0.54 Work out 9 * 44. 396 Calculate 0.072*-0.3. -0.0216 What is the product of -1.5 and 60? -90 Calculate 0.13*0.08. 0.0104 Product of -3603 and 0.1. -360.3 Calculate -2*-176. 352 -0.047 * -0.0299 0.0014053 What is the product of 0.2 and -112? -22.4 Multiply -0.073 and 1. -0.073 What is -484 times 1.6? -774.4 What is -0.7 times 14? -9.8 Product of -27 and 45. -1215 Product of -0.4 and 9.6. -3.84 -0.3 * -0.68 0.204 0 * -283 0 -2 * 338 -676 1.2 times 0.5 0.6 96 times 0.05 4.8 0.09 * 0 0 -0.08 times -31 2.48 Product of -594 and -0.4. 237.6 Work out 11 * -100. -1100 -105 times 0.5 -52.5 Product of 0.4 and -343. -137.2 -0.6 * 9.5 -5.7 Work out 22 * 14. 308 Product of 0.6 and -27.1. -16.26 0.3 * -23 -6.9 Product of -25 and -0.5. 12.5 -1 * 89 -89 Calculate -3*2860. -8580 Calculate 22*-6. -132 19.2 * 5 96 Multiply -3.3 and 14. -46.2 Multiply 37 and -64. -2368 -0.54 times -82 44.28 757 * -4 -3028 Calculate -126*-0.3. 37.8 What is the product of -0.8 and -0.17? 0.136 Product of -145 and -0.7. 101.5 Multiply -0.3 and -6.4. 1.92 4*-2.88 -11.52 -0.137*-9 1.233 75.5 times 2 151 Calculate -0.151*0.5. -0.0755 Work out 370 * -1.2. -444 Product of -2.3 and -6. 13.8 What is the product of 0.008 and 21? 0.168 Multiply 94 and 8. 
752 Multiply -0.057435 and 0.1. -0.0057435 17.803 times 5 89.015 0.07 * -1 -0.07 Product of 5 and 293. 1465 Work out -3 * 0.26. -0.78 What is -2 times 0.3078? -0.6156 Product of -0.2 and 3.49. -0.698 What is the product of -5 and -193? 965 22 * 140 3080 -178 * -14 2492 What is the
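The products above are exact in base ten, which binary floating point does not guarantee (for instance, the exercise `3.1*0.43 1.333` can pick up a stray trailing digit in plain float arithmetic). A small sketch using Python's decimal module to reproduce the answers exactly:

```python
from decimal import Decimal

def product(a, b):
    """Multiply two numbers exactly in base ten."""
    return Decimal(str(a)) * Decimal(str(b))

print(3.1 * 0.43)          # plain floats may show a tiny trailing error
print(product(3.1, 0.43))  # 1.333
```

Converting through `str` preserves the written decimal digits, so the product carries no rounding at all.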
{ "pile_set_name": "DM Mathematics" }
Q: Get Custom Attributes Generic

I am creating a custom attribute, and I will use it in multiple classes:

    [AttributeUsage(AttributeTargets.Method | AttributeTargets.Property | AttributeTargets.Field | AttributeTargets.Parameter, AllowMultiple = false)]
    public sealed class Display : System.Attribute
    {
        public string Name { get; set; }
        public string Internal { get; set; }
    }

    public class Class1
    {
        [Display(Name = "ID")]
        public int ID { get; set; }

        [Display(Name = "Name")]
        public string Title { get; set; }
    }

    public class Class2
    {
        [Display(Name = "ID")]
        public int ID { get; set; }

        [Display(Name = "Name")]
        public string Title { get; set; }
    }

Here it is working correctly, but I want to make it as generic as possible, like the MVC example:

    Class1 class1 = new Class1();
    class1.Title.DisplayName() // returns the value "Name"

The only thing I could do was write a loop over the properties, but then I need to specify typeof(Class1):

    foreach (var prop in typeof(Class1).GetProperties())
    {
        var attrs = (Display[])prop.GetCustomAttributes(typeof(Display), false);
        foreach (var attr in attrs)
        {
            Console.WriteLine("{0}: {1}", prop.Name, attr.Name);
        }
    }

Is there any way to do it the way I want?

A: You can't do what you've shown exactly, because class1.Title is an expression that evaluates to a string. You are looking for metadata about the Title property. You can use Expression Trees to let you write something close.
Here's a short helper class that pulls the attribute value out of a property in an expression tree:

    using System;
    using System.Linq;
    using System.Linq.Expressions;

    public static class PropertyHelper
    {
        public static string GetDisplayName<T>(Expression<Func<T, object>> propertyExpression)
        {
            Expression expression;
            if (propertyExpression.Body.NodeType == ExpressionType.Convert)
            {
                // Value-type properties get wrapped in a boxing conversion; unwrap it.
                expression = ((UnaryExpression)propertyExpression.Body).Operand;
            }
            else
            {
                expression = propertyExpression.Body;
            }

            if (expression.NodeType != ExpressionType.MemberAccess)
            {
                throw new ArgumentException("Must be a property expression.", "propertyExpression");
            }

            var me = (MemberExpression)expression;
            var member = me.Member;

            var att = member.GetCustomAttributes(typeof(Display), false)
                            .OfType<Display>()
                            .FirstOrDefault();
            if (att != null)
            {
                return att.Name;
            }
            else
            {
                // No attribute found, just use the actual name.
                return member.Name;
            }
        }

        public static string GetDisplayName<T>(this T target, Expression<Func<T, object>> propertyExpression)
        {
            return GetDisplayName<T>(propertyExpression);
        }
    }

And here's some sample usage. Note that you really don't even need an instance to get this metadata, but I have included an extension method that may be convenient.

    public static void Main(string[] args)
    {
        Class1 class1 = new Class1();
        Console.WriteLine(class1.GetDisplayName(c => c.Title));
        Console.WriteLine(PropertyHelper.GetDisplayName<Class1>(c => c.Title));
    }
{ "pile_set_name": "StackExchange" }
Our Condolences To Voters In Single Candidate Districts

Now that the 2014 candidate filing period is over, we have a better idea of what the coming months of campaigning are going to be like. Unfortunately, things are not looking up for fans of political engagement.

Filing for political office wrapped up Friday, meaning the campaign season has started. Before things get too involved in that debate, we would like to take a minute to congratulate the candidates who have already won: those who drew no opposition in the election. Whether because their credibility with the public is so substantial, their political presence is so daunting or their opposition so disorganized, a lot of officeholders were guaranteed another term in office, which deserves public acknowledgement.

They close the editorial in much the same way.

But before that gets rolling, we congratulate the ones who have already won. Your willingness to lead the community and serve the people is laudable. You have our thanks and our best wishes.

But is this really something that is worthy of applause or congratulations? When we look at the numbers, we really don't see anything worth congratulating.

Of the five US Congress seats, one received no competition. District 1 Rep. Jim Bridenstine will be able to take his seat in Washington with no real effort or challenge. On the state level, we have 8 uncontested Senate seats and 50 uncontested House seats. That is 46% of all state legislative seats up for election. When we add in the 4 Senate and 14 House seats that will be decided in Oklahoma's closed partisan primaries, 57% of Oklahoma's legislative seats will be decided without the approval of the vast majority of their voters.

On a more positive note, there are three major statewide elections that will allow for a lot of civic engagement with voters. We have very active races for both US Senate seats and for Governor.
All three races have candidates from the Republican and Democratic parties and from Independents.

But for all those voters in US Congressional District 1, and all those voters in state legislative districts with no November election, we express our condolences. We wish you the best in the coming years. We know it must be difficult to have no say in who represents you.

The Case For Ballot Access Reform

Oklahomans for Ballot Access Reform is once again calling on state lawmakers to demonstrate their faith in democracy and hand the keys to the electoral process back over to the voting public. To this end, we have written and published a brief putting forth the evidence in support of Ballot Access Reform. Read and Share the Press Release and Ballot Access Brief.
{ "pile_set_name": "Pile-CC" }
Rails.application.configure do
  # Settings specified here will take precedence over those in config/application.rb.

  # Code is not reloaded between requests.
  config.cache_classes = true

  # Eager load code on boot. This eager loads most of Rails and
  # your application in memory, allowing both threaded web servers
  # and those relying on copy on write to perform better.
  # Rake tasks automatically ignore this option for performance.
  config.eager_load = true

  # Full error reports are disabled and caching is turned on.
  config.consider_all_requests_local = false
  config.action_controller.perform_caching = true

  # Ensures that a master key has been made available in either ENV["RAILS_MASTER_KEY"]
  # or in config/master.key. This key is used to decrypt credentials (and other encrypted files).
  # config.require_master_key = true

  # Disable serving static files from the `/public` folder by default since
  # Apache or NGINX already handles this.
  config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?

  # Compress CSS using a preprocessor.
  # config.assets.css_compressor = :sass

  # Do not fallback to assets pipeline if a precompiled asset is missed.
  config.assets.compile = false

  # Enable serving of images, stylesheets, and JavaScripts from an asset server.
  # config.action_controller.asset_host = 'http://assets.example.com'

  # Specifies the header that your server uses for sending files.
  # config.action_dispatch.x_sendfile_header = 'X-Sendfile' # for Apache
  # config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # for NGINX

  # Store uploaded files on the local file system (see config/storage.yml for options).
  config.active_storage.service = :local

  # Mount Action Cable outside main process or domain.
  # config.action_cable.mount_path = nil
  # config.action_cable.url = 'wss://example.com/cable'
  # config.action_cable.allowed_request_origins = [ 'http://example.com', /http:\/\/example.*/ ]

  # Force all access to the app over SSL, use Strict-Transport-Security, and use secure cookies.
  # config.force_ssl = true

  # Use the lowest log level to ensure availability of diagnostic information
  # when problems arise.
  config.log_level = :debug

  # Prepend all log lines with the following tags.
  config.log_tags = [ :request_id ]

  # Use a different cache store in production.
  # config.cache_store = :mem_cache_store

  # Use a real queuing backend for Active Job (and separate queues per environment).
  # config.active_job.queue_adapter = :resque
  # config.active_job.queue_name_prefix = "dummy_production"

  config.action_mailer.perform_caching = false

  # Ignore bad email addresses and do not raise email delivery errors.
  # Set this to true and configure the email server for immediate delivery to raise delivery errors.
  # config.action_mailer.raise_delivery_errors = false

  # Enable locale fallbacks for I18n (makes lookups for any locale fall back to
  # the I18n.default_locale when a translation cannot be found).
  config.i18n.fallbacks = true

  # Send deprecation notices to registered listeners.
  config.active_support.deprecation = :notify

  # Use default logging formatter so that PID and timestamp are not suppressed.
  config.log_formatter = ::Logger::Formatter.new

  # Use a different logger for distributed setups.
  # require 'syslog/logger'
  # config.logger = ActiveSupport::TaggedLogging.new(Syslog::Logger.new 'app-name')

  if ENV["RAILS_LOG_TO_STDOUT"].present?
    logger = ActiveSupport::Logger.new(STDOUT)
    logger.formatter = config.log_formatter
    config.logger = ActiveSupport::TaggedLogging.new(logger)
  end

  # Do not dump schema after migrations.
  config.active_record.dump_schema_after_migration = false

  # Inserts middleware to perform automatic connection switching.
  # The `database_selector` hash is used to pass options to the DatabaseSelector
  # middleware. The `delay` is used to determine how long to wait after a write
  # to send a subsequent read to the primary.
  #
  # The `database_resolver` class is used by the middleware to determine which
  # database is appropriate to use based on the time delay.
  #
  # The `database_resolver_context` class is used by the middleware to set
  # timestamps for the last write to the primary. The resolver uses the context
  # class timestamps to determine how long to wait before reading from the
  # replica.
  #
  # By default Rails will store a last write timestamp in the session. The
  # DatabaseSelector middleware is designed as such you can define your own
  # strategy for connection switching and pass that into the middleware through
  # these configuration options.
  # config.active_record.database_selector = { delay: 2.seconds }
  # config.active_record.database_resolver = ActiveRecord::Middleware::DatabaseSelector::Resolver
  # config.active_record.database_resolver_context = ActiveRecord::Middleware::DatabaseSelector::Resolver::Session
end
{ "pile_set_name": "Github" }
Put the bottom of your shoe to your face. Now, before you ask “has Ed gone plum crazy,” hear me out. Putting your cell phone there is just as unsanitary: that’s how much bacteria lives on those devices, according to some microbiologists. Now that you’re thoroughly grossed out, here’s a possible solution: Violight’s UV cell phone sanitizer.

The company has been producing UV toothbrush sanitizers since its founding in 2004. Ultraviolet light is a fairly reliable sanitizer and has seen increasing use when aiming to kill off bacteria. Violight claims its own system eradicates about 99.9 percent of these bad guys, including E. coli, strep, salmonella, listeria and even the H1N1 flu virus, in a matter of minutes.

In a demo, the device looked pretty easy to use. The cell phone is placed in the cradle, which includes a sensor to detect the weight of the device. If there’s a device in the unit, the sanitizer turns on when the cover is placed on top and starts the process. In about three minutes it’s done, indicated by a pulsing blue light on the front that turns off when the cycle is complete. Germaphobes rejoice: your phone is now clean. The sanitizer will retail for $49.95 when it becomes available in October, the company says.

The modern world’s desire for total isolation from all germs and bacteria is quite ridiculous. Soon, our immune systems will fail altogether from disuse. By contrast, exposure to germs and bacteria improves our ability to ward off disease. I don’t advocate leaving E. coli and salmonella on food to make us sick, but if we all become like Howie Mandel, we’ll all have warts and be unwilling to touch anyone or anything!
{ "pile_set_name": "Pile-CC" }
A cornea is a transparent membrane with a thickness of about 0.5 mm, and has a five-layer structure which consists of corneal epithelial cells, a Bowman membrane, corneal stroma, a Descemet's membrane and corneal endothelial cells, in this order from the surface of the body. Among these, although the corneal epithelial cells have high regenerating capability, a serious corneal ulcer might occur, or a nonreversible corneal opacity might be left, in the case where the disorder of the epithelial cells is severe. Therefore, there have been strong demands for measuring the degree of the disorder of the corneal epithelium quantitatively and carrying out a proper treatment at an earlier stage. Since the conventional method for measuring the degree of a disorder of the cornea mainly depends on a visual inspection method, it is difficult to quantitate the disorder of the corneal epithelium, and there has been a demand for a method for representing the degree of the disorder of the cornea by a numeric value and for quantitating the degree of the disorder thereof. In recent years, it has been known that by measuring a corneal transepithelial electric resistance (TER) that represents a barrier function of the cornea, the degree of the disorder of the cornea can be quantitatively measured (see Non-Patent Documents 1 to 3). The TER is indicated by the product of a corneal electric resistance value (Rc) and an area (a) of a portion to be measured, that is, TER = Rc × a. When the corneal epithelium has a disorder, the barrier function of the cornea is lowered, resulting in a low corneal TER value. The present inventors at first carried out experiments in which a cut-out cornea was fixed on a Ussing chamber, by which an accurate TER can be determined by using a short-circuit current method, with the value of the area (a) being constant; however, since the cut-out cornea is different in its state from that in a living body, the experiments failed to reflect a true biological reaction.
Therefore, the present inventors have found a method in which electrode needles are placed on the corneal epithelium and in the anterior chamber of a living eye and an electric resistance is measured by flowing an electric current between them (see Non-Patent Document 4). In this case, the measurement of TER is performed on a living cornea by using the principle of the Ussing chamber, and an accurate TER can be determined when the value of the measurement area (a) is constant. However, since this method is an invasive method that is accompanied by an anterior chamber centesis, it cannot be applied to humans, and it is not suitable for diagnosis and treatment of a corneal disorder in the clinical field. Non-Patent Document 5 has disclosed a method for measuring a corneal resistance value of a living eye by using a corneal contact lens (CL) provided with electrodes. Since this method is not invasive to the respective portions of the eye, it may be applicable to humans. However, in this method, since a suitable insulator is not provided, a reliable electric current cannot be made to flow through the corneal epithelium because of the lacrimal fluid present on the surface of the eye, and the resulting measured value is considered to mainly represent the state of the lacrimal fluid rather than the state of the corneal epithelium. Moreover, since the CL is fixed onto the cornea by suction, the measuring device itself might cause a corneal disorder. Furthermore, it has been reported in the Japanese Society for Ocular Pharmacology in 2007 that this method failed to detect the existence of small corneal erosion. It can be considered that the electrodes fail to sufficiently detect the corneal disorder, since the electric current to be measured does not completely pass through the corneal epithelium.
In addition, since this method does not take into consideration the area (a) through which the electric current flows, a TER that enables accurate evaluation of the corneal disorder cannot be determined. In this manner, the method of Non-Patent Document 5 has a problem with the detection sensitivity. Moreover, Patent Document 1 has disclosed a device for detecting a damage and the like of the corneal epithelium, and this device measures a reduction in a potential difference between a cornea and a sclera as an index for the damage. A corneal electric potential (Vc) serving as a parameter in Patent Document 1 is defined by the corneal electric resistance value (Rc) and an ion transporting function (current) (Ic) of the corneal endothelium, which is represented by Vc = Rc × Ic. Moreover, the corneal electric potential (Vc) is compared with the scleral electric potential (Vs), and k in Vc = kVs serves as an index indicating the corneal disorder. Consequently, k = Rc × Ic / Vs represents the corneal disorder. In this measurement, in order for the Rc to be reflected in the k, the Ic and the Vs need to be measured with high precision; however, in actual operation, the current Ic generated from the corneal endothelium is very weak and is not constant. Since the scleral electric potential Vs varies depending on measuring conditions and individuals, comparison with it does not allow the corneal disorder to be determined accurately. Moreover, as described earlier, the TER, which directly reflects the corneal disorder, is represented by TER = Rc × a, and in order to determine a TER, the area (a) through which a current passes needs to be determined; however, it is not accurately determined by this measurement. Consequently, it is not possible to accurately measure the corneal disorder by using this measuring method.
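The two quantities being contrasted can be made concrete: TER is the product of the corneal resistance and the measured area (TER = Rc × a), while the index of Patent Document 1 reduces to k = Rc × Ic / Vs, so k tracks Rc only if Ic and Vs are both stable. A small sketch of that algebra, with purely hypothetical numbers chosen for illustration (they are not measurements from any of the cited documents):

```python
def ter(rc, area):
    """Transepithelial electric resistance: TER = Rc * a."""
    return rc * area

def k_index(rc, ic, vs):
    """Index of Patent Document 1: k = Vc / Vs = (Rc * Ic) / Vs."""
    return rc * ic / vs

# With the area held constant, halving Rc halves TER directly ...
healthy = ter(1000.0, 0.5)  # 500.0
damaged = ter(500.0, 0.5)   # 250.0

# ... but k also moves with any drift in Ic, so a change in Rc can be
# masked entirely: here Rc halves while Ic doubles, and k is unchanged.
k1 = k_index(1000.0, 0.001, 2.0)  # 0.5
k2 = k_index(500.0, 0.002, 2.0)   # 0.5
```

This is exactly the criticism made above: because Ic is weak and non-constant and Vs varies between individuals, k cannot isolate Rc the way TER does.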
Although Patent Document 1 roughly observes the electrophysiological phenomenon that the electric resistance value is lowered upon occurrence of a corneal disorder, its measuring principle and accuracy are far behind the technique of the present application.

Patent Document 1: JP-Y-S48-10716
Non-Patent Document 1: Rojanasakul Y. et al., Int. J. Pharm., (66) 131-142 (1990)
Non-Patent Document 2: Rojanasakul Y. et al., Int. J. Pharm., (63) 1-16 (1990)
Non-Patent Document 3: Chetoni P. et al., Toxicol In Vitro, (17) 497-504 (2003)
Non-Patent Document 4: Uematsu M. et al., Ophthalmic Research, 39, 308-314, 2007
Non-Patent Document 5: Masamichi Fukuda et al., Atarashii Ganka (New Ophthalmology) 24(4): 521-525, 2007
{ "pile_set_name": "USPTO Backgrounds" }
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2016, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.haxx.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/

#include "curl_setup.h"

#if defined(__INTEL_COMPILER) && defined(__unix__)

#ifdef HAVE_NETINET_IN_H
#  include <netinet/in.h>
#endif
#ifdef HAVE_ARPA_INET_H
#  include <arpa/inet.h>
#endif

#endif /* __INTEL_COMPILER && __unix__ */

#define BUILDING_WARNLESS_C 1

#include "warnless.h"

#define CURL_MASK_SCHAR  0x7F
#define CURL_MASK_UCHAR  0xFF

#if (SIZEOF_SHORT == 2)
#  define CURL_MASK_SSHORT  0x7FFF
#  define CURL_MASK_USHORT  0xFFFF
#elif (SIZEOF_SHORT == 4)
#  define CURL_MASK_SSHORT  0x7FFFFFFF
#  define CURL_MASK_USHORT  0xFFFFFFFF
#elif (SIZEOF_SHORT == 8)
#  define CURL_MASK_SSHORT  0x7FFFFFFFFFFFFFFF
#  define CURL_MASK_USHORT  0xFFFFFFFFFFFFFFFF
#else
#  error "SIZEOF_SHORT not defined"
#endif

#if (SIZEOF_INT == 2)
#  define CURL_MASK_SINT  0x7FFF
#  define CURL_MASK_UINT  0xFFFF
#elif (SIZEOF_INT == 4)
#  define CURL_MASK_SINT  0x7FFFFFFF
#  define CURL_MASK_UINT  0xFFFFFFFF
#elif (SIZEOF_INT == 8)
#  define CURL_MASK_SINT  0x7FFFFFFFFFFFFFFF
#  define CURL_MASK_UINT  0xFFFFFFFFFFFFFFFF
#elif (SIZEOF_INT == 16)
#  define CURL_MASK_SINT  0x7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
#  define CURL_MASK_UINT  0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
#else
#  error "SIZEOF_INT not defined"
#endif

#if (CURL_SIZEOF_LONG == 2)
#  define CURL_MASK_SLONG  0x7FFFL
#  define CURL_MASK_ULONG  0xFFFFUL
#elif (CURL_SIZEOF_LONG == 4)
#  define CURL_MASK_SLONG  0x7FFFFFFFL
#  define CURL_MASK_ULONG  0xFFFFFFFFUL
#elif (CURL_SIZEOF_LONG == 8)
#  define CURL_MASK_SLONG  0x7FFFFFFFFFFFFFFFL
#  define CURL_MASK_ULONG  0xFFFFFFFFFFFFFFFFUL
#elif (CURL_SIZEOF_LONG == 16)
#  define CURL_MASK_SLONG  0x7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFL
#  define CURL_MASK_ULONG  0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFUL
#else
#  error "CURL_SIZEOF_LONG not defined"
#endif

#if (CURL_SIZEOF_CURL_OFF_T == 2)
#  define CURL_MASK_SCOFFT  CURL_OFF_T_C(0x7FFF)
#  define CURL_MASK_UCOFFT  CURL_OFF_TU_C(0xFFFF)
#elif (CURL_SIZEOF_CURL_OFF_T == 4)
#  define CURL_MASK_SCOFFT  CURL_OFF_T_C(0x7FFFFFFF)
#  define CURL_MASK_UCOFFT  CURL_OFF_TU_C(0xFFFFFFFF)
#elif (CURL_SIZEOF_CURL_OFF_T == 8)
#  define CURL_MASK_SCOFFT  CURL_OFF_T_C(0x7FFFFFFFFFFFFFFF)
#  define CURL_MASK_UCOFFT  CURL_OFF_TU_C(0xFFFFFFFFFFFFFFFF)
#elif (CURL_SIZEOF_CURL_OFF_T == 16)
#  define CURL_MASK_SCOFFT  CURL_OFF_T_C(0x7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF)
#  define CURL_MASK_UCOFFT  CURL_OFF_TU_C(0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF)
#else
#  error "CURL_SIZEOF_CURL_OFF_T not defined"
#endif

#if (SIZEOF_SIZE_T == SIZEOF_SHORT)
#  define CURL_MASK_SSIZE_T  CURL_MASK_SSHORT
#  define CURL_MASK_USIZE_T  CURL_MASK_USHORT
#elif (SIZEOF_SIZE_T == SIZEOF_INT)
#  define CURL_MASK_SSIZE_T  CURL_MASK_SINT
#  define CURL_MASK_USIZE_T  CURL_MASK_UINT
#elif (SIZEOF_SIZE_T == CURL_SIZEOF_LONG)
#  define CURL_MASK_SSIZE_T  CURL_MASK_SLONG
#  define CURL_MASK_USIZE_T  CURL_MASK_ULONG
#elif (SIZEOF_SIZE_T == CURL_SIZEOF_CURL_OFF_T)
#  define CURL_MASK_SSIZE_T  CURL_MASK_SCOFFT
#  define CURL_MASK_USIZE_T  CURL_MASK_UCOFFT
#else
#  error "SIZEOF_SIZE_T not defined"
#endif

/*
** unsigned long to unsigned short
*/

unsigned short curlx_ultous(unsigned long ulnum)
{
#ifdef __INTEL_COMPILER
#  pragma warning(push)
#  pragma warning(disable:810) /* conversion may lose significant bits
*/ #endif DEBUGASSERT(ulnum <= (unsigned long) CURL_MASK_USHORT); return (unsigned short)(ulnum & (unsigned long) CURL_MASK_USHORT); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** unsigned long to unsigned char */ unsigned char curlx_ultouc(unsigned long ulnum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(ulnum <= (unsigned long) CURL_MASK_UCHAR); return (unsigned char)(ulnum & (unsigned long) CURL_MASK_UCHAR); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** unsigned long to signed int */ int curlx_ultosi(unsigned long ulnum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(ulnum <= (unsigned long) CURL_MASK_SINT); return (int)(ulnum & (unsigned long) CURL_MASK_SINT); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** unsigned size_t to signed curl_off_t */ curl_off_t curlx_uztoso(size_t uznum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #elif defined(_MSC_VER) # pragma warning(push) # pragma warning(disable:4310) /* cast truncates constant value */ #endif DEBUGASSERT(uznum <= (size_t) CURL_MASK_SCOFFT); return (curl_off_t)(uznum & (size_t) CURL_MASK_SCOFFT); #if defined(__INTEL_COMPILER) || defined(_MSC_VER) # pragma warning(pop) #endif } /* ** unsigned size_t to signed int */ int curlx_uztosi(size_t uznum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(uznum <= (size_t) CURL_MASK_SINT); return (int)(uznum & (size_t) CURL_MASK_SINT); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** unsigned size_t to unsigned long */ unsigned long curlx_uztoul(size_t uznum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose 
significant bits */ #endif #if (CURL_SIZEOF_LONG < SIZEOF_SIZE_T) DEBUGASSERT(uznum <= (size_t) CURL_MASK_ULONG); #endif return (unsigned long)(uznum & (size_t) CURL_MASK_ULONG); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** unsigned size_t to unsigned int */ unsigned int curlx_uztoui(size_t uznum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif #if (SIZEOF_INT < SIZEOF_SIZE_T) DEBUGASSERT(uznum <= (size_t) CURL_MASK_UINT); #endif return (unsigned int)(uznum & (size_t) CURL_MASK_UINT); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** signed long to signed int */ int curlx_sltosi(long slnum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(slnum >= 0); #if (SIZEOF_INT < CURL_SIZEOF_LONG) DEBUGASSERT((unsigned long) slnum <= (unsigned long) CURL_MASK_SINT); #endif return (int)(slnum & (long) CURL_MASK_SINT); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** signed long to unsigned int */ unsigned int curlx_sltoui(long slnum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(slnum >= 0); #if (SIZEOF_INT < CURL_SIZEOF_LONG) DEBUGASSERT((unsigned long) slnum <= (unsigned long) CURL_MASK_UINT); #endif return (unsigned int)(slnum & (long) CURL_MASK_UINT); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** signed long to unsigned short */ unsigned short curlx_sltous(long slnum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(slnum >= 0); DEBUGASSERT((unsigned long) slnum <= (unsigned long) CURL_MASK_USHORT); return (unsigned short)(slnum & (long) CURL_MASK_USHORT); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** unsigned size_t to signed ssize_t */ ssize_t 
curlx_uztosz(size_t uznum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(uznum <= (size_t) CURL_MASK_SSIZE_T); return (ssize_t)(uznum & (size_t) CURL_MASK_SSIZE_T); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** signed curl_off_t to unsigned size_t */ size_t curlx_sotouz(curl_off_t sonum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(sonum >= 0); return (size_t)(sonum & (curl_off_t) CURL_MASK_USIZE_T); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** signed ssize_t to signed int */ int curlx_sztosi(ssize_t sznum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(sznum >= 0); #if (SIZEOF_INT < SIZEOF_SIZE_T) DEBUGASSERT((size_t) sznum <= (size_t) CURL_MASK_SINT); #endif return (int)(sznum & (ssize_t) CURL_MASK_SINT); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** unsigned int to unsigned short */ unsigned short curlx_uitous(unsigned int uinum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(uinum <= (unsigned int) CURL_MASK_USHORT); return (unsigned short) (uinum & (unsigned int) CURL_MASK_USHORT); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** unsigned int to unsigned char */ unsigned char curlx_uitouc(unsigned int uinum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(uinum <= (unsigned int) CURL_MASK_UCHAR); return (unsigned char) (uinum & (unsigned int) CURL_MASK_UCHAR); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** unsigned int to signed int */ int curlx_uitosi(unsigned int uinum) { #ifdef __INTEL_COMPILER # pragma warning(push) # 
pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(uinum <= (unsigned int) CURL_MASK_SINT); return (int) (uinum & (unsigned int) CURL_MASK_SINT); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } /* ** signed int to unsigned size_t */ size_t curlx_sitouz(int sinum) { #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable:810) /* conversion may lose significant bits */ #endif DEBUGASSERT(sinum >= 0); return (size_t) sinum; #ifdef __INTEL_COMPILER # pragma warning(pop) #endif } #ifdef USE_WINSOCK /* ** curl_socket_t to signed int */ int curlx_sktosi(curl_socket_t s) { return (int)((ssize_t) s); } /* ** signed int to curl_socket_t */ curl_socket_t curlx_sitosk(int i) { return (curl_socket_t)((ssize_t) i); } #endif /* USE_WINSOCK */ #if defined(WIN32) || defined(_WIN32) ssize_t curlx_read(int fd, void *buf, size_t count) { return (ssize_t)read(fd, buf, curlx_uztoui(count)); } ssize_t curlx_write(int fd, const void *buf, size_t count) { return (ssize_t)write(fd, buf, curlx_uztoui(count)); } #endif /* WIN32 || _WIN32 */ #if defined(__INTEL_COMPILER) && defined(__unix__) int curlx_FD_ISSET(int fd, fd_set *fdset) { #pragma warning(push) #pragma warning(disable:1469) /* clobber ignored */ return FD_ISSET(fd, fdset); #pragma warning(pop) } void curlx_FD_SET(int fd, fd_set *fdset) { #pragma warning(push) #pragma warning(disable:1469) /* clobber ignored */ FD_SET(fd, fdset); #pragma warning(pop) } void curlx_FD_ZERO(fd_set *fdset) { #pragma warning(push) #pragma warning(disable:593) /* variable was set but never used */ FD_ZERO(fdset); #pragma warning(pop) } unsigned short curlx_htons(unsigned short usnum) { #if (__INTEL_COMPILER == 910) && defined(__i386__) return (unsigned short)(((usnum << 8) & 0xFF00) | ((usnum >> 8) & 0x00FF)); #else #pragma warning(push) #pragma warning(disable:810) /* conversion may lose significant bits */ return htons(usnum); #pragma warning(pop) #endif } unsigned short curlx_ntohs(unsigned 
short usnum) { #if (__INTEL_COMPILER == 910) && defined(__i386__) return (unsigned short)(((usnum << 8) & 0xFF00) | ((usnum >> 8) & 0x00FF)); #else #pragma warning(push) #pragma warning(disable:810) /* conversion may lose significant bits */ return ntohs(usnum); #pragma warning(pop) #endif } #endif /* __INTEL_COMPILER && __unix__ */
{ "pile_set_name": "Github" }
Ignite EvoKnit 3D Black / Black Stockists The Puma Ignite Evoknit 3D Pack is a set of new colourways of this technically progressive runner that was introduced at the end of last year. The finely woven Evoknit upper offers lightweight flexibility while letting your foot breathe. The Ignite midsole gives excellent cushioning and traction.
{ "pile_set_name": "Pile-CC" }
1. Field of the Invention The present invention relates to desalination units, and more specifically to using photovoltaic cells and wind electric power generators to power a reverse osmosis desalination unit. 2. Description of the Related Art Although the use of wind energy for electricity, solar energy for distillation or desalination, or a combination thereof has been attempted in the related art, the use of the present combination of wind energy using a vertical-axis wind turbine, solar energy using photovoltaic cells, and a desalination unit based on the reverse osmosis process to provide portability and increased efficiency and effectiveness is believed to be nonexistent in the literature. Thus, a portable and autonomous desalination system solving the aforementioned problems is desired.
{ "pile_set_name": "USPTO Backgrounds" }
Assessment of the efficacy and safety of pre-emptive anti-cytomegalovirus (CMV) therapy in HIV-infected patients with CMV viraemia. A number of studies have demonstrated that cytomegalovirus (CMV) viraemia is a strong predictor for CMV end-organ disease (EOD) and death in HIV-infected patients. We assess the efficacy and safety of pre-emptive anti-CMV therapy (PACT) for preventing these events. We performed a retrospective study of all HIV-infected patients seen in our institution who had detectable CMV viraemia in 2007. Seventy-one patients with advanced HIV disease (median CD4 cell count = 61 cells/mm(3)) were studied. Sixteen patients received PACT (mainly valganciclovir). Patients who received PACT had lower CD4 cell counts and higher blood CMV DNA levels. The cumulative incidence of CMV EOD and death at one year was 44% and 21% in patients with and without PACT, respectively (p = 0.013). Both PACT and high blood CMV DNA levels were significantly associated with CMV EOD and death in unadjusted analysis. In adjusted analyses, only blood CMV DNA levels remained significantly associated with the risk of CMV EOD and death, whereas PACT was associated with a non-significant trend towards reduced CMV EOD or death (hazard ratio: 0.25, p = 0.13). Five patients with PACT experienced severe drug-related adverse events. In conclusion, the use of PACT in HIV-infected patients with CMV viraemia could improve outcome but is associated with significant toxicity.
{ "pile_set_name": "PubMed Abstracts" }
Q: Colors in Android

I am using colors in Android but I don't know the format. Here's a working example in Java:

    public Color getColor(int i) {
        switch (i % 8) {
            case 0: return Color.blue;
            case 1: return Color.red;
            case 2: return Color.magenta;
            case 3: return Color.orange;
            case 4: return Color.pink;
            case 5: return Color.green;
            case 6: return Color.cyan;
            case 7: return new Color(250, 220, 100);
        }
        return new Color((i*80)%255, (i*80)%255, (i*80)%255);
    }

How can I create this using Android? What I'm trying to do is return a color of eight possibilities.

A: You can follow the code given by @blackbelt here in this answer:

    public int getColor(int i) {
        switch (i % 8) {
            case 0: return Color.WHITE;
            case 1: return Color.RED;
            case 2: return Color.YELLOW;
            case 3: return Color.BLUE;
            case 4: return Color.GREEN;
            case 5: return Color.BLACK;
        }
        return Color.rgb((i*80)%255, (i*80)%255, (i*80)%255);
    }

(Note that Android's Color class has no makeColor() method; Color.rgb() is the call that builds a color int from red, green, and blue components.)

As better practice, though, rather than hard-coding values this way, define your colors as resources.

FILE LOCATION:
res/values/colors.xml
The filename is arbitrary. The element's name will be used as the resource ID.

RESOURCE REFERENCE:
In Java: R.color.color_name
In XML: @[package:]color/color_name

SYNTAX:

    <?xml version="1.0" encoding="utf-8"?>
    <resources>
        <color name="color_name">hex_color</color>
    </resources>

ELEMENTS:
<resources> Required. This must be the root node. No attributes.
<color> A color expressed in hexadecimal, as described above.
attributes: name String. A name for the color. This will be used as the resource ID.

EXAMPLE:
XML file saved at res/values/colors.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <resources>
        <color name="opaque_red">#f00</color>
        <color name="translucent_red">#80ff0000</color>
    </resources>

This application code retrieves the color resource:

    Resources res = getResources();
    int color = res.getColor(R.color.opaque_red);

This layout XML applies the color to an attribute:

    <TextView
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:textColor="@color/translucent_red"
        android:text="Hello"/>

Complete example. Create /res/values/colors.xml to add our color resources in XML:

    <?xml version="1.0" encoding="utf-8"?>
    <resources>
        <color name="background_color">#f0f0f0</color>
        <color name="text_color_red">#ff0000</color>
    </resources>

Modify main.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:background="@color/background_color" >
        <TextView
            android:id="@+id/text"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:text="@string/hello" />
    </LinearLayout>

Java code:

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;

    public class MainActivity extends Activity {
        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            TextView text = (TextView) findViewById(R.id.text);
            text.setTextColor(getResources().getColor(R.color.text_color_red));
        }
    }

Please follow http://developer.android.com/guide/topics/resources/more-resources.html
For example http://android-er.blogspot.in/2010/03/using-color-in-android.html

A: Color constant names are uppercase on Android, and a color is an int value, so you should change the return type from Color to int. You should also rename the constants to match Android's, and build colors from integer components with Color.rgb() (Android has no ORANGE or PINK constants, so those two need explicit RGB values):

    public int getColor(int i) {
        switch (i % 8) {
            case 0: return Color.BLUE;
            case 1: return Color.RED;
            case 2: return Color.MAGENTA;
            case 3: return Color.rgb(255, 165, 0);   // orange
            case 4: return Color.rgb(255, 192, 203); // pink
            case 5: return Color.GREEN;
            case 6: return Color.CYAN;
            case 7: return Color.rgb(250, 220, 100);
        }
        return Color.rgb((i*80)%255, (i*80)%255, (i*80)%255);
    }

A:

    public int getColor(int i) {
        switch (i % 8) {
            case 0: return Color.BLUE;
            case 1: return Color.RED;
            case 2: return Color.MAGENTA;
            case 3: return Color.GRAY;
            case 4: return Color.YELLOW;
            case 5: return Color.GREEN;
            case 6: return Color.CYAN;
            case 7: return 0xFF993399; // note: a plain 993399 is not a valid opaque color int
        }
        Random rnd = new Random();
        return Color.argb(255, rnd.nextInt(256), rnd.nextInt(256), rnd.nextInt(256));
    }
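All of the answers rely on Android packing a color into a single int laid out as 0xAARRGGBB. A small plain-Java sketch of that packing, where packRgb is a hypothetical stand-in for Android's Color.rgb() (which performs the same bit arithmetic for in-range components):

```java
public class ColorDemo {
    // Hypothetical stand-in for android.graphics.Color.rgb():
    // pack an opaque color into a single int as 0xAARRGGBB,
    // with alpha forced to 0xFF (fully opaque).
    public static int packRgb(int r, int g, int b) {
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // 250, 220, 100 -> 0xFFFADC64
        System.out.println(Integer.toHexString(packRgb(250, 220, 100))); // prints "fffadc64"
    }
}
```

This also explains why the literal 993399 in the last answer is wrong: as a decimal int its alpha byte is 0x00, so the color would be fully transparent, whereas 0xFF993399 is the intended opaque purple.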
{ "pile_set_name": "StackExchange" }
Diogo de Carvalho Cabral
Although it has received less scholarly attention than firearms, microbes, domestic animals and plants, market economy, and statecraft, alphabetic reading and writing was crucial in the European conquest and colonization of the Americas from the late 15th century on. Unlike the agrarian empires the Spaniards encountered in the Andes and the Mexican highlands, the Portuguese frontier advanced upon tribal peoples who relied exclusively on oral language, such as the Tupi of Atlantic Brazil. These were semi-sedentary horticultural villagers whose entire socio-ecology (myths and knowledge, territoriality, subsistence strategies, etc.) was conditioned by the face-to-faceness and fugacity of spoken words. In turn, their Portuguese colonizers—for a while rivaled by the French, who enjoyed short periods of stable settlement through the early 17th century—were urban-based, oceangoing merchants, bureaucrats, soldiers, and religious missionaries whose organization strictly depended on the durability and transferability of written texts. Even if most of the Portuguese who came to Brazil in the 16th century were themselves illiterate, colonization as a social enterprise framed their actions according to prescribed roles set down in writing (both handwriting and printed script). Thus, the Portuguese colonization of Brazilian native lands and human populations can be interpreted from the point of view of the imposition of an alphabetically organized way of life.
Two major dimensions of this “letterscaping” can be discerned as to its impact on Amerindian bodies (human and nonhuman) and modes of understanding. Although the 16th century was only the introductory act in that drama, its historical record shows the basic outlines of the alphabetic colonization that would play out through the early 19th century: native decimation and enslavement, territory usurpation by sesmaria grants, forest recovery in former native croplands (then resignified as “virgin forest”), loss of native ecological knowledge not recorded in writing, disempowerment of native cultural attunement to the wild soundscape, among other processes.

Edward D. Melillo
Since the early 1800s, Chileans have imagined their nation’s history and destiny through an ever-changing array of transoceanic connections with the rest of the planet. At a deeper level, Chile’s relationship with the Pacific Ocean is built upon myriad collective memories and aspirational identities. The long arc of Chile’s linkages with the Pacific World—or the peoples and ecosystems in and around the Pacific Ocean—has yet to be fully explored by historians. This article fills this lacuna by analyzing five diverse historical episodes that span more than two centuries: first, Valparaíso’s growth into a Pacific commercial hub during the early 1800s; second, Chile’s role in the Californian and Australian gold rushes of the mid-1800s; third, the Chilean victory in the late-19th-century War of the Pacific; fourth, Chile’s burgeoning commercial relationship with China, which began in the years following the Second World War; and, finally, the emergence of a Chilean-Pacific variant of neoliberal ideology in the final decades of the 20th century.
These five developments reveal a litany of ambiguities and antagonisms in Chile’s complicated, ongoing association with its western ocean.

Mikael D. Wolfe
What role did drought play in the outbreak of the Mexican Revolution of 1910? Although historians of the Mexican Revolution acknowledge that the effects of drought helped catalyze it, they have not explored in any depth what connects drought to revolution. Instead, they usually subsume it within a more general discussion of agricultural cycles to explain the conduct and fortunes of popular revolutionary armies. In particular, they reference the onset of drought between 1907 and 1909 as exacerbating an economic downturn induced by severe recession in the United States. By then, Mexico had become economically integrated with its northern neighbor through rapidly growing foreign investment, trade, and cross-border migration facilitated by the railroad transportation revolution. These socioeconomic and ecological factors together led to steep declines in wages and earnings, devastating crop failures, spikes in food prices (principally corn and beans), and even famine in the lower and middle classes. Although suggestive, such passing references to drought in the historiography of the revolution do not furnish a clear picture of its effects and how they may have contributed to social and political conflict. In the 21st century, new technologies, methods, and sources—from historical meteorological reports and climate-related accounts gleaned from archival sources to modern historical climatological data reconstructions—facilitate doing more rigorous climate history.
This article provides a sampling of these methods and sources on the role of drought in late 19th- and early 20th-century Mexico that can supplement, elucidate, and even revise our understanding of the origins of the Mexican Revolution.

Guillermo Castro H.
An environmental crisis is neither the result of a single factor nor of a combination of such. On the contrary, it results from a complex combination of modes of interaction between natural and social systems, operating for periods in time and space. This holds true for the environmental crisis in Latin America, understood within the context of the first global environmental crisis in the history of our species. The combination of facts and processes with respect to the crisis in Latin America is associated with three distinct and interdependent historical periods: (1) The first period, one of long duration, marks the interaction with the natural world of the first humans to occupy the Americas and encompasses a timespan of at least 15,000 years before the European conquest of 1500–1550. (2) The second period, one of medium duration, corresponds to European control of the region between the 16th and 19th centuries, a timespan that witnessed the creation of tributary societies grounded in noncapitalist forms of organization, such as the indigenous commune, feudal primogeniture, and the great ecclesiastical properties, which were characteristics of peripheral economies that existed within the wider framework of the emerging modern global economic system. (3) The third period, one of shorter duration, extended from 1870 to 1970 during which capitalist forms of relationships between social systems and natural systems in the region developed.
This period was succeeded, beginning about 1980, by decades of transition and crisis, a process that is still ongoing. In this transition, old and unresolved conflicts reemerge in a new context, which combines indigenous and peasant resistance to incorporation into a market economy with the fight of urban dwellers for access to the basic environmental conditions for life, such as safe drinking water, waste disposal, energy, and clean air. In this scenario, a culture of nature is taking shape, which combines general democratic demands with values and visions from indigenous and African American cultures and those of a middle-class intellectuality increasingly linked to global environmentalism. This culture faces state policies often associated with the interests of transnational corporations and complex negotiation processes for agreements on global environmental problems. In this process, the actions of the past have led to the emergence of a great diversity of development options, all of which are centered in one basic fact: that, in Latin America as elsewhere around the world, if we want a different environment we need to create different societies.

Stuart McCook
Coffee has played complex and diverse roles in shaping livelihoods and landscapes in Latin America. This tropical understory tree has been profitably cultivated on large estates, on peasant smallholdings, and at many scales in between. Coffee exports have fueled the economies of many parts of Latin America. At first, coffee farmers cleared and burned tropical forests to make way for their farms and increase production. Early farms benefited from the humus accumulated over centuries. In Brazil, farmers treated these tropical soils as nonrenewable resources and abandoned their farms once the soils were exhausted.
In smaller coffee farms along the Cordillera—from Peru up to Mexico—coffee farming was not quite as wasteful of forests and soils. In the mid-20th century, scientific innovation in coffee farming became more widespread, especially in established coffee zones that were struggling with decreasing soil fertility, increasing soil erosion, and new diseases and pests. In the 1970s, national and international organizations promoted large-scale programs to “renovate” coffee production. These programs sought to dramatically increase productivity on coffee farms by eliminating shade, cultivating high-yielding coffee cultivars, and using chemical fertilizers and pesticides. Renovation brought tremendous gains in productivity over the short term, but at the cost of added economic and environmental vulnerability over the longer term. Since the end of the International Coffee Agreement in 1989, the global coffee market has become much more volatile. New coffee pioneer fronts are opening up in Brazil, Peru, and Honduras, while elsewhere coffee production is shrinking. NGOs and coffee farmers have promoted new forms of coffee production, especially Fair Trade and certified organic coffee. Still, most coffee farms in Latin America remain “conventional” farms, using a hybrid of modern and traditional tools. Economic and environmental sustainability remain elusive goals for many coffee farmers, and the threat is likely to increase as they grapple with the effects of climate change.

Gregory T. Cushman
Agrarian societies in Latin America and the Caribbean have accomplished some of the most important and influential innovations in agricultural knowledge and practice in world history—both ancient and modern.
These enabled indigenous civilizations in Mesoamerica and the Andes to attain some of the highest population densities and levels of cultural accomplishment of the premodern world. During the colonial era, produce from the region’s haciendas, plantations, and smallholdings provided an essential ecological underpinning for the development of the world’s first truly global networks of trade. From the 18th to the early 20th century, the transnational activities of agricultural improvers helped turn the region into one of the world’s primary exporters of agricultural commodities. This was one of the most tangible outcomes of the Enlightenment and early state-building efforts in the hemisphere. During the second half of the 20th century, the region provided a prime testing ground for input-intensive farming practices associated with the Green Revolution, which developed in close relation with import-substituting industrialization and technocratic forms of governance. The ability of farmers and ranchers to intensify production from the land using new cultivars, technologies, and techniques was critical to all of these accomplishments, but often occurred at the cost of irreversible environmental transformation and violent social conflict. Manure was often central to these histories of intensification because of its importance to the cycling of nutrients. The history of the extraction and use of guano as a fertilizer profoundly shaped the globalization of input-intensive agricultural practices around the globe, and exemplifies often-overlooked connectivities reaching across regional boundaries and between terrestrial and aquatic environments.

Sherry Johnson
The Caribbean’s most emblematic weather symbol is the hurricane, a large rotating storm that can bring destructive winds, coastal and inland flooding, and torrential rain. A hurricane begins as a tropical depression, an area of low atmospheric pressure that produces clouds and thunderstorms. Hurricane season in the Caribbean runs from June 1 through November 30, although there have been infrequent storms that formed outside these dates. Hurricanes are classified according to their maximum wind speed, and when a tropical system reaches the wind speed of a tropical storm (35 mph), it is given a name. Lists of names, which are rotated periodically, are specific to certain regions. If a named storm is responsible for causing a significant number of deaths or property damage, the name is retired and replaced with another. Most deaths in a storm come from drowning, from storm surge along the coast or from flooding or mudslides in the interior. Storm-related deaths also occur when structures collapse or when victims are struck by flying debris. One important and underestimated cause of death after the passage of a storm is disease. Even if the destruction is not immediate, the passage of a hurricane can leave significant ecological damage along the coast and in the interior. Hurricanes can have a devastating effect on a community that takes a direct hit. Repeated hurricane strikes can leave a sense of helplessness and hopelessness, “hurricane fatigue.” Conversely, survivors of a disaster are often left with a feeling of confidence that, since they have endured the effects of at least one deadly hurricane, they can do so again. Until the last half of the 18th century, meteorology remained primitive, but the Age of Enlightenment brought scientific and ideological advances. Major beneficiaries were royal navies whose navigation manuals and nautical charts became increasingly more accurate. In 1821, William C.
Redfield established the circular nature of storms and their counterclockwise rotation, while other scientists showed how wind currents within the storms moved upward. Once the coiled structure of hurricanes were established by mid-century, the term “cyclone” was applied, based upon the Greek word for the coils of a snake. After the mid-19th century, scientists moved from information gathering to attempts to predict hurricane strikes. Technology, in the form of the telegraph, was a key component in creating a forecasting system aided by organizations such as the Colegio de Belén, in Havana, Cuba. Later in the century, governments worldwide created official observation networks in which weather reports were radiotelegraphed from ships at sea to stations on land. The 20th century experienced advances, such as the use of kites and balloons, and the introduction of weather reconnaissance aircraft during World War II. In April 1960, the first satellite was launched to observe weather patterns, and by the early 1980s, ocean buoys and sophisticated radar systems made forecasts increasingly more accurate. Jeffrey M. Banister and Stacie G. Widdifield
Historians have extensively explored the topic of water control in Mexico City. From the relationship between political power and hydraulics to detailed studies of drainage and other large-scale infrastructure projects, the epic story of water in this megalopolis, constructed over a series of ancient lakes, continues to captivate people’s imaginations. Securing potable water for the fast-growing city is also a constant struggle, yet it has received comparatively less attention than drainage in historical research.
Moreover, until quite recently scholars have not been especially concerned with water control as a process of representation—that is, a process shaped by, and shaping, visual culture. Yet, potable water brings together many stories about people and places both within and outside of the Basin of Mexico. As such, the history of potable water is communicated through a diverse array of objects and modern infrastructures not limited to the idea of waterworks in the traditional sense of the term. A more expansive view of “infrastructure” incorporates more than the commonplace objects of hydraulic management such as aqueducts, pumps, wells, and pipes: it also involves architecture, photography, and narrative history, official and unofficial. Built in the first decade of the 20th century as a response to acute water shortages, the impressively modern Xochimilco Potable Water Works exemplifies a system that delivered far more than fresh drinking water through its series of modern electric pumps and aqueduct. The system was a result of a larger modernization initiative launched by the administration of Porfirio Díaz (1876–1911). It wove together an official history of water, which included the annexation of Xochimilco’s springs, through its diverse infrastructures, including the engineering of the potable water system as well as the significance of the structures themselves in terms of locations and architectural elaboration in neo-styles (also known as historical styles) typical of the period. Demonstrably clear from the sheer investment in making the Xochimilco waterworks appealing to the public is that infrastructure can possess a rich visual culture of its own. Myron Echenberg
During his breathtaking 19th-century scientific explorations of New Spain (as Mexico was known under Spanish rule), illustrious German scientific traveler Alexander von Humboldt crammed a lifetime of scientific studies into one extraordinary year: exhausting inspections of three major colonial silver mines, prodigious hikes to the summits of most of Mexico’s major volcanoes while taking scientific measurements and botanical samples, careful study of hitherto secret Spanish colonial archives in Mexico City, and visits to recently uncovered archaeological sites of pre-Hispanic cultures. Humboldt wrote voluminously about his Mexican experiences and is an indispensable source of insights into the colony of New Spain on the eve of its troubled birth as independent Mexico a decade later. Aviva Chomsky
Latin American labor has a well-established historiography, in dialogue with trends outside of the region. Environmental history is a newer and more exploratory field. In basic terms, environmental history explores the relationships of humans with the natural world, sometimes referred to as “nonhuman nature.” This can include how humans have affected the natural world, how the natural world has affected human history, and histories of human ideas and belief systems about nature. Labor and environmental history grows from explorations of the connections between these two spheres. Humans interact with the natural world through their labor and from their class perspective. The natural world shapes the work that people do and the institutions and structures humans create to organize and control labor. Changing labor regimes change the ways that humans interact with, and think about, the natural world.
Both labor and environmental histories are in some senses investigations of how humans relate to nature. This essay sets Latin American labor and environmental history in global historical context. After offering a chronological summary, it briefly examines connections between U.S. Latino and Latin American labor and environmental histories, and ends with a discussion of contemporary Latin American critical environmentalisms. PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, LATIN AMERICAN HISTORY (latinamericanhistory.oxfordre.com). (c) Oxford University Press USA, 2016. All Rights Reserved. Personal use only; commercial use is strictly prohibited. Please see applicable Privacy Policy and Legal Notice (for details see Privacy Policy).
[A Super-Elderly Patient with Recurrent Colon Cancer with Metastasis Effectively Treated with Capecitabine plus Bevacizumab Chemotherapy-A Case Report]. The safety and feasibility of chemotherapy for elderly patients is unclear. We report a super-elderly patient with liver metastases from colorectal cancer successfully treated with capecitabine plus bevacizumab chemotherapy. An 87-year-old woman underwent a colectomy for transverse colon cancer. At 4 months postoperatively, she underwent hepatectomy for liver metastases. At 9 months after the first surgery, a new liver metastasis (S4) was found. At this time, she declined another hepatectomy. Therefore, we selected capecitabine plus bevacizumab chemotherapy, considering her age. After 18 courses of administration, the liver metastasis had not progressed, and no new metastatic lesions were found on CT examination. As for adverse events, Grade 2 hand-foot syndrome developed, but no other adverse events occurred. The patient's PS score was maintained at 0. We suggest that capecitabine plus bevacizumab chemotherapy is an effective regimen for super-elderly patients with colorectal cancer.
STATE ex rel. OKLAHOMA BAR ASSOCIATION v. FAULKNER
2014 OK 67
Case Number: SCBD-6064
Decided: 07/15/2014
THE SUPREME COURT OF THE STATE OF OKLAHOMA
Cite as: 2014 OK 67, __ P.3d __

NOTICE: THIS OPINION HAS NOT BEEN RELEASED FOR PUBLICATION. UNTIL RELEASED, IT IS SUBJECT TO REVISION OR WITHDRAWAL.

STATE OF OKLAHOMA ex rel. OKLAHOMA BAR ASSOCIATION, Complainant, v. M. CLYDE FAULKNER, Respondent.

ORDER APPROVING RESIGNATION FROM OKLAHOMA BAR ASSOCIATION PENDING DISCIPLINARY PROCEEDINGS

¶1 Upon consideration of the Oklahoma Bar Association's (OBA) May 20, 2014, application for an Order approving the resignation of M. Clyde Faulkner, #2846 pending disciplinary proceedings, this Court finds: 1. Mr. Faulkner filed his affidavit of resignation from membership in the OBA pending disciplinary proceedings on May 20, 2014, and surrendered his OBA membership card at that time. 2. Pursuant to Rule 8.1 of the Rules Governing Disciplinary Proceedings (RGDP), 5 O.S.2011, Ch. 1, App. 1-A, Faulkner's affidavit of resignation reflects that: a) Faulkner's resignation was freely and voluntarily rendered, was not the result of coercion and/or duress, and Respondent is fully aware of the consequences of his resignation; b) Faulkner is aware that the General Counsel's Office of the OBA is investigating specific allegations of misconduct against him by Amy Hart. He agrees that the allegations, if proven, would constitute violations of his oath as an attorney, Rule 1.3 of the RGDP, and Rules 1.1, 1.3, 1.5, 1.15, and 8.4(a) and (c) of the Oklahoma Rules of Professional Conduct (ORPC), 5 O.S.2011, Ch. 1, App. 3-A. Respondent voluntarily waives any and all right to contest the allegations set forth in the OBA's Complaint, which are as follows: On or about March 6, 2009, Mr.
Faulkner received a settlement check for personal injuries suffered by his client. The check was in the amount of $18,000, which represented a $25,000 settlement, less $7,000, which was used as payment for his client's medical bills. Faulkner deposited the $18,000 check into his trust account, with $8,333.33 (one-third of the settlement amount) owed to him for attorney fees, and $9,666.67 remaining for his client. Respondent never distributed the remaining amount to his client, and his trust account balance dropped below $1,000 on January 13, 2010. During the pendency of the personal injury action, the client discharged Respondent and hired Amy Hart, who then filed the present grievance against Mr. Faulkner; c) Respondent agrees that he may be reinstated only upon full compliance with the conditions and procedures specified by Rule 11 of the RGDP, and that he may apply for reinstatement in no fewer than five (5) years from the effective date of his resignation. 3. Additionally, Respondent's affidavit reflects that he is familiar with Rule 9.1 of the RGDP and he agrees to comply with all provisions therein within twenty (20) days following the date of his resignation. 4. Respondent acknowledges that, as a result of his conduct, the Client Security Fund may receive claims from his former clients. He agrees that, should the OBA approve and pay such Client Security Fund claims, he will reimburse the principal amounts and applicable statutory interest prior to his filing of any application for reinstatement. 5. Respondent acknowledges and agrees that he is to cooperate with the General Counsel's Office of the OBA in identifying any active client cases wherein documents and files need to be returned or forwarded to new counsel, and any client cases wherein he owes fees or refunds. 6. Finally, Respondent acknowledges that under Rule 8.2 of the RGDP, the approval or disapproval of his resignation is with the discretion of the Supreme Court of Oklahoma. 7. 
The OBA advises that it has incurred costs in its investigation of the above-stated matter and is seeking reimbursement of such costs. In compliance with this Court's order of June 16, 2014, the OBA filed its Application to Assess Costs, in which it seeks reimbursement of costs related to postage, service of process, and transcript and witness expenses. Respondent filed no response or objection, his five (5) days in which to do so having now lapsed. ¶2 IT IS THEREFORE ORDERED that Respondent's resignation pending disciplinary proceedings is approved, effective May 20, 2014. ¶3 IT IS FURTHER ORDERED that the name of M. Clyde Faulkner be stricken from the roll of attorneys. Pursuant to Rule 11.1(e) of the RGDP, Respondent may apply for reinstatement in no fewer than five (5) years from the effective date of Respondent's resignation. Repayment to the Client Security Fund for any funds expended because of Respondent's conduct shall be a condition of his reinstatement in accordance with Rule 11.1(b) of the RGDP, as well as his cooperation with the General Counsel's Office of the OBA in identifying any active client cases wherein documents and files need to be returned or forwarded to new counsel, and any client cases wherein he owes fees or refunds. Respondent shall comply with Rule 9.1 of the RGDP within twenty (20) days following the date this Order is filed. ¶4 IT IS FURTHER ORDERED that Complainant's Application to Assess Costs is granted, and costs incurred by Complainant in its investigation of the above-stated matter shall be taxed against Respondent in the amount of $1,016.70, and in accordance with Rule 6.16 of the RGDP, shall be paid within ninety (90) days of the date this Order is filed. ¶5 DONE BY ORDER OF THE SUPREME COURT IN CONFERENCE this 15th day of July, 2014. /S/CHIEF JUSTICE ALL JUSTICES CONCUR
The interior configuration and architecture of aircraft have become relatively standardized today. The arrangements of the passenger seats, bulkheads, lavatories, serving areas, and crew spaces have been developed for convenience and accommodation of both passengers and crew. The passenger compartments are typically divided into two or more sections with bulkheads and lavatories being positioned accordingly. Aisles and passageway spaces are left between sets of seats and at the access doors. The support lines and conduits for the accessory and auxiliary systems, such as conditioned air, water, hydraulics and electrical systems, are typically positioned in the lower bay below the passenger compartment (cabin) or in the crown or space above the passenger cabin. For some of these accessory systems, such as conditioned air and electrical systems, the wires and lines are passed between the lower bay and crown, or between one of those areas and the passenger cabins through the sidewalls or support members adjacent the exterior of the aircraft. The installation, repair, and modification of the accessory and auxiliary systems, as well as the cabin furnishings, is a considerable expense to aircraft owners and users. There is a need for improved interior systems and for more efficient design and use of cabin furnishings and associated systems. Often, the design and installation of cabin furnishing and auxiliary systems result in modification to the system transport elements, such as electrical wiring, fluid lines, and environmental control system ducts, which result in an increased cost and lead time for delivery of the desired aircraft. The problem is amplified for those transport systems that pass between the crown and lower bay of the aircraft since this may result in the loss of windows and sidewalls or longer than desired runs to fixed monuments at bulkheads. 
There is a need in the aircraft industry for improved, more efficient, less complex, and less costly configurations for cabin furnishings and associated auxiliary systems in order to obviate the afore-mentioned problems.
Search ends for Japanese climbers on Mt. McKinley The National Park Service says it has permanently suspended efforts to recover the bodies of four Japanese climbers killed in an avalanche on Alaska's Mount McKinley. A Denali National Park spokeswoman said in a news release Sunday that a team of 10 searchers reached the likely location of the climbers on Saturday, and that a mountaineering ranger lowered himself into the same crevasse that the party's one survivor fell into. The ranger probed through avalanche debris 100 feet beneath the glacier's surface and found a broken rope that matched that of the Japanese team. The risk of ice falling made it too dangerous to keep digging. Rangers also now say that the avalanche occurred early Wednesday morning, not Thursday. The lone survivor, 69-year-old Hitoshi Ogi, reached a base camp to report the avalanche Thursday afternoon.
1. Introduction {#s0005}
===============

Human cytomegalovirus (HCMV) is a ubiquitous, highly host-specific herpesvirus that causes severe, sometimes life-threatening disease in congenitally infected newborns as well as in immunocompromised individuals such as bone marrow allograft transplant recipients and AIDS patients ([@bb0020], [@bb0045]). The ability to reactivate from latency with a high viral load in immunocompromised or immunosuppressed hosts underlies much of HCMV's pathogenicity ([@bb0230]). To date, no effective vaccine or antiviral drug is available to prevent or treat HCMV infection. In humans, cellular immunity is the most important and effective factor preventing the development of serious complications after HCMV infection ([@bb0060], [@bb0215]), and this response may involve transcriptional alterations. Because a virus can potentially modulate cellular mRNA levels by various mechanisms, identifying the cellular transcriptional alterations that are repressed or activated after latent HCMV infection could unveil the dynamic virus-host interaction and facilitate the development of antiviral therapy. Several high-throughput studies have provided a broad catalog of the transcriptional response of the cell during lytic HCMV infection by analyzing temporal changes in total RNA levels ([@bb0305], [@bb0120], [@bb0025], [@bb0225], [@bb0040], [@bb0110]). These studies identified a subset of differentially expressed genes, involved in a variety of biological functions including the innate immune response, inflammation pathways, cell cycle regulation, cellular metabolism, and cell adhesion, during lytic HCMV infection. The transcriptional response during HCMV latency, however, remains largely unexplored. Previous studies of HCMV infection also focused on changes in mRNAs.
During the last decade, the importance of long noncoding RNAs (lncRNAs) as mediators involved in chromatin structure, gene regulation, subcellular structural organization, and nuclear-cytoplasmic trafficking has begun to be recognized ([@bb0010], [@bb0260], [@bb0160]). Recent studies have demonstrated changes in host lncRNA expression in response to virus infection. In 2010, Yin et al. ([@bb0285]) observed the differential expression of more than 4,800 lncRNAs (2,990 up-regulated and 1,876 down-regulated) in response to enterovirus 71 (EV71) infection. Ouyang et al. ([@bb0195]) reported that a long noncoding RNA, negative regulator of antiviral response (*NRAV*), could modulate antiviral responses by suppressing the transcription of multiple critical interferon-stimulated genes, including *IFITM3* and *MxA*, after influenza A virus (IAV) infection. These findings suggest widespread differential expression of lncRNAs in response to virus infection and their potential roles in regulating the host response, including innate immunity. Recent developments in next-generation sequencing (NGS) technologies have enabled quantification of the host-cell response to infection at the transcriptomic level. As an NGS technology, RNA-seq not only outperforms conventional microarrays in sensitivity and specificity, but is also able to detect new genes, lncRNAs, rare transcripts, alternative splice isoforms and novel SNPs, which extends the limited analysis scope of microarrays ([@bb0155], [@bb0175], [@bb0190]). Therefore, this technology has gained rapidly expanding application in transcriptomic studies ([@bb0265], [@bb0210]). The present study adopted RNA-seq to identify global changes in mRNAs and lncRNAs in the host cell during HCMV experimental latent infection of THP-1 cells, in an attempt to derive insights into the mechanisms underlying these alterations in response to infection.
To our knowledge, this is the first study to explore the comprehensive transcriptome profile of both noncoding RNAs and mRNAs during HCMV experimental latent infection. These results fill a gap in our knowledge of the complexity of virus-cell interactions, and promise to offer clues with which the control of HCMV infection may advance.

2. Materials and Methods {#s0010}
========================

2.1. Cell culture and virus infections {#s0015}
---------------------------------------

The THP-1 cell line was cultured in RPMI 1640 medium (GIBCO, Life Technologies) supplemented with 10% (v/v) fetal bovine serum, 100 units/ml penicillin, and 100 μg/ml streptomycin (P/S) (GIBCO, Life Technologies) in an incubator (5% CO2, 37°C). To analyze cellular responses to HCMV infection, the THP-1 cell line was infected with HCMV Towne at a multiplicity of infection (MOI) of 5. RNAs were isolated and analyzed at 4 days post-infection, and were also obtained from mock-infected cells. HCMV Towne strain infection and the latent-infection cell model were performed as described previously ([@bb0070], [@bb0295]).

2.2. RNA extraction, Illumina library construction and sequencing {#s0020}
-----------------------------------------------------------------

Total RNA from THP-1 cells was extracted using Trizol reagent (Invitrogen, USA). Quantification and quality evaluation were performed with a Nanodrop 2000 (Thermo Scientific) and an Agilent 2100 Bioanalyzer (Agilent Technologies), respectively. Because the majority of RNAs purified from eukaryotic cells are large, structured ribosomal RNAs (rRNA), the mRNA signal must be enriched. We used a kit developed for the removal of the large rRNAs (18S and 28S rRNAs) from the samples. Subsequently, the samples were used for mRNA purification and library construction with the Truseq™ RNA Sample Preparation Kit v2 (Illumina, San Diego, CA, USA) according to the manufacturer's instructions.
The two samples, designated mock-infected and latent-infected THP-1, were sequenced on an Illumina HiSeq™ 2000 (Illumina) with paired-end libraries at Berry Genomics Bio-Technology Co. (Beijing, China).

2.3. RNA-seq read mapping and transcriptome reconstruction {#s0025}
-----------------------------------------------------------

The spliced read aligner TopHat version V1.31 was used to map all sequencing reads to the human genome. All reads were mapped with TopHat ([@bb0255]) using the following parameters: segment-mismatches = 2, splice-mismatches = 0. Aligned reads from TopHat were assembled into a transcriptome for each sample separately by Cufflinks. Hg19 RefSeq (RNA sequences, GRCh37) was downloaded from the UCSC Genome Browser (<http://genome.ucsc.edu>) and all known noncoding genes from the NONCODE 3.0 database ([@bb0035]). Cufflinks uses a probabilistic model to simultaneously assemble and quantify the expression level of a minimal set of isoforms, providing a maximum likelihood explanation of the expression data in a given locus. Cufflinks version V1.0.3 was run with default parameters (and 'min-frags-per-transfrag = 0'). The RNA-seq reads used in this study have been deposited in the NCBI Sequence Read Archive (SRA) database under accession number SRA458685. 2.4.
Novel lncRNA detection pipeline {#s0030}
-------------------------------------

We implemented the following four steps to enhance the reliability of lncRNAs constructed from the transcripts obtained from our two samples: (1) select multi-exon transcripts; (2) select transcripts longer than 200 bases; (3) calculate the coding potential of each transcript using the CNCI (Coding Noncoding Index) in-house software ([@bb0245]) and retain the transcripts categorized as noncoding (CNCI is a powerful signature tool that profiles adjoining nucleotide triplets to effectively distinguish protein-coding from non-coding sequences independent of known annotations; the software is available at <http://www.bioinfo.org/software/cnci>); (4) select transcripts located in intronic, intergenic, or antisense regions relative to known protein-coding genes.

2.5. Identification of differentially expressed genes {#s0035}
-----------------------------------------------------

Gene expression was calculated using the RPKM method (reads per kilobase of transcript per million mapped reads), and a minimum RPKM value of 0.1 was required for expressed genes/isoforms ([@bb0180]). The RPKM method removes the influence of different gene lengths and sequencing depth discrepancies from the calculation of gene expression. Therefore, the calculated gene expression values can be used directly to compare differences in gene expression among samples. Differentially expressed genes (DEGs) were defined as those with at least a 2-fold change between a pair of samples.

2.6. PCR validation {#s0040}
-------------------

qPCR was performed to confirm the expression of mRNAs and lncRNAs identified by the RNA-seq analysis. Briefly, cDNA was synthesized from total RNA (random hexamers) using the RevertAid First Strand cDNA Synthesis Kit (Thermo). Primers for 7 mRNAs and 7 lncRNAs were designed and synthesized (Tables S1 and S2). Then, qPCR was performed using a BIO-RAD CFX96 (BIO-RAD).
The 20 μl PCR reactions included 1 μl of cDNA product and 10 μl of FastStart Essential DNA Green Master (Roche). The reactions were incubated at 95°C for 5 min, followed by 40 cycles of 95°C for 5 s, 60°C for 30 s, and 72°C for 15 s. All reactions were run in triplicate. After the reaction, threshold cycle (CT) values were determined using default threshold settings, and the mean CT was determined from the replicate PCRs. The expression levels of mRNAs and lncRNAs were measured in terms of CT and then normalized to GAPDH (endogenous control gene) using the -ΔΔCT method.

2.7. Gene ontology (GO) and signaling pathway analysis {#s0045}
------------------------------------------------------

Pathway and GO analyses were applied to determine the roles of the closest coding genes in biological pathways or GO terms. GO (<http://www.geneontology.org/>) is an international classification system for standardized gene functions, offering a controlled vocabulary and a strictly defined conceptualization for comprehensively describing the properties of genes and their products within any organism. Signaling pathway analysis was based on the Kyoto Encyclopedia of Genes and Genomes (KEGG, <http://www.genome.jp/kegg/>) database. To identify the GO categories and pathways in which the differentially expressed genes (DEGs) are predicted to participate, all DEGs were mapped to terms in the GO and KEGG databases and searched for significantly enriched GO and KEGG terms relative to the genomic background.

3. Results and Discussion {#s0050}
=========================

3.1.
Mapping of RNA-seq reads and transcriptome reconstruction {#s0055}
--------------------------------------------------------------

The original image data generated by the sequencing machine were converted into sequence data via base calling (Illumina pipeline CASAVA v1.8.2), and a total of 169,008,624 valid reads were obtained by HiSeq™ 2000 (Illumina) 100 bp paired-end sequencing after a stringent filtering process ([Table 1](#t0005){ref-type="table"}). The filtered reads were mapped to the human genome, identifying a total of 180,616 transcripts and 33,243 genes in the two samples. The numbers of junctions, transcripts, protein-coding genes and noncoding genes for each sample are shown in Supplementary Tables S3 and S4.

Table 1. Reads of the latent-infected and mock-infected THP-1 cell libraries mapped to the reference genome.

                           latent-infected               mock-infected
                           reads           percentage    reads           percentage
  Total reads              83,921,512      100.00%       85,087,112      100.00%
  Total basepairs          8,392,151,200   100.00%       8,508,711,200   100.00%
  Total mapped reads       47,490,950      56.59%        37,434,119      44.00%
  Perfect match            22,382,356      26.67%        15,592,328      18.33%
  <=2 bp mismatch          47,394,990      56.48%        37,364,061      43.91%
  Unique match             44,870,347      53.47%        35,471,872      41.69%
  Multi-position match     2,620,603       3.12%         1,962,247       2.31%
  Total unmapped reads     36,430,562      43.41%        47,652,993      56.00%

3.2. Differential expression of mRNAs and lncRNAs in latent-infected *vs.* mock-infected THP-1 cells {#s0060}
---------------------------------------------------------------------------------------------------

With RNA-seq, we detected a total of 6,158 lncRNAs and 32,815 coding transcripts (Tables S3 and S4). The lncRNAs and mRNAs with more than a two-fold change in latent-infected vs. mock-infected THP-1 cells were statistically analyzed. The numbers of DEGs (up- and down-regulated) and lncRNAs for each sample are shown in [Fig. 1](#f0005){ref-type="fig"} and [Table 2](#t0010){ref-type="table"}.
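As a concrete illustration of the quantification described in Section 2.5, the sketch below computes RPKM values and applies the two-fold cutoff for calling DEGs. This is a minimal, stdlib-only Python sketch: the gene names and read counts are invented placeholders (only the total mapped-read counts are taken from Table 1), not data from the study.

```python
import math

def rpkm(read_count, gene_length_bp, total_mapped_reads):
    """Reads Per Kilobase of transcript per Million mapped reads."""
    return read_count / (gene_length_bp / 1_000) / (total_mapped_reads / 1_000_000)

def call_degs(expr_a, expr_b, min_rpkm=0.1, min_fold=2.0):
    """Return log2 fold changes for genes whose expression differs by at
    least `min_fold` between two samples; genes below `min_rpkm` in both
    samples are treated as not expressed and skipped."""
    degs = {}
    for gene in expr_a.keys() & expr_b.keys():
        a, b = expr_a[gene], expr_b[gene]
        if a < min_rpkm and b < min_rpkm:
            continue
        fc = (b + 1e-9) / (a + 1e-9)  # tiny pseudocount guards against zero division
        if fc >= min_fold or fc <= 1.0 / min_fold:
            degs[gene] = math.log2(fc)
    return degs

# Invented read counts; total mapped reads taken from Table 1.
mock = {"GENE1": rpkm(500, 2_000, 37_434_119),
        "GENE2": rpkm(40, 1_000, 37_434_119)}
latent = {"GENE1": rpkm(2_600, 2_000, 47_490_950),
          "GENE2": rpkm(45, 1_000, 47_490_950)}
degs = call_degs(mock, latent)  # only GENE1 passes the two-fold cutoff
```

Because RPKM divides out both transcript length and library size, the resulting values can be compared directly between the two libraries despite their different sequencing depths.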
Fold change (FC) values ranged from 43.05 to -32.31 for DEGs and from 32.81 to -34.86 for lncRNAs ([Supplementary Table S5](#ec0010){ref-type="supplementary-material"}, [Supplementary Table S6](#ec0015){ref-type="supplementary-material"}, [Supplementary Table S7](#ec0020){ref-type="supplementary-material"}, [Supplementary Table S8](#ec0025){ref-type="supplementary-material"}, [Supplementary Table S9](#ec0030){ref-type="supplementary-material"}, [Supplementary Table S10](#ec0035){ref-type="supplementary-material"} online). During viral infections, the interplay between viral and antiviral activities arouses rapid alterations in cellular gene expression, which has been shown to be effectively profiled by RNA-seq. In contrast to microarrays, which just permit comparative analyses with relative expression values, RNA-seq provides a quantitative read-out of the mRNA expression level of each sample.

Fig. 1. Significantly differentially expressed coding genes (A) and noncoding genes (B). The volcano plots (Kal's test) show the relationship between the p-values of Kal's test and the magnitude of the difference in expression values between the two samples. Differentially expressed coding or noncoding genes are highlighted as blue dots.

Table 2. Numbers of differentially expressed coding and non-coding transcripts in the latent-infected THP-1 cell library compared to the mock-infected group.

  Group            coding gene   non-coding   novel non-coding
  Up-expressed     1,098         208          438
  Down-expressed   1,055         140          284

3.3. Validation of RNA-seq data by qRT-PCR {#s0065}
------------------------------------------

To verify the RNA-seq data, we performed qRT-PCR on a subset of 14 randomly chosen genes, 8 up-regulated and 6 down-regulated. The qRT-PCR results indicated either up- or down-regulation of transcription that correlated with the up- or down-regulation observed by RNA-seq.
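The -ΔΔCT normalization of Section 2.6 and the Pearson correlation used for this cross-platform comparison can be written out explicitly. The stdlib-only Python sketch below is illustrative only: all CT values and paired fold-change lists are invented, not the study's measurements.

```python
import math

def neg_delta_delta_ct(ct_target_infected, ct_gapdh_infected,
                       ct_target_mock, ct_gapdh_mock):
    """-ddCT: positive values indicate up-regulation of the target in the
    infected sample relative to mock, after normalization to GAPDH."""
    d_ct_infected = ct_target_infected - ct_gapdh_infected
    d_ct_mock = ct_target_mock - ct_gapdh_mock
    return -(d_ct_infected - d_ct_mock)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical CT values for one gene: it amplifies two cycles earlier in
# infected cells, so -ddCT = 2, i.e. roughly 4-fold (2**2) up-regulation.
up = neg_delta_delta_ct(24.0, 18.0, 26.0, 18.0)

# Hypothetical paired values for five genes: qPCR -ddCT vs RNA-seq log2 FC.
qpcr = [2.1, -1.4, 0.8, 3.0, -2.2]
rna_seq = [1.8, -1.1, 1.2, 2.6, -2.5]
r = pearson_r(qpcr, rna_seq)
```

Plotting qPCR -ΔΔCT against RNA-seq log2 fold change, as in Fig. 2, and computing r in this way is how concordance between the two platforms is typically summarized.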
This validation demonstrated that theresults from RNA-seq and qPCR are highly concordant (r = 0.87, Pearson correlation)([Fig. 2](#f0010){ref-type="fig"} ). qRT-PCR tests confirmed the quality and robustness of the results.Fig. 2Validation of RNA-seq data by qRT-PCR. X-axis: -\[delta\] \[delta\] CT values from qPCR comparing HCMV-infected THP-1 cells or mock-infected cells. Y-axis: log~2~ (foldchange) between infected-or mock-infected THP-1 cells via RNA-seq. Pearson correlation coefficient (R) based on all genes is shown in black.Fig. 2. 3.4. Assignment of GO terms and KEGG pathways {#s0070} --------------------------------------------- To understand the functions of the DEGs and the biological processes involved in HCMV infection, all of the DEGs were mapped to terms in the GO and KEGG databases. The three main, independent GO categories are biological processes, molecular functions, and cellular components. The details are shown in [Fig. 3](#f0015){ref-type="fig"} . According to the results of GO analysis, the GO terms obviously differentiate up-regulated genes and down-regulated genes. The results of KEGG pathway analysis indicated that the unigenes were related to 21 signaling pathways, particularly to those in the cancer and MAPK signaling pathways ([Table 3](#t0015){ref-type="table"} ). Those differentially expressed genes are involved in pathways implicated in viral pathogenesis including apoptosis, inflammatory response and cell cycle progression. The dynamic alterations in the expression profile reflect human cells' response to HCMV infection as an attempt to antagonize viral replication and spread (see [Supplementary Table S11](#ec0040){ref-type="supplementary-material"}, [Supplementary Table S12](#ec0045){ref-type="supplementary-material"}, supplementary material online, for full details of GO terms).Fig. 3Gene ontology assignments for differentially expressed genes (DEGs) upon HCMV infection. 
The DEGs upon HCMV infection that matched the gene ontology (GO) categories (top 10) of biological process, cellular component and molecular function. The x-axis indicates the GO terms and the y-axis indicates the enrichment score (-log10 (P value)). A, GO analysis of the up-regulated genes upon HCMV infection. B, GO analysis of the down-regulated genes upon HCMV infection.

Table 3. Important KEGG pathways of coding genes influenced by HCMV infection.

Pathway | Unigenes with pathway annotations | Pathway ID
Systemic lupus erythematosus | 23 | hsa05322
MAPK signaling pathway | 44 | hsa04010
Apoptosis | 28 | hsa04210
p53 signaling pathway | 19 | hsa04115
NOD-like receptor signaling pathway | 16 | hsa04621
Neurotrophin signaling pathway | 23 | hsa04722
Pathways in cancer | 49 | hsa05200
Epithelial cell signaling in Helicobacter pylori infection | 15 | hsa05120
Wnt signaling pathway | 24 | hsa04310
Natural killer cell mediated cytotoxicity | 32 | hsa04650
Hematopoietic cell lineage | 13 | hsa04640
B cell receptor signaling pathway | 15 | hsa04662

3.5. Cell death induced by HCMV infection {#s0075}
-----------------------------------------

Apoptosis is an essential biological process in multicellular organisms induced in response to many extrinsic stimuli (such as oxidative stress and inflammatory reactions) and is considered an infection-associated immunopathology ([@bb0145], [@bb0275]). Apoptosis of virus-infected cells plays an essential role in the immune system: by removing the environment in which the virus survives, it controls the proliferation of intracellular pathogens ([@bb0250]). Apoptosis can be initiated via the intrinsic pathway, mediated through the endoplasmic reticulum and mitochondrion, or via the receptor-mediated (extrinsic) pathway ([@bb0075]).
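As an aside on the statistics used above — the Pearson correlation behind the qRT-PCR validation in section 3.3 (r = 0.87) and the hypergeometric enrichment score (-log10 P value) plotted in Fig. 3 — the following sketch illustrates both computations. All gene counts and expression values below are hypothetical placeholders, not the study's actual data:

```python
from math import comb, log10
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

def enrichment_p(N, K, n, k):
    """Hypergeometric tail P(X >= k): N genes total, K annotated to a term,
    n differentially expressed genes, k of them annotated to the term."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical -ddCt (qPCR) vs log2 fold change (RNA-seq) for five genes:
qpcr = [-2.1, -0.8, 0.4, 1.5, 2.9]
rnaseq = [-1.8, -1.0, 0.6, 1.2, 3.1]
r = pearson_r(qpcr, rnaseq)  # close to 1 for concordant data

# Hypothetical enrichment: 20,000 genes, 200 in a GO term, 1,000 DEGs, 47 hits
p = enrichment_p(20000, 200, 1000, 47)
score = -log10(p)  # the "enrichment score" axis of Fig. 3
```

A high r indicates that the two platforms agree on direction and magnitude of regulation, and a small p (large score) indicates that a GO term contains far more DEGs than expected by chance.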
Multiple previous studies have indicated that human cells infected with HCMV undergo apoptosis ([@bb0025], [@bb0040]). In this study, we found that 47 pro-apoptosis genes were up-regulated and 12 anti-apoptosis genes were down-regulated (especially *BCL-2*, 2.91-fold) at 4 days post HCMV infection ([Table 4](#t0020){ref-type="table"}). Most of the up-regulated genes are involved in the mitochondrial pathway, including *Bcl-2-binding component 3* (*BBC3*) and *CASP8* (2.58- and 2.49-fold, respectively). *CASP8* is one of the most crucial molecules for cell death induction ([@bb0015]). It may directly cleave downstream effectors such as *caspase-3*, and it links the receptor pathway to the mitochondrial pathway by cleaving Bid to initiate a mitochondrial amplification loop ([@bb0005]). *BBC3*, a pro-apoptotic protein that functions as the p53 up-regulated modulator of apoptosis (PUMA), can inhibit the interaction between the anti-apoptotic molecule *Bcl-2* and the pro-apoptotic molecules *Bax* and *Bak*, thereby triggering apoptosis ([@bb0100], [@bb0185]). Many other up-regulated genes are involved in tumor necrosis factor (TNF) signaling (*TNFRSF10B* and *TNFSF14*) and p53 signal transduction (*ABL1*, *CDKN1A*, *CHEK2* and *HIPK2*).
This indicates that the TNF and p53 signaling pathways were activated upon HCMV infection, but the detailed mechanisms need further investigation.

Table 4. List of the differentially expressed genes responding to HCMV infection related to apoptosis, immunity and the cell cycle.

**Apoptosis**
Positive regulation of apoptosis (UP[a](#tf0005){ref-type="table-fn"}): *ABL1, ABR, ADAMTSL4, APC, APOE, ARHGAP4, BBC3, BCL2L11, BCL3, CAPN10, CASP1, CASP8, CD5, CDKN1A, CEBPβ, CHEK2, DDIT3, DEDD, EIF5A, FGD4, HIPK2, HMOX1, IL1β, INHBA, JUN, MAP3K5, MMP9, NFκBIL1, NOTCH1, NQO1, NR3C1, NUPR1, P2RX7, PHLDA1, PLEKHG5, PSEN2, RAD9A, RIPK3, SCAND1, SMAD3, SQSTM1, TGFβ1, TIAL1, TICAM1, TNFRSF10B, TNFSF14, WWOX*
Negative regulation of apoptosis (DN^b^): *ADNP, ANXA4, APBB2, ASNS, BCL2, BNIP3, CACNA1A, CASP3, CHD8, DLX1, EYA1, MEF2C, MPO, NME1, NUP62, PRLR, RASA1, RTEL1, SKP2, SOD2, TERT, XRCC4, YBX3, YWHAZ*
Anti-apoptosis (UP): *ANXA1, APOE, BCL2A1, CEBPβ, HMOX1, HSP90β1, HSPA1A, HSPA1B, HSPA5, IER3, IκBKβ, IL1β, IRAK1, NFκB1, PRNP, RELA, SOCS3, SPHK1, SQSTM1, THBS1, VEGFA, VIMP*

**Immune response**
Positive regulation of immune system process (UP): *C1R, C3AR1, CD46, CD5, CD55, CD74, CDKN1A, CR1, EBI3, FCER1G, FYN, ICAM1, IL1β, IL4R, IRAK1, MAP3K7, P2RX7, PSEN2, RARα, RELA, SLC11A1, TGFβ1, THBS1, TICAM1, TNFSF14, VEGFA*
Virus-host interaction (UP): *DDX39B, HIPK2, IRF7, SMAD3, TGFβ1*
Positive regulation of cytokine production (UP): *AGPAT1, BCL3, CARD8, CASP1, FCER1G, IL1β, MAP3K7, P2RX7, RARα, SLC11A1, SMAD3, TGFβ1, THBS1, TICAM1, UCN*

**Cell cycle**
Cell cycle arrest (UP): *APC, CDKN1A, CXCL8, DDIT3, FOXO4, GAS7, INHBA, PPP1R15A, RASSF1, SESN2, SMAD3, TGFβ1, THBS1*
Cell cycle (DN): *AIF1, ANAPC11, APBB1, APBB2, ARHGEF2, AURKA, BCL2, CASP8AP2, CCNA1, CDK11B, CDK20, CENPA, CEP55, CHEK1, CHTF8, DSN1, FANCD2, FBXO43, GFI1, GTSE1, HAUS2, HBP1, ILF3, KIF11, KMT2E, MAPK13, MLF1, MLH1, NEK6, OIP5, PBRM1, PIWIL4, PKD1, PLK1, PML, PRC1, PSMA4, PSMB8, PSRC1, RAD17, RAD51B, RAD51D, RAD54L, RASSF1, RCC1, SEPT9, SESN3, SIRT2, SKA2, SKP1, SKP2, SMARCB1, SMC4, SUN2, TERF1, TRNP1, TTK, ZNF655*[^1]

As a matter of fact, many viruses, including HCMV, attempt to inhibit the apoptotic pathway so as to create a favorable environment for viral replication ([@bb0300], [@bb0090], [@bb0025], [@bb0030], [@bb0040], [@bb0165]). In our study, we found that 25 mRNAs that can promote apoptosis were down-regulated (especially *CASP3*, 6.49-fold), and 22 mRNAs encoding anti-apoptotic proteins were up-regulated, suggesting that complex adjustments of apoptotic signals may occur during HCMV latent infection. Sarkar et al. ([@bb0220]) demonstrated that cytokines (IL1β, TNF-α and IFN-γ) up-regulate several anti-apoptotic genes through NF-κB-mediated signalling, including BCL2A1, BIRC3, TNFAIP3, CFLAR and TRAF1. In our study, IL1β and BCL2A1 were up-regulated 7.53- and 3.67-fold, respectively.

3.6. Inflammatory response in THP-1 cells upon HCMV infection {#s0080}
------------------------------------------------------------

Many genes with a role in the immune system were perturbed by viral infection ([Table 4](#t0020){ref-type="table"}). Assigned GO terms included positive regulation of response to stimulus, positive regulation of immune system process, positive regulation of immune response and activation of immune response. During HCMV infection, the host, on the one hand, mounts non-specific and specific immune responses to clear HCMV.
We found that several genes encoding regulators of NF-κB function were up-regulated, such as *IL-1β*, *RELA*, *ICAM1* and *CARD8* (7.53-, 2.09-, 4.21- and 2.06-fold, respectively). NF-κB is one of the key players in stimulating the transcription of genes involved in the cellular immune response, as well as in cell adhesion; it thus plays an important role in antiviral defense ([@bb0085]). A previous study revealed that the NF-κB pathway can be induced by lytic HCMV infection ([@bb0040]). *IL-1β* is an important mediator of the inflammatory response and one of the known inducers of *RELA* activity, supporting its participation in the immune response. *RELA*, also known as p65, is a REL-associated protein involved in NF-κB heterodimer formation, nuclear translocation and activation. In general, *RELA* participates in adaptive immunity and responses to invading pathogens via NF-κB activation ([@bb0135]). *CARD8* is involved in pathways leading to activation of caspases or nuclear factor kappa-B (NF-κB) in the context of apoptosis or inflammation, respectively. NF-κB triggers transcription of various genes critical to inflammation, such as cytokines, chemokines and cell adhesion molecules, including *ICAM1* ([@bb0050], [@bb0125]). Our results show that the expression level of *ICAM1* was elevated. *ICAM1* mediates leukocyte adhesion and subsequent transmigration of leukocytes into tissues, resulting in inflammation ([@bb0280]). *P2RX7* was also increased in our study (3.13-fold). The product of the *P2RX7* gene belongs to the family of purinoceptors for ATP. P2X7 receptors play an important role in regulating inflammatory responses during acute viral infection (adenoviral vector) through ATP-mediated inflammation ([@bb0130]). On the other hand, continuous interactions between viruses and hosts have allowed viruses to evade host immune defenses during their co-evolution. Consequently, viruses have manipulated host immune-control mechanisms to facilitate their propagation.
HCMV is well known for its ability to evade normal immune response pathways such as antigen presentation ([@bb0270], [@bb0170], [@bb0140]). Functional studies using purified complement components demonstrated that up-regulation of *CD55* (5.26-fold) suppressed the activity of cell-associated C3 convertases on HCMV-infected cells. Furthermore, increased *CD55* expression protected infected cells from complement-mediated lysis, an effect which directly correlated with the length of HCMV infection ([@bb0235], [@bb0240]). These negative impacts on cellular functions of the immune system would severely compromise the host response to the infection.

3.7. Cell cycle arrest by HCMV infection in THP-1 cells {#s0085}
-----------------------------------------------------

Cell cycle and checkpoint control are intimately related to the outcome of herpesvirus infection, and the complexity inherent in this virus-host interaction is becoming ever more apparent ([@bb0095]). Seventy-three mRNAs encoding proteins with likely roles in cell cycle regulation were regulated by the virus ([Table 4](#t0020){ref-type="table"}). The activation of protein kinases known as cyclin-dependent kinases (CDKs) triggers progression through the successive phases of the cell cycle, whereas the activities and specificities of these kinases are determined by their association with various cyclins and inhibitors that are differentially expressed during the cell cycle. The prevailing hypothesis proposes that the proper temporal order of the cell cycle is regulated by two major checkpoints ([@bb0105], [@bb0065], [@bb0205]). One occurs at the G1/S transition and controls the initiation of DNA replication; the other occurs at the G2/M transition, just prior to mitosis and cell division. Previous studies have indicated that THP-1 cells infected with HCMV undergo cell cycle arrest: Lu M et al. ([@bb0150]) revealed a block in late G1, and Jault et al. ([@bb0115]) described a block in G2/M.
In our study, HCMV infection tended to inhibit cell cycle progression at multiple points, including the G1-to-S and G2-to-M transitions, consistent with previous results. Cyclins expressed during the G1 phase, mainly the D-type cyclins and cyclin E, promote progression from G1 to S phase ([@bb0205]). Many regulated genes encode cell cycle regulators, especially cyclin-dependent kinase inhibitor 1 (*CDKN1A*, 2.11-fold), also known as p21, which was up-regulated more than 2-fold. The p21 (CIP1/WAF1) protein is a potent cyclin-dependent kinase inhibitor (CKI): it binds to and inhibits the activity of cyclin-CDK2, -CDK1, and -CDK4/6 complexes, and thus functions as a regulator of cell cycle progression at the G1 and S phases ([@bb0080]). *Cyclin A* (3.03-fold) plays an important role in promoting the progression from G2 to M phase. In our study, *TGFβ1* (2.51-fold) was up-regulated. The inhibited expression of cyclin A is primarily modulated by *TGFβ1* and may result in a block in G2/M ([@bb0055]). HCMV-mediated perturbations thus inhibit cell cycle progression at multiple points to maximize the virus's benefit in the host cell.

3.8. New lncRNA discovery {#s0090}
-------------------------

We further focused on long noncoding RNAs to address the influence of infection on the non-coding region. In total, 2,107 novel lncRNAs were found to be expressed ([Supplementary Table S13](#ec0050){ref-type="supplementary-material"}) and the exon-number distribution among novel lncRNAs is shown in [Fig. 4](#f0020){ref-type="fig"}. A regulatory role of lncRNAs has recently been observed in infections with severe acute respiratory syndrome coronavirus (SARS-CoV), HIV-1 and influenza virus, and lncRNAs have been suggested to impact host defenses and innate immunity ([@bb0200], [@bb0290]).
Further studies to identify the functions of these differentially expressed lncRNAs during HCMV infection may well provide novel insights into the virus-host molecular interface as well as possible therapeutic targets.

Fig. 4. The exon-number distribution among novel lncRNAs.

4. Conclusion {#s0095}
=============

In this study, we profiled the expression of mRNAs and lncRNAs in host cells using RNA-seq to investigate the transcriptional changes in HCMV latent infection. A large number of genes (mRNAs and lncRNAs) that were differentially expressed upon HCMV infection were identified and functionally annotated. As our findings are mainly derived from bioinformatic analyses, which may not be adequate to provide solid evidence, their functional relevance will need to be further established experimentally at the molecular and cellular levels. Given that the global analysis of changes in lncRNA and mRNA levels provides a catalog of genes that are modulated as a result of the host-virus interaction, further study of these genes may well lead to breakthroughs in the understanding and treatment of cytomegalovirus-related diseases.

Competing interests {#s0100}
===================

The authors have declared that no competing interests exist.

Author contributions {#s0105}
====================

X-QZ, QZ and B-HG conceived and designed the experiments, M-ML and Y-YL performed the experiments, and QZ, H-YW and X-QZ analyzed the data. All authors reviewed the manuscript.

The following are the supplementary data related to this article.

Supplementary Tables. Image 1

Supplementary Table S5. The fold change of up-expressed coding transcripts in latently infected THP-1 cell libraries compared with the mock-infected group.

Supplementary Table S6. The fold change of down-expressed coding transcripts in latently infected THP-1 cell libraries compared with the mock-infected group.
Supplementary Table S7. The fold change of up-expressed known non-coding transcripts in latently infected THP-1 cell libraries compared with the mock-infected group.

Supplementary Table S8. The fold change of down-expressed known non-coding transcripts in latently infected THP-1 cell libraries compared with the mock-infected group.

Supplementary Table S9. The fold change of up-expressed novel non-coding transcripts in latently infected THP-1 cell libraries compared with the mock-infected group.

Supplementary Table S10. The fold change of down-expressed novel non-coding transcripts in latently infected THP-1 cell libraries compared with the mock-infected group.

Supplementary Table S11. Functional annotation chart (GO categories and KEGG pathways) of up-regulated genes.

Supplementary Table S12. Functional annotation chart (GO categories and KEGG pathways) of down-regulated genes.

Supplementary Table S13. The information on novel lncRNAs.

We thank Prof. Jinyu Wu for comments and critical reading of this manuscript. This project was supported by grants from the National Science Foundation of China (No. 81071365) and the Zhejiang Provincial Natural Science Foundation of China (No. LY13H190006).

[^1]: UP denotes up-regulated genes; ^b^ DN denotes down-regulated genes.
// Copyright (c) 2006-2008 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "skia/ext/platform_device.h"

#include "skia/ext/bitmap_platform_device.h"

#import <ApplicationServices/ApplicationServices.h>

#include "skia/ext/skia_utils_mac.h"
#include "third_party/skia/include/core/SkMatrix.h"
#include "third_party/skia/include/core/SkPath.h"
#include "third_party/skia/include/core/SkTypes.h"
#include "third_party/skia/include/core/SkUtils.h"

namespace skia {

CGContextRef GetBitmapContext(SkDevice* device) {
  PlatformDevice* platform_device = GetPlatformDevice(device);
  if (platform_device)
    return platform_device->GetBitmapContext();
  return NULL;
}

CGContextRef PlatformDevice::BeginPlatformPaint() {
  return GetBitmapContext();
}

void PlatformDevice::EndPlatformPaint() {
  // Flushing will be done in onAccessBitmap.
}

// Set up the CGContextRef for peaceful coexistence with Skia
void PlatformDevice::InitializeCGContext(CGContextRef context) {
  // CG defaults to the same settings as Skia
}

// static
void PlatformDevice::LoadPathToCGContext(CGContextRef context,
                                         const SkPath& path) {
  // instead of a persistent attribute of the context, CG specifies the fill
  // type per call, so we just have to load up the geometry.
  CGContextBeginPath(context);

  SkPoint points[4] = { {0, 0}, {0, 0}, {0, 0}, {0, 0} };
  SkPath::Iter iter(path, false);
  for (SkPath::Verb verb = iter.next(points); verb != SkPath::kDone_Verb;
       verb = iter.next(points)) {
    switch (verb) {
      case SkPath::kMove_Verb: {  // iter.next returns 1 point
        CGContextMoveToPoint(context, points[0].fX, points[0].fY);
        break;
      }
      case SkPath::kLine_Verb: {  // iter.next returns 2 points
        CGContextAddLineToPoint(context, points[1].fX, points[1].fY);
        break;
      }
      case SkPath::kQuad_Verb: {  // iter.next returns 3 points
        CGContextAddQuadCurveToPoint(context, points[1].fX, points[1].fY,
                                     points[2].fX, points[2].fY);
        break;
      }
      case SkPath::kCubic_Verb: {  // iter.next returns 4 points
        CGContextAddCurveToPoint(context, points[1].fX, points[1].fY,
                                 points[2].fX, points[2].fY,
                                 points[3].fX, points[3].fY);
        break;
      }
      case SkPath::kClose_Verb: {  // iter.next returns 1 point (the last point)
        break;
      }
      case SkPath::kDone_Verb:  // iter.next returns 0 points
      default: {
        SkASSERT(false);
        break;
      }
    }
  }
  CGContextClosePath(context);
}

// static
void PlatformDevice::LoadTransformToCGContext(CGContextRef context,
                                              const SkMatrix& matrix) {
  // CoreGraphics can concatenate transforms, but not reset the current one.
  // So in order to get the required behavior here, we need to first make
  // the current transformation matrix identity and only then load the new one.

  // Reset matrix to identity.
  CGAffineTransform orig_cg_matrix = CGContextGetCTM(context);
  CGAffineTransform orig_cg_matrix_inv = CGAffineTransformInvert(
      orig_cg_matrix);
  CGContextConcatCTM(context, orig_cg_matrix_inv);

  // assert that we have indeed returned to the identity Matrix.
  SkASSERT(CGAffineTransformIsIdentity(CGContextGetCTM(context)));

  // Convert xform to CG-land.
  // Our coordinate system is flipped to match WebKit's so we need to modify
  // the xform to match that.
  SkMatrix transformed_matrix = matrix;
  SkScalar sy = matrix.getScaleY() * (SkScalar)-1;
  transformed_matrix.setScaleY(sy);
  size_t height = CGBitmapContextGetHeight(context);
  SkScalar ty = -matrix.getTranslateY();  // y axis is flipped.
  transformed_matrix.setTranslateY(ty + (SkScalar)height);
  CGAffineTransform cg_matrix = gfx::SkMatrixToCGAffineTransform(
      transformed_matrix);

  // Load final transform into context.
  CGContextConcatCTM(context, cg_matrix);
}

// static
void PlatformDevice::LoadClippingRegionToCGContext(
    CGContextRef context,
    const SkRegion& region,
    const SkMatrix& transformation) {
  if (region.isEmpty()) {
    // region can be empty, in which case everything will be clipped.
    SkRect rect;
    rect.setEmpty();
    CGContextClipToRect(context, gfx::SkRectToCGRect(rect));
  } else if (region.isRect()) {
    // CoreGraphics applies the current transform to clip rects, which is
    // unwanted. Inverse-transform the rect before sending it to CG. This only
    // works for translations and scaling, but not for rotations (but the
    // viewport is never rotated anyway).
    SkMatrix t;
    bool did_invert = transformation.invert(&t);
    if (!did_invert)
      t.reset();
    // Do the transformation.
    SkRect rect;
    rect.set(region.getBounds());
    t.mapRect(&rect);
    SkIRect irect;
    rect.round(&irect);
    CGContextClipToRect(context, gfx::SkIRectToCGRect(irect));
  } else {
    // It is complex.
    SkPath path;
    region.getBoundaryPath(&path);
    // Clip. Note that windows clipping regions are not affected by the
    // transform so apply it manually.
    path.transform(transformation);
    // TODO(playmobil): Implement.
    SkASSERT(false);
    // LoadPathToDC(context, path);
    // hrgn = PathToRegion(context);
  }
}

}  // namespace skia
require('../../modules/es6.math.cosh');
module.exports = require('../../modules/$.core').Math.cosh;
Cookie Policy Updated May 14, 2018 Cookies are small pieces of text used to store information on web browsers. Cookies are used to store and receive identifiers and other information on computers, phones, and other devices. Other technologies, including data we store on your web browser or device, identifiers associated with your device, and other software, are used for similar purposes. In this policy, we refer to all of these technologies as “cookies.” This policy explains how we use cookies. Cookies help us provide, protect and improve the Dhound.io. While the cookies that we use may change from time to time as we improve and update the Dhound.io, we use them for the following purposes: Authentication We use cookies to verify your account and determine when you’re logged in so we can make it easier for you to access the Dhound.io and show you the appropriate experience and features. For example: We use cookies to keep you logged in as you navigate between Dhound.io pages. Cookies also help us remember your browser so you do not have to keep logging into Dhound.io. Security, site and product integrity We use cookies to help us keep your account, data and the Dhound.io safe and secure. For example: Cookies can help us identify and impose additional security measures when someone may be attempting to access a Dhound account without authorization, for instance, by rapidly guessing different passwords. We also use cookies to store information that allows us to recover your account in the event you’ve forgotten your password or to require additional authentication if you tell us your account has been hacked. We also use cookies to combat activity that violates our policies or otherwise degrades our ability to provide the Dhound.io. For example: Cookies help us fight spam and phishing attacks by enabling us to identify computers that are used to create large numbers of fake Dhound accounts. 
We also use cookies to detect computers infected with malware and to take steps to prevent them from causing further harm. Analytics and research We use cookies to better understand how people use the Dhound.io so that we can improve it. For example: Cookies can help us understand how people use the Dhound.io, analyze which parts of the Dhound.io people find most useful and engaging, and identify features that could be improved. How can you control Dhound’s use of cookies? Your browser or device may offer settings that allow you to choose whether browser cookies are set and to delete them. For more information about these controls, visit your browser or device's help material. Certain parts of the Dhound.io may not work properly if you have disabled browser cookie use.
--- abstract: 'A detailed characterization of PPT states, both in the Heisenberg and in the Schr[ö]{}dinger picture, is given. Measures of entanglement are defined and discussed in detail. Illustrative examples are provided.' address: - 'Institute of Theoretical Physics and Astrophysics, Gda[ń]{}sk University, Wita Stwosza 57, 80-952 Gda[ń]{}sk, Poland' - 'Faculty of Management of Administration and Information, Tokyo University of Science, Suwa; Toyohira 5001, Chino City, Nagano 391-0292, Japan' - 'Department of Information Science, Tokyo University of Science, Yamazaki 2641, Noda City, Chiba 278-8510, Japan' author: - 'W[ł]{}adys[ł]{}aw A. Majewski' - Takashi Matsuoka - Masanori Ohya title: Characterization of PPT states and measures of entanglement --- Preliminaries ============= In this section we compile some basic facts on the theory of positive maps on [[[$\hbox{\bf C}^*$]{}]{}]{}-algebras. To begin with, let ${{\mathcal A}}$ and ${{\mathcal B}}$ be [[[$\hbox{\bf C}^*$]{}]{}]{}-algebras (with unit), ${{\mathcal A}}_h = \{ a \in {{\mathcal A}}; a = a^* \}$ - the set of all selfadjoint elements in ${{\mathcal A}}$, ${{\mathcal A}}^+ = \{ a \in {{\mathcal A}}_h; a \ge 0 \}$ - the set of all positive elements in ${{\mathcal A}}$, and ${{\mathcal S}}({{\mathcal A}})$ the set of all states on ${{\mathcal A}}$, i.e. the set of all linear functionals $\varphi$ on ${{\mathcal A}}$ such that $\varphi(1) = 1$ and $\varphi(a)\geq0$ for any $a \in {{\mathcal A}}^+$. In particular, $({{\mathcal A}}_h, {{\mathcal A}}^+)$ is an ordered Banach space. We say that a linear map $\alpha : {{\mathcal A}}\to {{\mathcal B}}$ is positive if $\alpha({{\mathcal A}}^+) \subset {{\mathcal B}}^+$. The theory of positive maps on non-commutative algebras can be viewed as a jigsaw puzzle with pieces whose exact form is not well known.
On the other hand, as we address this paper to a readership interested in quantum mechanics and quantum information theory, in this section, we will focus our attention on some carefully selected basic concepts and fundamental results in order to facilitate access to main problems of that theory. Furthermore, the relations between the theory of positive maps and the entanglement problem will be indicated. We begin with a very strong notion of positivity: the so called complete positivity (CP). Namely, a linear map $\tau : {{\mathcal A}}\to {{\mathcal B}}$ is CP iff $$\label{CP} \tau_n : M_n({{\mathcal A}}) \to M_n({{\mathcal B}}); [a_{ij}] \mapsto [\tau(a_{ij})]$$ is positive for all $n$. Here, $M_n({{\mathcal A}})$ stands for $n \times n$ matrices with entries in ${{\mathcal A}}$. To explain the basic motivation for that concept we need the following notion: [*an operator state of [[[$\hbox{\bf C}^*$]{}]{}]{}-algebra ${{\mathcal A}}$ on a Hilbert space ${{\mathcal K}}$, is a CP map $\tau : {{\mathcal A}}\to {{\mathcal B}}({{\mathcal K}})$*]{}, where ${{\mathcal B}}({{\mathcal K}})$ stands for the set of all bounded linear operators on $\mathcal K$. Having this concept we can recall the Stinespring result, [@Sti], which is a generalization of GNS construction and which was the starting point for a general interest in the concept of complete positivity. ([@Sti]) [ For an operator state $\tau$ there is a Hilbert space ${{\mathcal H}}$, a $^*$-representation ($^*$-morphism) $\pi : {{\mathcal A}}\to {{\mathcal B}}({{\mathcal H}})$ and a partial isometry $V : {{\mathcal K}}\to {{\mathcal H}}$ for which]{} $$\label{Stinspring} \tau(a) = V^* \pi(a) V.$$ A nice and frequently used criterion for CP can be extracted from Takesaki book [@Tak]: \[kryt1\] Let ${{\mathcal A}}$ and ${{\mathcal B}}$ be [[[$\hbox{\bf C}^*$]{}]{}]{}-algebras. 
A linear map $\phi : {{\mathcal A}}\to {{\mathcal B}}$ is CP if and only if $$\sum_{i,j=1}^n y^*_i \phi(x^*_i x_j)y_j \geq 0$$ for every $x_1,...,x_n \in {{\mathcal A}}$, $y_1,...,y_n \in {{\mathcal B}}$, and every $n \in {{\mathbb{N}}}$. Up to now we considered linear positive maps on an algebra without entering into the (possible) complexity of the underlying algebra. The situation changes when one is dealing with composed systems (for example in the framework of open system theory). Namely, there is a need to use the tensor product structure. At this point, it is worth citing Takesaki’s remark [@Tak]:“...Unlike the finite dimensional case, the tensor product of infinite dimensional Banach spaces behaves mysteriously.” He had in mind “topological properties of Banach spaces” , i.e.: “ cross norms in the tensor product are highly non-unique.” But from the point of view of composed systems the situation is, even, more mysterious as finite dimensional cases are also obscure. To explain this point, let us consider positive maps defined on the tensor product of two [[[$\hbox{\bf C}^*$]{}]{}]{}-algebras, $\tau : {{\mathcal A}}\otimes {{\mathcal B}}\to {{\mathcal A}}\otimes {{\mathcal B}}$. But now the question of order is much more complicated. Namely, there are various cones determining the order structure in the tensor product of algebras (cf. 
[@Witt]) $${{\mathcal C}}_{inj} \equiv ({{\mathcal A}}\otimes {{\mathcal B}})^+ \supseteq, ..., \supseteq {{\mathcal C}}_{\beta} \supseteq,..., \supseteq {{\mathcal C}}_{pro}\equiv conv({{\mathcal A}}^+\otimes{{\mathcal B}}^+)$$ and correspondingly in terms of states (cf [@I]) $${{\mathcal S}}({{\mathcal A}}\otimes {{\mathcal B}}) \supseteq,..., \supseteq {{\mathcal S}}_{\beta} \supseteq, ..., \supseteq conv({{\mathcal S}}({{\mathcal A}})\otimes{{\mathcal S}}({{\mathcal B}})).$$ Here, ${{\mathcal C}}_{inj}$ stands for the injective cone, ${{\mathcal C}}_{\beta}$ for a tensor cone, while ${{\mathcal C}}_{pro}$ for the projective cone. The tensor cone ${{\mathcal C}}_{\beta}$ is defined by the property: the canonical bilinear mappings $\omega :{{\mathcal A}}_h \times {{\mathcal B}}_h \to ({{\mathcal A}}_h \otimes {{\mathcal B}}_h, {{\mathcal C}}_{\beta})$ and $\omega^* : {{\mathcal A}}^*_h \times {{\mathcal B}}^*_h \to ({{\mathcal A}}^*_h \otimes {{\mathcal B}}^*_h, {{\mathcal C}}_{\beta}^*)$ are positive. The cones ${{\mathcal C}}_{inj}, C_{\beta}, C_{pro}$ are different unless either ${{\mathcal A}}$, or ${{\mathcal B}}$, or both ${{\mathcal A}}$ and ${{\mathcal B}}$ are abelian (so a finite dimension does not help very much!). This feature is the origin of various positivity concepts for non-commutative composed systems and it was Stinespring who used the partial transposition (transposition tensored with identity map) for showing the difference between $C_{\beta}$ and ${{\mathcal C}}_{inj}$ and ${{\mathcal C}}_{pro}$ (see [@Witt], also [@Hor]). Clearly, in dual terms, the mentioned property corresponds to the fact that the set of separable states $conv({{\mathcal S}}({{\mathcal A}})\otimes{{\mathcal S}}({{\mathcal B}}))$ is different from the set of all states and that there are various special subsets of states if both subsystems are truly quantum. 
In his pioneering work on Banach spaces, Grothendieck [@Gro] observed the links between tensor products and mapping spaces. A nice example of such links was provided by St[ø]{}rmer [@St1]. To present this result we need a little preparation. Let $\mathfrak{A}$ denote a norm closed self-adjoint subspace of bounded operators on a Hilbert space $\mathcal K$ containing the identity operator on $\mathcal K$. $\mathfrak T$ will denote the set of trace class operators on ${{\mathcal B}}({{\mathcal H}})$. $x \to x^t$ denotes the transpose map of ${{\mathcal B}}({{\mathcal H}})$ with respect to some orthonormal basis. The set of all linear bounded (positive) maps $\phi: \mathfrak{A} \to {{\mathcal B}}({{\mathcal H}})$ will be denoted by ${{\mathcal B}}(\mathfrak{A}, {{\mathcal B}}({{\mathcal H}}))$ (${{\mathcal B}}(\mathfrak{A}, {{\mathcal B}}({{\mathcal H}}))^+$ respectively). Finally, we denote by ${\mathfrak A} \odot {\mathfrak T}$ the algebraic tensor product of $\mathfrak A$ and $\mathfrak T$ (algebraic tensor product of two vector spaces is defined as its $^*$-algebraic structure when the factor spaces are $^*$-algebras; so the topological questions are not considered) and denote by ${\mathfrak A} \hat{\otimes} \mathfrak T$ its Banach space closure under the projective norm defined by $$||x|| = \inf \{ \sum_{i=1}^n ||a_i|| ||b_i||_1: x = \sum_{i=1}^n a_i \otimes b_i, \ a_i \in {\mathfrak A}, \ b_i \in {\mathfrak T} \},$$ where $|| \cdot ||_1$ stands for the trace norm. Now, we are in a position to give (see [@St1]) \[pierwszy lemat\] There is an isometric isomorphism $\phi \to \tilde{\phi}$ between ${{\mathcal B}}({\mathfrak A}, {{\mathcal B}}({{\mathcal H}}))$ and $({\mathfrak A} \hat{\otimes} {\mathfrak T})^*$ given by $$(\tilde{\phi})(\sum_{i=1}^n a_i\otimes b_i) = \sum_{i=1}^nTr(\phi(a_i)b^t_i),$$ where $\sum_{i=1}^n a_i\otimes b_i \in {\mathfrak A}\odot {\mathfrak T}$. 
Furthermore, $ \phi \in {{\mathcal B}}({\mathfrak A}, {{\mathcal B}}({{\mathcal H}}))^+$ if and only if $\tilde{\phi}$ is positive on ${\mathfrak A}^+ \hat{\odot} {\mathfrak T}^+$. To comment on this result we make the following remarks: 1. There is no restriction on the dimension of the Hilbert space. In other words, this result can be applied to true quantum systems. 2. In [@St2], St[ø]{}rmer showed that in the special case when ${\mathfrak A} = M_n({{\mathbb{C}}})$ and ${{\mathcal H}}$ has dimension equal to $n$, the above Lemma is a reformulation of Choi's result [@Ch1], [@Ch3] (see also [@Jam]). 3. One should note that the positivity of a functional is defined by the projective cone ${\mathfrak A}^+ \hat{\odot} {\mathfrak T}^+$. 4. A generalization of the Choi result (mentioned in 4.2) was also obtained by Belavkin and Staszewski [@Slawa]. We will also need the concept of co-CP maps. A map $\phi$ is co-CP if its composition with the transposition is a CP map. To see that this is not a trivial condition it is enough to note that the transposition is not even 2-positive (2-positivity means that the condition given in (\[CP\]) is satisfied for $n$ equal to $1$ and $2$ only). A larger class of positive maps is formed by decomposable maps. A map $\phi$ is called decomposable if it can be written as a sum of CP and co-CP maps. Equivalently, if in (\[Stinspring\]) one replaces the $^*$-morphism by a Jordan morphism (i.e. a linear map which preserves the anticommutator) then the canonical form of a decomposable map is obtained. Turning to states, it was mentioned that the states in $conv({{\mathcal S}}({{\mathcal A}})\otimes{{\mathcal S}}({{\mathcal B}}))$ are called separable states. The subset of states ${{\mathcal S}}({{\mathcal A}}\otimes {{\mathcal B}}) \setminus conv({{\mathcal S}}({{\mathcal A}})\otimes{{\mathcal S}}({{\mathcal B}}))$ is called the set of entangled states.
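The failure of 2-positivity of the transposition mentioned above can be checked directly. A minimal numerical sketch (the $M_2$ setting and the block matrix of matrix units are our own illustrative choices):

```python
import numpy as np

# The transposition t is positive but not 2-positive: a direct check in
# the smallest non-trivial setting.

t = lambda m: m.T                     # transposition on M_2 (real entries suffice)

# t preserves the spectrum, hence is positive:
A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.allclose(np.linalg.eigvalsh(A), np.linalg.eigvalsh(t(A))))   # True

# 2-positivity would require (id_2 (x) t) to preserve positivity.  Take
# M = [E_ij]_{i,j} = |v><v| with v = e_0 (x) e_0 + e_1 (x) e_1 (so M >= 0)
# and transpose each block:
E = lambda i, j: np.outer(np.eye(2)[i], np.eye(2)[j])   # matrix units e_ij
M  = np.block([[E(0, 0), E(0, 1)], [E(1, 0), E(1, 1)]])
Mt = np.block([[t(E(0, 0)), t(E(0, 1))], [t(E(1, 0)), t(E(1, 1))]])
print(np.linalg.eigvalsh(M).min() >= -1e-12)            # True (M >= 0)
print(round(np.linalg.eigvalsh(Mt).min(), 10))          # -1.0 (not positive)
```

Since composing with $t$ turns a co-CP map into a CP one, this failure is precisely what makes co-CP a condition genuinely different from CP.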
We will be interested in the special subset of states: $${{\mathcal S}}({{\mathcal A}}\otimes {{\mathcal B}})_{PPT} \equiv {{\mathcal S}}_{PPT} = \{ \varphi \in {{\mathcal S}}({{\mathcal A}}\otimes {{\mathcal B}}); \varphi \circ (t \otimes id) \in {{\mathcal S}}({{\mathcal A}}\otimes {{\mathcal B}}) \}$$ where $t$ stands for transposition. Such states are called PPT states. It is worth observing that the condition in the definition of ${{\mathcal S}}_{PPT}$ is non-trivial; namely, the partial transposition does not need to be a positive map! Clearly $${{\mathcal S}}\supseteq {{\mathcal S}}_{PPT}\supseteq {{\mathcal S}}_{sep}.$$ Among entangled states, the states called maximally entangled are of special interest. They can be defined as those for which the state reduced to a subsystem is maximally chaotic (in the entropic sense). A nice example of such states is given by EPR (Einstein-Podolsky-Rosen) states. The aim of this paper is to give a general characterization of PPT states. We shall present two approaches. The first one is based on the structure of positive maps and as the starting point we will take a modification of Lemma \[pierwszy lemat\]. The second approach employs the Hilbert space geometry. This equivalent description will, rather strikingly, offer very simple definitions of entanglement measures. Entanglement mappings and PPT states ==================================== In this Section we present a modification of the Belavkin-Ohya approach [@BO], [@BO2], [@Matsuoka], and [@BD] to the characterization of entanglement. The basic concept of this approach is the entangling operator $H$. The aim of this section is to provide explicit formulas for both the entangling operator $H$ and the entanglement mapping $\phi^*$ as well as to give the first characterization of PPT states. Let us consider a composed system $\Sigma$ consisting of two subsystems $\Sigma_{1}$, $\Sigma_{2}$.
We assume that $\Sigma_{1}$ is defined by the pair $\left( \mathcal{H},\mathcal{B}(\mathcal{H})\right) $ while $\Sigma_{2}$ by the pair $ \left( \mathcal{K},\mathcal{B}(\mathcal{K})\right) $ respectively, where $\mathcal{H}$ ($\mathcal{K}$) is a separable Hilbert space. Let $\omega $ be a normal compound state on $\Sigma$, i.e. $\omega $ is a normal state on $\mathcal{B}\left( \mathcal{H}\otimes \mathcal{K}\right) $. Thus $$\omega \left( a\otimes b\right) =Tr\rho_{\omega} \left( a\otimes b\right)$$with $a\in \mathcal{B}\left( \mathcal{H}\right) $, $b\in \mathcal{B}\left( \mathcal{K}\right) $. $\rho_{\omega} \equiv \rho$ is a density matrix with the spectral resolution $\rho =\underset{i}{\sum }\lambda _{i}\left\vert e_{i}\right\rangle \left\langle e_{i}\right\vert $. Define a linear bounded operator $T_{\zeta }:\mathcal{K}\rightarrow \mathcal{H}\otimes \mathcal{K}$ by$$T_{\zeta }\eta =\zeta \otimes \eta$$where $\zeta \in \mathcal{H}$, $\eta \in \mathcal{K}$. Note that the adjoint operator $T_{\zeta }^{\ast }:\mathcal{H}\otimes \mathcal{K}$ $\rightarrow \mathcal{K}$ is given by$$T_{\zeta }^{\ast }\zeta ^{\prime }\otimes \eta ^{\prime }=\left( \zeta ,\zeta ^{\prime }\right) \eta ^{\prime }.$$ Now we wish, following the B-O scheme, to define the operator $$H:\mathcal{H}\rightarrow \mathcal{H}\otimes \mathcal{K}\otimes \mathcal{K}$$by the formula: $$H\zeta =\underset{i}{\sum }\lambda _{i}^{\frac{1}{2}}\left( J_{\mathcal{H}\otimes \mathcal{K}}\otimes T_{J_{\mathcal{H}}\zeta }^{\ast }\right) e_{i}\otimes e_{i}$$where $J_{\mathcal{H}\otimes \mathcal{K}}$ is a complex conjugation defined by $J_{\mathcal{H}\otimes \mathcal{K}}f\equiv J_{\mathcal{H}\otimes \mathcal{K}}(\underset{i}{\sum }\left( e_{i}^{\cdot },f\right) e_{i}^{\cdot })=\underset{i}{\sum }\overline{\left( e_{i}^{\cdot },f\right) }e_{i}^{\cdot }$ where $\left\{ e_{i}^{\cdot }\right\} $ is any CONS extending (if necessary) the orthogonal system $\left\{ e_{i}\right\} $ determined by the spectral resolution of $\rho $.
($J_{\mathcal{H}}$ is defined analogously, with the spectral resolution given by $H^{\ast }H$; using the explicit form of $H$, easy calculations show that the spectrum of $H^*H$ is discrete.) We wish to show \[pierwsze\] The normal state $\omega $ can be represented as$$\omega \left( a\otimes b\right) =Tr_{\mathcal{H}}a^{t}H^{\ast }\left( 1\otimes b\right) H$$where $a^{t}=J_{\mathcal{H}}a^{\ast }J_{\mathcal{H}}$. $$\begin{aligned} Tra^{t}H^{\ast }\left( 1\otimes b\right) H &=&\underset{k}{\sum }\left( h_{k},a^{t}H^{\ast }\left( 1\otimes b\right) Hh_{k}\right) \\ &=&\underset{k}{\sum }\left( h_{k},J_{\mathcal{H}}a^{\ast }J_{\mathcal{H}}H^{\ast }\left( 1\otimes b\right) Hh_{k}\right) \\ &=&\underset{k}{\sum }\left( HJ_{\mathcal{H}}aJ_{\mathcal{H}}h_{k},\left( 1\otimes b\right) Hh_{k}\right) \\ &=&\underset{k}{\sum }\underset{i,j}{\sum }\left( \lambda _{i}^{\frac{1}{2}}\left( J_{\mathcal{H}\otimes \mathcal{K}}\otimes T_{J_{\mathcal{H}}J_{\mathcal{H}}aJ_{\mathcal{H}}h_{k}}^{\ast }\right) e_{i}\otimes e_{i},\left( 1\otimes b\right) \lambda _{j}^{\frac{1}{2}}\left( J_{\mathcal{H}\otimes \mathcal{K}}\otimes T_{J_{\mathcal{H}}h_{k}}^{\ast }\right) e_{j}\otimes e_{j}\right) \\ &=&\underset{k,i,j}{\sum }\lambda _{i}^{\frac{1}{2}}\lambda _{j}^{\frac{1}{2}}\left( e_{i}\otimes T_{aJ_{\mathcal{H}}h_{k}}^{\ast }e_{i},\left( 1\otimes b\right) e_{j}\otimes T_{J_{\mathcal{H}}h_{k}}^{\ast }e_{j}\right). \end{aligned}$$ Note that $\left\{ h_{k}\right\} $ can be chosen in such a way that it is the CONS used in the definition of $J_{\mathcal{H}}$.
So$$\begin{aligned} Tra^{t}H^{\ast }\left( 1\otimes b\right) H &=&\underset{k,i,j}{\sum }\lambda _{i}^{\frac{1}{2}}\lambda _{j}^{\frac{1}{2}}\left( e_{i}\otimes T_{ah_{k}}^{\ast }e_{i},\left( 1\otimes b\right) e_{j}\otimes T_{h_{k}}^{\ast }e_{j}\right) \\ &=&\underset{k,i}{\sum }\lambda _{i}\left( T_{ah_{k}}^{\ast }e_{i},bT_{h_{k}}^{\ast }e_{i}\right) .\end{aligned}$$Let $\left\{ v_{m}\otimes w_{n}\right\} $ be a CONS in $\mathcal{H}\otimes \mathcal{K}$. Then$$\begin{aligned} &&Tra^{t}H^{\ast }\left( 1\otimes b\right) H \\ &=&\underset{k,i,m,n,p,r}{\sum }\lambda _{i}\left( e_{i},v_{m}\otimes w_{n}\right) \left( T_{ah_{k}}^{\ast }v_{m}\otimes w_{n},bT_{h_{k}}^{\ast }v_{p}\otimes w_{r}\right) \left( v_{p}\otimes w_{r},e_{i}\right) \\ &=&\underset{k,i,m,n,p,r}{\sum }\lambda _{i}\left( e_{i},v_{m}\otimes w_{n}\right) \left( v_{p}\otimes w_{r},e_{i}\right) \overline{\left( ah_{k},v_{m}\right) }\left( h_{k},v_{p}\right) \left( w_{n},bw_{r}\right). \end{aligned}$$As $\left\{ v_{m}\right\} $ is a CONS in $\mathcal{H}$ we can take $\left\{ v_{m}\right\} =\left\{ h_{m}\right\} $. 
So that$$\begin{aligned} &&Tra^{t}H^{\ast }\left( 1\otimes b\right) H \\ &=&\underset{k,i,m,n,p,r}{\sum }\lambda _{i}\left( e_{i},h_{m}\otimes w_{n}\right) \left( h_{p}\otimes w_{r},e_{i}\right) \overline{\left( ah_{k},h_{m}\right) }\left( h_{k},h_{p}\right) \left( w_{n},bw_{r}\right) \\ &=&\underset{k,i,m,n,r}{\sum }\lambda _{i}\left( e_{i},h_{m}\otimes w_{n}\right) \left( h_{k}\otimes w_{r},e_{i}\right) \left( h_{m},ah_{k}\right) \left( w_{n},bw_{r}\right) \\ &=&\underset{k,i,m,n,r}{\sum }\lambda _{i}\left( e_{i},h_{m}\otimes w_{n}\right) \left( h_{m}\otimes w_{n},\left( a\otimes b\right) h_{k}\otimes w_{r}\right) \left( h_{k}\otimes w_{r},e_{i}\right) \\ &=&\underset{i}{\sum }\lambda _{i}\left( e_{i},\left( a\otimes b\right) e_{i}\right) =Tr\rho \left( a\otimes b\right) =\omega \left( a\otimes b\right) .\end{aligned}$$ As it was mentioned in the previous Section, in his fundamental paper on topological linear spaces, Grothendieck emphasized the importance of relating mapping spaces to tensor products. Another nice example of such relations is given by the following modification of Størmer’s result. \[drugi lemat\] (1) Let $\mathcal{B}\left[ \ \mathcal{B}\left( \mathcal{H}\right) ,\mathcal{B(K)}_{\ast }\right] $ stand for the set of all linear, bounded, normal (so weakly$^*$-continuous) maps from ${{\mathcal B}}({{\mathcal H}})$ into ${{\mathcal B}}({{\mathcal K}})_{\ast}$. There is an isomorphism $\psi \longmapsto \Psi $ between $\mathcal{B}\left[ \ \mathcal{B}\left( \mathcal{H}\right) ,\mathcal{B(K)}_{\ast }\right] $ and $\left( \mathcal{B(H)\otimes B(K)}\right)_{\ast }$ given by $$\Psi \left( \sum_{i}a_{i}\otimes b_{i}\right) =\sum_{i}Tr_{\mathcal{K}}\psi \left( a_{i}\right) b_{i}^{t},\text{\ \ }a_{i}\in \mathcal{B}\left( \mathcal{H}\right) ,\text{ }b_{i}\in \mathcal{B}\left( \mathcal{K}\right) .$$The isomorphism is isometric if $\Psi$ is considered on $\mathcal{B(H)\hat{\otimes} B(K)}$.
Furthermore $\Psi $ is positive on $({{\mathcal B}}({{\mathcal H}})\otimes {{\mathcal B}}({{\mathcal K}}))^+$ iff $\psi $ is completely positive. \(2) There is an isomorphism $\phi \longmapsto \Phi $ between $\mathcal{B}\left[ \ \mathcal{B}\left( \mathcal{H}\right) ,\mathcal{B(K)}_{\ast }\right] $ and $\left( \mathcal{B(H)\otimes B(K)}\right)_{\ast }$ given by $$\Phi \left( \sum_{i}a_{i}\otimes b_{i}\right) =\sum_{i}Tr_{\mathcal{K}}\phi \left( a_{i}\right) b_{i},\text{\ \ }a_{i}\in \mathcal{B}\left( \mathcal{H}\right) ,\text{ }b_{i}\in \mathcal{B}\left( \mathcal{K}\right) .$$The isomorphism is isometric if $\Phi$ is considered on $\mathcal{B(H)\hat{\otimes} B(K)}$. Furthermore $\Phi $ is positive on $({{\mathcal B}}({{\mathcal H}})\otimes {{\mathcal B}}({{\mathcal K}}))^+$ iff $\phi $ is completely co-positive. A repetition of the modified St[ø]{}rmer argument and standard arguments (cf \[7\].2-3 below). We comment on this lemma with the following remarks \[7\] 1. Firstly, one should note the basic difference between Lemma \[pierwszy lemat\] and Lemma [\[drugi lemat\]]{}. In Lemma \[pierwszy lemat\], the order is defined by the projective cone while in Lemma \[drugi lemat\], the order is defined by the injective cone. 2. Secondly, as $\mathcal{B(K)_*}$ is isomorphic to the set of all trace class operators $\mathfrak{T}\equiv \mathfrak{T}_{\mathcal{K}}$ on ${{\mathcal K}}$, $\mathcal{B(B(H),B(K)_*)}$ can be considered as ${{\mathcal B}}({{\mathcal B}}({{\mathcal H}}), \mathfrak{T})$. 3. Thirdly, let ${{\mathcal A}}$ and ${{\mathcal B}}$ be $C^*$-algebras. Lemma \[pierwszy lemat\] and Lemma \[drugi lemat\] stem from the standard identification $\Psi \to \psi$ of $({{\mathcal A}}\odot {{\mathcal B}})^d$ with the set $Hom({{\mathcal A}}, {{\mathcal B}}^d)$ of linear maps from ${{\mathcal A}}$ to ${{\mathcal B}}^d$ where $[\psi(a)](b) = \Psi(a\otimes b)$.
Here $({{\mathcal A}}\odot {{\mathcal B}})^d$ (${{\mathcal B}}^d$) stands for the algebraic dual of ${{\mathcal A}}\odot {{\mathcal B}}$ (${{\mathcal B}}$ respectively), see [@Bla] for details. 4. Finally, Lemma \[drugi lemat\] gains in interest if we realize that the operator $H$ defined in the first part of this section can be used for a definition of the entanglement mapping $\phi :\mathcal{B}\left( \mathcal{K}\right) \rightarrow \mathcal{B}\left( \mathcal{H}\right) _{\ast }$. Let us define$$\label{la} \phi \left( b\right) =\left( H^{\ast }\left( 1\otimes b\right) H\right) ^{t}=J_{\mathcal{H}}H^{\ast }\left( 1\otimes b\right) ^{\ast }HJ_{\mathcal{H}.}$$Then \[PPT\] The entanglement mapping \(i) $\phi ^{\ast }:\mathcal{B}\left( \mathcal{H}\right) \rightarrow \mathcal{B}\left( \mathcal{K}\right) _{\ast }$ has the following explicit form$$\phi ^{\ast }\left( a\right) =Tr_{\mathcal{H}\otimes \mathcal{K}}Ha^{t}H^{\ast }$$ \(ii) The state $\omega $ on $\mathcal{B}\left( \mathcal{H}\otimes \mathcal{K}\right) $ can be written as $$\omega \left( a\otimes b\right) =Tr_{\mathcal{H}}a\phi \left( b\right) =Tr_{\mathcal{K}}b\phi ^{\ast }\left( a\right)$$ where $\phi$ was defined in (\[la\]). For $f$, $g\in \mathcal{K}$ and $h\in \mathcal{H}$$$\begin{aligned} Tr_{\mathcal{K}}\phi ^{\ast }\left( a\right) \left\vert f\right\rangle \left\langle g\right\vert &=&\left( g,\phi ^{\ast }\left( a\right) f\right) \\ &=&\underset{i}{\sum }\left( e_{i}\otimes g,Ha^{t}H^{\ast }e_{i}\otimes f\right), \end{aligned}$$where as before $\{ e_i \}$ is a CONS in ${{\mathcal H}}\otimes {{\mathcal K}}$. 
Note:$$\begin{aligned} \left( h,H^{\ast }e_{i}\otimes f\right) &=&\left( Hh,e_{i}\otimes f\right) \\ &=&\underset{k}{\sum }\left( \lambda _{k}^{\frac{1}{2}}\left( J_{\mathcal{H}\otimes \mathcal{K}}\otimes T_{J_{\mathcal{H}}h}^{\ast }\right) e_{k}\otimes e_{k},e_{i}\otimes f\right) \\ &=&\underset{k}{\sum }\lambda _{k}^{\frac{1}{2}}\left( e_{k}\otimes T_{J_{\mathcal{H}}h}^{\ast }e_{k},e_{i}\otimes f\right) \\ &=&\underset{k,m,n}{\sum }\lambda _{k}^{\frac{1}{2}}\left( e_{k},v_{m}\otimes w_{n}\right) \left( e_{k}\otimes \left( J_{\mathcal{H}}h,v_{m}\right) w_{n},e_{i}\otimes f\right) \end{aligned}$$where $\left\{ v_{m}\right\} $ is a CONS in $\mathcal{H}$ such that $J_{\mathcal{H}}$ is defined w.r.t. this basis, and $\left\{ w_{n}\right\} $ is a CONS in $\mathcal{K},$$$\begin{aligned} &=&\underset{m,n}{\sum }\lambda _{i}^{\frac{1}{2}}\left( e_{i},v_{m}\otimes w_{n}\right) \overline{\left( J_{\mathcal{H}}h,v_{m}\right) }\left( w_{n},f\right) \\ &=&\underset{m,n}{\sum }\lambda _{i}^{\frac{1}{2}}\left( e_{i},v_{m}\otimes w_{n}\right) \left( v_{m},J_{\mathcal{H}}h\right) \left( w_{n},f\right) \\ &=&\underset{m,n}{\sum }\lambda _{i}^{\frac{1}{2}}\left( e_{i},v_{m}\otimes w_{n}\right) \left( v_{m}\otimes w_{n},J_{\mathcal{H}}h\otimes f\right) \\ &=&\lambda _{i}^{\frac{1}{2}}\left( e_{i},J_{\mathcal{H}}h\otimes f\right). \end{aligned}$$In particular, putting $h^{\prime}=\left( a^{t}\right) ^{\ast }H^{\ast }e_{i}\otimes g$ one has$$\begin{aligned} \left( h^{\prime},v_{m}\right) &=&\left( H^{\ast }e_{i}\otimes g,a^{t}v_{m}\right) \\ &=&\lambda _{i}^{\frac{1}{2}}\left( J_{\mathcal{H}}a^{t}v_{m}\otimes g,e_{i}\right).
\end{aligned}$$Hence $$\begin{aligned} Tr_{\mathcal{K}}\phi ^{\ast }\left( a\right) \left\vert f\right\rangle \left\langle g\right\vert &=&\underset{i}{\sum }\left( (a^{t})^*H^{\ast }e_{i}\otimes g,H^{\ast }e_{i}\otimes f\right) \\ &=&\underset{i,m,n}{\sum }((a^{t})^*H^{\ast }e_{i}\otimes g,v_{m})(v_{m},H^{\ast }e_{i}\otimes w_{n})(w_{n},f) \\ &=&\underset{i,m,n}{\sum }\lambda _{i}^{\frac{1}{2}}\left( J_{\mathcal{H}}a^{t}v_{m}\otimes g,e_{i}\right) \lambda _{i}^{\frac{1}{2}}\left( e_{i},v_{m}\otimes w_{n}\right) (w_{n},f) \\ &=&\underset{i,m}{\sum }\lambda _{i}\left( J_{\mathcal{H}}a^{t}v_{m}\otimes g,e_{i}\right) \left( e_{i},v_{m}\otimes f\right) \\ &=&Tr\rho _{\omega }\left( \underset{m}{\sum }\left\vert v_{m}\otimes f\right\rangle \left\langle J_{\mathcal{H}}a^{t}v_{m}\otimes g\right\vert \right) \\ &=&Tr\rho _{\omega }\left( \underset{m}{\sum }\left\vert v_{m}\otimes f\right\rangle \left\langle a^*v_{m}\otimes g\right\vert \right) \\ &=&Tr\rho _{\omega }\left( a\otimes \left\vert f\right\rangle \left\langle g\right\vert \right) =\omega \left( a\otimes \left\vert f\right\rangle \left\langle g\right\vert \right) \end{aligned}$$Thus $$Tr_{\mathcal{K}}b\phi ^{\ast }\left( a\right) =\omega \left( a\otimes b\right).$$ The rest follows from Theorem \[pierwsze\]. Theorem \[pierwsze\], Lemma \[drugi lemat\] and Proposition \[PPT\] lead to the following: PPT states are completely characterized by entanglement mappings $\phi^{\ast}$ which are both CP and co-CP. This conclusion can be rephrased in the following way (cf [@Matsuoka]): An entanglement mapping $\phi^{\ast}$ which is not CP will be called a $q$-entanglement. The set of all $q$-entanglements will be denoted by ${\mathcal E}_q$. Then the PPT criterion can be formulated as: A state is PPT if and only if its associated entanglement mapping $\phi^{\ast}$ is not in ${\mathcal E}_q$. Examples ======== To illustrate the strategy of B-O entanglement maps as well as to get a better understanding of positive maps we present some examples.
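Before the analytic examples, the characterization just stated can be probed numerically. The sketch below is only an illustration, not part of the argument: it assumes the closed form $\phi^{\ast}(a)=Tr_{\mathcal{H}}(a\otimes 1)\varrho$ valid for normal states (derived for pure states in (\[stanczysty\]) below and then extended), and it tests CP and co-CP through the standard Choi-matrix criterion; the two-qubit dimensions and the Bell state are our own choices.

```python
import numpy as np

# Hedged sketch: test CP / co-CP of the entanglement mapping
# phi*(a) = Tr_H[(a (x) 1) rho] via Choi matrices, for a Bell state.

dH = dK = 2

def phi_star(a, rho):
    """phi*(a) = Tr_H[(a (x) 1) rho], a matrix on K."""
    X = (np.kron(a, np.eye(dK)) @ rho).reshape(dH, dK, dH, dK)
    return np.einsum('ikil->kl', X)          # partial trace over H

def choi(psi):
    """Choi matrix sum_ij |i><j| (x) psi(|i><j|); psi is CP iff it is >= 0."""
    E = lambda i, j: np.outer(np.eye(dH)[i], np.eye(dH)[j])
    return np.block([[psi(E(i, j)) for j in range(dH)] for i in range(dH)])

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)                   # maximally entangled, hence not PPT

C_cp   = choi(lambda a: phi_star(a, rho))    # CP test for phi*
C_cocp = choi(lambda a: phi_star(a, rho).T)  # co-CP test (t composed with phi*)

print(round(np.linalg.eigvalsh(C_cp).min(), 10))    # -0.5: phi* is not CP
print(np.linalg.eigvalsh(C_cocp).min() >= -1e-12)   # True: phi* is co-CP
```

For this choice the Choi matrix of $\phi^{\ast}$ coincides with the partial transpose of $\varrho$, so the failure of CP is equivalent to the failure of the PPT condition, in line with the characterization above.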
**Example 1**: Let $\omega :\mathcal{B}\left( \mathcal{H}\otimes \mathcal{K}\right) \rightarrow \mathbb{C}$ be **a pure product state**, i.e. $$\begin{aligned} \omega \left( a\otimes b\right) &=&\omega _{x\otimes y}\left( a\otimes b\right) \\ &\equiv &\left( x\otimes y,\left( a\otimes b\right) x\otimes y\right) \\ \text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ } &=&\left( x,ax\right) (y,by)\end{aligned}$$where $x\in {{\mathcal H}},y\in {{\mathcal K}}$ and $\left\Vert x\right\Vert =1=\left\Vert y\right\Vert $. Then$$\begin{aligned} H\zeta &=&J_{\mathcal{H}\otimes \mathcal{K}}\otimes T_{J_{\mathcal{H}}\zeta }^{\ast }\left( x\otimes y\right) \otimes \left( x\otimes y\right) \\ &=&J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) \otimes \left( J_{\mathcal{H}}\zeta ,x\right) y.\end{aligned}$$For $f\in \mathcal{H}\otimes \mathcal{K}$, $g\in \mathcal{K}$, $h\in \mathcal{H}$ we have$$\begin{aligned} \left( h,H^{\ast }f\otimes g\right) &=&\left( Hh,f\otimes g\right) \\ &=&\left( J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) \otimes \left( J_{\mathcal{H}}h,x\right) y,f\otimes g\right) \\ &=&\left( J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) ,f\right) \left( h,J_{\mathcal{H}}x\right) \left( y,g\right) \\ &=&\left( h,\left( y,g\right) \left( J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) ,f\right) J_{\mathcal{H}}x\right) \\ &=&\left( h,\left( J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) \otimes y,f\otimes g\right) J_{\mathcal{H}}x\right) .\end{aligned}$$so that$$H^{\ast }f\otimes g=\left( J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) \otimes y,f\otimes g\right) J_{\mathcal{H}}x.$$Let $v$, $z\in \mathcal{K}$ and $\left\{ e_{i}\right\} $ is a CONS in $\mathcal{H}\otimes \mathcal{K}$ then, using the above calculation, we have$$\begin{aligned} &&\left( v,\phi ^{\ast }\left( a\right) z\right) \\ &=&\left( v,Tr_{{{\mathcal H}}\otimes {{\mathcal K}}}Ha^{t}H^{\ast }z\right) \\ &=&\underset{i}{\sum }\left( 
e_{i}\otimes v,Ha^{t}H^{\ast }e_{i}\otimes z\right) \\ &=&\underset{i}{\sum }\left( H^{\ast }e_{i}\otimes v,a^{t}H^{\ast }e_{i}\otimes z\right) \\ &=&\underset{i}{\sum }\left( \left( J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) \otimes y,e_{i}\otimes v\right) J_{\mathcal{H}}x,a^{t}\left( J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) \otimes y,e_{i}\otimes z\right) J_{\mathcal{H}}x\right) \\ &=&\underset{i}{\sum }\left( J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) \otimes y,e_{i}\otimes z\right) \left( e_{i}\otimes v,J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) \otimes y\right) \left( J_{\mathcal{H}}x,a^{t}J_{\mathcal{H}}x\right).\end{aligned}$$Note that $$\begin{aligned} \left( J_{\mathcal{H}}x,a^{t}J_{\mathcal{H}}x\right) &=&\left( J_{\mathcal{H}}x,J_{\mathcal{H}}a^{\ast }x\right) \\ &=&\left( a^{\ast }x,J_{\mathcal{H}}J_{\mathcal{H}}x\right) \\ &=&\left( x,ax\right). \end{aligned}$$Thus$$\begin{aligned} &&\left( v,\phi ^{\ast }\left( a\right) z\right) \\ &=&\left( J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) ,J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) \right) \left( y,z\right) \left( v,y\right) \left( x,ax\right) \\ &=&\left( v,\left\Vert J_{\mathcal{H}\otimes \mathcal{K}}\left( x\otimes y\right) \right\Vert ^{2}\left( y,z\right) \left( x,ax\right) y\right), \end{aligned}$$so that$$\begin{aligned} \phi ^{\ast }\left( a\right) z &=&\left\Vert x\otimes y\right\Vert ^{2}\left( y,z\right) \left( x,ax\right) y \\ &=&\left( x,ax\right) \left\vert y\right\rangle \left\langle y\right\vert \cdot z.\text{ \ (because of }\left\Vert x\right\Vert =1=\left\Vert y\right\Vert .)\end{aligned}$$Put $P_{y}=\left\vert y\right\rangle \left\langle y\right\vert $. 
Then, we have$$\phi ^{\ast }\left( a\right) =\left( x,ax\right) P_{y}.$$ To analyse CP and co-CP property let us observe that (we are applying Criterion \[kryt1\]): $\forall $ $w\in \mathcal{K}$$$\begin{aligned} \left( w,\underset{i,j}{\sum }b_{i}^{\ast }\phi ^{\ast }\left( a_{i}^{\ast }a_{j}\right) b_{j}w\right) &=&\underset{i,j}{\sum }\left( x,a_{i}^{\ast }a_{j}x\right) \left( w,b_{i}^{\ast }y\right) \left( b_{j}^{\ast }y,w\right) \\ &=&\left( \underset{i}{\sum }\overline{\lambda _{i}}a_{i}x,\underset{j}{\sum }\overline{\lambda _{j}}a_{j}x\right) \geq 0,\end{aligned}$$where $\lambda _{i}=\left( w,b_{i}^{\ast }y\right) .$ Also$$\begin{aligned} \left( w,\underset{i,j}{\sum }b_{i}^{\ast }\phi ^{\ast }\left( a_{j}^{\ast }a_{i}\right) b_{j}w\right) &=&\underset{i,j}{\sum }\left( x,a_{j}^{\ast }a_{i}x\right) \left( w,b_{i}^{\ast }y\right) \left( b_{j}^{\ast }y,w\right) \\ &=&\left( \underset{j}{\sum }\lambda _{j}a_{j}x,\underset{i}{\sum }\lambda _{i}a_{i}x\right) \geq 0.\end{aligned}$$So $\phi ^{\ast }$ is both CP and co-CP. This was expected because any pure separable state is a PPT state. **Example 2**: **Separable states**: Let $\omega =\underset{i}{\sum }\lambda _{i}\omega _{x_{i}\otimes y_{i}}$. One has$$\begin{aligned} \omega \left( a\otimes b\right) &=&\underset{i}{\sum }\lambda _{i}\omega _{x_{i}\otimes y_{i}}\left( a\otimes b\right) \\ &=&\underset{i}{\sum }\lambda _{i}Tr_{\mathcal{K}}b\phi _{i}^{\ast }\left( a\right) \\ &=&Tr_{\mathcal{K}}b\underset{i}{\sum }\lambda _{i}\phi _{i}^{\ast }\left( a\right) \\ &=&Tr_{\mathcal{K}}b\phi ^{\ast }\left( a\right)\end{aligned}$$where $\phi ^{\ast }=\underset{i}{\sum }\lambda _{i}\phi _{i}^{\ast }$. But $\phi _{i}^{\ast }$ was described in Example 1 and is both CP and co-CP so $\phi ^{\ast }$ also has this property. Clearly, the conclusion given at the end of Example 1 is also valid here. **Example 3**: **A pure state**. Let $\omega $ be a pure state on $\mathcal{B}\left( \mathcal{H}\otimes \mathcal{K}\right) $. 
As any pure state on a type I factor is a vector state, there exists $x\in \mathcal{H}\otimes \mathcal{K}$ such that $$\omega \left( a\otimes b\right) =\left( x,\left( a\otimes b\right) x\right) .$$Let $z\in \mathcal{H}\otimes \mathcal{K}$, $h$, $\zeta \in \mathcal{H}$, $y\in \mathcal{K}$ and $\left\{ v_{i}\right\} $ be a CONS in $\mathcal{H}$, $\left\{ w_{j}\right\} $ be a CONS in $\mathcal{K}$. Then $$H\zeta =\left( J_{\mathcal{H}\otimes \mathcal{K}}\otimes T_{J_{\mathcal{H}}\zeta }^{\ast }\right) \left( x\otimes x\right) =J_{\mathcal{H}\otimes \mathcal{K}}x\otimes T_{J_{\mathcal{H}}\zeta }^{\ast }x.$$Also$$\left( h,H^{\ast }z\otimes y\right) =\left( Hh,z\otimes y\right) =\left( J_{\mathcal{H}\otimes \mathcal{K}}x,z\right) \left( T_{J_{\mathcal{H}}h}^{\ast }x,y\right)$$where $$\begin{aligned} T_{J_{\mathcal{H}}h}^{\ast }x &=&\underset{i,j}{\sum }T_{J_{\mathcal{H}}h}^{\ast }\left( v_{i}\otimes w_{j},x\right) v_{i}\otimes w_{j} \\ &=&\underset{i,j}{\sum }\left( v_{i}\otimes w_{j},x\right) \left( J_{\mathcal{H}}h,v_{i}\right) w_{j}.\end{aligned}$$Thus$$\begin{aligned} \label{5} \left( h,H^{\ast }z\otimes y\right) &=&\underset{i,j}{\sum }\left( J_{\mathcal{H}\otimes \mathcal{K}}x,z\right) \left( x,v_{i}\otimes w_{j}\right) \left( v_{i},J_{\mathcal{H}}h\right) \left( w_{j},y\right) \notag \\ &=&\underset{i,j}{\sum }\left( J_{\mathcal{H}\otimes \mathcal{K}}x,z\right) \left( x,v_{i}\otimes w_{j}\right) \left( v_{i}\otimes w_{j},J_{\mathcal{H}}h\otimes y\right) \notag \\ &=&\left( J_{\mathcal{H}\otimes \mathcal{K}}x,z\right) \overline{\left( J_{\mathcal{H}}h\otimes y,x\right) }.\end{aligned}$$Let $\left\{ e_{i}\right\} $ be a CONS in $\mathcal{H}\otimes \mathcal{K}$; then, for $w$, $u\in \mathcal{K}$$$\begin{aligned} \left( w,\phi ^{\ast }\left( a\right) u\right) &=&\left( w,\left( Tr_{\mathcal{H\otimes K}}Ha^{t}H^{\ast }\right) u\right) \\ &=&\underset{n}{\sum }\left( e_{n}\otimes w,Ha^{t}H^{\ast }e_{n}\otimes u\right) \\ &=&\underset{n}{\sum }\left( H^{\ast }e_{n}\otimes
w,a^{t}H^{\ast }e_{n}\otimes u\right).\end{aligned}$$Let us use (\[5\]), i.e. put $h=a^{t}H^{\ast }e_{n}\otimes u$, $z=e_{n}$, $y=w$. Then one gets$$\begin{aligned} &&\left( w,\phi ^{\ast }\left( a\right) u\right) \\ &=&\underset{n}{\sum }\left( e_{n},J_{\mathcal{H}\otimes \mathcal{K}}x\right) \left( J_{\mathcal{H}}\left( a^{t}H^{\ast }e_{n}\otimes u\right) \otimes w,\underset{i,j}{\sum }\left( v_{i}\otimes w_{j},x\right) v_{i}\otimes w_{j}\right) \\ &=&\underset{n,i,j}{\sum }\left( e_{n},J_{\mathcal{H}\otimes \mathcal{K}}x\right) \left( v_{i}\otimes w_{j},x\right) \left( J_{\mathcal{H}}\left( a^{t}H^{\ast }e_{n}\otimes u\right) ,v_{i}\right) \left( w,w_{j}\right) \\ &=&\underset{n,i,j}{\sum }\left( e_{n},J_{\mathcal{H}\otimes \mathcal{K}}x\right) \overline{\left( x,v_{i}\otimes w_{j}\right) }\overline{\left( w_{j},w\right) }\overline{\left( \left( a^{t}H^{\ast }e_{n}\otimes u\right) ,J_{\mathcal{H}}v_{i}\right) } \\ &=&\underset{n,i}{\sum }\left( e_{n},J_{\mathcal{H}\otimes \mathcal{K}}x\right) \left( v_{i}\otimes w,x\right) \left( \left( a^{t}\right) ^{\ast }J_{\mathcal{H}}v_{i},H^{\ast }e_{n}\otimes u\right)\end{aligned}$$Again, using (\[5\]), i.e. 
putting $h=\left( a^{t}\right) ^{\ast }J_{\mathcal{H}}v_{i}$, $z=e_{n}$, $y=u$, one gets $$\begin{aligned} &&\left( w,\phi ^{\ast }\left( a\right) u\right) \\ &=&\underset{n,i}{\sum }\left( e_{n},J_{\mathcal{H}\otimes \mathcal{K}}x\right) \left( v_{i}\otimes w,x\right) \left( J_{\mathcal{H}\otimes \mathcal{K}}x,e_{n}\right) \overline{\left( J_{\mathcal{H}}\left( a^{t}\right) ^{\ast }J_{\mathcal{H}}v_{i}\otimes u,x\right) } \\ &=&\underset{n,i}{\sum }\left( J_{\mathcal{H}\otimes \mathcal{K}}x,e_{n}\right) \left( e_{n},J_{\mathcal{H}\otimes \mathcal{K}}x\right) \left( v_{i}\otimes w,x\right) \left( x,av_{i}\otimes u\right) \text{ \ }(\text{because of }\left( a^{t}\right) ^{t}=a) \\ &=&\underset{i}{\sum }\left( x,av_{i}\otimes u\right) \left( v_{i}\otimes w,x\right) \text{\ \ \ \ \ \ (as }\omega _{x}\text{ is a state, }\left\Vert x\right\Vert =1) \\ &=&\left( x,\left( a\otimes 1\right) \underset{i}{\sum }\left\vert v_{i}\right\rangle \left\langle v_{i}\right\vert \otimes \left\vert u\right\rangle \left\langle w\right\vert x\right) \\ &=&\left( x,\left( a\otimes \left\vert u\right\rangle \left\langle w\right\vert \right) x\right).\end{aligned}$$ Consequently $$Tr_{\mathcal{K}}\phi ^{\ast }\left( a\right) \left\vert u\right\rangle \left\langle w\right\vert =Tr_{\mathcal{H}\otimes \mathcal{K}}\left( a\otimes \left\vert u\right\rangle \left\langle w\right\vert \right) P_{x}.$$This means that: $$\label{stanczysty} \phi ^{\ast }\left( a\right) =Tr_{\mathcal{H}}\left( a\otimes 1\right) P_{x}.$$ Turning to the analysis of CP and co-CP we begin with co-CP property. 
To this end let $\left\{ v_{k}\right\} $ be CONS in $ \mathcal{H}$ then $$\begin{aligned} \left( w,\underset{i,j}{\sum }b_{i}^{\ast }\phi ^{\ast }\left( a_{j}^{\ast }a_{i}\right) b_{j}w\right) &=&\underset{i,j}{\sum }\left( x,\left( a_{j}^{\ast }a_{i}\otimes \left\vert b_{j}w\right\rangle \left\langle b_{i}w\right\vert \right) x\right) \\ &=&\underset{i,j}{\sum }\left( x,\left( a_{j}^{\ast }\left( \underset{k}{\sum }\left\vert v_{k}\right\rangle \left\langle v_{k}\right\vert \right) a_{i}\otimes \left\vert b_{j}w\right\rangle \left\langle b_{i}w\right\vert \right) x\right) \\ &=&\underset{i,j,k}{\sum }\left( x,a_{j}^{\ast }v_{k}\otimes b_{j}w\right) \left( a_{i}^{\ast }v_{k}\otimes b_{i}w,x\right) \\ &=&\underset{k}{\sum }\left( x,\underset{j}{\sum }a_{j}^{\ast }v_{k}\otimes b_{j}w\right) \left( \underset{i}{\sum }a_{i}^{\ast }v_{k}\otimes b_{i}w,x\right) \geq 0.\end{aligned}$$Thus $\phi ^{\ast }$ is a co-CP map. Now, consider CP condition: Let $\left\{ v_{k}\right\} $ be CONS in $\mathcal{H}$ and $\left\{ z_{l}\right\} $ be CONS in $\mathcal{K}$. Assume that $x$ is given by $$x=\underset{k}{\sum }\lambda _{k}v_{k}\otimes z_{k}\text{\ \ }\left( \lambda _{k}\in \mathbb{C},\underset{k}{\sum }\left\vert \lambda _{k}\right\vert ^{2}=1\right)$$where at least two elements of $\left\{ \lambda _{k}\right\} $ are non-zero. In order to show the non-CP of $\phi ^{\ast }$ some preliminaries are necessary. We recall that $M_n({{\mathcal A}})$ denotes the [[[$\hbox{\bf C}^*$]{}]{}]{}-algebra of $ n \times n$ matrices with entries in ${{\mathcal A}}$. Let $\{ e_{ij}\}$ be the canonical basis for $M_n({{\mathbb{C}}})\equiv M_n $, i.e. the $ n \times n$ matrices with a $``1''$ in row $i$, column $j$, and zeros elsewhere. It is well known that every element $y$ in ${{\mathcal A}}\odot M_n$ can be written $$\label{22} y = \sum a_{ij} \otimes e_{ij}$$ where the $a_{ij}$’s (being in ${{\mathcal A}}$) are unique. 
The map $$\label{23} \Theta: {{\mathcal A}}\odot M_n \to M_n({{\mathcal A}}): \sum a_{ij} \otimes e_{ij} \mapsto \{a_{ij} \}$$ is linear, multiplicative, $^*$-preserving, and bijective. Therefore, it should be clear that the complete positivity of $\phi ^{\ast }$ is equivalent to the positivity of operator $\sum_{i,j=1}^n e_{ij} \otimes \phi ^{\ast }(a^*_i a_j)$, for any $n$. Let $\left\{ e_{i}\right\} $ be CONS in $\mathbb{C}^{n}$. One has$$\begin{aligned} \phi ^{\ast }\left( a_{i}^{\ast }a_{j}\right) &=&Tr_{\mathcal{H}}\left( a_{i}^{\ast }a_{j}\otimes 1\right) P_{x} \\ &=&\underset{k.l}{\sum }Tr_{\mathcal{H}}\left( a_{i}^{\ast }a_{j}\otimes 1\right) \left\vert \lambda _{k}v_{k}\otimes z_{k}\right\rangle \left\langle \lambda _{l}v_{l}\otimes z_{l}\right\vert \\ &=&\underset{k.l}{\sum }\lambda _{k}\overline{\lambda _{l}}Tr_{\mathcal{H}}\left( a_{i}^{\ast }a_{j}\left\vert v_{k}\right\rangle \left\langle v_{l}\right\vert \right) \left\vert z_{k}\right\rangle \left\langle z_{l}\right\vert \\ &=&\underset{k.l}{\sum }\lambda _{k}\overline{\lambda _{l}}\left( v_{l},a_{i}^{\ast }a_{j}v_{k}\right) \left\vert z_{k}\right\rangle \left\langle z_{l}\right\vert\end{aligned}$$Thus $$\underset{i,j=1}{\overset{n}{\sum }}\left\vert e_{i}\right\rangle \left\langle e_{j}\right\vert \otimes \phi ^{\ast }\left( a_{i}^{\ast }a_{j}\right) = \underset{i,j=1}{\overset{n}{\sum }}\underset{k.l}{\sum }\lambda _{k}\overline{\lambda _{l}}\left( v_{l},a_{i}^{\ast }a_{j}v_{k}\right) \left\vert e_{i}\right\rangle \left\langle e_{j}\right\vert \otimes \left\vert z_{k}\right\rangle \left\langle z_{l}\right\vert .$$Put $a_{i}=\left\vert y\right\rangle $ $\left\langle v_{i}\right\vert $  ( $y\in \mathcal{H}$, $\left\Vert y\right\Vert =1$) then$$\{ \phi ^{\ast }\left( \left\vert v_{i}\right\rangle \left\langle v_{j}\right\vert \right) \} \cong \underset{i,j=1}{\overset{n}{\sum }}\lambda _{j}\overline{\lambda _{i}}\left\vert e_{i}\right\rangle \left\langle e_{j}\right\vert \otimes \left\vert 
z_{j}\right\rangle \left\langle z_{i}\right\vert$$The positivity of $\{ \phi ^{\ast }\left( \left\vert v_{i}\right\rangle \left\langle v_{j}\right\vert \right) \}$ means the positivity of $\left( \Psi ,(\sum \phi ^{\ast }\left( \left\vert v_{i}\right\rangle \left\langle v_{j}\right\vert \right) \otimes e_{ij})\Psi \right) $ for any $\Psi \in \mathbb{C}^{n}\otimes \mathcal{K}$. Let us take $\Psi _{\pm }$ in the form $$\Psi _{\pm }=e_{k}\otimes z_{l}\pm e_{l}\otimes z_{k}.$$and assume that $k\neq l$. Then$$\left( \Psi _{\pm },(\sum \phi ^{\ast }\left( \left\vert v_{i}\right\rangle \left\langle v_{j}\right\vert \right) \otimes e_{ij})\Psi _{\pm }\right) =\pm 2{Re}\lambda _{k}\overline{\lambda _{l}}\in \mathbb{R}\text{.}$$If $2Re\lambda _{k}\overline{\lambda _{l}}$ is positive then $$\left( \Psi _{-},(\sum \phi ^{\ast }\left( \left\vert v_{i}\right\rangle \left\langle v_{j}\right\vert \right) \otimes e_{ij})\Psi _{-}\right) =-2{Re}\lambda _{k}\overline{\lambda _{l}}<0.$$Also if $2{Re}\lambda _{k}\overline{\lambda _{l}}$ is negative then$$\left( \Psi _{+},(\sum \phi ^{\ast }\left( \left\vert v_{i}\right\rangle \left\langle v_{j}\right\vert \right) \otimes e_{ij})\Psi _{+}\right) =2{Re}\lambda _{k}\overline{\lambda _{l}}<0.$$This means that $\phi ^{\ast }$ is non-CP. The above example can be readily generalized (cf Example 2). Namely, a normal state $\omega$ on ${{\mathcal B}}({{\mathcal H}}\otimes {{\mathcal K}})$ can be written as $$\omega(a \otimes b) = Tr_{{{\mathcal H}}\otimes {{\mathcal K}}} \varrho \ a\otimes b,$$ where $\varrho$ is the corresponding density matrix. 
But, the spectral representation of $\varrho$ implies $$\omega(a \otimes b) = Tr_{{{\mathcal H}}\otimes {{\mathcal K}}} ( \sum_i\lambda_i P_{x_i} a \otimes b) = \sum_i \lambda_i \omega_{x_i}(a \otimes b).$$ Consequently, the entanglement mapping $\phi^{\ast}_{\omega}$ associated with the state $\omega$ has the form $$\phi^{\ast}_{\omega}(a) = \sum_i \lambda_i \phi^{\ast}_{\omega_{x_i}}(a) = \sum_i \lambda_i Tr_{{{\mathcal H}}}(a \otimes I) P_{x_i} = Tr_{{{\mathcal H}}} (a \otimes I) \varrho,$$ where the second equality follows from (\[stanczysty\]). Obviously, the analysis of CP and co-CP for $\phi^{\ast}_{\omega}$ is, in general, much more complicated. Finally, to get a better understanding of the difference between CP and co-CP given in Example 3, let us consider a very particular case of this example. **Example 4.** In Example 3, let ${{\mathcal H}}$ and ${{\mathcal K}}$ be three dimensional Hilbert spaces. Further, put in the place of $x$ vectors giving maximally entangled pure states, i.e. $$x_1 = {\frac{1}{\sqrt{3}}} (e_1\otimes f_2 - e_2\otimes f_3 - e_3\otimes f_1)$$ and $$x_2 = {\frac{1}{\sqrt{3}}} (e_1\otimes f_1 + e_2\otimes f_2 + e_3\otimes f_3)$$ where $\{e_i\}_1^3$ ($\{f_i \}_1^3$) is a CONS in ${{\mathcal H}}$ (in ${{\mathcal K}}$ respectively). Easy calculations, which are left to the reader, lead to the following maps: $$\label{dodat} [a_{ij}]_{i,j=1}^3 \mapsto {\frac{1}{3}} \left( \begin{array}{ccc} a_{33} & - a_{13} & a_{23}\\ -a_{31} & a_{11} & - a_{21} \\ a_{32} & - a_{12} & a_{22}\\ \end{array} \right)$$ for $x_1$, and $$[a_{ij}]_{i,j=1}^3 \mapsto {\frac{1}{3}} ([a_{ij}]_{i,j=1}^3)^t$$ for $x_2$. Here, $([a_{ij}]_{i,j=1}^3)^t$ stands for the transposed matrix. Clearly, transposition is not even 2-positive, so not CP. Now, the non-complete positivity observed in Example 3 should be well understood. The maps (\[dodat\]) will be useful in the last Section.
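As a quick numerical illustration (an added sketch, not part of the original argument), one can confirm the non-complete positivity of the transposition via the Choi matrix criterion implicit in (\[23\]): the Choi matrix of the transposition on $M_3({{\mathbb{C}}})$ is the swap operator, which has negative eigenvalues.

```python
import numpy as np

d = 3
# Choi matrix of the transposition map t on M_d(C):
# C = sum_{i,j} E_ij (x) t(E_ij) = sum_{i,j} E_ij (x) E_ji, i.e. the swap operator.
C = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        E_ij = np.zeros((d, d))
        E_ij[i, j] = 1.0
        C += np.kron(E_ij, E_ij.T)

eigenvalues = np.linalg.eigvalsh(C)
# Swap has eigenvalue +1 on symmetric and -1 on antisymmetric vectors,
# so the Choi matrix is not positive semi-definite and transposition is not CP.
print(eigenvalues.min(), eigenvalues.max())  # -1.0 and +1.0
```

The $d(d-1)/2$ eigenvalues $-1$ live on the antisymmetric vectors; the factor $1/3$ in the map for $x_2$ rescales but does not change the signs.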
Tomita’s scheme for partial transposition (see [@II], [@Majppt]) ================================================================ Let $\mathcal{H}$ be a (separable) Hilbert space. Using an invertible density matrix $\rho $ we can define a faithful state $\omega $ on $\mathcal{B}\left( \mathcal{H}\right) $ as $\omega \left( a\right) = Tr\rho a$ for $a\in \mathcal{B}\left( \mathcal{H}\right) $. Let us consider the GNS triple $\left( \mathcal{H}_{\pi },\pi ,\Omega \right) $ associated with $\left( \mathcal{B}\left( \mathcal{H}\right) \text{, }\omega \right) $. This triple is given by: - GNS Hilbert space: $\mathcal{H}_{\pi }=\overline{\left\{ a\Omega \text{ };a\in \mathcal{B}\left( \mathcal{H}\right) \right\} }^{\left( \cdot ,\cdot \right) }$ with $\left( a,b\right) =Tra^{\ast }b$ for $a,b\in \mathcal{B}\left( \mathcal{H}\right) .$ - cyclic vector: $\Omega =\rho ^{1/2}$. - representation: $\pi \left( a\right) \Omega =a\Omega .$ In the considered GNS representation, the modular conjugation $J_{m}$ is just the hermitian involution $J_{m}a\rho ^{1/2}=\rho ^{1/2}a^{\ast }$, and the modular operator $\Delta $ is equal to the map $\rho \cdot \rho ^{-1}$. However, some remarks are necessary here. Since ${{\mathcal H}}$ is assumed only to be a separable (possibly infinite dimensional) Hilbert space, $\rho^{-1}$ is, in general, an unbounded operator. Hence, the domain of $\Delta$ should be described. To this end we note that: i) $\{A\rho ^{1/2}; A \in {{\mathcal B}}({{\mathcal H}}) \}$ is a dense subset of the set of all Hilbert-Schmidt operators ${{\mathcal F}}_{HS}({{\mathcal H}})$ on the Hilbert space ${{\mathcal H}}$, ii) $\alpha_t(\sigma) = \rho^{it} \sigma \rho^{-it}$ is a one-parameter group of automorphisms on ${{\mathcal F}}_{HS}({{\mathcal H}})$. Hence, there exists (cf [@BR]) the set of entire analytic elements ${{\mathcal F}}_{HS}^0({{\mathcal H}})$ of $\alpha_t(\cdot)$.
Thus $\Delta \sigma = \alpha_t(\sigma)|_{t = - i} = \rho \sigma \rho^{-1}$ is well defined for $\sigma \in {{\mathcal F}}_{HS}^0({{\mathcal H}})$. In particular, the polar decomposition of Tomita’s operator (cf. [@Takesaki]) is also well defined: $$SA\Omega = A^* \Omega = J_m \Delta^{1/2} A \Omega .$$ Note that $ \{ A \Omega; A \in {{\mathcal B}}({{\mathcal H}}) \} \subseteq D(\Delta^{1/2})$, where $D(\cdot)$ stands for the domain. In order to discuss the transposition on $\pi \left( \mathcal{B}\left( \mathcal{H}\right) \right) $ we introduce the following two conjugations: $J_{c}$ on $\mathcal{H}$ and $J$ on $\mathcal{H}_{\pi }.$ Thanks to the faithfulness of $\omega $ the eigenvectors $\left\{ e_{i}\right\} $ of $\rho $ form an orthonormal basis in $\mathcal{H}$. Hence we can define$$J_{c}x=\sum_{i}\overline{\left\langle e_{i},x\right\rangle }e_{i}$$for every $x\in \mathcal{H}$. Due to the fact that $\left\{ E_{ij}=\left\vert e_{i}\right\rangle \left\langle e_{j}\right\vert \right\} $ form an orthonormal basis in $\mathcal{H}_{\pi }$ we can also define a conjugation $J$ on $\mathcal{H}_{\pi }$$$Ja\Omega =\sum_{i,j}\overline{\left( E_{ij},a\Omega \right) }E_{ij}$$with $J\Omega =\Omega $. Following the construction presented in [@II] and [@Majppt] let us define a transposition on $\mathcal{B}\left( \mathcal{H}\right) $ as the map $a\in \mathcal{B}\left( \mathcal{H}\right) \mapsto a^{t}\equiv J_{c}a^{\ast }J_{c}$. By $\tau _{0}$ we will denote the map induced on $\mathcal{H}_{\pi }$ by the transposition, i.e.$$\tau _{0}a\Omega =a^{t}\Omega .$$Here are the main properties of $\tau _{0}$ (cf [@II]): \[28\] (1) Let $a\in \mathcal{B}\left( \mathcal{H}\right) $ and $\xi \in \mathcal{H}_{\pi }$.
Then$$a^{t}\xi =Ja^{\ast }J\xi .$$(2) The map $\tau _{0}$ has a polar decomposition, i.e.$$\tau _{0}=U\Delta ^{1/2}$$where $U$ is a unitary operator on $\mathcal{H}_{\pi }$ defined by $U=\sum_{ij}\left\vert E_{ij}\right) \left( E_{ji}\right\vert $; the sum defining the operator $U$ is understood in the weak operator topology. In the above setting we can introduce the natural cone $\mathcal{P}$ (cf [@Araki], [@Co]) associated with $\left( \pi \left( \mathcal{B}\left( \mathcal{H}\right) \right) ,\Omega \right) $:$$\mathcal{P}=\overline{\left\{ \Delta ^{1/4}a\Omega :a\geq 0,a\in \pi \left( \mathcal{B}\left( \mathcal{H}\right) \right) \right\} }^{\left( \cdot ,\cdot \right) }.$$The relationship between the Tomita-Takesaki scheme and transposition has the following form: \[3.7\] (see [@II]) Let $\xi \mapsto \omega _{\xi }$ be the homeomorphism between the natural cone $\mathcal{P}$ and the set of normal states on $\pi \left( \mathcal{B}\left( \mathcal{H}\right) \right) $, such that$$\omega _{\xi }\left( a\right) =\left( \xi ,a\xi \right) ,\text{ }a\in \mathcal{B}\left( \mathcal{H}\right) .$$For every state $\omega $ define $\omega ^{\tau }\left( a\right) =\omega \left( a^{t}\right) $. If $\xi \in \mathcal{P}$ then the unique vector in $\mathcal{P}$ mapped into the state $\omega _{\xi }^{\tau }$ by the homeomorphism described above is equal to $U\xi $, i.e.$$\omega _{\xi }^{\tau }\left( a\right) =\left( U\xi ,aU\xi \right) ,\text{ }a\in \mathcal{B}\left( \mathcal{H}\right) .$$ PPT states, a Hilbert space approach ==================================== Let $\mathcal{H}_{A}$ and $\mathcal{H}_{B}$ be finite dimensional Hilbert spaces. We want to emphasize that finite dimensionality of the Hilbert spaces is assumed only in the proof of Theorem \[TheoremPPT\]. More precisely, due to technical questions concerning the domain of the modular operator $\Delta$ we were able to prove this Theorem only in the finite dimensional case (see [@II]).
On the other hand, we emphasize that the description of a compound system based on Tomita’s approach is very general. It relies on the construction of the tensor product of standard forms of von Neumann algebras, and this construction can be carried out in a very general way (so the infinite dimensional case is included, cf [@Cu]). Again let us consider a composite system $A+B$. Suppose that the subsystem $A$ is described by $\mathcal{A=}$ $\mathcal{B}\left( \mathcal{H}_{A}\right) $ and is equipped with a faithful state $\omega _{A}$ given by an invertible density matrix $\rho _{A}$ as $\omega _{A}\left( a\right) \equiv Tr\rho _{A}a$. Similarly, let $\mathcal{B=}$ $\mathcal{B}\left( \mathcal{H}_{B}\right) $ define the subsystem $B$, $\rho _{B}$ be an invertible density matrix in $\mathcal{B}\left( \mathcal{H}_{B}\right) $ and $\omega _{B}$ be a state on $\mathcal{B}$ such that $\omega _{B}\left( b\right) \equiv Tr\rho _{B}b$ for $b\in \mathcal{B}$. By $\left( \mathcal{K},\pi ,\Omega \right) $, $\left( \mathcal{K}_{A},\pi _{A},\Omega _{A}\right) $ and $\left( \mathcal{K}_{B},\pi _{B},\Omega _{B}\right) $ we denote the GNS representations of $\left( \mathcal{A}\otimes \mathcal{B}\text{, }\omega _{A}\otimes \omega _{B}\right) $, $\left( \mathcal{A}\text{, }\omega _{A}\right) $ and $\left( \mathcal{B}\text{, }\omega _{B}\right) $ respectively.
Then the triple $\left( \mathcal{K},\pi ,\Omega \right) $ can be given by the following identifications (cf [@Cu], [@MajOSID]):$$\mathcal{K}=\mathcal{K}_{A}\otimes \mathcal{K}_{B}\text{, }\pi =\pi _{A}\otimes \pi _{B}\text{, }\Omega =\Omega _{A}\otimes \Omega _{B}.$$With these identifications we have$$J_{m}=J_{A}\otimes J_{B}\text{, }\Delta =\Delta _{A}\otimes \Delta _{B}$$where $J_{m}$, $J_{A}$, $J_{B}$ are modular conjugations and $\Delta $, $\Delta _{A}$, $\Delta _{B}$ are modular operators for $\left( \pi \left( \mathcal{A}\otimes \mathcal{B}\right) ^{\prime \prime }\text{, }\Omega \right) ,\left( \pi _{A}\left( \mathcal{A}\right) ^{\prime \prime }\text{, }\Omega _{A}\right) $, $\left( \pi _{B}\left( \mathcal{B}\right) ^{\prime \prime }\text{, }\Omega _{B}\right) $ respectively. Due to the finite dimensionality of the corresponding Hilbert spaces, just to simplify our notation, we will identify $\pi _{A}\left( \mathcal{A}\right) ^{\prime \prime }$ and $\pi _{A}\left( \mathcal{A}\right) $, etc. Moreover we will also write $a\Omega _{A}$ and $b\Omega _{B}$ instead of $\pi _{A}\left( a\right) \Omega _{A}$ and $\pi _{B}\left( b\right) \Omega _{B}$ for $a\in \mathcal{A}$, $b\in \mathcal{B}$ when no confusion can arise. Furthermore we denote the finite dimension of $\mathcal{H}_{B}$ by $n$. Thus $\mathcal{B}\left( \mathcal{H}_{B}\right) \equiv \mathcal{B}\left( \mathbb{C}^{n}\right) \equiv M_{n}\left( \mathbb{C}\right) $. To put some emphasis on the dimensionality of the “reference” subsystem $B$, we denote by $\mathcal{P}_{n}$ the natural cone for $\left( M_{n}^{\pi }\left( \mathcal{A}\right) ,\omega _{A}\otimes \omega _{0}\right) $, where $\pi \left( \mathcal{A}\otimes M_{n}\left( \mathbb{C}\right) \right) $ is denoted by $M_{n}^{\pi }\left( \mathcal{A}\right) $ and $\omega _{0}$ is a faithful state on $M_{n}\left( \mathbb{C}\right) $. 
In order to characterize the set of PPT states we need the notion of the “transposed cone” $\mathcal{P}_{n}^{\tau }=\left( I\otimes U\right) \mathcal{P}_{n}$, where $\tau $ is the transposition on $M_{n}\left( \mathbb{C}\right) $ and $U$ is the unitary operator given in Proposition \[28\], defined with the eigenvectors of the density matrix $\rho _{0}$ corresponding to $\omega _{0}$. Then the construction of $\mathcal{P}_{n}$ and $\mathcal{P}_{n}^{\tau }$ may be realized as follows:$$\mathcal{P}_{n}=\overline{\left\{ \Delta ^{1/4}\left[ a_{ij}\right] \Omega :\left[ a_{ij}\right] \in M_{n}^{\pi }\left( \mathcal{A}\right) ^{+}\right\} },$$$$\mathcal{P}_{n}^{\tau }=\overline{\left\{ \Delta ^{1/4}\left[ a_{ji}\right] \Omega :\left[ a_{ij}\right] \in M_{n}^{\pi }\left( \mathcal{A}\right) ^{+}\right\} }.$$ Consequently, we arrive at \[TheoremPPT\] (see [@II]) In the finite dimensional case $$\mathcal{P}_{n}^{\tau }\cap \mathcal{P}_{n}=\left\{ \Delta ^{1/4}\left[ a_{ij}\right] \Omega :\left[ a_{ij}\right] \geq 0,\left[ a_{ji}\right] \geq 0\right\} .$$ \[14\] (1) There is a one-to-one correspondence between the set of PPT states and $\mathcal{P}_{n}^{\tau }\cap \mathcal{P}_{n}.$ \(2) There is a one-to-one correspondence between the set of separable states and $\mathcal{P}_{A}\otimes \mathcal{P}_{B}$ (cf [@MajOSID]). The correspondence given in Corollary \[14\].2 holds in the general case. Thus, the above characterization is applicable to a true quantum system. We wish to close this Section with the following remark. Here, in the Hilbert space approach, we again met many “cones”: ${\mathcal P}_n$, ${\mathcal P}_n^{\tau}$, ${\mathcal P}_{{{\mathcal A}}} \otimes {\mathcal P}_{M_n}$. All these cones, as we have seen, play a crucial role in the description of important classes of states: all states, PPT states, and separable states respectively. This should be considered as another manifestation of the “mysterious behavior” of tensor products (see Section 1).
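The content of Theorem \[TheoremPPT\] can be spot-checked on density matrices with a short script (an added illustration; the choice of states below is arbitrary): positivity under partial transposition holds for a product state but fails for the maximally entangled vector $x_2$ of Example 4.

```python
import numpy as np

d = 3

def partial_transpose(rho, d):
    """Transpose the second tensor factor of a (d*d) x (d*d) density matrix."""
    r = rho.reshape(d, d, d, d)              # rho[(i,k),(j,l)] -> r[i,k,j,l]
    return r.transpose(0, 3, 2, 1).reshape(d * d, d * d)

# maximally entangled vector x2 = (1/sqrt(3)) sum_i e_i (x) f_i
x2 = np.zeros(d * d)
for i in range(d):
    x2[i * d + i] = 1.0
x2 /= np.sqrt(d)
rho_ent = np.outer(x2, x2)

# a product state rho_A (x) rho_B (separable, hence PPT)
rho_A = np.diag([0.5, 0.3, 0.2])
rho_B = np.diag([0.7, 0.2, 0.1])
rho_sep = np.kron(rho_A, rho_B)

min_ent = np.linalg.eigvalsh(partial_transpose(rho_ent, d)).min()
min_sep = np.linalg.eigvalsh(partial_transpose(rho_sep, d)).min()
print(min_ent, min_sep)   # the entangled state acquires a negative eigenvalue
```

For $|x_2\rangle\langle x_2|$ the partial transpose is the swap operator divided by $d$, so the minimal eigenvalue is $-1/3$.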
Equivalence between two types of characterization of PPT states =============================================================== In this Section, we wish to discuss the relation between the Hilbert space description of PPT states and the B-O characterization. Firstly, we note that Tomita’s approach leads to the following representation of the compound state $\omega $: $$\omega \left( \sum_{i}a_{i}\otimes b_{i}\right) = \sum_i (\xi, a_i \otimes b_i \xi) =\sum_{i}\varphi _{\xi ,a_{i}}\left( b_{i}\right)$$where $\varphi _{\xi ,a_{i}}\left( b_{i}\right) \equiv \left( \xi ,\left( a_{i}\otimes b_{i}\right) \xi \right)$ and $\xi \in {{\mathcal P}}_n.$ We have used here the well known result from Tomita-Takesaki theory saying that for any normal state $\omega$ on a von Neumann algebra with cyclic and separating vector $\Omega$ there is a **unique** vector $\xi$ in the natural cone ${{\mathcal P}}_n$ such that $\omega (a) = (\xi, a \xi)$. Let us observe that for $a \in {{\mathcal A}}$, $a \geq0$, and any $b \in {{\mathcal B}}$ $$\begin{aligned} \omega(a \otimes b^t) & =& (\xi, a \otimes b^t \xi) \\ &=& Tr_{{{\mathcal K}}} |\xi><\xi| a^{\frac{1}{2}} \otimes 1 \cdot a^{\frac{1}{2}} \otimes 1 \cdot 1 \otimes b^t\\ &=& Tr_{{{\mathcal K}}_A} Tr_{{{\mathcal K}}_B} a^{\frac{1}{2}} \otimes 1 \cdot|\xi><\xi| \cdot a^{\frac{1}{2}} \otimes 1 \cdot 1 \otimes b^t\\ &=& Tr_{{{\mathcal K}}_B} \Bigl(Tr_{{{\mathcal K}}_A} a^{\frac{1}{2}} \otimes 1 \cdot|\xi><\xi| \cdot a^{\frac{1}{2}} \otimes 1\Bigr) \cdot b^t\\ &=& Tr_{{{\mathcal K}}_B} ({\rho}_{A,\xi} b^t) = (\chi_{\rho, \xi}, b^t \chi_{\rho, \xi})\\ &=& (U\chi_{\rho, \xi}, b U\chi_{\rho, \xi}) = (\chi_{\rho, \xi}, UbU\chi_{\rho, \xi})\\ &=& Tr_{{{\mathcal K}}_B} \rho_{A,\xi} UbU = \omega(a \otimes UbU)\end{aligned}$$ where $\chi_{\rho, \xi}$ is Tomita’s representation of $Tr_{{{\mathcal K}}_B}( \rho_{A, \xi} \cdot)$, and where we have used the notation given in Section 5, Proposition \[3.7\], and the fact that the partial trace $Tr_{{{\mathcal K}}_A}(\cdot)$ is a well-defined conditional expectation. As $\omega(a \otimes b)$ is linear in $a$, and any $a$ can be written as a sum of four positive elements (Jordan decomposition), the previous result can be extended to $$\label{U} \omega(a \otimes b^t) = \omega(a \otimes UbU)$$ for any $a \in {{\mathcal A}}$ and $b \in {{\mathcal B}}$. Now we are in a position to compare the strategy given by Lemma \[drugi lemat\] and the B-O approach with the Hilbert space description of PPT states. Firstly, we note (cf Lemma \[drugi lemat\]) that the maps $\varphi _{\xi , \cdot}\left( \cdot \right)$ can be considered as $${{\mathcal B}}({{\mathcal K}}_A) \ni a \mapsto \varphi _{\xi ,a}\left( \cdot \right) \in {{\mathcal B}}({{\mathcal K}}_B)_*$$ Secondly, note that the positivity used in Lemma \[drugi lemat\](2) implies $$\begin{aligned} 0 &\leq &\omega \left( \sum_{i,j}a_{i}^{\ast }a_{j}\otimes b_{i}^{\ast }b_{j}\right) \\ &=&\sum_{i,j}\varphi _{\xi ,a_{i}^{\ast }a_{j}}\left( b_{i}^{\ast }b_{j}\right) = \sum_{i,j}\left( \xi ,\left( a_{i}^{\ast }a_{j}\otimes b_{i}^{\ast }b_{j}\right) \xi \right).
\end{aligned}$$ By using the same vector $\xi \in \mathcal{P}_{n}$ let us define $\omega ^{\tau }\in \left( \mathcal{A}\otimes \mathcal{B}\right)_* $ $$\omega ^{\tau }\left( \sum_{i}a_{i}\otimes b_{i}\right) \equiv \sum_{i}\varphi _{\xi ,a_{i}}^{\tau }\left( b_{i}\right)$$ where $\varphi _{\xi ,a_{i}}^{\tau }\left( b_{i}\right) \equiv \left( \xi ,\left( a_{i}\otimes b_{i}^{t}\right) \xi \right) .$ The positivity used in Lemma \[drugi lemat\](1) implies $$\begin{aligned} \omega ^{\tau }\left( \sum_{i,j}a_{i}^{\ast }a_{j}\otimes b_{i}^{\ast }b_{j}\right) &=&\sum_{i,j}\varphi _{\xi ,a_{i}^{\ast }a_{j}}^{\tau }\left( b_{i}^{\ast }b_{j}\right) \\ &=& \sum_{i,j}\left( \xi ,\left( a_{i}^{\ast }a_{j}\otimes \left( b_{i}^{\ast }b_{j}\right) ^{t}\right) \xi \right) \\ &=& \sum_{i,j}\left( \xi ,\left( a_{i}^{\ast }a_{j}\otimes b_{j}^{t}\left( b_{i}^{\ast }\right) ^{t}\right) \xi \right)\\ &=& \sum_{i,j}\left(I\otimes U \xi ,\left( a_{i}^{\ast }a_{j}\otimes b_{j}\left( b_{i}^{\ast }\right) \right)I\otimes U \xi \right) \geq 0,\\\end{aligned}$$ where in the last equality we have used (\[U\]). Hence, CP and co-CP of the entanglement mapping are equivalent to $\xi \in \mathcal{P}_{n}^{\tau }\cap \mathcal{P}_{n}$. Consequently, we conclude: The description of PPT states by $\mathcal{P}_{n}^{\tau }\cap \mathcal{P}_{n} $ can be recognized as the dual description of PPT states by $\mathcal{E}/\mathcal{E}_{q}.$ On the basis of the above equivalence of the two types of description of PPT states we may discuss the effectiveness of these characterizations from different points of view. This will be the topic of the next Sections. We will start with an analysis of decomposable maps (cf [@Mst]). On decomposable maps ===================== In [@St3] St[ø]{}rmer gave the following characterization of decomposable maps: [([@St3])]{} \[Stormer\] Let $\phi: {{\mathcal A}}\to B({{\mathcal H}})$ be a positive map.
A map $\phi$ is decomposable if and only if, for all $n \in {{\mathbb{N}}}$, whenever $[x_{ij}]$ and $[x_{ji}]$ belong to $M_n({{\mathcal A}})^+$ then $[\phi(x_{ij})]\in M_n(B({{\mathcal H}}))^+$. As our aim is to discuss the effectiveness of the description of PPT states given in Section 5, we again assume finite dimensionality of the Hilbert space ${{\mathcal H}}$. Further, recall (see Criterion \[kryt1\]) that the positivity of the matrix $[\phi(x_{ij})]$ (with operator entries!) is equivalent to $$\label{dwa} \sum_{ij} y^*_i \phi(x_{ij}) y_j \geq 0$$ where $\{ y_i\}$ are arbitrary elements of $B({{\mathcal H}})$. Furthermore, any positive matrix $[x_{ij}]$ can be written as (cf [@Tak]) $$\label{hoho} [x_{ij}] = \sum_k [(v^{(k)}_{i})^* v^{(k)}_{j}]$$ Hence, applying condition (\[dwa\]) to matrices of the form $[a^*_i a_j]$ with the choice of $y_i$ such that all $y_i = 0$ except for $i_0$ and $j_0$, and then changing the numeration in such a way that $y_{i_0} = y_1$ and $y_{j_0} = y_2$, we are led to study the positivity of the following matrix $$\label{jeden} \left( \begin{array}{cc} a^*_1a_1 & a^*_1a_2 \\ a^*_2 a_1 & a^*_2 a_2 \\ \end{array} \right) \geq 0$$ and of its transposition. On the other hand, block matrix techniques lead to necessary and sufficient conditions for the positivity of such matrices. Namely, let $A,B,C$ be $d \times d$ matrices. Then [(see [@XZ])]{} \[Zhan\] \[XZ\] $$\Big[ \begin{array}{cc} A & B \\ B^* & C \\ \end{array} \Big] \geq 0$$ if and only if $A\geq 0$, $C\geq 0$ and there exists a contraction $W$ such that $B = A^{\frac{1}{2}} W C^{\frac{1}{2}}$. Assume, if necessary, that $a_1$ and $a_2$ have inverses; otherwise $a_i^{-1}$ is understood to be the generalized inverse of $a_i$. Then, application of Lemma \[XZ\] to the St[ø]{}rmer condition leads to the following question: When is $|a_1|^{-1} a^*_2 a_1 |a_2|^{-1}$ a contraction? But an operator $T \in B({{\mathcal H}})$ is a contraction if and only if $||T||\leq 1$, which is equivalent to $||Tx||^2 \leq ||x||^2$ for every $x$.
This can be written as $$\label{contraction} (x,T^*Tx) \leq (x,x)$$ which is equivalent to $$\label{contraction2} T^*T\leq \bf 1$$ Consequently, (\[contraction2\]) and Zhan’s lemma \[XZ\] give (see also [@A1] and [@Ch2]) $$a^*_1a_2|a_1|^{-2} a_2^* a_1 \leq |a_2|^2$$ Hence $$\forall_f \quad (f,a^*_1a_2 (a_1^*a_1)^{-1} a^*_2a_1 f) \leq (f, a_2^* a_2 f)$$ So, putting $f = a_1^{-1} g$ one gets $$\forall_g \quad ||(a^*_1)^{-1} a^*_2 g|| \leq ||a_2 a_1^{-1} g||$$ This means that the operator $a_2 a_1^{-1}$ is hyponormal (cf. [@H] and [@Stamp]). But, as the operators considered are defined on a finite dimensional Hilbert space, they are, in particular, completely continuous. Therefore, hyponormality of $a_2 a_1^{-1}$ implies normality (see [@A], [@B], and [@Stamp]). Consequently, $a_2 a^{-1}_1$ is a normal operator. This means that there is a unitary operator $U$ (equivalently, a unitary matrix, as finite dimensions are assumed) such that $$U a_2 a_1^{-1} U^* = diag(\lambda_i)$$ where $\lambda_i \in {{\mathbb{C}}}$. This can be rewritten as $$a_2a_1^{-1} = \sum_i \lambda_i Q_i$$ where $\lambda_i \in {{\mathbb{C}}}$ and $\{Q_i\}$ is a resolution of the identity. Hence, putting $Q_i \equiv |e_i><e_i|$ where $\{ e_i \}$ is a CONS in the Hilbert space ${{\mathcal H}}$ on which the operators $\{ a_i \}$ act, and defining rank one operators $|f><g|z \equiv (g,z) |f>$, one gets $$\label{lala} a_2 = \sum_i \lambda_i |e_i><a^*_1e_i|$$ Thus we have proved: \[proposition\] For any matrix $\left( \begin{array}{cc} a^*_1a_1 & a^*_1a_2 \\ a^*_2 a_1 & a^*_2 a_2 \\ \end{array} \right)$ satisfying the St[ø]{}rmer condition, $a_2$ is of the form (\[lala\]). Using the Ando-Choi inequality (see [@A1], [@Ch2]) one gets an analogous formula for $a_1$ in terms of $a_2$.
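The Proposition can be tested numerically (an added sketch; the matrices $a_1$, $N$ and $S$ below are arbitrary choices): when $a_2a_1^{-1}$ is normal, both $[a_i^*a_j]$ and $[a_j^*a_i]$ are positive semi-definite, while a non-normal $a_2a_1^{-1}$ violates the second condition.

```python
import numpy as np

def stormer_blocks(a1, a2):
    """Return the block matrix [a_i^* a_j] and its 'transposition' [a_j^* a_i]."""
    x = lambda u, v: u.conj().T @ v
    M  = np.block([[x(a1, a1), x(a1, a2)], [x(a2, a1), x(a2, a2)]])
    Mt = np.block([[x(a1, a1), x(a2, a1)], [x(a1, a2), x(a2, a2)]])
    return M, Mt

def min_eig(h):
    return np.linalg.eigvalsh(h).min()

# normal case: a2 = N a1 with N = diag(lambda_i),
# i.e. a2 = sum_i lambda_i |e_i><a1^* e_i| in the standard basis
a1 = np.array([[2.0, 1.0], [0.0, 1.0]])          # invertible
N  = np.diag([1.0, 2.0j])                        # normal (diagonal)
M, Mt = stormer_blocks(a1, N @ a1)
print(min_eig(M), min_eig(Mt))                   # both >= 0 (up to rounding)

# non-normal case: a2 a1^{-1} is a nilpotent Jordan block
S = np.array([[0.0, 1.0], [0.0, 0.0]])
M2, Mt2 = stormer_blocks(np.eye(2), S)
print(min_eig(M2), min_eig(Mt2))                 # M2 >= 0, but Mt2 is not PSD
```

The Gram matrix $[a_i^*a_j]$ is positive automatically; it is the transposed matrix that encodes the nontrivial part of the Størmer condition.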
As a next step we note that (\[lala\]) and the St[ø]{}rmer condition lead to the following form of the matrix $\left( \begin{array}{cc} a^*_1a_1 & a^*_1a_2 \\ a^*_2 a_1 & a^*_2 a_2 \\ \end{array} \right)$: $$\left( \begin{array}{cc} a^*_1a_1 & a^*_1a_2 \\ a^*_2 a_1 & a^*_2 a_2 \\ \end{array} \right) = \sum_i \left( \begin{array}{cc} 1 & \lambda_i \\ \bar{\lambda_i} & |\lambda_i|^2 \\ \end{array} \right) \cdot {\Big( \begin{array}{cc} |a_1^*e_i><a_1^*e_i| & 0 \\ 0 & |a_1^*e_i><a_1^* e_i| \\ \end{array} \Big)}$$ To rewrite the above equality in a more compact form, let us denote the norm of the vector $|a_1^*e_i>$ by $\alpha_i$ and the normalized vector $\frac{1}{\alpha_i} |a_1^*e_i>$ by $\varphi_i$. Then $$\label{baba1} \left( \begin{array}{cc} a^*_1a_1 & a^*_1a_2 \\ a^*_2 a_1 & a^*_2 a_2 \\ \end{array} \right) = \sum_i \alpha_i^2 \left( \begin{array}{cc} 1 & \lambda_i \\ \bar{\lambda_i} & |\lambda_i|^2 \\ \end{array} \right) \cdot {\Big( \begin{array}{cc} |\varphi_i><\varphi_i| & 0 \\ 0 & |\varphi_i><\varphi_i| \\ \end{array} \Big)}$$ or symbolically $$\label{chocho1} \left( \begin{array}{cc} a^*_1a_1 & a^*_1a_2 \\ a^*_2 a_1 & a^*_2 a_2 \\ \end{array} \right) = \sum_i \alpha_i^2 \cdot \Lambda_i \cdot R_i$$ where $\Lambda_i$ are “matrix” coefficients while $R_i$ are “matrix” projectors (not mutually orthogonal!). This leads to: \[21\] (\[chocho1\]) implies “separability” for $[a_i^*a_j]$ satisfying the St[ø]{}rmer condition.
Namely, using the identification $M_2({{\mathcal B}}({{\mathcal H}}))\cong M_2({{\mathbb{C}}})\otimes {{\mathcal B}}({{\mathcal H}})$ (cf discussion concerning equations (\[22\]) and (\[23\])) and noting that $(1 + |\lambda_i|^2)^{-1} \Lambda_i \equiv P_i$ is a projector, one can write $$\left( \begin{array}{cc} a^*_1a_1 & a^*_1a_2 \\ a^*_2 a_1 & a^*_2 a_2 \\ \end{array} \right) = \sum_i \alpha_i^2 (1 + |\lambda_i|^2)(P_i \otimes \bf{1})(\bf{1}\otimes |\varphi_i><\varphi_i|).$$ Hence $$\left( \begin{array}{cc} a^*_1a_1 & a^*_1a_2 \\ a^*_2 a_1 & a^*_2 a_2 \\ \end{array} \right) \in M_2({{\mathbb{C}}})^+\otimes {{\mathcal B}}({{\mathcal H}})^+.$$ Therefore, it is important to realize that the non-triviality of the St[ø]{}rmer condition follows from the fact that when a positive matrix $[x_{ij}]$ [(]{}$= \sum_k [(v^{(k)}_{i})^* v^{(k)}_{j}]$[)]{} satisfies the St[ø]{}rmer condition, some of its summand(s) $[(v^{(k)}_{i})^* v^{(k)}_{j}]$ may not. We end this Section with the remark that formula (\[baba1\]) can serve as part of a recipe for producing PPT states and some non-decomposable maps on matrix algebras (see next Section). Effectiveness of the description of PPT states ============================================== Now we are able to discuss the question of effectiveness of the construction of ${{\mathcal P}}_n \cap {{\mathcal P}}^{\tau}_n$. In other words, we are interested in the following question: Can one provide a canonical form for a vector in ${{\mathcal P}}_n \cap {{\mathcal P}}^{\tau}_n$? We begin with the remark that the structure of ${{\mathcal P}}_n \cap {{\mathcal P}}^{\tau}_n$ given by Theorem \[TheoremPPT\] reflects the St[ø]{}rmer characterization of decomposable maps (see Theorem \[Stormer\]). Hence the posed problem seems to be equivalent to the question whether the given characterization of decomposable maps is an effective one, in the sense that we wish to know the canonical form of matrices $[a_{ij}]$ such that $[a_{ij}]\geq 0$ and $[a_{ji}]\geq 0$.
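Formula (\[baba1\]) lends itself to a direct numerical verification (an added sketch; $a_1$ and the $\lambda_i$ below are arbitrary choices): the sum of rank-one pieces reconstructs the block matrix, and each summand is positive semi-definite, in accordance with Corollary \[21\].

```python
import numpy as np

dim = 3
rng = np.random.default_rng(0)
a1 = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))  # generically invertible
lam = np.array([1.0, -0.5 + 2.0j, 0.3j])                             # arbitrary lambda_i

adj = lambda m: m.conj().T
# a2 = sum_i lambda_i |e_i><a1^* e_i|, i.e. a2 = diag(lam) a1 in the standard basis
a2 = np.diag(lam) @ a1

lhs = np.block([[adj(a1) @ a1, adj(a1) @ a2], [adj(a2) @ a1, adj(a2) @ a2]])

rhs = np.zeros((2 * dim, 2 * dim), dtype=complex)
for i in range(dim):
    v = adj(a1)[:, i]                       # the vector a1^* e_i
    alpha2 = np.vdot(v, v).real             # alpha_i^2 = ||a1^* e_i||^2
    phi = v / np.sqrt(alpha2)
    Lam = np.array([[1.0, lam[i]], [lam[i].conjugate(), abs(lam[i]) ** 2]])
    R = np.outer(phi, phi.conj())
    term = alpha2 * np.kron(Lam, R)         # alpha_i^2 * Lambda_i (x) |phi_i><phi_i|
    assert np.linalg.eigvalsh(term).min() > -1e-9   # each summand is PSD
    rhs += term

print(np.allclose(lhs, rhs))                # formula (baba1) holds
```

Each $\Lambda_i$ is a rank-one positive $2\times 2$ matrix, so every summand lies in $M_2({{\mathbb{C}}})^+\otimes{{\mathcal B}}({{\mathcal H}})^+$, exhibiting the "separability" of the block matrix.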
The important point to note here is the Tomiyama characterization of positive transpositions (see [@Tom2]). Let $\mathcal A$ be a $C^*$-algebra. The transposition $\tau$ on the set of matrices $[a_{ij}]$ with $a_{ij} \in {{\mathcal A}}$ is a positive map if and only if $\mathcal A$ is abelian. This result suggests that the condition $f \in {{\mathcal P}}_n \cap {{\mathcal P}}^{\tau}_n$ reflects a kind of “local commutativity”. Let us briefly elaborate on this point. Firstly, we note: $\Lambda_i$ (see formula \[chocho1\]) is a matrix with complex entries, and the transposition on such matrices is a positive map. Secondly, $R_i$ is a matrix with operator entries, but this matrix is diagonal. Thus, $R_i$ is a fixed point of the transposition. Furthermore, $\Lambda_i$ commutes with $R_i$. We emphasize that all these remarks stem from (\[baba1\]) and (\[chocho1\]), so this is a “local” property, as we singled out only two indices. Nevertheless, we can conclude: any summand of a positive matrix $[x_{ij}]$ in (\[hoho\]) satisfying the St[ø]{}rmer condition has “local commutativity” which guarantees the nice behavior (positivity) of the transposition. But not every summand in (\[hoho\]) has this property (see Corollary \[21\])! Finally, we are able to discuss the question of effectiveness of the description of PPT states. To this end we recall that Theorem \[TheoremPPT\] says: PPT states are characterized (uniquely) by vectors of the form $[a_{ij}] \Omega = \sum_k [(a_{i}^{(k)})^*a_{j}^{(k)}] \Omega$ with $[a_{ji}]\geq0$, where the last equality follows from (\[hoho\]).
However, we would like to note here an important point (cf the discussion following (\[chocho1\])): some summands $[(a_{i}^{(k)})^*a_{j}^{(k)}] \Omega$ may not be in ${{\mathcal P}}_n\cap {{\mathcal P}}_n^{\tau}.$ Consequently, some vectors in the subcone ${{\mathcal P}}_n\cap {{\mathcal P}}_n^{\tau}$ which represent non-trivial (that is, non-separable) PPT states can be obtained as convex combinations of vectors such that some summand(s) is (are) not necessarily in this subcone. This can be expected, as ${{\mathcal P}}_n \cap {{\mathcal P}}_n^{\tau}$ is a convex set which could be “far” from being a simplex. In other words, a convex decomposition of a vector in ${{\mathcal P}}_n \cap {{\mathcal P}}_n^{\tau}$ is far from unique. Concluding, the presented arguments suggest that a universally effective prescription for a vector representing a PPT state is not available. However, the above discussion provides a recipe for the construction of concrete vectors in ${{\mathcal P}}_n \cap {{\mathcal P}}^{\tau}_n$. Measures of entanglement ======================== In [@M2] and [@M3], using the $C^*$-algebraic approach to Quantum theory, we introduced the degree of quantum correlations. The basic idea is to describe how close a given quantum system is to the “classical” world. We wish to repeat this idea, but now in the context of Hilbert spaces (cf [@MajTok]). For that purpose we will employ the geometry of Hilbert spaces. Let $\xi$ be a vector in the natural cone ${{\mathcal P}}$ corresponding to a normal state of a composite system $A+B$ (cf Proposition \[3.7\]). Then 1. Degree of entanglement (or [*quantum correlations*]{}) is given by: $$D_e(\xi) = \inf_{\eta} \{ ||\xi - \eta||; \eta \in {{{\mathcal P}}}_A \otimes {{{\mathcal P}}}_B \}$$ 2.
Degree of genuine entanglement (or [*genuine quantum correlations*]{}) is defined as $$D_{ge}(\xi) = \inf_{\eta} \{ ||\xi - \eta||; \eta \in {{\mathcal P}}_n \cap {{\mathcal P}}^{\tau}_n \}$$ We will briefly discuss the geometric idea behind these definitions. The key to the argument is the concept of convexity (in Hilbert spaces). Namely, we observe 1. ${{\mathcal P}}\supset{{{\mathcal P}}}_A \otimes {{{\mathcal P}}}_B $ is a convex subset, 2. ${{\mathcal P}}\supset{{\mathcal P}}_n \cap {{\mathcal P}}^{\tau}_n$ is a convex subset, 3. The theory of Hilbert spaces says: $\exists! \ \xi_0 \in {{{\mathcal P}}}_A \otimes {{{\mathcal P}}}_B$, such that $D_e(\xi) = || \xi - \xi_0||$, 4. Analogously, $\exists! \ \eta_0 \in {{{\mathcal P}}}_n \cap {{{\mathcal P}}}_n^{\tau}$, such that $D_{ge}(\xi) = || \xi - \eta_0||$. The important point to note here is that we used the well known property of convex subsets of a Hilbert space: a closed convex subset $W$ in a Hilbert space ${{\mathcal H}}$ contains a unique vector with the smallest norm. This ensures the existence of the vectors $\xi_0$ and $\eta_0$ introduced in 3. and 4. respectively. It is expected that any well-defined entanglement measure $D(\cdot)$ should, at least, satisfy the following requirements (see [@Hor3], [@Ozawa], [@Ved], [@Vid], and [@Wei]): 1. $D(\xi)\geq 0$, 2. $D(\xi) = 0$ if $\xi$ is not entangled, 3. $D(U_A\otimes U_B \xi) = D(\xi)$ where $U_A$ ($U_B$) are unitary operators representing local symmetry for subsystem A (B respectively), 4. convexity, i.e. $\sum_i \alpha_i D(\xi_i) \geq D(\sum_i \alpha_i \xi_i)$, 5. continuity. Clearly, $D_e$ satisfies all of the above requirements, while for $D_{ge}$ requirements 1, 2, 4, and 5 hold. Note that the failure of 3 for $D_{ge}$ is not surprising.
Namely, recall that for any automorphism of a von Neumann algebra in the standard form there exists a unique unitary operator on the Hilbert space which leaves the natural cone globally invariant (again, this is a result of Tomita-Takesaki theory; see also Section 4). Thus the unitary operators appearing in 3 can describe a local symmetry. On the other hand, it is hard to expect that the set of PPT states has such a general symmetry. Another desirable property of a degree (measure) of entanglement would be monotonicity with respect to arbitrary nonselective operations (see [@Ved2]). Here, nonselective operations are understood as CP maps on the set of observables. If such a map $\psi$ leaves the selected vector state $\omega_{\Omega}$ invariant (in the GNS construction $\Omega$ is interpreted either as the vacuum (field theoretic interpretation) or as equilibrium (statistical interpretation), so this assumption is natural) then, by the generalized Schwarz-Kadison inequality, $\psi$ induces a contraction $\hat{\psi}$ on the GNS space. Consequently, in our framework this condition is satisfied provided that $\hat{\psi}$ leaves ${{\mathcal P}}_A \otimes {{\mathcal P}}_B$ globally invariant. We end this review of properties of entanglement measures with the remark that the idea of measuring entanglement of vectors in terms of their distance to separable vectors appeared in the papers cited in this Section (see also [@Ar]). However, our approach is carried out in a very different setting, and our concept of degree of entanglement stems from the definition given in [@M3]. In particular, Ozawa’s arguments on Hilbert-Schmidt distance are not applicable here (cf [@Ozawa]).
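The unique-nearest-vector property invoked above can be illustrated on a simple stand-in convex set (an added sketch; the cone of positive semi-definite matrices is used here only because its metric projection has a closed form, unlike that of ${{\mathcal P}}_A \otimes {{\mathcal P}}_B$):

```python
import numpy as np

def project_psd(h):
    """Nearest (in Hilbert-Schmidt norm) positive semi-definite matrix to a hermitian h."""
    w, v = np.linalg.eigh(h)
    # clip the negative eigenvalues; this is the metric projection onto the PSD cone
    return (v * np.clip(w, 0.0, None)) @ v.conj().T

# a hermitian matrix with one negative eigenvalue
h = np.diag([1.0, -0.5, 2.0])
p = project_psd(h)
dist = np.linalg.norm(h - p)          # Hilbert-Schmidt (Frobenius) distance

print(p.diagonal(), dist)             # clipping gives diag(1, 0, 2), distance 0.5
```

For $D_e$ itself no closed-form projection onto ${{\mathcal P}}_A\otimes{{\mathcal P}}_B$ is available; the code only demonstrates the existence and uniqueness of the minimizer guaranteed by convexity.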
To illustrate our measures of entanglement we present an example which can be considered as a continuation of Example 4; it is based on a modification of Kadison-Ringrose arguments (cf [@KR]) combined with Tomita-Takesaki theory (cf [@BR]). **Example 5.** Let $\{ e_1, e_2, e_3 \}$ ($\{f_1, f_2, f_3 \}$) be an orthonormal basis in the three dimensional Hilbert space ${{\mathcal H}}$ (${{\mathcal K}}$ respectively). By $P$ we denote the following rank one orthogonal projector $$P = \frac{1}{3} |e_1\otimes f_1 + e_2 \otimes f_2 + e_3 \otimes f_3>< e_1\otimes f_1 + e_2 \otimes f_2 + e_3 \otimes f_3| \equiv |x_2><x_2| \in {{\mathcal B}}({{\mathcal H}}\otimes {{\mathcal K}})^+$$ Let $S$ be an operator of the form $$S = \sum_{i=1}^{k<\infty} a_i \otimes b_i$$ where $a_i \in {{\mathcal B}}({{\mathcal H}})^+$ and $b_i \in {{\mathcal B}}({{\mathcal K}})^+$. It can be shown (see [@KR]) that $$||P - S || \geq \frac{1}{6}$$ where $||\cdot||$ stands for the operator norm. Any separable state on ${{\mathcal B}}({{\mathcal H}})\otimes {{\mathcal B}}({{\mathcal K}})$ can be expressed in the form $$\varrho_0 = \sum_{i=1}^{l< \infty} \omega_{z_i}\otimes \omega_{y_i}$$ where $z_1, ...,z_l \in {{\mathcal H}}$, $y_1,...,y_l \in {{\mathcal K}}$, and the vector state $\omega_z$ is defined as $\omega_z(a) \equiv (z, az)$. Then, again following the Kadison-Ringrose exercise, one can show that $$||\omega_{x_2} - \varrho_0|| \geq \frac{1}{6}$$ On the other hand (see [@BR]), if vectors $\xi, \eta \in {{\mathcal P}}$ define the normal positive forms $\omega_{\xi}$ and $\omega_{\eta}$ respectively, then one has $$||\xi - \eta||^2 \leq || \omega_{\xi} - \omega_{\eta}|| \leq ||\xi - \eta||\,|| \xi + \eta||$$ Consequently $$D_e(\varrho^{\frac{1}{4}}P\varrho^{\frac{1}{4}})\geq \frac{1}{12}.$$ Obviously, $\varrho \equiv \varrho_{{{\mathcal H}}} \otimes \varrho_{{{\mathcal K}}}$, etc. (cf Sections 4 and 5).
We end this example with the remark that the same arguments applied to $x^{\prime}_2 = \frac{1}{\sqrt{2}}(e_1\otimes f_1 + e_2 \otimes f_2)$ in the 2D case lead to $$D_e(\varrho^{\frac{1}{4}}P\varrho^{\frac{1}{4}})\geq \frac{1}{8}.$$ Concluding this Section: for **an entangled (non-PPT, respectively) state we are able to find the best approximation among separable states (PPT states, respectively)**. Moreover, this approach offers a classification of entanglement (genuine entanglement, respectively).

Final remarks
=============

In Section 1, we emphasized that we should deal with many cones. In other words, many types of “positivity” should be taken into account. Furthermore, Lemmas \[pierwszy lemat\] and \[drugi lemat\] employ various “orders”: (plain) positivity for the first lemma and CP for the second. On the other hand, we recall that there are many examples of non-CP maps. Consequently, a natural question appears: how should the difference between Lemmas \[pierwszy lemat\] and \[drugi lemat\] be understood in this context? To clarify these subtleties, in this Section we will argue that, following the scheme offered by Lemma \[pierwszy lemat\], one can use (very) non-CP maps to describe states on $({{\mathcal A}}\odot {{\mathcal B}}, {{\mathcal A}}^+ \otimes {{\mathcal B}}^+)$ which can have very strong correlations (cf [@Sudershan]). To make our presentation as simple as possible we restrict ourselves to the 3-dimensional case. We recall that 3-dimensional models are the simplest cases admitting non-decomposable maps (see [@Ch1], [@W], and [@Ch]). We begin by recalling the results of Cho, Kye and Lee [@Cho].
They studied the following family of maps ${{\mathbb{C}}}^3 \to {{\mathbb{C}}}^3$ $$\phi[a,b,c](x)= \psi[a,b,c](x) - x$$ $$\psi[a,b,c](x_{ij})= \left( \begin{array}{ccc} ax_{11}+ bx_{22} +c x_{33} & 0 &0 \\ 0 & ax_{22}+bx_{33} + c x_{11} & 0 \\ 0 & 0 & a x_{33} +b x_{11} + c x_{22}\\ \end{array} \right)$$ The properties of these maps are collected in the following Theorem (see [@Cho])

\[CHO\]

1. $\phi[2,0,\mu]$ for $\mu \geq 1$ are indecomposable

2. $\phi[2,0,1]$ is an atom

3. $\phi[a,b,c]$ is positive if and only if $a\geq 1$, $a+b+c \geq 3$, and $bc \geq (2 - a)^2$ if $1 \leq a\leq 2$

4. $\phi[a,b,c]$ is CP if and only if $a\geq 3$

5. $\phi[a,b,c]$ is decomposable if and only if $a \geq 1$ and $bc \geq \left({\frac{3 -a}{2}}\right)^2$ if $1 \leq a \leq 3$

For some choices of the parameters $a,b,c$ one can arrive at maps very similar to that given by (\[dodat\]) (see Section 3). But, as Theorem \[CHO\] shows, there are many concrete, very non-CP maps. We can use them and Lemma \[pierwszy lemat\] to produce very “quantum” functionals on compound systems - note that Lemma \[drugi lemat\] always deals with CP maps and states (normalized positive functionals with respect to the cone $({{\mathcal A}}\otimes {{\mathcal B}})^+$!) on the [[[$\hbox{\bf C}^*$]{}]{}]{}-algebraic tensor product. Therefore, following Lemma \[pierwszy lemat\], we will define states $\omega$ by $$\label{Kwantowy funcjonal} \omega(a\otimes b)= Tr \phi(a)b^t$$ where $\phi$ is any map described by Theorem \[CHO\]. Any $a,b \in {{\mathcal B}}({{\mathbb{C}}}^3)$ can be written as $$a = \sum a_{ij} E_{ij}, \ b = \sum b_{kl} F_{kl}$$ where $a_{ij}, b_{kl} \in {{\mathbb{C}}}$ while $E_{ij}$, $F_{kl}$ are bases of ${{\mathcal B}}({{\mathbb{C}}}^3)$. Hence $$\label{53} \sum_{ijkl} a_{ij} b_{kl} \omega(E_{ij} \otimes F_{kl}) = Tr \phi[abc](a) b^t = \sum_{ijkl} a_{ij} b_{kl} Tr \phi[abc](E_{ij}) F_{lk}$$ But $$\phi[abc](E_{ij}) = \psi[abc](E_{ij}) - E_{ij}$$ Let $i\neq j$.
Then $$\label{55} \phi[abc](E_{ij}) = - E_{ij}$$ To consider the case $i=j$ define $$f_k(l) = a_{kl}$$ where $$[a_{kl}]= \left( \begin{array}{ccc} a & b & c \\ c & a & b \\ b & c & a\\ \end{array} \right)$$ Then $$\label{58} \phi[abc](E_{ii}) = -E_{ii} + \left( \begin{array}{ccc} f_1(i) & 0 & 0 \\ 0 & f_2(i) & 0 \\ 0 & 0 & f_3(i)\\ \end{array} \right)$$ Taking $a=2$, $b=0$, and $c>1$, equations (\[53\]), (\[55\]), and (\[58\]) give very “quantum” functionals, positive on the projective cone (so admitting negative values on ${{\mathcal C}}_{inj} \setminus {{\mathcal C}}_{pro}$) and exhibiting another difference between Lemmas \[pierwszy lemat\] and \[drugi lemat\]. Moreover, this shows how powerful the “machinery” proposed in Section 2 is, and it gives another explanation of the question studied in [@Sudershan].

Acknowledgments
===============

We are grateful to V. P. Belavkin and M. Marciniak for fruitful discussions on positive maps. W.A.M. and T.M. would like to acknowledge the support of the QBIC grant, and W.A.M. also acknowledges the partial support of BW grant 5400-5-0089-8.

[99]{}

T. Ando, Concavity of certain maps on positive definite matrices and applications to Hadamard products, *Linear Algebra and its Applications*, **26** (1979) 203-241

T. Ando, On hyponormal operators, *Proc. Amer. Math. Soc.* **14** (1963) 290-291

H. Araki, Some properties of modular conjugation operator of a von Neumann algebra and a non-commutative Radon-Nikodym theorem with a chain rule, *Pac. J. Math.* **50** (1974), 309.

W. Arveson, Maximal vectors in Hilbert space and quantum entanglement, *J. Funct. Analysis* **256**, 1476-1510 (2009) arXiv:0804.1140\[math.OA\]

V. P. Belavkin, X. Dai, An operational algebraic approach to quantum channel capacity, *International Journal of Quantum Information*, **6** (2008), 981-996; arXiv:quant-ph/0702098

V. P. Belavkin, M.
Ohya, Quantum entropy and information in discrete entangled states, [*Infinite analysis, quantum probability and related topics*]{}, [**4**]{} 137 (2001)

V. P. Belavkin, M. Ohya, Quantum entanglement and entangled mutual entropy, [*Proc. Royal. Soc. London A*]{}, [**458**]{} 209 (2002)

V. P. Belavkin, P. Staszewski, A Radon-Nikodym theorem for completely positive maps, *Rep. Math. Phys.* **24**, 44-55 (1986)

S. Berberian, A note on hyponormal operators, *Pacif. J. Math.* **12** (1962) 1171-1175

B. Blackadar, [*Operator Algebras. Theory of $C^*$-algebras and von Neumann algebras.*]{} Springer Verlag, 2006. Section II.9

Bratteli O., and Robinson D.W., [*Operator Algebras and Quantum Statistical Mechanics I*]{}, Springer Verlag, 1987

Sung Je Cho, Seung-Hyeok Kye and Sa Ge Lee, Generalized Choi maps in $3$-dimensional matrix algebras, *Linear Algebra and its Applications* **171** (1992), 213-224.

Choi M.-D., Positive linear maps, *Proc. Sympos. Pure. Math.* [**38**]{}, 583-590 (1982)

M.-D. Choi, Positive semidefinite biquadratic forms, [*Lin. Alg. Appl.*]{} **12** 95–100 (1975)

Choi M.-D., Completely Positive Maps on Complex Matrices, [*Lin. Alg. Appl.*]{} **10** (1975), 285–290.

Choi M.-D., Some assorted inequalities for positive linear maps on $C^*$-algebras, [*J. Operator Th.*]{} **4** (1980), 271–285.

A. Connes, Caractérisation des espaces vectoriels ordonnés sous-jacents aux algèbres de von Neumann, [*Ann. Inst. Fourier, Grenoble*]{} [**24**]{} 121-155 (1974)

I. Cuculescu, Some remarks on tensor products of standard forms of von Neumann algebras, [*Bollettino U. M. I.*]{}, [**7-B**]{}, 907-919 (1993)

A. Grothendieck, Produits tensoriels topologiques et espaces nucléaires, *Memoirs of the American Mathematical Society*, **16**, Providence, Rhode Island, 1955.

P. R. Halmos, Normal dilations and extensions of operators, *Summa Bras. Math.* **2** (1950), 124-134

Horodecki M., Horodecki P.
and Horodecki R., Separability of mixed states: necessary and sufficient conditions, [*Phys. Lett. A*]{} **223**, 1-8 (1996)

M. Horodecki, P. Horodecki, R. Horodecki, Limits for entanglement measures, *Phys. Rev. Lett.* [**84**]{} 2014 - 2017 (2000)

A. Jamiolkowski, Linear transformations which preserve trace and positive semidefiniteness of operators, *Rep. Math. Phys.* **3**, 275 (1972)

R. V. Kadison, J. R. Ringrose, *Fundamentals of the Theory of Operator Algebras*, vol. **IV**, Advanced theory - an exercise approach; Birkh[ä]{}user, 1986

L. E. Labuschagne, W. A. Majewski, M. Marciniak, On k-decomposability of positive maps, [*Expo. Math.*]{} [**24**]{}, 103-125 (2006); math-ph/0306017

W. A. Majewski, On a characterization of PPT states; arXiv:0708.3980

W. A. Majewski, Separable and entangled states of composite quantum systems - rigorous description, [*Open Sys. $\&$ Inf. Dyn.*]{} [**6**]{}, 79-86 (1999)

W. A. Majewski, A note on St[ø]{}rmer condition for decomposability of positive maps, arXiv:0806.3235

W. A. Majewski, “On entanglement of states and quantum correlations”, in [*Operator algebras and Mathematical Physics*]{}, Eds. J.M. Combes, J. Cuntz, G.A. Elliott, G. Nenciu, H. Siedentop, S. Stratila; pp. 287-297, [*Theta*]{}, Bucharest, 2003. e-print, LANL math-ph/0202030

W. A. Majewski, “On quantum correlations and positive maps”, [*Lett. Math. Physics*]{}, [**67**]{}, 125-132 (2004); e-print, LANL math-ph/0403024

W. A. Majewski, Measures of entanglement - a Hilbert space approach, [*Quantum Prob. and White Noise Analysis*]{} [**XXIV**]{} pp. 127-138, 2009

W. A. Majewski, M. Marciniak, On a characterization of positive maps, [*J. Phys. A*]{} [**34**]{} 5863-5874 (2001)

T. Matsuoka, Some characterization of PPT states and their relations, [*Quantum Prob. and White Noise Analysis*]{} [**XXIV**]{} pp. 139-150, 2009

M. Ozawa, Entanglement measures and the Hilbert-Schmidt distance, arXiv:quant-ph/0002036

A. Shaji and E. C. G.
Sudarshan, Who’s afraid of not completely positive maps? *Phys. Lett. A* **341** 48 - 54 (2005)

J. G. Stampfli, Hyponormal operators, *Pacif. J. Math.* **12** (1962) 1453-1458

W. F. Stinespring, Positive functions on [[[$\hbox{\bf C}^*$]{}]{}]{}-algebras, *Proc. Amer. Math. Soc.* **6** (1955) 211–216

St[ø]{}rmer E., Extension of positive maps, [*J. Funct. Analysis.*]{} **66** (1986), 235–254.

St[ø]{}rmer E., Cones of positive maps, [*Contemporary Mathematics*]{} **62** (1987), 345–356.

St[ø]{}rmer E., Decomposable positive maps on $C^*$-algebras, [*Proc. Amer. Math. Soc.*]{} **86** (1980), 402–404.

Takesaki M., *Tomita’s theory of modular Hilbert algebras and its applications*, Lecture Notes in Mathematics [**128**]{} Springer Verlag, Berlin, 1970.

M. Takesaki, *Theory of operator algebras I,* Springer Verlag, 1979

Tomiyama J., On the transpose map of matrix algebras, [*Proc. Amer. Math. Soc.*]{} **88** (1983), 635–638.

V. Vedral and M. B. Plenio, Entanglement measures and purification procedures, *Phys. Rev. A*, **57** 1619 - 1633 (1998)

V. Vedral, M. B. Plenio, M. A. Rippin, and P. L. Knight, Quantifying entanglement, *Phys. Rev. Lett.* **78** 2275 - 2279 (1997)

G. Vidal, Entanglement monotones, *J. Mod. Opt.* **47** 355 - 376 (2000)

T-Ch Wei and P. M. Goldbart, Geometric measure of entanglement and applications to bipartite and multipartite quantum states, *Phys. Rev. A* **68** 042307 (2003)

G. Wittstock, Ordered normed tensor products, in [*Foundation of Quantum Mechanics and Ordered Linear Spaces*]{}, Springer Verlag, 1974; pp 67–84

S. L. Woronowicz, Positive maps of low dimensional algebras, *Rep. Math. Phys.* **10** (1976) 165–183

Xingzhi Zhan, [*Matrix Inequalities*]{}, Springer Verlag, 2002
{ "pile_set_name": "ArXiv" }
// Copyright (c) 2012 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <stdio.h>

int main(int argc, char *argv[]) {
  printf("Hello, world!\n");
  return 0;
}
{ "pile_set_name": "Github" }
Holy Viral Marketing Batman!

In January, I pondered the future of The Dark Knight’s viral marketing/PR campaign after the tragic loss of Heath Ledger. As you may recall, a Hollywood pal of mine confided that he hoped it would continue as planned since, as he said, it would knock my socks off. At that point, all we’d seen were some teaser posters and texts from Ledger’s character, The Joker. Well, the second phase of The Dark Knight’s campaign has rolled out and I have to hand it to the folks at 42 Entertainment (to whom Warners subcontracted the marketing): this is undeniably the most comprehensive viral marketing campaign I have ever seen. It was precisely crafted for the fanboy/comic book geek crowd and they are eating it up. The media coverage has been staggering and I imagine it will continue to the film’s release this summer. Here’s a run-down of the rabbit-hole that is The Dark Knight’s promotional strategy:

Faux Politics

The new issue of The Gotham Times has been posted and the Harvey Dent campaign website has announced that Harvey is running for DA. The campaign site lacks any references to Batman. In fact, as someone not familiar with the depths of Batman lore, when I first saw the graffiti-laden posters in a theater window display a few months ago, I thought Dent was a movie in its own right and went online seeking it out. The faux political site urges “concerned Gotham citizens” to “take back Gotham City” by backing Dent and organizing faux grass-roots rallies, filming videos and coming out to meet the Dentmobile touring target cities.

Rowdy Real-World Rallies

Further blurring the lines between fact and fiction, on March 12, a rally for the fictional DA candidate was broken up by very real and very confused police. Fans had come out to meet the Dentmobile and when police arrived to remove the crowds, the cops seemed genuinely bewildered by volunteers handing out Harvey Dent bumper stickers, buttons and T-shirts.
Harvey Dent and The Joker are using text messages and voicemails to communicate with their supporters.

ARGs

Alternate Reality Games (ARGs) including scavenger hunts and role-playing are also in the mix: a page appeared at whysoserious.com/steprightup with a hammer game and some teddy bear toys. Each toy had an address on it located in a number of cities around the US. The note on the game told people to go to that address and say their name was “Robin Banks” (robin’ banks, that’s pretty funny!) and to await further instructions. What they were given was a cake with a phone number written on it. Now here’s the best part: inside the cake was an evidence bag (complete with Gotham City Police printing) that contained a cell phone, a charger, a Joker playing card and a note with instructions.

Red Herrings

Various red herring sites have launched to throw people off the trail in the ARGs. I don’t know if these are created by 42 Entertainment or by fans who are playing the game; it’s just part of the beauty of the whole thing!

Plot-Point Sites

Another website, www.ibelieveinharveydenttoo.com, provides teasers about some connection between the Joker and Two Face that I assume will be explained in the film. (Maybe it’s already known. Again, I’m rusty on my comic book lore.)

ComicCon Tie-Ins

Well aware of their core audience, the marketers put it all out there at San Diego’s Comicon with specially defaced dollar bills bearing yet another website’s url. On the site The Joker offered fans the chance to become his henchmen, with special prizes for those willing to carry out his demands. These players gathered at a set location (offline) to obtain a phone number that was written in the sky by a plane! From there, they embarked on an elaborate scavenger hunt around the city.

Faux Kidnappings

The Comicon promo ended with a fan being abducted by “thugs” in a Cadillac Escalade and getting symbolically “murdered” by armed men who mistook the player for the Joker. Whew!
So, fellow PR pro/marketer — what did YOU do today? Some colleagues have said this is overkill and that by the time the movie hits this summer, people will be sick of it after all this hype. (The campaign began nearly a year ago.) But the power here is that you have to seek these sites out. You have to be the kind of person who wants to run around town in a text-based scavenger hunt and look up in the sky for clues. For the comic book audience, I cannot imagine a better fantasy come true than to play with the superheroes and villains they love so much. Well done 42 Entertainment! You’ve set the bar into the stratosphere and made The Blair Witch campaign look like a pageboy hollering, “extra! extra!” on the street corner.

Jennifer Jones-Mitchell

A global leader in social media marketing, Jennifer Jones-Mitchell has been at the center of traditional and digital PR since the mid 90s and has helped launch some of the world's most loved brands. She has been blogging about PR and social media marketing since 2007. Learn more at: www.jenniferjonesmitchell.com
{ "pile_set_name": "Pile-CC" }
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.apache.cxf.ws.security.trust;

import org.apache.neethi.Policy;
import org.apache.wss4j.policy.SPConstants;
import org.apache.wss4j.policy.model.AlgorithmSuite;
import org.apache.wss4j.policy.model.EncryptionToken;
import org.apache.wss4j.policy.model.ProtectionToken;
import org.apache.wss4j.policy.model.SignatureToken;
import org.apache.wss4j.policy.model.SymmetricBinding;

/**
 * A {@link SymmetricBinding} subclass whose overrides simply widen the
 * visibility of the parent's protected setters to public.
 *
 * @author $Author$
 * @version $Revision$ $Date$
 */
public class DefaultSymmetricBinding extends SymmetricBinding {

    public DefaultSymmetricBinding(SPConstants.SPVersion version, Policy nestedPolicy) {
        super(version, nestedPolicy);
    }

    public void setEncryptionToken(EncryptionToken encryptionToken) {
        super.setEncryptionToken(encryptionToken);
    }

    public void setSignatureToken(SignatureToken signatureToken) {
        super.setSignatureToken(signatureToken);
    }

    public void setProtectionToken(ProtectionToken protectionToken) {
        super.setProtectionToken(protectionToken);
    }

    public void setOnlySignEntireHeadersAndBody(boolean onlySignEntireHeadersAndBody) {
        super.setOnlySignEntireHeadersAndBody(onlySignEntireHeadersAndBody);
    }

    public void setProtectTokens(boolean protectTokens) {
        super.setProtectTokens(protectTokens);
    }

    public void setIncludeTimestamp(boolean includeTimestamp) {
        super.setIncludeTimestamp(includeTimestamp);
    }

    public void setAlgorithmSuite(AlgorithmSuite algorithmSuite) {
        super.setAlgorithmSuite(algorithmSuite);
    }
}
{ "pile_set_name": "Github" }
Porcella

Porcella is an album by Canadian indie rock band The Deadly Snakes, released in 2005 on In the Red Records and licensed for Canadian distribution by Paper Bag Records. The album's single "Gore Veil" was named one of the ten best Canadian songs of 2005 by CBC Radio 3. The album was also subsequently shortlisted for the 2006 Polaris Music Prize. It was, however, the band's final album.

In interviews around the time of the band's breakup in 2006, keyboardist Max "Age of Danger" McCabe-Lokos described Porcella's creative process as a difficult one, marked by creative tension between him and bandleader André Ethier. Ethier had released a solo album in 2004, and McCabe-Lokos admitted that his own response to that was to assert greater creative control over Porcella than he had on past Snakes albums, taking on both production and mixing duties.

Track listing

"Debt Collection" – 3:01
"200 Nautical Miles" – 2:16
"Sissy Blues" – 2:09
"High Prices Going Down" – 2:47
"Gore Veil" – 4:34
"So Young & So Cruel" – 3:25
"Let It All Go" – 2:57
"Work" – 3:32
"Oh Lord, My Heart" – 3:13
"I Heard Your Voice" – 3:29
"By Morning, It's Gone" – 1:54
"Banquet" – 2:42
"A Bird in the Hand Is Worthless" – 3:11

Porcella - A Bird in the Hand is Worthless (Double LP - vinyl only/alternate artwork)

"Debt Collection" – 3:01
"Sissy Blues" – 2:09
"High Prices Going Down" – 2:47
"Oh Lord, My Heart" – 3:13
"Ambulance Man" - 4:19
"She's Going Home With Him" - 2:58
"Break Up Conversation" - 3:27
"Bound To Get Lonely" - 2:45
"By Morning, It's Gone" – 1:54
"Veronica Brown" - 4:41
"I Heard Your Voice" – 3:29
"Don River Jail" - 2:19
"Let It All Go" – 2:57
"A Bird in the Hand Is Worthless" – 3:11
"No Sympathy" - 4:15
"Gore Veil" – 4:34
"So Young & So Cruel" – 3:25
"Work" – 3:32
"200 Nautical Miles" – 2:16
"Banquet" – 2:42

References

Category:2005 albums
Category:The Deadly Snakes albums
Category:Paper Bag Records albums
{ "pile_set_name": "Wikipedia (en)" }
Treatment of platelet alloimmunization with intravenous immunoglobulin. Two case reports and review of the literature. Two pediatric patients with severe aplastic anemia, elevated antiplatelet antibody levels, refractoriness to human leukocyte antigen (HLA)-matched platelet transfusions, and sustained bleeding problems were treated with intravenous immunoglobulin (IVIG), pH 4.25, for three to over nine months. Improved responses to platelet infusions and improved hemostasis were demonstrated in both patients. A review of the published literature analyzing the role of IVIG in the treatment of platelet alloimmunization is presented.
{ "pile_set_name": "PubMed Abstracts" }
Q: Symfony2.1: The CSRF token is invalid. Only in browser, PHPUnit WebTestCase works without error

How can I debug why I get the "The CSRF token is invalid" error in all my browsers, while the same form passes in a PHPUnit functional test (WebTestCase)?

A: When I commented out my config.yml as below, everything started working. So the new question is: what is wrong with the commented part of this configuration?

session:
    auto_start: true
    # cookie_lifetime: 86400
    # cookie_path: \
    # cookie_domain: example.com
    # cookie_secure: true
    # cookie_httponly: true
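A likely explanation (an editorial sketch, not a verified answer from the thread): `cookie_secure: true` tells the browser to send the session cookie only over HTTPS, so on a plain-HTTP development host the session (and with it the stored CSRF token) is lost on every request; a `cookie_domain` that does not match the actual host has the same effect, and `cookie_path` should be a forward slash. Functional tests pass because Symfony's BrowserKit client does not enforce real browser cookie rules. A session block that usually behaves in local development might look like:

```yaml
# Sketch for local development (Symfony 2.1 framework.session options);
# re-enable cookie_domain/cookie_secure only on the HTTPS production host.
framework:
    session:
        auto_start:      true
        cookie_lifetime: 86400
        cookie_path:     /            # forward slash, not a backslash
        cookie_httponly: true
        # cookie_domain: example.com  # must match the host serving the app
        # cookie_secure: true         # HTTPS only; breaks sessions over plain HTTP
```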
{ "pile_set_name": "StackExchange" }
1. Introduction
===============

The current review focuses on the vulnerable plaques observed in high shear stress (HSS) regions. Evidence is provided that acute coronary syndrome (ACS) and ischemic strokes occur at or near the proximal region of the stenosis. Arterial diseases such as ACS and ischemic strokes are the leading causes of death worldwide \[[@rbw021-B1]\]. ACS and ischemic strokes are frequently caused by rupture of a vulnerable plaque leading to thrombus formation and distal cessation of blood flow. The morpho-mechanical characteristics of vulnerable plaques are critical for their tendency to rupture \[[@rbw021-B2], [@rbw021-B3]\]. ACS and ischemic strokes often occur at sites where the stenosis level is lower than 50% \[[@rbw021-B4], [@rbw021-B5]\]. Atherosclerotic plaque rupture or damage of the vascular surface leads to incomplete or complete obstructive thrombus formation and ultimately causes ACS or ischemic strokes \[[@rbw021-B6]\]. Vulnerable plaques have the following pathological characteristics: (1) a large lipid-rich core comprising more than 40% of the plaque volume; (2) a thin fibrous cap \[[@rbw021-B9]\]; (3) a high content of inflammatory cells (including macrophages, T lymphocytes and mast cells) \[[@rbw021-B10], [@rbw021-B11], [@rbw021-B13]\]; (4) a reduced number of vascular smooth muscle cells (VSMCs); and (5) abundant newly formed blood vessels within the plaque \[[@rbw021-B12], [@rbw021-B13]\]. Shear stress participates in atherosclerosis formation, vascular remodelling, plaque stability, restenosis after stent implantation, and intimal hyperplasia after blood vessel grafting \[[@rbw021-B14], [@rbw021-B15]\]. The magnitude and spatial distribution of SS change with the development of the plaque \[[@rbw021-B16]\] ([Table 1](#rbw021-T1){ref-type="table"}).
When plaques protrude into the lumen, high shear stress (HSS) is formed at the proximal end of the stenosis whereas low shear stress (LSS) is formed at the distal part \[[@rbw021-B16], [@rbw021-B19], [@rbw021-B20]\] ([Fig. 1](#rbw021-F1){ref-type="fig"}).

Figure 1. When plaques protrude into the lumen, high shear stress (HSS) is formed at the proximal end of the stenosis whereas low shear stress (LSS) is formed at the distal part ^\[[@rbw021-B16]\]^.

Table 1. The magnitude of HSS and LSS

| Term | Location | Magnitude | Relationship with atherosclerosis | Reference |
|------|----------|-----------|-----------------------------------|-----------|
| High shear stress (HSS) | The proximal region of the plaque | >25 dyn/cm^2^ | Pro-atherosclerotic plaque rupture | \[[@rbw021-B16]\], \[[@rbw021-B17]\], \[[@rbw021-B18]\], \[[@rbw021-B32]\] |
| Low shear stress (LSS) | The distal region of the plaque | <10-15 dyn/cm^2^ | Pro-atherosclerosis | |

2. Vulnerable Plaque Animal Model for Shear Stress Research
===========================================================

Change of SS is a critical external factor shaping plaque characteristics \[[@rbw021-B21]\]. Therefore, a proper experimental model of atherosclerotic vulnerable plaques is fundamental for our understanding of SS-mediated vulnerable plaque formation \[[@rbw021-B22]\]. One model uses perivascular carotid collar placement, which rapidly induces atherosclerosis in apolipoprotein E-deficient or low-density lipoprotein receptor-deficient mice \[[@rbw021-B23]\]. Our group has previously demonstrated the efficacy of this model in studying the development of atherosclerotic plaques induced by SS \[[@rbw021-B24], [@rbw021-B25], [@rbw021-B28]\]. The collar develops HSS in the proximal region and LSS in the distal region of the plaque, similar to plaques intruding into the lumen \[[@rbw021-B26]\]. Cheng and coworkers improved the perivascular SS modifier, which induces regions of lowered, increased and lowered/oscillatory shear stress in mouse carotid arteries \[[@rbw021-B27], [@rbw021-B28]\].
Another model is ligation of the left external and internal carotid branches. In this situation, left carotid blood flow is reduced to flow via the occipital artery. In response to partial ligation of the left carotid artery (LCA), blood flow significantly decreased by 90% in the LCA and increased by 70% in the right carotid artery (RCA) \[[@rbw021-B29]\]. The major advantages of the first model are its similarity to well-defined plaques, the accelerated atherosclerosis formation, and the simultaneous induction of at least two kinds of SS stimuli: HSS occurs inside the stenosis and low and/or oscillatory SS is localized to the distal region of the stenosis. The change of SS in the proximal and distal regions is, however, more severe in the ligation model, which can induce very low SS in the ligated LCA. Moreover, the ligation model is not localized, i.e. the SS is changed throughout the vessel.

3. High Shear Stress Induces Vascular Outward Remodelling
=========================================================

Vascular remodelling encompasses chronic changes of the vascular lumen size and morphology, vessel wall structure, and vascular function \[[@rbw021-B32]\]. SS-induced vascular remodelling is a very complex process involving nitric oxide (NO) expression, extracellular matrix (ECM) synthesis and degradation, and VSMC proliferation and migration.

3.1. High shear stress up-regulates the expression of NO
--------------------------------------------------------

NO is an important vasodilator participating in vascular remodelling \[[@rbw021-B33]\]. Endothelial cells (ECs) are the main sensors of SS and also critical players in vascular remodelling \[[@rbw021-B34]\]. In the early stages of atherosclerosis, LSS occurs at the two sides of an arterial bifurcation and on the inside of vascular curvatures whereas HSS occurs at the apex of an arterial bifurcation \[[@rbw021-B35]\] and on the outside of the vascular curvature \[[@rbw021-B36]\].
In resistance arteries as well as in large blood vessels, a chronic increase in blood flow enhances endothelial nitric oxide synthase (eNOS) expression and NO-dependent vasorelaxation \[[@rbw021-B28], [@rbw021-B32]\], whereas LSS decreases endothelial NO synthesis \[[@rbw021-B28]\]. Furthermore, reduction in blood flow induces inward remodelling and reduced arteriolar contractility \[[@rbw021-B33], [@rbw021-B40]\]. Moreover, NO is essential for arterial outward hypertrophic remodelling after a chronic rise in flow \[[@rbw021-B33], [@rbw021-B41]\]. In addition, NO can induce ECM degradation by increasing the expression of matrix metalloproteinases (MMPs) \[[@rbw021-B33]\]. This remodelling allows the effect of altered SS on the vascular wall to be normalized \[[@rbw021-B42]\]. Therefore, HSS induces vascular outward remodelling through increased NO expression \[[@rbw021-B43], [@rbw021-B44]\].

3.2. High shear stress induces the degradation of ECM
-----------------------------------------------------

ECM synthesis and degradation play an important role in vascular wall remodelling \[[@rbw021-B45]\]. MMPs regulate vascular remodelling by ECM degradation \[[@rbw021-B46]\]. Hence, studying how SS regulates the expression of MMPs can clarify our understanding of vascular remodelling under SS \[[@rbw021-B47]\]. HSS induces MMP expression and vascular outward remodelling \[[@rbw021-B48]\]. The likely mechanism involves NO: HSS induces NO synthesis \[[@rbw021-B28], [@rbw021-B36]\] and NO increases the expression of MMPs \[[@rbw021-B49], [@rbw021-B50]\]. HSS also induces secretion of plasmin, a strong specific activator of the MMP precursors secreted by macrophages \[[@rbw021-B51]\]. In addition, pro-MMP-2, activated MMP-2, and pro-MMP-9 levels were modestly increased by high flow after 7 days \[[@rbw021-B51]\]. Therefore, HSS may induce high MMP expression \[[@rbw021-B52]\].
MMPs promote structural changes of the plaque wall, including severe degradation of the internal elastic lamina (IEL). This provides a channel for invasion by inflammatory cells and SMCs, which in turn produce abundant MMPs that degrade collagen and elastic fibres. These processes lead to severe wall and lumen expansion and may explain why a thin fibrous cap forms in the HSS region of vulnerable plaques on the proximal side of the vascular stenosis \[[@rbw021-B19], [@rbw021-B53]\].

3.3. High shear stress induces the apoptosis of VSMCs
-----------------------------------------------------

Under physiological conditions, SS does not act directly on VSMCs. However, when medial VSMCs migrate into the intima after endothelial injury, they become directly exposed to blood flow \[[@rbw021-B54]\]. Studies in apoE-deficient mice have revealed that VSMCs in atherosclerotic plaques are derived exclusively from the local vessel wall rather than from circulating progenitor cells \[[@rbw021-B55]\]. LSS induces VSMC migration into the intima in the EC-VSMC coculture model \[[@rbw021-B56]\]. Therefore, LSS may be important for VSMC proliferation and migration and for promoting blood vessel wall thickening, all of which are factors leading to atherosclerotic stenosis formation \[[@rbw021-B57]\]. LSS-associated intimal hyperplasia was dependent on platelet endothelial cell adhesion molecule-1 (PECAM-1) \[[@rbw021-B58]\], suggesting that PECAM-1 is necessary for flow-induced vascular remodelling. High laminar SS inhibits SMC proliferation and promotes the apoptosis of VSMCs \[[@rbw021-B59]\]. This has been demonstrated as a direct factor in vulnerable plaque formation \[[@rbw021-B60]\]. The finding is consistent with the clinical observation that apoptosis of VSMCs is mainly localized in the HSS region of the stenosis \[[@rbw021-B61]\]. Therefore, vulnerable plaques are mainly found where SS is high, because HSS induces apoptosis of VSMCs \[[@rbw021-B48], [@rbw021-B51], [@rbw021-B64]\].
In vessel grafts, increasing SS inhibits smooth muscle cell proliferation and reduces intimal hyperplasia \[[@rbw021-B65], [@rbw021-B66]\]. The mechanism could be linked to the bone morphogenetic protein 4 (BMP4) \[[@rbw021-B67]\] and NO \[[@rbw021-B68]\] signalling pathways. HSS promotes the release of endothelial NO, which mediates apoptosis of VSMCs \[[@rbw021-B66], [@rbw021-B69]\]. HSS also upregulates NF-kappa B phosphorylation and the expression of MMP2 and MMP9, facilitating vascular outward remodelling \[[@rbw021-B70]\]. SS induces the vascular NADPH oxidase subunit p47phox, but not gp91phox. The generated reactive oxygen species (ROS) interact with NO to produce peroxynitrite, which in turn activates MMPs and facilitates vessel remodelling \[[@rbw021-B71]\]. ECs have an important regulatory role in the biological behaviour of VSMCs \[[@rbw021-B72]\]. HSS promotes progressive arterial remodelling, which can ultimately cause blood vessel rupture \[[@rbw021-B35], [@rbw021-B73]\]. In summary, HSS induces adaptive, and eventually severe, outward vascular remodelling by promoting apoptosis of VSMCs \[[@rbw021-B74]\].

3.4. The remodelling process under shear stress
-----------------------------------------------

Systemic factors such as hyperlipidemia, hyperglycemia and hypertension, as well as genetics \[[@rbw021-B75]\], exacerbate the local HSS and inflammatory response and may facilitate the transition of early atherosclerotic plaques into high-risk plaques. Vascular remodelling is governed by the tendency to maintain the previous (normal) SS. For example, the brachial artery remodels to maintain local SS despite the presence of cardiovascular risk factors \[[@rbw021-B76]\]. After plaque formation and protrusion into the lumen, HSS is mainly apparent in the proximal region of the stenosis whereas LSS occurs in the distal region \[[@rbw021-B19], [@rbw021-B77]\]. HSS leads to expansive remodelling \[[@rbw021-B78], [@rbw021-B79]\], which is a compensatory process \[[@rbw021-B30], [@rbw021-B80]\].
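The set-point picture described above (remodelling proceeds until the baseline SS is restored) can be made concrete with the Hagen-Poiseuille wall-shear-stress formula, tau = 4*mu*Q/(pi*r^3). This is a standard idealization for steady laminar flow in a straight rigid tube, not a model used in the cited studies, and the viscosity, flow and radius values below are illustrative only:

```python
import math

def wall_shear_stress(mu, q, r):
    """Hagen-Poiseuille wall shear stress tau = 4*mu*Q/(pi*r^3), in Pa.

    Note 1 Pa = 10 dyn/cm^2, so the >25 dyn/cm^2 HSS threshold is 2.5 Pa.
    """
    return 4.0 * mu * q / (math.pi * r ** 3)

# Illustrative values, roughly human-carotid scale.
mu = 3.5e-3        # blood viscosity, Pa*s
r = 3.0e-3         # lumen radius, m
q = 350e-6 / 60.0  # flow, m^3/s (350 mL/min)

tau0 = wall_shear_stress(mu, q, r)  # baseline SS, ≈ 1 Pa ≈ 10 dyn/cm^2

# A chronic 70% flow increase (as reported for the RCA in the partial-ligation
# model) raises tau by the same factor while the radius is unchanged ...
tau_high = wall_shear_stress(mu, 1.7 * q, r)

# ... and because tau ~ Q/r^3, outward remodelling to
# r_new = r * (Q_new/Q_old)**(1/3), i.e. about 19% dilation, restores tau0.
r_new = r * 1.7 ** (1.0 / 3.0)
tau_restored = wall_shear_stress(mu, 1.7 * q, r_new)

print(tau_high / tau0)      # ≈ 1.7
print(tau_restored / tau0)  # ≈ 1.0
```

The cube-root dependence explains why modest lumen dilation suffices to normalize a large flow increase, consistent with the compensatory expansive remodelling described in the text.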
Expansive remodelling in response to a chronic or repetitive flow increase involves a coordinated sequence of events in the arterial wall, as extensively reviewed by others \[[@rbw021-B81]\]. HSS induces aneurysmal remodelling through vascular expansive remodelling aimed at maintaining the local SS \[[@rbw021-B35], [@rbw021-B36], [@rbw021-B84]\]. Research showed that HSS increased the vascular diameter by 23%, while LSS reduced the diameter by 23% \[[@rbw021-B37], [@rbw021-B85]\]. Outward remodelling is the critical factor for high-risk plaque formation \[[@rbw021-B32], [@rbw021-B86], [@rbw021-B87]\]. NO release from ECs exposed to excessive shear is a fundamental step in the remodelling process. NO potentially triggers a cascade of events, including growth factor induction and MMP activation, that together contribute to remodelling of the vessel wall \[[@rbw021-B88]\]. Furthermore, high flow rates not only induce HSS but also increase cyclic strains, which are found to induce arterial expansive remodelling \[[@rbw021-B89]\]. Evaluation of local vascular SS and cyclic strain has been used to predict vascular remodelling and plaque development \[[@rbw021-B90]\]. Although several reports show that LSS promotes vascular expansive remodelling \[[@rbw021-B27], [@rbw021-B75], [@rbw021-B91]\], both clinical and animal models show that vascular expansive remodelling mainly localizes in the HSS region \[[@rbw021-B29], [@rbw021-B94]\]. The increased atherosclerotic wall thickness in HSS regions is associated with loss of compensatory remodelling \[[@rbw021-B95]\]. Vascular remodelling maintains luminal SS stability; hence excessive outward expansion is the direct way to reduce the local HSS. 4.
High Shear Stress Induces Vulnerable Plaque Formation
=====================================================

In vivo colour mapping with intravascular ultrasound and magnetic resonance imaging (MRI) data show that coronary plaque ruptures are localized in arterial regions with elevated SS \[[@rbw021-B64], [@rbw021-B97]\] ([Table 2](#rbw021-T2){ref-type="table"}). Animal models confirm that vulnerable plaques mainly occur in the HSS region of the stenosis \[[@rbw021-B62], [@rbw021-B103]\].

Table 2. High shear stress induces rupture-prone plaque formation or rupture in clinical reports

| Sample | Location | Shear stress | Phenomenon | Device (detection method) | Reference |
| --- | --- | --- | --- | --- | --- |
| Twenty patients | Proximal | High shear stress \>25 dyn/cm^2^ | Increased necrosis area | Virtual histology-IVUS and CFD | 100 |
| A 67-year-old woman | Proximal | High shear stress \>32 dyn/cm^2^ | Lipid/necrotic core, intraplaque hemorrhage | MRI at 10-month follow-up | 97 |
| 20 patients | Proximal to the point of maximum stenosis | Blood wall pressure was 82 ± 18 mm Hg | Coronary plaque rupture | 3-dimensional IVUS | 98 |
| 119 patients | Proximal to the point of maximum stenosis | Higher than the distal | Ulceration | Angiographic ulceration | 103 |
| 42 human carotid atherosclerotic plaques | Proximal to the point of maximum stenosis | Higher than the distal | Apoptosis in the distal | Immunohistochemical (anti-CD31, anti-Ki-67) | 110 |
| 12 patients | Proximal | 38.9 versus 14.4 dyn/cm^2^ | Ruptured plaques | MRI | 101 |
| 12 patients | Proximal | \>25 dyn/cm^2^ | | Angiography and IVUS | 102 |

A main difference between stable plaques and high-risk plaques is inflammatory cell accumulation \[[@rbw021-B96], [@rbw021-B97], [@rbw021-B104], [@rbw021-B105]\]. Inflammatory cell invasion into atherosclerotic plaques is modulated by ECs. The recruitment and infiltration of inflammatory cells into the endothelium are mediated by upregulation of adhesion molecules, chemokines and integrins \[[@rbw021-B98], [@rbw021-B106], [@rbw021-B107]\].
The viewpoint that LSS induces vulnerable plaque formation is based on the high expression of inflammation-related proteins on ECs \[[@rbw021-B27], [@rbw021-B107], [@rbw021-B108]\]. However, LSS induces apoptosis of ECs and endothelial dysfunction \[[@rbw021-B64], [@rbw021-B109]\]. Hence, the evidence regarding the expression and activity of MMPs is inconsistent with the proposed role of LSS in destabilizing atherosclerotic plaques \[[@rbw021-B113]\]. In addition, LSS induces VSMC proliferation, migration and ECM synthesis \[[@rbw021-B114]\]. At present, the cross-sectional morphological characteristics of atherosclerotic plaque have been extensively investigated. However, less attention has been paid to the axial distribution of plaques in the artery. Clinical pathology research shows that vascular plaque rupture mainly occurs in the proximal region of the stenosis, where macrophages aggregate and thrombosis is found under the endothelium \[[@rbw021-B62], [@rbw021-B103]\]. Connective tissue growth factor is released from platelets exposed to HSS and is differentially expressed in the endothelium along atherosclerotic plaques \[[@rbw021-B115]\]. In vivo MRI 3D FSI studies show that 63.5 dyn/cm^2^ SS induces high-risk plaque formation \[[@rbw021-B116]\]. Taken together, these studies demonstrate that there is a high correlation between HSS and vulnerable plaque formation in the axial direction.

5. Angiogenesis May Be the Main Reason That High Shear Stress Induces Atherosclerotic Vulnerable Plaque Formation
=================================================================================================================

A growing body of evidence shows that HSS prevails in the proximal region of atherosclerotic plaques protruding into the lumen \[[@rbw021-B28], [@rbw021-B117]\].
Significant differences in plaque morphology between the proximal and distal parts of plaques indicate a role of arterial flow in the distribution of different cell types \[[@rbw021-B28], [@rbw021-B53], [@rbw021-B62], [@rbw021-B98], [@rbw021-B117], [@rbw021-B118]\]. It was shown that 86% of ruptured plaques are located proximal to the stenosis \[[@rbw021-B118]\]. The reason that atherosclerotic plaque rupture occurs in this region is currently unknown. Oxidized low-density lipoprotein (oxLDL) has been proposed as a cause, because oxLDL induces subsets of smooth muscle cells and macrophages to produce gelatinase \[[@rbw021-B62]\]. However, it is well known that HSS is endothelium-protective, and the endothelium may prevent low-density lipoprotein (LDL) from entering the vessel wall \[[@rbw021-B19]\]. Furthermore, some studies showed that oxLDL mainly accumulates in the distal region, where SS is low \[[@rbw021-B19], [@rbw021-B62], [@rbw021-B119]\]. Neovascularization in the vessel wall promotes the formation of atherosclerosis and vulnerable plaque development. The new vasa vasorum (VV) can transport cellular and soluble components such as red blood cells, inflammatory cells and lipid/lipoproteins into the vessel wall \[[@rbw021-B120]\]. A recent report showed that bFGF and VEGFR-2 overexpression in the adventitia induced development of VV and accelerated plaque progression \[[@rbw021-B122], [@rbw021-B123]\]. Furthermore, most microvessels in atherosclerotic arteries are immature, with abnormal intraplaque microvascular ECs showing incomplete endothelial junctions and membrane detachment. This may explain the association between microvascular leakage and intraplaque haemorrhage in advanced human coronary atherosclerosis \[[@rbw021-B124], [@rbw021-B125]\].
HSS plays a critical role in the expression of vascular endothelial growth factor (VEGF) \[[@rbw021-B126]\] and endothelial NO synthesis \[[@rbw021-B28], [@rbw021-B34], [@rbw021-B68]\]. VEGF induces angiogenesis \[[@rbw021-B127]\] and also disrupts the vascular barrier function in diseased tissues \[[@rbw021-B128]\]. NO mediates shear-induced angiogenesis in ECs \[[@rbw021-B129]\] and increases vascular permeability \[[@rbw021-B130]\]. Furthermore, high concentrations of NO are also critical for the loss of VSMCs and ECM \[[@rbw021-B131]\]. Thus, HSS causes the ECs to form tube-like structures and increases endothelial permeability by increasing VEGF expression and NO production. The leaky vasculature, with high endothelial permeability and without a restrictive basement membrane, exhibits no adequate barrier function ([Fig. 2](#rbw021-F2){ref-type="fig"}).

Figure 2. High shear stress induces atherosclerotic vulnerable plaque formation through angiogenesis. High shear stress promotes the expression of vascular endothelial growth factor (VEGF) and endothelial nitric oxide (NO), resulting in angiogenesis of endothelial cells (EC) that form vasa vasorum, and increases endothelial cell permeability. Furthermore, NO induces smooth muscle cell (SMC) apoptosis and matrix degradation, resulting in loss of mural cells and the basement membrane around newborn microvessels. This results in microvascular leakage. The leaky vasculature becomes an entry point for inflammatory cells, red blood cells (RBC) and lipid/lipoproteins. This may result in inflammation, intra-plaque haemorrhage, lipid core accumulation and eventually plaque rupture.

We propose that angiogenesis is the reason that vulnerable plaques are localized in HSS regions. Furthermore, NO induces smooth muscle cell apoptosis and matrix degradation. The result is loss of mural cells and of the basement membrane around newborn microvessels, causing microvascular leakage.
The leaky vasculature becomes an entry point for inflammatory cells, red blood cells and lipid/lipoproteins. This may result in inflammation, intra-plaque haemorrhage, lipid core accumulation and eventually plaque rupture.

6. The Mechanical Mechanism Underlying Plaque Rupture
=====================================================

As pointed out above, SS in the proximal region of the stenosis is significantly higher than in the distal region. HSS is critical for vulnerable plaque formation \[[@rbw021-B132]\]. Intraplaque haemorrhage is associated with higher SS and higher structural stresses in human atherosclerotic plaques, as shown *in vivo* by MRI-based 3D fluid-structure interaction analysis \[[@rbw021-B133]\]. Numerical simulation shows that the SS in the proximal region of the stenosis may reach 50-60 dyn/cm^2^ when the stenosis degree is 50%. The SS in the proximal region does not exceed 20 dyn/cm^2^ once 70% stenosis is reached. This may precipitate the rupture of vulnerable plaques in the proximal region when stenosis is less than 50% \[[@rbw021-B130], [@rbw021-B134]\]. Although increased SS in the proximal region may lead to plaque fibrous cap rupture, 75% of plaque ruptures are believed not to be due to SS, since the wall SS is much smaller than the tensile stress during the cardiac cycle \[[@rbw021-B19]\]. The haemostatic system is a modulator of atherosclerosis \[[@rbw021-B135]\]. HSS induces intra-thrombus fibrin deposition and platelet adhesion to the arterial wall \[[@rbw021-B136]\]. HSS also promotes platelet aggregation \[[@rbw021-B139]\]. Hence the haemostatic dysregulation caused by HSS may contribute to our understanding of why ACS and ischemic strokes are located preferentially in the distal region of the stenosis. The SS rate is the rate of change of the local SS, and it is an important factor for vulnerable plaque rupture \[[@rbw021-B136], [@rbw021-B140]\].
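The stenosis-dependence of proximal SS cited above can be illustrated with an idealized Poiseuille estimate. This is a hypothetical back-of-the-envelope sketch, not the model used in the cited simulations: the viscosity, flow and radius values below are assumed, and the fixed-flow assumption breaks down at severe stenosis, where flow limitation lowers SS again (consistent with the drop reported above 70% stenosis).

```python
# Hypothetical illustration (assumed values, not from the cited studies):
# for Poiseuille flow of a fluid with viscosity mu and fixed volumetric
# flow Q through a tube of radius r, the wall shear stress is
# tau = 4*mu*Q/(pi*r**3), so SS scales with 1/r**3 as the lumen narrows.

import math

def wall_shear_stress(mu_poise, q_cm3_s, r_cm):
    """Poiseuille wall shear stress in dyn/cm^2 (CGS units)."""
    return 4.0 * mu_poise * q_cm3_s / (math.pi * r_cm ** 3)

# Assumed, order-of-magnitude coronary values:
mu = 0.035   # blood viscosity, poise
q = 1.0      # volumetric flow, cm^3/s
r0 = 0.15    # healthy lumen radius, cm

tau_healthy = wall_shear_stress(mu, q, r0)
tau_50 = wall_shear_stress(mu, q, 0.5 * r0)  # 50% diameter stenosis

# At fixed flow, halving the radius raises SS by a factor of 2**3 = 8.
print(f"healthy: {tau_healthy:.1f} dyn/cm^2, 50% stenosis: {tau_50:.1f} dyn/cm^2")
```

With these assumed numbers the healthy-vessel SS lands in the physiological 10-20 dyn/cm^2 range, and the eightfold rise at the throat shows why proximal SS can climb steeply at moderate stenosis before flow limitation takes over.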
Microfluidics is an important tool for studying blood clotting \[[@rbw021-B141], [@rbw021-B142]\]; platelets preferentially adhere in low-shear zones downstream of a formed thrombus, with stabilization of aggregates dependent on the dynamic restructuring of membrane tethers \[[@rbw021-B143]\]. Under HSS conditions, blood pressure decreases and uniaxial tensile stress increases at the site of vascular injury. The magnitude of SS is small compared with the overall loading of plaques. Hence, pressure may be the main mechanical trigger for plaque rupture and a key parameter for risk stratification \[[@rbw021-B144]\]. The 3D critical plaque wall stress in plaques with prior rupture is 100% higher than that in plaques that do not rupture. Flow SS is 92.94 dyn/cm^2^ for ruptured plaques, which is 76% higher than that for non-ruptured plaques (52.70 dyn/cm^2^) \[[@rbw021-B145]\]. Rupture sites in human atherosclerotic carotid plaques are associated with high structural stresses \[[@rbw021-B146]\]. Once a thin fibrous cap has formed, the internal stress increases by 200% when the fibrous cap thickness decreases by 50% \[[@rbw021-B147]\]. These results demonstrate that intravascular haemodynamic factors are responsible for the progression of coronary atherosclerosis and the development of vulnerable plaques \[[@rbw021-B148]\]. Autopsy data have shown an obvious association between circumferential plaque stress and plaque vulnerability. The plaque rupture zone is associated with a high degree of stress concentration \[[@rbw021-B149]\]. Circumferential stress and Young\'s modulus are important direct factors for plaque rupture \[[@rbw021-B150], [@rbw021-B151]\]. Furthermore, plaque wall stress and flow SS may produce a significant uniaxial strain \[[@rbw021-B152]\]. Research has shown that a small pressure difference, in the order of 20 mmHg, can generate quite a high uniaxial strain in 75 μm thick plaques.
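The cap-thickness sensitivity cited above can be sketched with a thin-walled Laplace estimate. This is a hypothetical illustration, not the 3D model used in the cited studies: the pressure, radius and thickness values are assumed, and this simple linear estimate gives only a 100% stress increase when the cap is halved, whereas the cited simulations report a larger (~200%) rise because stress concentrates at the cap shoulders.

```python
# Hypothetical thin-wall (Laplace) sketch of fibrous-cap stress: the
# circumferential stress carried by a cap of thickness h over a lumen of
# radius r under pressure p is roughly sigma = p * r / (2 * h).
# All numbers below are assumed for illustration (CGS units).

MMHG_TO_DYN_CM2 = 1333.2  # 1 mmHg expressed in dyn/cm^2

def cap_stress(p_mmhg, r_cm, h_cm):
    """Laplace estimate of cap stress in dyn/cm^2."""
    return p_mmhg * MMHG_TO_DYN_CM2 * r_cm / (2.0 * h_cm)

sigma_150um = cap_stress(100.0, 0.15, 0.0150)  # 150 um cap
sigma_75um = cap_stress(100.0, 0.15, 0.0075)   # cap thinned by 50%

# In this linear estimate, halving the cap thickness doubles the stress;
# full 3D models report even larger increases due to stress concentration.
print(sigma_75um / sigma_150um)  # 2.0
```

The estimate also shows why cap thickness dominates over modest pressure changes: stress scales linearly with pressure but inversely with thickness, so progressive cap thinning raises stress without any change in loading.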
Eccentric plaques would be exposed to a more serious uniaxial strain \[[@rbw021-B153]\]. Hence HSS and the vessel wall thickness are also responsible for plaque rupture \[[@rbw021-B154]\]. In summary, increased wall SS, circumferential stress and pressure are all important for plaque rupture, especially the pressure on the plaque. However, SS is closely related to plaque formation and progression \[[@rbw021-B157]\].

7. Research Perspective
=======================

The current review focuses on the vulnerable plaques observed in HSS regions. Evidence is provided to support that ACS and ischemic strokes occur at or near the proximal region of the stenosis. Having reviewed the published results in the literature, we noted that the data on the relationship between SS and plaque rupture are contradictory and inconsistent. Previous research mainly focused on the biological function of SS, and less attention has been paid to the mechanical properties of the extracellular surroundings and the blood vessel itself \[[@rbw021-B158]\]. The roles of blood vessel geometry, wall thickness and elastic modulus have been somewhat ignored in considering plaque rupture \[[@rbw021-B159]\]. From the reviewed literature we can conclude that LSS is the main mechanical factor in plaque formation, while HSS may be the main cause for the transition of stable plaques into inflamed lesions. Vascular mechanical stress may be the direct trigger for plaque rupture. How and when do these mechanical stresses function to regulate vulnerable plaque formation and destabilization? And what is the association between blood pressure and mechanical stresses? These issues remain uncertain, but it is quite necessary to further illuminate the molecular mechanisms underlying plaque formation in response to SS \[[@rbw021-B159]\]. SS and chemical stimuli may synergistically regulate vascular remodelling \[[@rbw021-B165]\].
Currently, numerical analyses have been effectively used to simulate the physical and geometrical parameters characterizing the haemodynamics of various arteries under physiological and pathological conditions \[[@rbw021-B166]\]. Numerical analysis can help reveal the mechanism for the development of plaques and predict the tendency of a plaque to rupture \[[@rbw021-B169], [@rbw021-B170]\]. Moreover, clinical imaging techniques such as magnetic resonance or computed tomography (CT), combined with numerical analysis methods, have assisted considerably in gaining a detailed patient-specific picture of blood flow and structure dynamics, which could help to effectively prevent and treat this disease \[[@rbw021-B171], [@rbw021-B172]\].

8. Clinical Implications
========================

SS changes with the degree of stenosis, and the changed stress regulates the development of plaques into high-risk plaques \[[@rbw021-B173]\]. Locally increasing SS using a flow divider reduces in-stent neointimal formation by 50% \[[@rbw021-B174], [@rbw021-B175]\]. Attempts to increase SS to inhibit intimal hyperplasia are not applicable to atherosclerotic vulnerable plaque treatment \[[@rbw021-B68]\], because HSS is the critical factor for high-risk coronary plaque formation. After treatment of the stenosis with a percutaneous transluminal coronary angioplasty (PTCA) balloon and stent, the SS increases, which promotes outward vascular remodelling. This eventually leads to restenosis or even vulnerable plaque formation \[[@rbw021-B176], [@rbw021-B177]\]. Besides SS, the average wall shear stress (AWSS), average wall shear stress gradient (AWSSG), oscillatory shear index (OSI) and relative residence time (RRT) are important parameters for reducing the number of false positives. AWSS identifies the largest number of plaques, but produces more false positives than OSI and RRT \[[@rbw021-B178]\].
It is necessary to increase the variety of detection methods, and in particular to pay attention to the proximal region of the vascular stenosis when measuring the SS \[[@rbw021-B98], [@rbw021-B179], [@rbw021-B180]\]. Evaluation of the volume of the plaque is also an indirect method for estimating the SS around the plaque \[[@rbw021-B181]\]. A 3D fusion of intravascular ultrasound and coronary CT is useful for *in vivo* wall SS analysis \[[@rbw021-B182], [@rbw021-B183]\]. It is necessary to combine optical coherence tomography and coronary angiography *in vivo* for the evaluation of the connection between the SS and the characteristics of vulnerable plaques \[[@rbw021-B180]\]. Regarding drug development, the regulatory effects of drugs on the SS should be cautiously considered; otherwise they may lead to more serious vascular disease \[[@rbw021-B184], [@rbw021-B185]\]. Lipid-lowering drugs may change the characteristics of plaques as well as the blood vessel wall thickness and elastic modulus \[[@rbw021-B186]\]. The vascular stiffness affects the sensitivity of ECs to SS and thereby participates in the regulation of vascular remodelling \[[@rbw021-B187]\]. Changes in vascular cyclic stress can also influence SS-mediated vascular remodelling of VSMCs \[[@rbw021-B188]\]. MRI assessment of plaque biomechanical properties, including wall SS and internal plaque strain, provides information on early plaque progression and vessel remodelling \[[@rbw021-B189], [@rbw021-B190]\]. More precise magnetic resonance, intravascular ultrasound (IVUS), CT and angiography have been applied to analyse and predict plaque development and stability \[[@rbw021-B191]\]. Morphological and mechanical features should also be considered in an integrated way for a more accurate assessment of plaque vulnerability, allowing for early identification of plaques with inflamed phenotypes \[[@rbw021-B191], [@rbw021-B192]\].
Critical plaque stress/strain conditions are affected considerably by stenosis severity, eccentricity, lipid pool size, shape and position, plaque cap thickness, axial stretch, pressure, and fluid-structure interactions. These variables may be used for plaque rupture predictions \[[@rbw021-B193]\]. If our hypothesis that angiogenesis is the main reason that high SS induces atherosclerotic vulnerable plaque formation is correct, it may provide new perspectives for clinically predicting the location of rupture-prone plaques and for preventing plaque instability. Theoretical models could be developed to predict the relationship between the magnitude of SS and atherosclerotic plaque rupture. They could also be applied to arterial bypass grafting through selection of the most appropriate geometry to adjust the SS and reduce the formation of microvessels. Finally, previous studies have shown that plaque microvessels may serve as an interface for plaque expansion. Therefore, we can narrow the range of treatment strategies, since plaque angiogenesis is primarily localized in the proximal plaque region. In summary, SS has been shown to play a role in plaque formation, progression and rupture. The underlying mechanism of plaque formation seems to differ from that of plaque rupture. Plaque formation is localized in the LSS region, whereas plaque rupture occurs primarily in the HSS region. HSS induces upregulation of NO and VEGF in ECs in the proximal region, which leads to microvessel formation in the plaque from the VV. Moreover, the pathological angiogenesis provides an entry point for infiltration of inflammatory cells, deposition of lipoproteins and the occurrence of intra-plaque haemorrhage. Decreasing the angiogenesis or the leaky vasculature \[[@rbw021-B196], [@rbw021-B197]\] induced by HSS may establish a more favourable microenvironment, which can impede vulnerable plaque formation.
This research program was supported by grants from the National Natural Science Foundation of China (31370949, 11332003, 81400329 and 11372364) and Chongqing Science and Technology Commission (cstc2013kjrc-ljrccj10003) as well as the Public Experiment Center of State Bioindustrial Base (Chongqing), China. *Conflict of interest statement.* None declared. [^1]: ^†^These two authors contributed equally to this work.
Lathyrus Lathyrus (commonly known as peavines or vetchlings) is a genus in the legume family Fabaceae and contains approximately 160 species. They are native to temperate areas, with a breakdown of 52 species in Europe, 30 species in North America, 78 in Asia, 24 in tropical East Africa, and 24 in temperate South America. There are annual and perennial species which may be climbing or bushy. This genus has numerous sections, including Orobus, which was once a separate genus. Uses Many species are cultivated as garden plants. The genus includes the garden sweet pea (Lathyrus odoratus) and the perennial everlasting pea (Lathyrus latifolius). Flowers on these cultivated species may be rose, red, maroon, pink, white, yellow, purple or blue, and some are bicolored. They are also grown for their fragrance. Cultivated species are susceptible to fungal infections including downy and powdery mildew. Other species are grown for food, including the Indian pea (L. sativus) and the red pea (L. cicera), and less commonly cyprus-vetch (L. ochrus) and Spanish vetchling (L. clymenum). The tuberous pea (L. tuberosus) is grown as a root vegetable for its starchy edible tuber. The seeds of some Lathyrus species contain the toxic amino acid oxalyldiaminopropionic acid and if eaten in large quantities can cause lathyrism, a serious disease. 
Diversity

Species include:

Lathyrus alpestris
Lathyrus angulatus – angled pea
Lathyrus annuus – red fodder pea
Lathyrus aphaca – yellow pea
Lathyrus aureus – golden pea
Lathyrus basalticus
Lathyrus bauhinii
Lathyrus belinensis
Lathyrus biflorus – twoflower pea
Lathyrus bijugatus – drypark pea
Lathyrus blepharicarpus – ciliate vetchling
Lathyrus boissieri
Lathyrus brachycalyx – Bonneville pea
Lathyrus cassius
Lathyrus chloranthus
Lathyrus cicera – red pea
Lathyrus ciliolatus
Lathyrus cirrhosus
Lathyrus clymenum – Spanish vetchling
Lathyrus crassipes – arvejilla
Lathyrus cyaneus
Lathyrus davidii
Lathyrus decaphyllus – prairie vetchling
Lathyrus delnorticus – Del Norte pea
Lathyrus digitatus
Lathyrus eucosmus – semmly vetchling, bush vetchling
Lathyrus filiformis
Lathyrus gloeospermus
Lathyrus gorgoni
Lathyrus graminifolius – grassleaf pea
Lathyrus grandiflorus – twoflower everlasting pea
Lathyrus grimesii – Grimes' pea
Lathyrus heterophyllus – Norfolk everlasting pea
Lathyrus hirsutus – hairy vetchling
Lathyrus hitchcockianus – Bullfrog Mountain pea
Lathyrus holochlorus – thinleaf pea
Lathyrus hygrophilus
Lathyrus inconspicuus
Lathyrus incurvus
Lathyrus japonicus – sea pea, beach pea
Lathyrus jepsonii – delta tule pea
Lathyrus laetivirens – aspen pea
Lathyrus laevigatus
Lathyrus lanszwertii – Nevada pea
Lathyrus latifolius – everlasting pea, perennial pea
Lathyrus laxiflorus
Lathyrus libani – Lebanon vetchling
Lathyrus linifolius – bitter vetch, heath pea
Lathyrus littoralis – silky beach pea
Lathyrus macropus
Lathyrus magellanicus
Lathyrus nervosus – Lord Anson's blue pea
Lathyrus nevadensis – Sierra pea
Lathyrus niger – black pea
Lathyrus nissolia – grass vetchling
Lathyrus nudicaulis
Lathyrus ochroleucus – cream pea
Lathyrus ochrus – Cyprus-vetch
Lathyrus odoratus – sweet pea
Lathyrus palustris – marsh pea
Lathyrus pannonicus
Lathyrus pauciflorus – fewflower pea
Lathyrus polyphyllus – leafy pea
Lathyrus pratensis – meadow vetchling
Lathyrus pseudocicera
Lathyrus pubescens
Lathyrus pusillus – tiny pea, singletary vetchling
Lathyrus quinquenervius
Lathyrus rigidus – stiff pea
Lathyrus roseus
Lathyrus sativus – Indian pea, white pea, chickling vetch
Lathyrus sphaericus – grass pea
Lathyrus splendens – pride of California
Lathyrus sulphureus – snub pea
Lathyrus sylvestris – flat pea
Lathyrus szowitsii
Lathyrus tingitanus – Tangier pea
Lathyrus torreyi – Torrey's peavine
Lathyrus tuberosus – tuberous pea
Lathyrus undulatus – wavy pea
Lathyrus vaniotii – Korean mountain vetchling
Lathyrus venetus
Lathyrus venosus – veiny pea, bushy vetchling
Lathyrus vernus – spring pea
Lathyrus vestitus – Pacific pea
Lathyrus vinealis
Lathyrus whitei

Jewish law

Lathyrus can be mixed with bitter peas without violating the Jewish law of Kilayim.

Ecology

Lathyrus species are used as food plants by the larvae of some Lepidoptera species, including the grey chi (Antitype chi) and the latticed heath (Chiasmia clathrata), both recorded on meadow vetchling (Lathyrus pratensis), and Chionodes braunella.

Notes

External links

Calflora Database: Lathyrus species index
Jepson Flora Project: Key to Lathyrus

Category:Fabeae
Category:Garden plants
Category:Poisonous plants
Category:Taxa named by Carl Linnaeus
Lake Ellen (Minnesota) Lake Ellen is a lake in Chisago County, Minnesota, in the United States. Lake Ellen was named for an early settler. See also List of lakes in Minnesota References Category:Lakes of Minnesota Category:Lakes of Chisago County, Minnesota
Q: The 'Microsoft.ACE.OLEDB.12.5' provider is not registered on the local machine

I made a small application to import Excel 2007+ files into a DataGridView. After copying the .exe file from the development PC to the working PC, I get the error message below when I press the button to load an Excel file. I tried to install AccessDatabaseEngine, but that didn't help. Any idea what is going on? Note: the working environments are Windows 7 and Windows XP SP2.

the "microsoft ACE.OLEDB.12.5" provider is not registered on the local machine See the end of this message for details on invoking just-in-time (JIT) debugging instead of this dialog box. ************** Exception Text ************** System.InvalidOperationException: The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine. at System.Data.OleDb.OleDbServicesWrapper.GetDataSource(OleDbConnectionString constr, DataSourceWrapper& datasrcWrapper) at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection) at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.OleDb.OleDbConnection.Open() at System.Data.Common.DbDataAdapter.QuietOpen(IDbConnection connection, ConnectionState& originalState) at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataTable[] dataTables, Int32 startRecord, Int32 maxRecords, IDbCommand command,
CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataTable dataTable) at LoadExcel.Form1.LoadExcel(String FilePath) at LoadExcel.Form1.GetFilesList() at LoadExcel.Form1.B_Load_Excel_Click(Object sender, EventArgs e) at System.Windows.Forms.Control.OnClick(EventArgs e) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) ************** Loaded Assemblies ************** mscorlib Assembly Version: 2.0.0.0 Win32 Version: 2.0.50727.8762 (QFE.050727-8700) CodeBase: file:///C:/Windows/Microsoft.NET/Framework64/v2.0.50727/mscorlib.dll ---------------------------------------- LoadExcel Assembly Version: 1.0.0.0 Win32 Version: 1.0.0.0 CodeBase: file:///C:/Users/Internet/Desktop/LoadExcel.exe ---------------------------------------- System.Windows.Forms Assembly Version: 2.0.0.0 Win32 Version: 2.0.50727.5491 (Win7SP1GDR.050727-5400) CodeBase: file:///C:/Windows/assembly/GAC_MSIL/System.Windows.Forms/2.0.0.0__b77a5c561934e089/System.Windows.Forms.dll ---------------------------------------- System Assembly Version: 2.0.0.0 Win32 Version: 2.0.50727.8770 (QFE.050727-8700) CodeBase: file:///C:/Windows/assembly/GAC_MSIL/System/2.0.0.0__b77a5c561934e089/System.dll ---------------------------------------- System.Drawing Assembly Version: 2.0.0.0 Win32 Version: 2.0.50727.5495 (Win7SP1GDR.050727-5400) CodeBase: file:///C:/Windows/assembly/GAC_MSIL/System.Drawing/2.0.0.0__b03f5f7f11d50a3a/System.Drawing.dll ---------------------------------------- System.Data Assembly Version: 2.0.0.0 Win32 Version: 2.0.50727.8762 (QFE.050727-8700) CodeBase: 
file:///C:/Windows/assembly/GAC_64/System.Data/2.0.0.0__b77a5c561934e089/System.Data.dll ---------------------------------------- System.Xml Assembly Version: 2.0.0.0 Win32 Version: 2.0.50727.8773 (QFE.050727-8700) CodeBase: file:///C:/Windows/assembly/GAC_MSIL/System.Xml/2.0.0.0__b77a5c561934e089/System.Xml.dll ---------------------------------------- System.Transactions Assembly Version: 2.0.0.0 Win32 Version: 2.0.50727.5483 (Win7SP1GDR.050727-5400) CodeBase: file:///C:/Windows/assembly/GAC_64/System.Transactions/2.0.0.0__b77a5c561934e089/System.Transactions.dll ************** JIT Debugging ************** To enable just-in-time (JIT) debugging, the .config file for this application or computer (machine.config) must have the jitDebugging value set in the system.windows.forms section. The application must also be compiled with debugging enabled. For example: When JIT debugging is enabled, any unhandled exception will be sent to the JIT debugger registered on the computer rather than be handled by this dialog box.

A: I am quoting @Steve's comment, which was the answer to my question: the AccessDatabaseEngine I installed was the correct version for my operating system and for Microsoft Office, but I had to change my project platform from 64-bit to AnyCPU.

It is a convoluted matter. AnyCPU (with Prefer 32bit) means that your app runs in 32bit even on a 64bit OS. So you need to install the AccessDatabaseEngine for 32bit (but if you have already installed Office 64bit then you need to remove Prefer 32bit or install Office 32bit). I suggest you search this site for the exact error message. There are a lot of duplicates. By the way, I am unaware of any 12.5 version of this provider. Are you sure about this number? I thought the last one was version 12.0.
Q: How to change strokeColor by a custom gradient

I want to animate the strokeColor of a CAShapeLayer, but in CABasicAnimation I have only two values (from and to). Does it support only two colors while the animation fires? For example, at the start I have

strokeColor = [UIColor blueColor].CGColor;

then

CABasicAnimation *colorAnimation = [CABasicAnimation animationWithKeyPath:@"strokeColor"];
colorAnimation.duration = 3.0; // "animate over 3 seconds or so.."
colorAnimation.repeatCount = 1.0; // Animate only once..
colorAnimation.removedOnCompletion = NO; // Remain stroked after the animation..
colorAnimation.fillMode = kCAFillModeForwards;
colorAnimation.toValue = (id)[UIColor redColor].CGColor;
colorAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn];

Halfway through I get a dark purple color, but I need, for example, a yellow color. Is it possible to add a custom gradient to CABasicAnimation?

A: I don't think you can do that with CABasicAnimation, but you can use a CAKeyframeAnimation to set intermediate values for your animation:

CAKeyframeAnimation *colorAnimation = [CAKeyframeAnimation animationWithKeyPath:@"strokeColor"];
colorAnimation.values = @[(id)[[UIColor blueColor] CGColor], (id)[[UIColor yellowColor] CGColor], (id)[[UIColor redColor] CGColor]];
colorAnimation.duration = 3.0; // "animate over 3 seconds or so.."
colorAnimation.repeatCount = 1.0; // Animate only once..
colorAnimation.removedOnCompletion = NO; // Remain stroked after the animation..
colorAnimation.fillMode = kCAFillModeForwards; colorAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn]; If you want a "across the spectrum" sort of feel, you could do: colorAnimation.values = @[(id)[[UIColor blueColor] CGColor], (id)[[UIColor greenColor] CGColor], (id)[[UIColor yellowColor] CGColor], (id)[[UIColor orangeColor] CGColor], (id)[[UIColor redColor] CGColor]]; Or if you want more of a simple blue to red, but avoiding that really dark purple, you could do: colorAnimation.values = @[(id)[[UIColor blueColor] CGColor], (id)[[UIColor colorWithRed:0.9 green:0.0 blue:0.9 alpha:1.0] CGColor], (id)[[UIColor redColor] CGColor]]; Lots of options.
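The difference between a two-value animation and a keyframe path can be seen numerically: a straight blue-to-red blend passes through dark purple at the midpoint, while a three-keyframe path hits yellow there instead. A small illustrative sketch in Python, with RGB tuples standing in for CGColor values (this is not Core Animation's actual interpolation code, just the same piecewise-linear idea):

```python
def lerp(a, b, t):
    """Linearly interpolate between two RGB tuples at fraction t in [0, 1]."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def keyframe_color(values, t):
    """Piecewise-linear interpolation across evenly spaced keyframe colors."""
    n = len(values) - 1
    seg = min(int(t * n), n - 1)   # which keyframe segment t falls into
    local_t = t * n - seg          # position within that segment
    return lerp(values[seg], values[seg + 1], local_t)

blue, yellow, red = (0.0, 0.0, 1.0), (1.0, 1.0, 0.0), (1.0, 0.0, 0.0)

# Direct blue -> red blend at the halfway point: dark purple.
print(lerp(blue, red, 0.5))                        # (0.5, 0.0, 0.5)

# Three-keyframe path hits yellow at the halfway point instead.
print(keyframe_color([blue, yellow, red], 0.5))    # (1.0, 1.0, 0.0)
```

This is why inserting an explicit yellow keyframe, as the answer suggests, changes what the animation looks like at its midpoint.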
“The Kid” was Sean Brosnan’s directorial debut, and now he is on a roll. Sean Brosnan and Sanja Banic set up Knight Marcher Films in 2012 to make films together. They are currently in Canada, as Sean is acting in a film being shot on location in Toronto and Ottawa. The timing could not be more perfect, with a break in filming taking place the weekend of the Canadian Grand Prix. Sean Brosnan has not been to a Grand Prix since his dad Pierce Brosnan took him as a little boy, and Sanja has never been, so they are about to get a taste of Montreal and the spirit of one of the coolest events that takes place in Canada, the Canadian Grand Prix, in a city from which so many great Canadian filmmakers have emerged. Sean and Sanja are not in bad company, as one of the biggest fans of Formula 1 is none other than Pierce (no, NOT Brosnan) Handling, the CEO of the number one film festival in the world, the Toronto International Film Festival, along with the director of the festival, Warren Spitz. Let it be known: if you want to make films in Canada, we are ready and encourage you to come; we will show you a good time and you will never want to leave. Look for Sean and Sanja rocking the paddock Sunday. They are about to take Canada by storm and are looking to do a lot more work in Canada.
Two-photon fluorescence imaging and reactive oxygen species detection within the epidermis. Two-photon fluorescence microscopy is used to detect ultraviolet-induced reactive oxygen species (ROS) in the epidermis and the dermis of ex vivo human skin and skin equivalents. Skin is incubated with the nonfluorescent ROS probe dihydrorhodamine, which reacts with ROS such as singlet oxygen and hydrogen peroxide to form fluorescent rhodamine-123. Unlike confocal microscopic methods, two-photon excitation provides depth penetration through the epidermis and dermis with little photodamage to the sample. This method also provides submicron spatial resolution such that subcellular areas that generate ROS can be detected. In addition, comparative studies can be made to determine the effect of applied agents (drugs, therapeutics) upon ROS levels at any layer or cellular region within the skin.
Q: Why is dictionary so much faster than list?

I am testing the speed of getting data from Dictionary vs. List. I've used this code to test:

internal class Program
{
    private static void Main(string[] args)
    {
        var stopwatch = new Stopwatch();
        List<Grade> grades = Grade.GetData().ToList();
        List<Student> students = Student.GetStudents().ToList();

        stopwatch.Start();
        foreach (Student student in students)
        {
            student.Grade = grades.Single(x => x.StudentId == student.Id).Value;
        }
        stopwatch.Stop();
        Console.WriteLine("Using list {0}", stopwatch.Elapsed);
        stopwatch.Reset();

        students = Student.GetStudents().ToList();
        stopwatch.Start();
        Dictionary<Guid, string> dic = Grade.GetData().ToDictionary(x => x.StudentId, x => x.Value);
        foreach (Student student in students)
        {
            student.Grade = dic[student.Id];
        }
        stopwatch.Stop();
        Console.WriteLine("Using dictionary {0}", stopwatch.Elapsed);
        Console.ReadKey();
    }
}

public class GuidHelper
{
    public static List<Guid> ListOfIds = new List<Guid>();

    static GuidHelper()
    {
        for (int i = 0; i < 10000; i++)
        {
            ListOfIds.Add(Guid.NewGuid());
        }
    }
}

public class Grade
{
    public Guid StudentId { get; set; }
    public string Value { get; set; }

    public static IEnumerable<Grade> GetData()
    {
        for (int i = 0; i < 10000; i++)
        {
            yield return new Grade { StudentId = GuidHelper.ListOfIds[i], Value = "Value " + i };
        }
    }
}

public class Student
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Grade { get; set; }

    public static IEnumerable<Student> GetStudents()
    {
        for (int i = 0; i < 10000; i++)
        {
            yield return new Student { Id = GuidHelper.ListOfIds[i], Name = "Name " + i };
        }
    }
}

There is a list of students and a list of grades in memory; they have StudentId in common. In the first approach I find a student's grade using LINQ over the list, which takes nearly 7 seconds on my machine. In the second approach I first convert the List into a Dictionary and then look up grades by key, which takes less than a second.
A: When you do this:

student.Grade = grades.Single(x => x.StudentId == student.Id).Value;

As written, it has to enumerate the entire List until it finds the entry that has the correct StudentId (does entry 0 match the lambda? No... does entry 1 match the lambda? No... and so on). This is O(n). Since you do it once for every student, it is O(n^2).

However, when you do this:

student.Grade = dic[student.Id];

the dictionary can instantly jump to where the element with that key is stored: this is O(1), so O(n) for doing it for every student. (If you want to know how this is done: Dictionary runs a mathematical operation on the key, which turns it into a value that is a place inside the dictionary, the same place it put it when it was inserted.)

So, the dictionary is faster because you used a better algorithm.

A: The reason is that a dictionary is a lookup, while a list is an iteration. A Dictionary uses a hash lookup, while your list has to walk through its entries from the beginning until it finds the result, each time.

To put it another way: the list will be faster than the dictionary on the first item, because there's nothing to look up; it's the first item, boom, it's done. But the second time, the list has to look through the first item, then the second item. The third time through it has to look through the first item, then the second item, then the third item, etc. So each iteration the lookup takes more and more time. The larger the list, the longer it takes. The dictionary, by contrast, has a more or less fixed lookup time (it also increases as the dictionary gets larger, but at a much slower pace, so by comparison it's almost fixed).

A: When using a Dictionary you are using a key to retrieve your information, which enables it to find the item more efficiently; with a List you are using a Single LINQ expression, which, since it is a list, has no option other than to look through the entire list for the wanted item.
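The O(n) vs. O(1) point above can be made concrete by counting how many key comparisons a linear scan performs against a single hash lookup. A small illustrative sketch in Python rather than C# (same shape as the List vs. Dictionary code in the question: 10,000 key/value pairs, worst-case key at the end):

```python
def find_in_list(pairs, key):
    """Linear scan, like List<T>.Single: returns (value, comparisons made)."""
    comparisons = 0
    for k, value in pairs:
        comparisons += 1
        if k == key:
            return value, comparisons
    raise KeyError(key)

n = 10_000
pairs = [(i, f"Value {i}") for i in range(n)]
table = dict(pairs)            # hash table: O(1) average-case lookup

# Worst case for the list: the key we want is the last entry.
value, steps = find_in_list(pairs, n - 1)
print(steps)                   # 10000 comparisons for ONE lookup
print(table[n - 1] == value)   # the dict returns the same answer directly
```

Doing that scan once per student is what turns the question's first loop into O(n^2) overall, while the dictionary loop stays O(n).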
1. Technical Field

This disclosure relates generally to semiconductor devices, and more particularly to stackable die modules.

2. Description of the Related Art

Conventional system-on-a-chip (SoC) package temperature sensor calibration methods correlate the sensor output (e.g., voltage) to reference temperature set-points to yield a sensor calibration curve. The sensor calibration curve includes a slope and offset for linear-response temperature sensors. Such calibration is typically carried out in a thermal chamber or liquid bath environment in order to achieve a uniform ambient temperature. Power levels inside the silicon have to be minimized to eliminate any on-die temperature gradient and junction-to-ambient temperature difference. The major disadvantage of this steady-state calibration approach is its slow throughput, due to (1) the large thermal mass of the thermal chamber itself, which results in a very large equilibration time to reach a given temperature set point, especially at high temperature ranges, and (2) the slow thermal equilibrium between SoC packages and ambient temperatures due to the thermal mass of the package/PCB board assembly. The current methodology for calibration typically takes ~30-60 minutes.
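The slope-and-offset calibration described above amounts to a least-squares line fit of sensor output against reference temperature set-points, which can then be inverted to convert readings back to temperatures. A hedged sketch of that step in Python (the set-points, slope, and offset here are synthetic illustration values, not figures from the disclosure; a real flow would use measured sensor voltages at chamber set-points):

```python
def fit_linear(xs, ys):
    """Least-squares slope/offset for y = slope * x + offset."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Reference set-points (deg C) and simulated sensor outputs (V) from an
# assumed linear response V = 0.01 * T + 0.25 (hypothetical numbers).
setpoints = [25.0, 45.0, 65.0, 85.0, 105.0]
outputs = [0.01 * t + 0.25 for t in setpoints]

slope, offset = fit_linear(setpoints, outputs)

def to_temperature(v):
    """Invert the calibration curve: sensor output -> temperature."""
    return (v - offset) / slope

print(round(slope, 4), round(offset, 4))   # 0.01 0.25
print(round(to_temperature(0.85), 1))      # 60.0
```

The throughput problem in the passage is not this fit, which is cheap, but gathering the (set-point, output) pairs, since each set-point requires the chamber and package to reach thermal equilibrium.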
Q: How can I make deleteRowsAtIndexPaths: work with GenericTableViewController?

I'm using Matt Gallagher's GenericTableViewController idea for controlling my UITableViews. My datasource is an NSFetchedResultsController.

http://cocoawithlove.com/2008/12/heterogeneous-cells-in.html

Everything is working fine, until I try to delete a cell. I have the following code in my View Controller:

- (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath {
    if (editingStyle == UITableViewCellEditingStyleDelete) {
        // Delete the managed object.
        NSManagedObjectContext *context = [wineryController managedObjectContext];
        [context deleteObject:[wineryController objectAtIndexPath:indexPath]];

        NSError *error;
        if (![context save:&error]) {
            // Handle the error.
        }

        [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
    }
}

The final line crashes with the rather verbose explanation in the console:

*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (5) must be equal to the number of rows contained in that section before the update (5), plus or minus the number of rows inserted or deleted from that section (0 inserted, 1 deleted).'

OK, I understand what it is saying... a row is not getting deleted (I would assume) because I'm not forwarding some message to the right place (since I have moved some code from its 'normal' location)... does anyone have any idea which one? I am totally stumped on this one.

A: Well, bah. I just found this answer, which is not the same, but it got me headed in the right direction. I'll leave this here for anyone in the future having similar troubles.

The key is to wrap the deleteRowsAtIndexPaths: call in beginUpdates/endUpdates calls and force the model to update within the same block, resulting in:

[tableView beginUpdates];
[self constructTableGroups];
[tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
[tableView endUpdates];

This caused the issue to go away, and the animations to work just perfectly.
package classic

import (
	"context"
	"fmt"

	"github.com/hashicorp/go-oracle-terraform/compute"
	"github.com/hashicorp/packer/helper/multistep"
	"github.com/hashicorp/packer/packer"
)

type stepAttachVolume struct {
	Index           int
	VolumeName      string
	InstanceInfoKey string
}

func (s *stepAttachVolume) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
	client := state.Get("client").(*compute.Client)
	ui := state.Get("ui").(packer.Ui)
	instanceInfo := state.Get(s.InstanceInfoKey).(*compute.InstanceInfo)

	saClient := client.StorageAttachments()
	saInput := &compute.CreateStorageAttachmentInput{
		Index:             s.Index,
		InstanceName:      instanceInfo.Name + "/" + instanceInfo.ID,
		StorageVolumeName: s.VolumeName,
	}

	sa, err := saClient.CreateStorageAttachment(saInput)
	if err != nil {
		err = fmt.Errorf("Problem attaching master volume: %s", err)
		ui.Error(err.Error())
		state.Put("error", err)
		return multistep.ActionHalt
	}

	state.Put(s.InstanceInfoKey+"/attachment", sa)
	ui.Message("Volume attached to instance.")
	return multistep.ActionContinue
}

func (s *stepAttachVolume) Cleanup(state multistep.StateBag) {
	sa, ok := state.GetOk(s.InstanceInfoKey + "/attachment")
	if !ok {
		return
	}

	client := state.Get("client").(*compute.Client)
	ui := state.Get("ui").(packer.Ui)

	saClient := client.StorageAttachments()
	saI := &compute.DeleteStorageAttachmentInput{
		Name: sa.(*compute.StorageAttachmentInfo).Name,
	}

	if err := saClient.DeleteStorageAttachment(saI); err != nil {
		err = fmt.Errorf("Problem detaching storage volume: %s", err)
		ui.Error(err.Error())
		state.Put("error", err)
		return
	}
}
Label-free proteomic analysis of breast cancer molecular subtypes. To better characterize the cellular pathways involved in breast cancer molecular subtypes, we performed a proteomic study using a label-free LC-MS strategy for determining the proteomic profile of Luminal A, Luminal-HER2, HER2-positive, and triple-negative (TN) breast tumors compared with healthy mammary tissue. This comparison aimed to identify the aberrant processes specific for each subtype and might help to refine our understanding regarding breast cancer biology. Our results address important molecular features (both specific and commonly shared) that explain the biological behavior of each subtype. Changes in proteins related to cytoskeletal organization were found in all tumor subtypes, indicating that breast tumors are under constant structural modifications to invade and metastasize. We also found changes in cell-adhesion processes in all molecular subtypes, corroborating that invasiveness is a common property of breast cancer cells. Luminal-HER2 and HER2 tumors also presented altered cell cycle regulation, as shown by the several DNA repair-related proteins. An altered immune response was also found as a common process in the Luminal A, Luminal-HER2, and TN subtypes, and complement was the most important pathway. Analysis of the TN subtype revealed blood coagulation as the most relevant biological process.