A large-scale mining operation is here defined, for the sake of argument, as one that produces mineral commodities with an average value of more than US$100 million a year for a period of at least ten years. Although the title of this article features the concept of a 'large-scale mine', we do not think of a 'mine' as a place where mineral resources are extracted from the ground, but rather as a group of excavation and processing activities that are integrated into a single whole by means of a network of corporate relationships between the producers. In Papua New Guinea (PNG), each large-scale mining operation occupies and constitutes a distinctive territorial enclave within the country's borders, with one place of excavation, one processing plant and one route by which the product leaves the country. In calculating the contribution of extractive industry to PNG's exports, we have included a petroleum project that has been exporting oil on this scale since 1992; this has recently been supplemented by a liquefied natural gas project whose contribution to the value of PNG's exports will be much greater. However, for the purpose of the present article, we have excluded oil and gas projects from our definition of a 'mining operation', because the mining and petroleum industries are subject to different forms of regulation in PNG.
PNG has hosted seven large-scale mining projects in the period since independence, five of which are still in operation.
- The Panguna gold and copper mine operated from 1972 to 1989, when it was forcibly closed by civil unrest on the island of Bougainville.
- The Misima gold mine (in Milne Bay Province) operated from 1986 to 2004.
- The Ok Tedi gold and copper mine (in Western Province) began production in 1984.
- The Porgera gold mine (in Enga Province) in 1992.
- The Lihir gold mine (in New Ireland Province) in 1997.
- The Hidden Valley gold mine (in Morobe Province) in 2009.
- The Ramu nickel and cobalt mine (in Madang Province) in 2013.
The Ok Tedi, Porgera and Hidden Valley mines are all scheduled to close within the next decade. A large-scale seabed mining project has been approved for development but is not yet operational. Four other prospective large-scale mining projects (in Milne Bay, Morobe, Madang and West Sepik provinces) are currently undergoing feasibility studies, and the Autonomous Bougainville Government has plans to reopen the Panguna mine if it can mobilise popular support for this to happen. The PNG Government reserves the right to purchase up to 30 per cent of the equity in any mining project for which it grants a development licence, but has generally ended up with a smaller stake or no stake at all. The balance of the shares in all large-scale mining projects has normally been held by foreign investors, including the companies that operate them, but the Ok Tedi project is now an exception to this rule because the former operator (BHP Billiton) decided that it was a liability rather than an asset. When the government has purchased a stake in a large-scale mining project, all or part of it has commonly been held in trust for the provincial government, local-level government or landowning community that hosts the project. These other entities are allowed to choose whether they wish the national government to exercise this option on their behalf.
The PNG Government currently holds a 5 per cent stake in the Porgera project, which is operated by the Canadian company Barrick Gold, and this stake is held in trust for the Enga Provincial Government and the local landowners. A comparable stake in the Lihir project, which is operated by the Australian company Newcrest, was acquired and then sold by the local landowners. The government did not exercise the option to purchase shares in either the Hidden Valley project, which is also operated by Newcrest, or in the Ramu project, which is operated by the China Metallurgical Group Corporation. BHP Billiton bequeathed its shares in the Ok Tedi project to a charitable trust known as the PNG Sustainable Development Program, but this entity was nationalised in 2013, so this project is now wholly owned by the government. The development of a large-scale mining project in PNG is framed by three different types of agreement that are normally negotiated in the following order. First, there is a compensation agreement between the holder of an exploration licence and the customary owners of the land covered by that licence. Since 97 per cent of the land in PNG is generally held to be customary land, there is no such thing as an exploration licence without customary landowners. Second, there is a development agreement (or mining development contract) between the national government and the project proponent based on feasibility studies that the proponent provides to the government. This type of agreement is linked to the production of an environmental impact statement that must also be approved by the national government. Third, there is a benefit-sharing agreement between the national government, the provincial and local-level government (or governments) hosting the project and the customary owners of the land required for development purposes, which is negotiated through an institution known as the development forum. The project proponent is also represented in the negotiation of this third type of agreement, but a development licence is not normally granted until it has been finalised. The first two types of agreement have been features of PNG’s mineral policy framework since independence in 1975; the third type was added in 1988 in response to political pressure from provincial governments and local community representatives. In PNG, development agreements have normally required that national participation be specified in training and localisation plans and business development plans whose implementation is then reported to the national government at regular intervals. These plans are subject to the ‘preferred area policy’, which has come to inform the negotiation of benefit-sharing agreements. The origins of the preferred area policy can be traced back to a pair of decisions made by the newly independent national government in 1976. The first decision was to repatriate a sum equivalent to the whole of the royalty collected by the national government, in its capacity as the legal owner of subsurface mineral resources, to the province from which those resources were extracted. This decision was made in response to threats of secession from the province that hosted the Panguna mine, but it would have general application under the new system of provincial government that was put in place at the same time. 
The second decision was to oblige the future developer of the Ok Tedi mine to give preference in training, employment and business development to the people of the area most directly affected by the mining operation. This decision was not initially meant to have general application, nor did it apply to the Panguna mine. It was justified by the observation that the people living around the Ok Tedi mine were exceptionally poor and therefore deserved this form of affirmative action. Considerations of poverty and equity have long since disappeared from PNG's preferred area policy. The allocation of royalties and of entitlements to training, employment and business development opportunities is now included in the range of benefits that are subject to benefit-sharing agreements through the development forum. In effect, the preferred area policy creates concentric rings of entitlement to the benefit streams that are subject to such agreements. The innermost ring is occupied by the customary owners of the land covered by development licences, the next by 'project area people' (however these might be defined), the next by the people or government of the host province, and the outermost ring by the population or government of the nation as a whole. In the period since the development forum was invented in 1988, there has been a steady increase in the proportion of government revenues from each new resource project that is captured by organisations or individuals in the three inner circles of entitlement. Since 1993, the economic privilege bestowed on preferred areas has been compounded by a tax credit scheme for developers who supply social and economic infrastructure to local communities. There is no simple answer to the question of what actually constitutes the 'local' level at which large-scale mining operations become the subject of local-level politics. There is no necessary correspondence between the boundaries of a mine-affected area and those of a set of political or administrative units, even if each mine-affected area is conceived as a unique geographical space surrounding a large-scale mine. The question of what actually constitutes a mine-affected area is itself a political question, especially if people's inclusion in such an area entitles them to special consideration in the payment of compensation or the distribution of project-related benefits. Project operators have an obvious interest in limiting the size of such an area in order to treat broader social and environmental impacts as externalities for which they cannot be held accountable. On the other hand, there may be some circumstances in which the boundaries of the area affected by one project overlap those of the area affected by another project, and it is also possible to represent the areas affected by several different projects as part of a larger region which experiences the cumulative social and environmental impacts of all of them. PNG has four tiers or levels of political organisation, the first of which is the national government itself. The second tier consists of the National Capital District, 20 provinces and one autonomous region (Bougainville) that used to be a province. Each of these 22 entities has its own elected representative in the national parliament, and 21 of these representatives are known as governors.
At the next level down, there are 89 'open electorates', also with their own elected representatives in the national parliament; three of these are subdivisions of the national capital, while the rest are known as districts in their own right. The fourth tier comprises 332 local-level governments, each with its own directly elected president who participates in decisions made at the district and provincial levels, but not at the national level. Twenty-nine of these local governments represent towns, most of which are provincial capitals, while the rest represent rural areas. Since most of PNG's provinces do not host a large-scale mining project, nor any of the facilities associated with the export of oil and gas, there is an ongoing political debate about the way that national revenues from the extractive industry sector should be distributed between the minority of project-hosting provinces and districts and the majority that cannot currently claim ownership of such a project. The point at issue here is the so-called 'derivation principle', which says that a national government should transfer some of the revenue it derives from any economic activity to the lower levels of government responsible for the area where the activity occurs, so that these lower levels of government have an incentive to support the activity. This is a national issue, not a local one, and is only indirectly connected to the distribution of revenues from foreign aid. Foreign aid now accounts for only 3 per cent of PNG's gross domestic product, and its distribution across the country could not possibly compensate for the effects of the preferred area policy, even if policymakers thought this would be a good idea. In the context of the mining industry in PNG, local-level politics has a quite distinctive focus, not only on the negotiation and implementation of promises to provide compensation and benefit packages to the customary owners of land in mine-affected areas, but also on the distribution of the contents of these packages among the people who are entitled to a share of them. The size of the enclaves that are demarcated for this purpose varies from one project to another, but the boundaries may also change during the course of the project cycle. At one extreme is the Lihir group of islands, which accounts for just one of the ten local government areas in PNG's New Ireland Province. At the other extreme is the area now recognised as the one affected by the Ok Tedi mine, which has grown to include all or part of ten of the 14 local government areas in Western Province. The larger the scale of a mine-affected area, the greater the scope for it to be internally divided into separate geographical zones, each of which may then become the site of a distinct set of political activities related to the mining project that affects it. A mining enclave or mine-affected area cannot be neatly demarcated by lines on a map in the same way that a mine site or plant site is bounded and guarded by fences, gates and signposts. It is also constituted in an institutional and ideological sense, by means of its connections with different political and administrative levels and spaces. These connections can be made of legal rules or administrative norms, the distribution of shareholdings in different companies, networks of patronage or clientelism, or the positions assigned to the practice or promise of mining in different political imaginations.
The multidimensional nature of the mining enclave or mine-affected area therefore means that local-level politics is not simply politics conducted within a particular physical space, or even at one level of political organisation.
…Three historical cases of major technological innovations whose benefits and risks were the subject of heated public controversy are examined, in search of lessons that may suggest a path toward consensus in the biotechnology debate. In each of the cases (water fluoridation, nuclear power and pesticides), proponents of the technology gathered scientific evidence that they believed established that the innovations were safe. In each case, the federal government was heavily involved in oversight, safety regulation and, in the first two cases, active promotion of the technology. Supporters of the technologies employed a variety of communications strategies, ranging from massive "educational" campaigns (e.g., "Our Friend The Atom") to vituperative ad hominem attacks on leading opponents. None of these strategies succeeded in achieving broad societal acceptance of the technologies. Fluoridation today is opposed as vigorously by activist groups as it was when first introduced around 1950; it has not been universally adopted even in the U.S., and it has been rejected in most other countries. . . .
1. Fluoridation.
In the 1930s, studies of mottled dental enamel in parts of the Midwest and western United States found that fluoride in the water caused the problem. Further research found that people with mottled teeth had fewer cavities, and dental researchers soon proposed adding fluoride to water supplies to reduce tooth decay. Experimental fluoridation trials began in three communities in 1945. But enthusiastic proponents of the idea could not wait for more scientific evidence. They mounted an intensive lobbying campaign, and in 1950 persuaded the U.S. Public Health Service (PHS), which had done or sponsored most of the research up to that point, to endorse fluoridation and urge local communities to adopt it. The PHS and a few state dental officials then began vigorously promoting fluoridation of community water supplies nationwide.(10,11,12,13)
If advocates of fluoridation had expected the PHS endorsement to persuade the public to accept fluoridation, they were rudely disappointed. Virulent public opposition cropped up in communities where fluoridation was being considered.(14) In retrospect, it is not hard to understand why. The scientific case for fluoridation was much weaker than it looked to supporters of the idea. The experimental trials had not run their course, and there had been no significant studies examining the long-term health of people in communities with naturally fluoridated water.(15) Those who favored fluoridation had focused very narrowly on demonstrating its benefits, and had essentially taken its safety as a given. Fluoride is in fact quite toxic (it was once widely used as an insecticide and a rodenticide). Exposure via drinking water, at levels not much higher than what was proposed for fluoridation, had been associated in numerous published studies, beginning around 1940, with serious adverse skeletal and neuromuscular effects, in India and other countries.(16,17) Opposition to fluoridation initially came from scientists concerned about the lack of good evidence on possible health risks.
Non-scientific concerns also loomed large: the pro-fluoridation activists had given no serious thought to the rights of individuals to choose whether or not to take the risks of ingesting fluoride, and seemed insensitive to the complex ethical questions raised by adding something beneficial but toxic to the public water supply.(18,19) When controversy exploded and pro-fluoridationists had no good answers for questions raised by opponents, fluoridation took a political beating. At the local level, opponents demanded referenda on fluoridation, and usually defeated the measure. Congress held hearings in 1952 and recommended that the PHS pursue a "go-slow" policy.(20,21)
One option for advocates of new technologies (asking government to regulate the product and certify its safety) was unavailable in this case. The government (the PHS) was the leading sponsor of fluoridation, had already decided it was safe, and did not consider the risks an open question.(22) The pro-fluoridationists seemed insensitive to many citizens' perception that because the PHS was a strong advocate of fluoridation's benefits, it could not be an unbiased assessor of its risks.
Faced with such unexpected and strong opposition, the pro-fluoridation side hardened its stance. Leading PHS dental researchers lobbied every leading scientific organization to gain endorsements of fluoridation.(23,24) They cast fluoridation as a product of scientific progress under siege from anti-scientific forces, and rallied the scientific community in political support of the measure.(25) They carried out a few studies looking for possible adverse effects of fluoridation; the studies were poorly designed and inconclusive, by today's standards, but they found no convincing evidence of harm.(26) The PHS declared the issues closed, the debate over.(27,28) The studies were roundly criticized as inadequate and biased by leading opponents of the day,(29,30) but fluoridation advocates rapidly took the stance that there was no longer any scientific doubt that fluoridation was safe and effective.(31,32) Their political strategy was simply to steamroll the opposition, to insist that opponents had no basis for any valid objections. They focused on political campaigning, not on research; in fact, research all but halted, as it was politically inexpedient for the PHS to be studying questions it had already declared adequately answered.(33,34)
The pro-fluoridation movement adopted a hostile attack posture toward opponents. They characterized opponent leaders, regardless of scientific credentials (and many were either research scientists or physicians), as cranks and crackpots.(35) They aggressively used guilt-by-association, spreading images like that of the right-wing lunatic General Jack D. Ripper in "Dr. Strangelove," to discredit the very idea of opposition to fluoridation. They used slick public relations campaigns, avoided scientific discourse, sought to solidify political support for fluoridation in the scientific professions, and worked to energize local health leaders to fight to win referenda.(36) Did these tactics work? In some limited ways.
Few respectable scientists voiced doubts about fluoridation, once proponents had reinforced public perceptions that opposition to fluoridation was a "crackpot" cause.(37) Those who did openly oppose fluoridation were often subjected to personal attacks and professional reprisals.(38) For decades, mainstream scientific journals would reject for publication any paper that did not articulate a strictly pro-fluoridation position on risk and benefit questions.(39,40,41) The strategy of waging political war against the opposition also helped recruit zealous pro-fluoridation leaders to engage opponents in local skirmishes.
But the tactics pursued in support of fluoridation also had serious counterproductive effects. By recruiting scientific bodies as political endorsers and refusing to debate the scientific issues, proponents substituted dogmatism for open-mindedness and weakened their own scientific credibility. Their scorched-earth attacks on their opponents further polarized the debate, redoubled the determination of the antis, and made them appear to be the underdogs. Far from silencing the opposition, these attacks both increased public sympathy for the anti-fluoridation position and drove anti leaders toward more extreme positions.(42)
Fifty years after it began, the fluoridation debate persists largely unchanged. Despite half a century of official approval and promotion, only about 60 percent of American public water supplies are fluoridated.(43) When local health officials propose fluoridation, grass-roots opposition almost always crops up, and fluoridation still goes down to defeat more often than not. Risk issues much like those raised 50 years ago, now with the sophistication added by decades of public debate of environmental health hazards, are raised today, and the science backing up those concerns is accessible on the internet.(44) Outside the United States, most other countries have rejected fluoridation, choosing other effective strategies for combating tooth decay.(45)
In short, the fluoridation model is hardly one the biotechnology industry would want to emulate today. Endorsements by prestigious scientific bodies and "clean bills of health" issued by expert committees from which competent critics were systematically excluded have limited persuasive value; their biases are obvious, and they don't address the issues that often concern the public. While biotechnology advocates may occasionally feel the urge to sweep aside risk issues and crush their critics with propaganda and ad hominem attacks, all that approach really accomplished for the pro-fluoridation movement was to create an entrenched, undying opposition that limited adoption. Fluoridation advocates have never managed to persuade the public to accept the idea solely on its merits.
10. McNeil, D.R. (1957), The Fight for Fluoridation. New York: Oxford University Press.
11. Exner, F.B. and G.L. Waldbott with J. Rorty (ed.) (1957), The American Fluoridation Experiment. New York: The Devin-Adair Company.
12. Wollan, M. (1968), Controlling the Potential Hazards of Government-Sponsored Technology. George Washington Law Review 36(5): 1105-1137.
13. Groth, E. (1973), Two Issues of Science and Public Policy: Air Pollution Control in the San Francisco Bay Area, and Fluoridation of Community Water Supplies. Ph.D. Dissertation, Department of Biological Sciences, Stanford University, May 1973.
14. McNeil, op. cit. (Note 10).
The book is a history of fluoridation proponents' early struggles to overcome public opposition, told from the pro-fluoridation perspective.
15. Wollan, op. cit. (Note 12), pp. 1128-29.
16. Groth, op. cit. (Note 13) reviews the literature, as do Exner et al., op. cit. (Note 11).
17. Waldbott, G.L., A.W. Burgstahler and H.L. McKinney (1978), Fluoridation: The Great Dilemma. Lawrence, KS: Coronado Press.
18. Wollan, op. cit. (Note 12), p. 1129.
19. Martin, B. (1991), Scientific Knowledge in Controversy: The Social Dynamics of the Fluoridation Debate. Albany, NY: State University of New York Press. Ethical and legal issues raised by fluoridation are explored at pp. 30-34.
20. Wollan, op. cit. (Note 12), pp. 1128-1130.
21. McNeil, op. cit. (Note 10), pp. 145-154.
22. Wollan, op. cit. (Note 12), pp. 1131-1133.
23. Wollan, op. cit. (Note 12), p. 1131. Groth, op. cit. (Note 13) and McNeil, op. cit. (Note 10) also reviewed the PHS lobbying campaign to gain endorsements.
24. McClure, F.J. (1970), Water Fluoridation: The Search and The Victory. Bethesda, MD: National Institute of Dental Research. Chapter 14 details the endorsements.
25. McNeil, op. cit. (Note 10); McClure, op. cit. (Note 24).
26. See McClure, op. cit. (Note 24) for a summary of these studies, many of which he authored or co-authored; see Groth, op. cit. (Note 13) for a more critical review.
27. The PHS began referring to fluoridation risk issues as "not debatable"; see Wollan, op. cit. (Note 12), p. 1133.
28. Pro-fluoridation scientists also refused to debate the evidence on risks and benefits in public with scientists critical of fluoridation. Panel discussions at major scientific meetings were set up to present only the pro-fluoridation perspective. Invitations to debate the evidence with opponents in communities where referenda were pending were rarely accepted. The effort by pro-fluoridation advocates to avoid public debate of the scientific issues has been documented in detail by Waldbott et al., op. cit. (Note 17) and Martin, op. cit. (Note 19).
29. Exner et al., op. cit. (Note 11) presented a detailed and incisive scientific critique in 1957. Waldbott et al., op. cit. (Note 17) updated the critique in the 1970s.
30. Groth, op. cit. (Note 13) reviewed the original studies and found most of Exner et al.'s criticisms valid.
31. Wollan, op. cit. (Note 12) and McNeil, op. cit. (Note 10) document both the stance taken by the pro-fluoridationists and the political context that gave rise to it.
32. McClure's book (Note 24) embodies the one-sided, closed-minded attitude towards the scientific evidence held by the PHS researchers.
33. Wollan, op. cit. (Note 12), discusses the PHS attitude toward research after 1950.
34. The American Dental Association, an early recruit to the pro-fluoridation effort, put out special issues of its Journal devoted entirely to promoting fluoridation. Articles included advice on organizing a local campaign, discrediting opponents, publicity and other political strategic issues.
35. Martin, op. cit. (Note 19), details how pro-fluoridation leaders made attacks on the credibility of opponents a keystone of their campaign. For examples of the kinds of information used to discredit anti leaders, see American Dental Association (1965), Comments on the Opponents of Fluoridation. Journal of the American Dental Assn. 71:1155-1183.
36. McNeil, op. cit. (Note 10); Exner et al., op. cit. (Note 11).
37. Waldbott et al., op. cit. (Note 17), pp. 316-352.
38. Martin, op. cit. (Note 19) details numerous examples of professional reprisals taken against scientists who questioned fluoridation publicly (pp. 92-114).
39. Waldbott, a Detroit, MI, allergist who reported what he believed to be idiosyncratic reactions on the part of patients hyper-intolerant of fluoride, had difficulty publishing his reports in U.S. journals. He details several of the rejections (and the explicitly political reasons given for them) in Waldbott et al., op. cit. (Note 17), pp. 333-335.
40. Martin, op. cit. (Note 19), also explores the difficulty anti-fluoridation scientists have had in getting their views published in mainstream journals (pp. 97-99).
41. I myself had three manuscripts based on my doctoral dissertation (Note 13) rejected by U.S. public health journals in the 1970s. My reviews of the evidence on risks and benefits of fluoridation were sent to anonymous pro-fluoridation referees, who found them "biased." One editor advised that he wished to do nothing that might offer anti-fluoridationists any political leverage. Unlike Waldbott, who was an active political anti-fluoridation leader, I was politically outside the fray; my interest was exploring the interplay between political controversy and interpretations of scientific data. My papers were still rejected by several leading American journals in the 1970s, I believe because of a pervasive bias in favor of defending and promoting fluoridation.
42. Groth, E. (1991), The Fluoridation Controversy: Which Side Is Science On? A Commentary, in Martin, op. cit. (Note 19), pp. 169-192.
43. Martin, op. cit. (Note 19) reports that as of 1990, 121 million out of 212 million Americans served by public water supplies were drinking fluoridated water. The figure has held more or less steady at this level since about 1970 and probably has not changed appreciably since Martin's report.
44. For example, the Fluoride Action Network, an international coalition of organizations opposed to fluoridation, maintains a web site at http://www.fluoridealert.org.
45. Martin, op. cit. (Note 19), Appendix, Fluoridation around the world (pp. 193-217).
Most things are complicated, even things that appear rather simple. Take the toilet as an example. As a thought experiment, would you be able to explain to someone else how a toilet works? If you're fumbling for an answer, you're not alone. Most people cannot either. This is not just a party trick. Psychologists have used several means to discover the extent of our ignorance. For example, Rebecca Lawson at the University of Liverpool presented people with a drawing of a bicycle which had several components missing. They were asked to fill in the drawing with the missing parts. Sounds easy, right? Apparently not. Nearly half of the participants were unable to complete the drawings correctly. Also, people didn't do much better when they were presented with completed drawings and asked to identify the correct one.
To a greater or lesser extent, we all suffer from an illusion of understanding. That is, we think we understand how the world works when our understanding is rudimentary. In their new book The Knowledge Illusion, cognitive scientists Steven Sloman and Philip Fernbach explore how we humans know so much, despite our individual ignorance.
Thinking is for action
To appreciate our mental limitations, we first need to ask ourselves: what is the purpose of the human brain? The authors note there is no shortage of explanations of what the human mind evolved for. For example, there are those who argue the mind evolved to support language, or that it is adapted for social interactions, hunting, or acclimatising to changing climates. "[…] [T]hey are all probably right because the mind actually evolved to do something more general than any of them… Namely, the mind evolved to support our ability to act effectively."
This more general explanation is important, as it helps establish why we don't retain all the information we receive. The reason we're not all hyperthymesics is that it would make us less successful at what we've evolved to do. The mind is busy trying to choose actions by picking out the most useful stuff and leaving the rest behind. Remembering everything gets in the way of focusing on the deeper principles that allow us to recognize how a new situation resembles past situations and what kind of actions will be effective. The authors argue the mind is not like a computer. Instead, the mind is a flexible problem solver that stores the most useful information to aid survival and reproduction. Storing superficial details is often unnecessary, and at times counterproductive.
Community of knowledge
Evidently, we would not do very well if we relied solely on our individual knowledge. We may consider ourselves highly intelligent, yet we wouldn't survive very long if we found ourselves alone in the wilderness. So how do we survive and thrive, despite our mental limitations? The authors argue the secret of our success is our ability to collaborate and share knowledge. [W]e collaborate. That's the major benefit of living in social groups, to make it easy to share our skills and knowledge. It's not surprising that we fail to identify what's in our heads versus what's in others', because we're generally, perhaps always, doing things that involve both. Whenever either of us washes dishes, we thank heaven that someone knows how to make dish soap and someone else knows how to provide warm water from a faucet. We wouldn't have a clue.
One of the most important ingredients of humanity's success is cumulative culture: our ability to store and transmit knowledge, enabled by our hyper-sociality and cooperative skills. This fundamental process is known as cultural evolution, and is outlined eloquently in Joe Henrich's book The Secret of Our Success. Throughout The Knowledge Illusion, the metaphor of a beehive is used to describe our collective intelligence. "[…][P]eople are like bees and society a beehive: Our intelligence resides not in individual brains but in the collective mind." However, the authors highlight that unlike beehives, which have remained largely the same for millions of years, our shared intelligence is becoming more powerful and our collective pursuits are growing in complexity.
In psychology, the study of intelligence has largely been confined to ranking individuals according to cognitive ability. The authors argue psychologists like general intelligence because it's readily quantifiable and has some power to predict important life outcomes. For example, people with higher IQ scores do better academically and perform better at their jobs. Whilst there's a wealth of evidence in favour of general intelligence, Sloman and Fernbach argue that we may be thinking about intelligence in the wrong way. "Awareness that knowledge lives in a community gives us a different way to conceive of intelligence. Instead of regarding intelligence as a personal attribute, it can be understood as how much an individual contributes to the community."
A key argument is that groups don't need a lot of intelligent people to succeed, but rather a balance of complementary attributes and skill-sets. For example, to run a company you need some people who are cautious and others who are risk takers; some who are good with numbers and others who are good with people. For this reason, Sloman and Fernbach stress the need to measure group performance, rather than individual intelligence. "Whether we're talking about a team of doctors, mechanics, researchers, or designers, it is the group that makes the final product, not any one individual." A team led by Anita Woolley at the Tepper School of Business has begun devising ways of measuring collective intelligence, with some progress made. The idea of measuring collective intelligence is new, and many questions remain. However, the authors contend that the success of a group is not predominantly a function of the intelligence of individual members, but rather of how well they work together.
Committing to the community
Despite all the benefits of our communal knowledge, it also has dangerous consequences. The authors argue that believing we understand more than we do is the source of many of society's most pressing problems. Decades' worth of research shows a significant gap between what science knows and what the public believes. Many scientists have tried addressing this deficit by providing people with more factual information. However, this approach has been less than successful. For example, Brendan Nyhan's experiments into vaccine opposition illustrated that factual information did not make people more likely to vaccinate their children. Some of the information even backfired: parents given stories of children who contracted measles became more likely to believe that vaccines have serious side effects. Similarly, the illusion of understanding helps explain the political polarisation we've witnessed in recent times.
In the hope of reducing political polarisation, Sloman and Fernbach conducted experiments to see whether asking people to explain their causal understanding of a given topic would make them less extreme. Although they found doing so for non-controversial matters did increase openness and intellectual humility, the technique did not work on highly charged political issues, such as abortion or assisted suicide.
Viewing knowledge as embedded in communities helps explain why these approaches don't work. People tend to have a limited understanding of complex issues, and have trouble absorbing details. This means that people do not have a good understanding of what they know, and they rely heavily on their community for the basis of their beliefs. This produces passionate, polarised attitudes that are hard to change. Despite having little to no understanding of complicated policy matters such as U.K. membership of the European Union or the American healthcare system, we feel sufficiently informed about such topics. More than this, we even feel righteous indignation when people disagree with us. Such issues become moralised, and we defend the position of our in-groups. As stated by Sloman and Fernbach (emphasis added): [O]ur beliefs are not isolated pieces of data that we can take and discard at will. Instead, beliefs are deeply intertwined with other beliefs, shared cultural values, and our identities. To discard a belief means discarding a whole host of other beliefs, forsaking our communities, going against those we trust and love, and in short, challenging our identities.
According to this view, is it any wonder that providing people with a little information about GMOs, vaccines, or global warming has little impact on their beliefs and attitudes? The power that culture has over cognition just swamps these attempts at education. This effect is compounded by the Dunning-Kruger effect: the unskilled just don't know what they don't know. This matters, because all of us are unskilled in most domains of our lives.
According to the authors, the knowledge illusion underscores the important role experts play in society. Similarly, Sloman and Fernbach emphasise the limitations of direct democracy: outsourcing decision-making on complicated policy matters to the general public. "Individual citizens rarely know enough to make an informed decision about complex social policy even if they think they do. Giving a vote to every citizen can swamp the contribution of expertise to good judgement that the wisdom of crowds relies on." They defend themselves against charges that their stance is elitist or anti-democratic. "We too believe in democracy. But we think that the facts about human ignorance provide an argument for representative democracy, not direct democracy. We elect representatives. Those representatives should have the time and skill to find the expertise to make good decisions. Often they don't have the time because they're too busy raising money, but that's a different issue."
Nudging for better decisions
By understanding the quirks of human cognition, we can design environments so that these psychological quirks help us rather than hurt us. In a nod to Richard Thaler and Cass Sunstein's philosophy of libertarian paternalism, the authors provide some nudges to help people make better decisions:
1. Reduce complexity
Because much of our knowledge is possessed by the community and not by us individually, we need to radically scale back our expectations of how much complexity people can tolerate.
This seems pertinent for what consumers are presented with during high-stakes financial decisions.
2. Simple decision rules
Provide people with rules or shortcuts that perform well and simplify the decision-making process. For example, the financial world is just too complicated, and people's abilities too limited, to fully understand it. Rather than try to educate people, we should give them simple rules that can be applied with little knowledge or effort, such as 'save 15% of your income' or 'get a fifteen-year mortgage if you're over fifty'.
3. Just-in-time education
The idea is to give people information just before they need to use it. For example, a class in secondary school that teaches the basics of managing debt and savings is not that helpful. Giving people information just before they use it means they have the opportunity to practice what they have just learnt, increasing the chance that it is retained.
4. Check your understanding
What can individuals do to help themselves? A starting point is to be aware of our tendency to be explanation foes. It's not practical to master all details of every decision, but it can be helpful to appreciate the gaps in our understanding. If the decision is important enough, we may want to gather more information before making a decision we may later regret.
Written by Max Beilby for Darwinian Business
Fernbach, P. M., Rogers, T., Fox, C. R., & Sloman, S. A. (2013). Political extremism is supported by an illusion of understanding. Psychological Science, 24(6), 939-946.
Haidt, J. (2012). The Righteous Mind: Why good people are divided by politics and religion. Pantheon.
Henrich, J. (2016). The Secret of Our Success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton University Press.
Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2004). Academic performance, career potential, creativity, and job performance: Can one construct predict them all? Journal of Personality and Social Psychology, 86(1), 148-161.
Lawson, R. (2006). The science of cycology: Failures to understand how everyday objects work. Memory & Cognition, 34(8), 1667-1675.
Nyhan, B., Reifler, J., Richey, S., & Freed, G. L. (2014). Effective messages in vaccine promotion: A randomized trial. Pediatrics, 133(4), e835-e842.
Sunstein, C., & Thaler, R. (2008). Nudge: Improving decisions about health, wealth and happiness. New Haven: Yale University Press.
Thaler, R. H. (2013). Financial literacy, beyond the classroom. The New York Times.
Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686-688.
When a company reaches a certain size and meets certain requirements, its management can decide to offer shares of stock to the public. This process is called an initial public offering, IPO, or simply "going public," and is widely considered to be a major milestone in the development of any organization. While an IPO will always be the first time any company issues stock that can be purchased by the general public on a major stock exchange, it might not always be the last. There are certain circumstances that could call for the issuing of additional shares if management decides to do so.
What Is Stock Dilution?
Stock dilution, also known as equity dilution or simply "dilution," happens when companies issue new shares of stock beyond what was issued at the time of the company becoming publicly traded. Sometimes companies issue new stock shares by creating stock options for employees or board members as part of a compensation or retirement plan. An increase in the number of shares outstanding means that each individual stockholder winds up owning a smaller and less significant, or diluted, portion of the company. Stocks represent ownership stakes in their respective companies. Owning a share of stock is like owning a tiny piece of the operations of a business. When a company's board of directors first makes the decision to take a company public by offering shares of its stock to trade freely on one or more stock exchanges, a set number of shares will be offered. This initial number of shares is often called the "float." Any further issuance of stock (often referred to as a secondary offering) will result in the stock being diluted.
How Does Stock Dilution Work?
The process of stock dilution is relatively simple. It begins with a publicly traded company issuing a secondary stock offering. The first issuing of stock will have occurred before this, during the company's IPO. There are any number of reasons that companies choose to issue secondary shares of stock. A company might want to give rewards to its employees or raise new capital. Issuing new shares as a method of raising money can be a particularly desirable option because it allows a business to receive an infusion of cash without going into debt or having to sell any assets that belong to the company. It should be noted that stock splits are separate events that do not result in dilution. When a business has a standard split of its stock, investors who already hold that stock receive additional shares, so their ownership in the company stays the same. Dilution of stock only occurs when new shares are issued and sold to additional investors who hadn't purchased shares before the secondary offering.
How Does Stock Dilution Affect Investors?
When a company creates new shares of stock, the value of existing shares becomes diluted, meaning they decrease in value. Think of it like a birthday cake. At first, you and seven of your friends agree to each have one slice of cake. But then two of your other friends unexpectedly show up, also wanting cake. Now you have to slice the cake into ten pieces rather than eight, so each piece will be smaller than it otherwise would have been, had only eight of you each enjoyed a piece. This scenario is similar to what happens when a company issues more shares of stock and stockholders see the value of their shares reduced.
The difference is that each share not only becomes like a smaller piece of the cake, but usually (though not always) becomes less valuable, entitling its holder to a smaller ownership stake and diminished voting power.
Stock Dilution and Dividends
For dividend-yielding stocks, dilution can also lead to smaller dividend payouts unless earnings per share rise enough to make up the difference. Because more shareholders now have to be paid, paying the same dividend yield takes a heavier toll on profits. If a company is only issuing new shares in an attempt to raise new capital because its business is hurting, then it may have to cut dividends even deeper down the line or halt them altogether. This can be disastrous for investors who hold equities for income. Dividend investors will do well to keep an eye on the number of shares outstanding for any stock, as well as how previous dilutions (if any) have affected dividends. To be clear, dilution doesn't have to affect dividends. Dilution cuts down on earnings per share (EPS) but not necessarily on dividends per share (DPS). While EPS measures a company's profitability per each share of stock outstanding, DPS measures the value of dividends paid out to investors per each share of stock outstanding. A company can choose to keep DPS the same after dilution, although doing so will cut into the profits of its business to a larger extent than before. The more dividends per share a company pays out, and the more shares there are, the more unsustainable the dividend is likely to become, since a company can only afford to pay so much of its profits out to investors. The only way for big dividend payments to be sustainable is when a company is either growing rapidly or taking on lots of debt to finance its operations.
Other Stock Dilution Effects
Stock dilution has an impact on more than just the price of a stock or potential dividend payouts. When additional shares are created, this reduces the stock's earnings per share (there will be fewer earnings per share with more shares on the market) as well as the voting rights of the shareholder (holders of stocks sometimes get to cast a vote for important company decisions, like the addition or removal of board members). In fact, income statements issued by companies often show both "basic" and "diluted" earnings per share (EPS) numbers. This allows shareholders, and investors thinking about purchasing the stock, to see the effect that dilution would have if the maximum number of potential shares were to come into existence (through the use of unexercised stock options, for example). Dilution of a stock can also have a positive impact on the stock's valuation, however. That's because newly issued shares, as they are bought, add to the stock's market cap. If this momentum outpaces any selling caused by negative market views of the secondary offering, then share prices could rise. Beyond the short-term, news-based influence of dilution, the long-term effects of new stock shares coming into existence depend largely on how a company's management decides to spend the funds it just received.
Pros and Cons of Stock Dilution
While it's easy to interpret stock dilution as a negative thing from the perspective of those who hold shares before the dilution occurs, the concept isn't so one-sided. When done in the right way for purposes that contribute to company growth, dilution can benefit both a company and its shareholders over the long-term.
When done recklessly or in an attempt at covering up bad business performance, dilution can provide a temporary cash flow boost that doesn't solve any real problems and puts shareholders in a precarious position. It comes down to whether or not a management team has a good reason for diluting its stock and what it chooses to do with the funds raised afterward.
Pros of Stock Dilution
In some ways, dilution of stock can be a good thing. When new shares are used to reward managers and employees, this can indicate a company is growing and performing well, and that it wants to share some of its good fortune. When new shares are issued at a price higher than what the stock is currently selling for, this can also be a win-win scenario. It indicates demand for shares while minimizing the share dilution that existing shareholders must endure. Ideally, companies should have a good reason to issue new shares and use the resulting cash infusion in a productive manner. Raising money for a new product, research and development, or bringing on new and valuable employees might be some good reasons for dilution of a stock. When a company dilutes its stock without good reason, or doesn't use the proceeds in a productive way, then the cons of stock dilution are all that's left.
Cons of Stock Dilution
In general, investors don't take kindly to new stock shares being issued to internal shareholders, as it usually decreases the value of the stock and the ownership stake of those who already hold shares. To investors who are aware of this, stock dilution can read as negative news. Some of the things mentioned previously can also be considered cons of stock dilution: a decrease in earnings per share, less voting power for shareholders, or declining share prices. Recurring new stock issuances can be perceived as a warning sign by investors. If a company needs to keep diluting its stock to raise money, perhaps its business operations haven't been performing well. This perception might lead people to sell shares, resulting in a decline in the stock price. Sometimes this happens when a company merely announces that it might issue new shares in the future; the perception can become reality before anything even happens.
Example of Stock Dilution
Let's assume that a hypothetical company called Green Growth Galore (GGG) issued 500 shares to 50 individual investors during its initial public offering. Each investor holds 10 shares, which amounts to 2% of the company for each shareholder. A few years after GGG goes public, its management decides to bring on a new chief executive officer. This individual has a lot of experience and a long track record of managing successful companies, so GGG would like to provide a little extra incentive for this new executive to join its team. It decides to provide that incentive in the form of stock options, which the new executive chooses to exercise immediately. If GGG were to hold a secondary offering and issue an additional 500 shares at this time to compensate its new CEO, each shareholder would see their ownership stake cut in half to 1% (because there would now be 1,000 shares outstanding, with each original shareholder still owning 10 shares). Their voting power would also be reduced by an equivalent amount. As a result, all the effects of stock dilution listed above would follow.
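To make the arithmetic of the GGG example concrete, here is a minimal Python sketch. The share counts (500 before the offering, 1,000 after) come straight from the example above; the net income and dividend figures are hypothetical, added only to show how earnings per share and dividends per share respond to the same dilution.

```python
# Minimal sketch of the GGG dilution example. Share counts are from the
# article; net income and dividend figures are hypothetical placeholders.

def ownership_pct(shares_held: int, shares_outstanding: int) -> float:
    """Percentage of the company a single holder owns."""
    return 100 * shares_held / shares_outstanding

def eps(net_income: float, shares_outstanding: int) -> float:
    """Earnings per share: profit spread across all shares."""
    return net_income / shares_outstanding

shares_before = 500                   # 50 investors holding 10 shares each
shares_after = shares_before + 500    # 500 new shares issued to the CEO

print(ownership_pct(10, shares_before))  # 2.0  -> a 2% stake before dilution
print(ownership_pct(10, shares_after))   # 1.0  -> the stake halves to 1%

net_income = 10_000                   # hypothetical annual profit
print(eps(net_income, shares_before))    # 20.0 -> EPS before dilution
print(eps(net_income, shares_after))     # 10.0 -> EPS halves unless profit grows

# DPS can be held steady after dilution, but the total payout then
# consumes a larger slice of profit (the point made in the dividends
# section above): here the payout ratio doubles from 25% to 50%.
dps = 5.0                             # hypothetical dividend per share
print(dps * shares_before / net_income)  # 0.25
print(dps * shares_after / net_income)   # 0.5
```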
Understanding Corporate Buybacks
The opposite of a company creating more shares is a company buying its own shares back. This is sometimes called a corporate buyback; it reduces the number of shares outstanding, usually leading to a rise in the price of a stock (due to the law of supply and demand). While this might be good for shareholders in the short term, it can be a bad thing for a company overall, since the money used could have been spent to improve business operations instead. Sometimes stock can become highly overvalued due to the practice of corporate share buybacks, leading to precipitous drops in prices later on. Sometimes companies issue public statements detailing their exact plans for dilution as well as their reasons for doing so. This way, both current and future investors can prepare accordingly. The news alone can sometimes lead to a stock selloff, since most investors interpret stock dilution negatively. Investors would do well to monitor the number of shares a company has outstanding. If the number keeps increasing, earnings per share are likely to decline or stay flat while shareholders' voting power diminishes. And while a drop in share counts can be a good thing, it can also cover up a lack of growth by boosting earnings per share without any real underlying growth happening.
Gaining Investment Savvy
While the topic of stock dilution might rarely come up in casual conversation, savvy investors who keep up to date on their stocks are likely to understand how stock dilution affects their investments.
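Picking up the caveat above that falling share counts can flatter the numbers, here is a minimal sketch, with hypothetical figures, of how a buyback lifts earnings per share even when profit is completely flat.

```python
# Hypothetical figures: the article gives no numbers for buybacks.
net_income = 10_000      # flat year over year, i.e. no underlying growth
shares_before = 1_000
shares_after = 800       # the company repurchases 200 of its own shares

eps_before = net_income / shares_before   # 10.0
eps_after = net_income / shares_after     # 12.5

# EPS rises 25% with zero real growth, which is why share counts are
# worth watching alongside headline EPS.
print(eps_before, eps_after)
```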
14 common garden invaders and the best ways to control them – April 13, 2009

Crabgrass (Digitaria species)

This infamous summer annual thrives in warm, moist areas. Seeds germinate in early spring in warmer climates, later in cooler areas. As the plant grows, it branches out at the base; stems can root where they touch the soil. In flower beds, pull crabgrass before it sets seed. To thwart crabgrass in lawns, keep the turf well fertilized and vigorous, so it will provide tough competition for weeds. Also water your lawn deeply, but infrequently; this tactic will dry out crabgrass roots, killing the weeds or at least diminishing their vigor. Solarization can control crabgrass if high temperatures are achieved. Use corn gluten as a preemergence treatment. If chemical control is necessary, in ornamental beds only, use a postemergence herbicide that kills grasses.

Bindweed (Convolvulus arvensis)

Also called wild morning glory, bindweed grows in open areas. Its 1- to 4-foot-long stems crawl along the ground and twine over and around other plants. Pulling usually doesn't eradicate it – the stems break off, but the weed returns from the roots. To control its spread, you'll have to dig the roots out repeatedly (persistence is required). It's important not to let bindweed set seed, since the hard-coated seeds can sprout after lying dormant for 50 years! The best control is prevention. Remove flowers before they set seed, and pull or hoe seedlings. Kill established plants by regularly cutting to the ground any stems that have reached six inches tall. For chemical control, in midsummer, when bindweed is at the height of its growth season but has not yet set seed, spot-treat isolated patches with glyphosate.

Bermuda grass (Cynodon dactylon)

A fine-textured and fast-growing perennial, Bermuda grass is frequently planted as a lawn in warm climates. In other sorts of lawns and in gardens, though, it can be a difficult weed. It spreads by underground stems (rhizomes), aboveground runners (stolons), and seed. If you have a Bermuda grass lawn, use an 8-inch-deep barrier or edging to prevent it from advancing into other parts of the garden. Dig up stray clumps before they form sod, being sure to remove all the underground stems; any left behind can start new shoots. Repeated pulling and digging are generally necessary to stop this weed; mulches will slow it down, but it eventually grows through most of them. For chemical control, use a selective postemergence herbicide.

Spotted spurge (Chamaesyce maculata)

This annual weed produces large quantities of seed within just a few weeks of germination and scatters them widely. It grows from a shallow taproot and forms a low mat of branching stems that exude a milky juice when cut. Prevention is the best control. Hoe or pull young seedlings early, before they bloom and set seed. Apply a 1-inch layer of fine mulch to suppress germination in garden beds. A vigorous, well-fertilized lawn competes well against spotted spurge. If chemical control is necessary in lawns, use a preemergence product in late winter before seeds germinate, following label directions. Spot-treat spurge plants with herbicidal soap while they are young. For spurge growing in cracks in pavement, use a hand weeder.

Yellow nutsedge (Cyperus esculentus)

Also known as yellow nutgrass, this perennial weed thrives in moist areas in much of the country.
Its bright green leaves grow from the base in groups of three; grass leaves, in contrast, grow in sets of two. The flower head is golden brown. Small, roughly round tubers (nutlets) form at the tips of the roots; the weed spreads by these tubers as well as by seed. Hoe or pull nutsedge when it's young and still small – when plants have fewer than five leaves or are less than 6 inches tall. Older, taller plants are mature enough to produce tubers; when you dig or pull the plant, the tubers remain in the soil to sprout. Repeatedly removing top growth eventually weakens the tubers. For small patches in lawns, dig deeply (8 inches); remove the whole patch, then refill with soil and seed or sod the patch.

Yellow oxalis (Oxalis corniculata)

A very aggressive perennial weed, yellow oxalis (also called yellow wood sorrel) is happy in sun or shade, and spreads quickly by seed. Seedlings start out from a single taproot, which soon develops into a shallow, spreading, knitted root system. Tiny yellow flowers are followed by elongated capsules that can shoot seeds as far as 6 feet. Dig out small plants early. If you have a lawn, keep it vigorous to provide competition; water deeply but infrequently, since frequent light watering encourages this shallow-rooted weed. You can also use a preemergence herbicide on turf and around ornamentals listed on the label. Spot-treat oxalis in garden areas with glyphosate.

Dandelion (Taraxacum officinale)

Dandelion is particularly vigorous in cold-winter climates. It grows from a deep, fleshy taproot and spreads by windborne seeds. Flowering begins in spring and often continues until frost. A healthy lawn can outcompete dandelions, so thicken the turf by overseeding and by proper fertilizing, watering, and mowing. Pull dandelions while they're small, before they produce a taproot and set seed. Once the taproot has formed, you must remove all of it, since new plants can sprout from even a small piece. A dandelion weeder with a forked blade is helpful, or use a hand weeder with a bent shaft. For chemical control, use a selective postemergence herbicide labeled for dandelions in turf.

Plantain (Plantago species)

Plantains are perennials that form rosettes of dark green leaves marked from end to end with distinctive parallel veining. Leaves of P. lanceolata (buckhorn plantain) are long and narrow; those of P. major (broadleaf plantain) are broadly oval. They love damp, heavy soil. To reduce infestations in lawns, keep the turf thick through consistent fertilizing; aerating will help, too. Dig out plantains before they set seed. Be sure to remove as much of the roots as possible (a dandelion weeder is helpful here), since these weeds can regrow from any pieces of the rootstalk that remain. For chemical control, use a preemergence product, or spot-treat plantains in the garden with glyphosate, taking care not to get the chemical on desirable plants.

Common mallow (Malva neglecta)

Also known as cheeseweed (thanks to the fruits, which resemble a round of cheese), common mallow is a widespread annual or biennial weed with broad, lobed leaves and pinkish white, five-petaled flowers. Hoe or pull these weeds when they're young. Mature plants have a long, tough taproot that is difficult to extract from the soil, and they are of course more likely to have set seed. For chemical control, use a preemergence herbicide to prevent seedlings from becoming established in lawns and around ornamentals.

Poison oak and poison ivy

Poison oak is most common along the West Coast.
In the open or in filtered sun, it forms a dense, leafy shrub; in the shade, it's a tall-growing vine. Its leaves are divided into three leaflets with scalloped, toothed, or lobed edges. Poison ivy looks similar; it's common east of the Rockies and also grows in eastern Oregon and eastern Washington. Usually found in shady areas and at the edges of woodlands, it sprawls along the ground until it finds something to climb, then becomes a vine. A resin on both poison oak and poison ivy causes severe contact dermatitis in most people. Control both with an appropriately labeled herbicide, such as glyphosate (be sure to avoid getting these chemicals on other plants).

Blackberry (Rubus species)

Wild blackberry can be a vexing weed almost anywhere in the United States, but it's particularly troublesome in the Northeast, the Southeast, and many areas of the West. Plants spread rapidly by underground runners and by seed. Pull young plants in spring, before they develop a perennial root system. To kill established clumps, repeatedly prune back the stems as they sprout; this eventually exhausts the roots. Or mow the tops and dig out the roots; repeat the process as new canes grow from roots left behind. You can also cut stems to the ground and apply glyphosate to the stubs as soon as possible after cutting. Spot-treat any new shoots with glyphosate as they appear.

Puncture vine (Tribulus terrestris)

With its sharp, thorny burs that poke into tires, paws, and bare feet, puncture vine is painfully familiar to gardeners in much of the country. An annual weed often found in dry areas, it forms a dense, low mat 5 to 15 feet in diameter. For best control of small infestations, hoe or dig plants before they can set seed, cutting below the crown to prevent regrowth. Once you've removed puncture vine growing in lawns, improve the soil with compost and sow grass seed in bare spots to prevent the weeds from reestablishing. For chemical control, preemergence herbicides containing trifluralin may be used on some lawn grasses and ornamentals in late winter or early spring. For postemergence control in lawns, use a selective herbicide.

Purslane (Portulaca oleracea)

Not to be confused with the large-flowered ornamental purslane sold in nurseries (the 'Wildfire' strain, variously ascribed to Portulaca oleracea, P. umbraticola, and P. grandiflora), weedy purslane is a low-growing summer annual found throughout the country. It thrives in moist conditions but can withstand considerable drought. Its fleshy, dark green leaves are edible, with a tart, lemony flavor. Purslane is easy to pull or hoe, but pieces of stem can reroot readily, so be sure to remove them from the garden. Also remove plants that have begun to flower, since they can ripen seed even after they've been pulled. Don't compost any part of the plant.

Quack grass

Also known as couch grass or devil's grass, quack grass is an aggressive perennial that produces an extensive mass of long, slender, yellowish white branching rhizomes (underground stems) that can spread laterally 3 to 5 feet. Thoroughly dig the area and remove all visible pieces of rhizome; this will slow the weed's growth for a few years. You can also suppress quack grass by smothering it; leave the cover in place for at least a year. For chemical control, use preemergence herbicides on turf grasses and around ornamentals listed on the label, or spot-treat with an herbicide containing glyphosate, taking care to avoid contact with desirable plants.
Learning Objectives

- Describe and give examples of ethnocentrism and cultural relativism

Ethnocentrism and Cultural Relativism

Despite how much humans have in common, cultural differences are far more prevalent than cultural universals. For example, while all cultures have language, analysis of particular language structures and conversational etiquette reveals tremendous differences. In some Middle Eastern cultures, it is common to stand close to others in conversation. North Americans keep more distance and maintain a larger "personal space." Even something as simple as eating and drinking varies greatly from culture to culture. If your professor comes into an early morning class holding a mug of liquid, what do you assume she is drinking? In the United States, the mug is most likely filled with coffee, not Earl Grey tea, a favorite in England, or yak butter tea, a staple in Tibet.

The way cuisines vary across cultures fascinates many people. Some travelers pride themselves on their willingness to try unfamiliar foods, like celebrated food writer Anthony Bourdain, while others return home expressing gratitude for their native culture's fare. Often, people in the United States express disgust at other cultures' cuisine and think that it's gross to eat meat from a dog or guinea pig, for example, while they don't question their own habit of eating cows or pigs. Such attitudes are an example of ethnocentrism: evaluating and judging another culture based on how it compares to one's own cultural norms. Ethnocentrism, as sociologist William Graham Sumner (1906) described the term, involves a belief or attitude that one's own culture is better than all others and should therefore serve as the standard frame of reference.

Almost everyone is a little bit ethnocentric. For example, Americans tend to say that people from England drive on the "wrong" side of the road, rather than on the "other" side. Someone from a country where dog meat is standard fare might find it off-putting to see a dog in a French restaurant – not on the menu, but as a pet and fellow patron's companion. A good example of ethnocentrism is referring to parts of Asia as the "Far East." One might question, "Far east of where?"

A high level of appreciation for one's own culture can be healthy; a shared sense of community pride, for example, connects people in a society. But ethnocentrism can lead to disdain or dislike for other cultures and could cause misunderstanding and conflict. People with the best intentions sometimes travel to a society to "help" its people, because they see them as uneducated or backward – essentially inferior. In reality, these travelers are guilty of cultural imperialism, the deliberate imposition of one's own ostensibly advanced cultural values on another culture.

Europe's colonial expansion, begun in the sixteenth century, was often accompanied by a severe cultural imperialism. European colonizers often viewed the people in the lands they colonized as uncultured savages who were in need of European governance, dress, religion, and other cultural practices. A more modern example of cultural imperialism may include the work of international aid agencies who introduce agricultural methods and plant species from developed countries while overlooking indigenous varieties and agricultural approaches that are better suited to a particular region. Another example would be the deforestation of the Amazon Basin as indigenous cultures lose land to timber corporations.
Ethnocentrism can be so strong that when confronted with all of the differences of a new culture, one may experience disorientation and frustration. In sociology, we call this culture shock. A traveler from Chicago might find the nightly silence of rural Montana unsettling, not peaceful. An exchange student from China might be annoyed by the constant interruptions in class as other students ask questions – a practice that is considered rude in China. Perhaps the Chicago traveler was initially captivated by Montana's quiet beauty, and the Chinese student was originally excited to see a U.S.-style classroom firsthand. But as they experience unanticipated differences from their own culture, their excitement gives way to discomfort and doubts about how to behave appropriately in the new situation. Eventually, as people learn more about a culture and adapt to its norms, they recover from culture shock.

Culture shock may appear because people aren't always expecting cultural differences. Anthropologist Ken Barger (1971) discovered this when he conducted participant observation in an Inuit community in the Canadian Arctic. Originally from Indiana, Barger hesitated when invited to join a local snowshoe race. He knew he'd never hold his own against these experts. Sure enough, he finished last, to his mortification. But the tribal members congratulated him, saying, "You really tried!" In Barger's own culture, he had learned to value victory. To the Inuit people, winning was enjoyable, but their culture valued survival skills essential to their environment: how hard someone tried could mean the difference between life and death. Over the course of his stay, Barger participated in caribou hunts, learned how to take shelter in winter storms, and sometimes went days with little or no food to share among tribal members. Trying hard and working together, two nonmaterial values, were indeed much more important than winning.

During his time with the Inuit tribe, Barger learned to engage in cultural relativism. Cultural relativism is the practice of assessing a culture by its own standards rather than viewing it through the lens of one's own culture. Practicing cultural relativism requires an open mind and a willingness to consider, and even adapt to, new values and norms. However, indiscriminately embracing everything about a new culture is not always possible. Even the most culturally relativist people from egalitarian societies – ones in which women have political rights and control over their own bodies – would question whether the widespread practice of female genital mutilation in countries such as Ethiopia and Sudan should be accepted as a part of cultural tradition. Sociologists attempting to engage in cultural relativism, then, may struggle to reconcile aspects of their own culture with aspects of a culture they are studying.

Sometimes when people attempt to rectify feelings of ethnocentrism and to practice cultural relativism, they swing too far to the other end of the spectrum. Xenocentrism is the opposite of ethnocentrism: the belief that another culture is superior to one's own. (The Greek root word xeno, pronounced "ZEE-no," means "stranger" or "foreign guest.") An exchange student who goes home after a semester abroad, or a sociologist who returns from the field, may find it difficult to associate with the values of their own culture after having experienced what they deem a more upright or nobler way of living.
Perhaps the greatest challenge for sociologists studying different cultures is the matter of keeping perspective. It is impossible for anyone to keep all cultural biases at bay; the best we can do is strive to be aware of them. Pride in one's own culture doesn't have to lead to imposing its values on others. And an appreciation for another culture shouldn't preclude individuals from studying it with a critical eye.

Overcoming Culture Shock

During her summer vacation, Caitlin flew from Chicago to Madrid to visit Maria, the exchange student she'd befriended the previous semester. In the airport, she heard rapid, musical Spanish being spoken all around her. Exciting as it was, she felt isolated and disconnected. Maria's mother kissed Caitlin on both cheeks when she greeted her. Her imposing father kept his distance. Caitlin was half asleep by the time supper was served – at 10 p.m.! Maria's family sat at the table for hours, speaking loudly, gesturing, and arguing about politics, a taboo dinner subject in Caitlin's house. They served wine and toasted their honored guest. Caitlin had trouble interpreting her hosts' facial expressions, and didn't realize she should make the next toast. That night, Caitlin crawled into a strange bed, wishing she hadn't come. She missed her home and felt overwhelmed by the new customs, language, and surroundings. She'd studied Spanish in school for years – why hadn't it prepared her for this?

What Caitlin hadn't realized was that people depend not only on spoken words but also on subtle cues like gestures and facial expressions to communicate. Cultural norms accompany even the smallest nonverbal signals (DuBois 1951). They help people know when to shake hands, where to sit, how to converse, and even when to laugh. We relate to others through a shared set of cultural norms, and ordinarily, we take them for granted. For this reason, culture shock is often associated with traveling abroad, although it can happen in one's own country, state, or even hometown.

Anthropologist Kalervo Oberg (1960) is credited with first coining the term "culture shock." In his studies, Oberg found that most people found encountering a new culture to be exciting at first. But bit by bit, they became stressed by interacting with people from a different culture who spoke another language and used different regional expressions. There was new food to digest, new daily schedules to follow, and new rules of etiquette to learn. Living with these constant adaptive challenges can make people feel incompetent and insecure. People react to frustration in a new culture, Oberg found, by initially rejecting it and glorifying their own culture. An American visiting Italy might long for a "real" pizza or complain about the unsafe driving habits of Italians compared to people in the United States. It helps to remember that culture is learned. Everyone is ethnocentric to an extent, and identifying with one's own country is natural.

Caitlin's shock was minor compared to that of her friends Dayar and Mahlika, a Turkish couple living in married student housing on campus. And it was nothing like that of her classmate Sanai, who had been forced to flee war-torn Bosnia with her family when she was fifteen. After two weeks in Spain, Caitlin had developed a bit more compassion and understanding for what those people had gone through. She understood that adjusting to a new culture takes time.
It can take weeks or months to recover from culture shock, and it can take years to fully adjust to living in a new culture. By the end of Caitlin's trip, she'd made new lifelong friends. She'd stepped out of her comfort zone. She'd learned a lot about Spain, but she'd also discovered a lot about herself and her own culture.

In January 2011, a study published in the Proceedings of the National Academy of Sciences of the United States of America presented evidence indicating that the hormone oxytocin could regulate and manage instances of ethnocentrism. Read the full article, "Oxytocin promotes human ethnocentrism."

Think It Over

- Do you feel that feelings of ethnocentricity or xenocentricity are more prevalent in U.S. culture? Why do you believe this? What issues or events might inform this?

Glossary

- cultural imperialism: the deliberate imposition of one's own cultural values on another culture
- cultural relativism: the practice of assessing a culture by its own standards, and not in comparison to another culture
- culture shock: an experience of personal disorientation when confronted with an unfamiliar way of life
- ethnocentrism: the practice of evaluating another culture according to the standards of one's own culture
- xenocentrism: a belief that another culture is superior to one's own
"It's just a joke." "You know that's not what I meant."

For many folks from marginalized communities, these offhand remarks are something they hear regularly – and often in response to them expressing their hurt and pain. An Asian-American woman born in America may hear such phrases when she expresses offense at the question: "Where are you really from?" A Black man's hurt may be belittled when he's told he's "overreacting" when a co-worker calls him "one of the good ones." And someone from the LGBTQ* community may be considered overly sensitive if they get fired up when someone remarks: "That's so gay!"

Most of the time, it's true that the speaker has no intention of causing offense or pain. People are often unaware of how their words or actions impact the recipient, whose experiences differ so much from their own. But regardless of intention, these instances of "microaggressions" have real effects on people's lives. Much like unconscious biases, being unaware isn't an excuse for perpetuating harmful behaviors or beliefs.

In this blog, we'll briefly discuss the various types of microaggressions and how they manifest at work. We'll also share tips for how you can respond to and grow from being told you've committed a microaggression.

What are microaggressions?

Kevin Nadal, a professor of psychology, defines microaggressions as: "The everyday, subtle, intentional – and oftentimes unintentional – interactions or behaviors that communicate some sort of bias toward historically marginalized groups."

Microaggressions happen everywhere, including at work. We may call them "micro" aggressions, but their cumulative impact can be measured on a "macro" scale. Day by day, slight by slight, microaggressions can feel like "death by a thousand papercuts." Given that most folks spend the majority of their lives at work, microaggressions in the workplace have a profound impact on people's mental, spiritual, and even physical health. That's probably part of the reason why only 3% of Black employees reported wanting to return to the office full-time, compared to 21% of White workers. For all these reasons, we believe that understanding and reducing the occurrence of microaggressions is essential for building a better, more humane world of work.

Microaggressions usually emerge from our deeply rooted biases against those who are different from us. Frequently a result of our upbringing, many folks don't know they possess these biases until they come face-to-face with them in a conversation or confrontation. That being said, it's human to make mistakes. Our perspectives are limited, and it's natural not to understand how every other community experiences the world. What matters most is how we choose to respond once we're made aware of our biases and the ways they manifest as microaggressions.

Understanding the various types of microaggressions

By definition, microaggressions are comments or actions that negatively target a marginalized group of people. Communities or identities that can be targeted include, but are not limited to:

- Sexual orientation
- Socioeconomic class
- Citizenship status

Often, folks exist at the intersection of many overlapping identities (e.g., an Indigenous trans woman or a disabled immigrant). Thus, microaggressions can be intersectional. For example, an Indigenous trans woman may encounter microaggressions on the basis of her race, her gender, being trans, or any combination of the three.
How microaggressions manifest at work

Microaggressions can be verbal, behavioral, or environmental. Below are a few examples of how microaggressions can show up at work.

A verbal microaggression occurs when someone says something offensive or disrespectful to a marginalized group. Some examples include:

- Asking a lesbian co-worker, "Who is the 'man' in your relationship?"
- Mispronouncing someone's name because "it's too difficult to say"
- Complimenting a non-White colleague's English under the assumption they weren't born and raised in an English-speaking country
- Continuing to use words or phrases that others find offensive

A behavioral microaggression is an insensitive or problematic action that often plays into identity stereotypes. This might look like:

- Mistaking a Latinx colleague for a service worker
- Giving only personality-based feedback ("You should smile more") to a female employee during her performance review
- Excluding a coworker with a disability from an after-work event due to the assumption that they aren't capable of participating
- Assuming an older coworker isn't able to use or learn to use a technology

Environmental microaggressions are expressed through a lack of representation, inclusion, and diversity in a workplace or in society at large.

Microaggressions can significantly and adversely impact organizational health by creating a toxic work culture that corrodes employee engagement and the overall employee experience. Experiencing microaggressions can steadily wear away at everything from performance, sense of belonging, and current and future development to retention and more. Not least of all, microaggressions can undermine your organization's diversity, equity, and inclusion efforts and stagnate the innovation afforded by diverse perspectives.

What to do when you've committed a microaggression

Almost everybody has committed a microaggression before, but not everybody is accustomed to being called out and responding with grace. Accepting criticism is difficult for the best of us, no matter how enthusiastically we embrace the idea of unlearning our biases. That's why it's important to treat these confrontations as learning moments rather than personal attacks. Just because you've said something problematic doesn't mean you are a helplessly problematic person. If you listen to others' concerns with an open heart and mind, you can make significant progress in aligning your words and actions with your ideology.

With that said, here are a few things to keep in mind if you are approached by someone who is concerned or hurt by something you have said or done.

Resist the urge to react defensively. Accepting criticism is difficult, even more so when you're being criticized for something you were unaware of – like the right way to interact with a colleague in a wheelchair without infantilizing them. It probably wasn't your intention to offend them, but intention doesn't overrule the very real pain you've caused.

Sincerely listen with an empathetic heart. Avoid saying anything similar to "I didn't mean it" or "I was just making a joke." By saying you didn't mean it, you can come across as trying to discredit the other person's experience. And calling the encounter a "joke" can seem like you're making light of that person's pain. As they express their feelings, strive to empathize and understand their perspective, rather than invalidate or "other" it.

Verbally acknowledge your impact.
Regardless of your intent, it's important to recognize and own up to the pain you've caused. Verbal acknowledgment also serves as a spoken "promise" you make to yourself and to the other person. It's a way of conveying: "I have heard and internalized what you said, I now recognize the pain I've caused, and my future self will act more thoughtfully and intentionally because of what you have shared with me."

Apologize, but don't expect forgiveness. You may not get it, and that's okay. The best way to make amends is to educate yourself and be more cautious in the future.

Ask questions, but don't expect answers. If your relationship allows the two of you to dig deeper, you can consider asking more questions about their life experiences, including the microaggressions they've experienced. Doing so may help you untangle other verbal or nonverbal microaggressions you are prone to, as well as deepen your understanding of their overlapping identities. That being said, you shouldn't expect folks from marginalized communities to do the heavy lifting for you. The burden of teaching has historically fallen on folks from underrepresented backgrounds, but it's time we did our own homework.

What you can do after the confrontation

As you walk away from the conversation, try not to hyper-focus on your guilt, or worse, to treat what happened like a one-time fluke. Instead, use the encounter as a springboard for driving personal, or even organizational, growth. Here are a few ways to productively channel the experience into meaningful action at your company.

Educate yourself. Exposing yourself to diverse perspectives can help you uncover unconscious biases and build the awareness necessary to align your actions and words with your values. Whether it's books, movies, TV shows, or podcasts, there is a myriad of resources available for your self-education.

Step up for co-workers from marginalized communities. If you see a microaggression unfolding, there are several things you can do. Kerry Ann Rockquemore suggests engaging in what she calls "microresistance" by speaking up and intervening (professionally) on behalf of someone from a marginalized group. It can be as simple as saying, "What makes you say that?" or "I don't get the joke. Can you explain it to me?" If you aren't in a position where you feel safe speaking up in the moment, you can also consider approaching the perpetrator at a later time.

Raise awareness of microaggressions among coworkers and friends. Doing so can reduce some of the burden marginalized folks face at work. As you continue to educate yourself and grow, you can share resources and raise awareness about unconscious biases, the different types of microaggressions that exist, and the fact that we are all capable of (unintentionally) committing them.

Advocate for organizational and/or policy changes. For example, you can start a petition to offer gender-neutral bathrooms, or ask to hold a diversity, equity, and inclusion survey to measure and understand how different folks feel at your company. You can also advocate for increased resources and support systems specifically for individuals from marginalized communities, or for organized focus sessions across the organization to promote greater understanding and awareness.

Recognizing and untangling microaggressions at work

We're all biased, we all make mistakes, and we've all probably committed microaggressions against others.
That doesn't mean our imperfections are an excuse for the problematic or insensitive ways we interact with others. It's not about accepting your biases as inevitable – it's about recognizing how they affect others and untangling them from your core beliefs and behaviors. Every individual at a company shifts the needle on inclusion and belonging. That's why building a truly diverse, equitable, and inclusive workplace begins with making an honest effort to educate yourself, embrace unfamiliar perspectives, and act thoughtfully and intentionally.

Resources to help start your self-education

These resources are by no means exhaustive and have been broadly placed into several, equally non-exhaustive categories. Many of the resources focus on extremely intersectional issues and identities but are listed only once for ease of reference.
(See also List of types of clothing)

Humans often wear articles of clothing (also known as dress, garments, or attire) on the body (for the alternative, see nudity). In its broadest sense, clothing includes coverings for the trunk and limbs as well as coverings for the hands (gloves), feet (shoes, sandals, boots), and head (hats, caps). Humans also decorate their bodies with makeup or cosmetics, perfume, jewelry, and other ornament; cut, dye, and arrange their head, face, and body hair (hairstyle); and sometimes mark their skin (tattoos, scarification, piercing). All these decorations contribute to the overall effect and message of clothing, but do not constitute clothing per se.

People wear clothing for functional and/or social reasons. Clothing protects the body; it also delivers social messages to other humans.

Function includes protection of the body against strong sunlight, extreme heat or cold, and precipitation; protection against insects, noxious chemicals, weapons, and contact with abrasive substances – in sum, against anything that might injure an unprotected human body. Humans have shown extreme inventiveness in devising clothing solutions to practical problems.

Social messages sent by clothing, accessories, and decorations can involve social status, occupation, ethnic and religious affiliation, marital status, and sexual availability, among others. Humans must know the code in order to recognise the message transmitted. If different groups read the same item of clothing or decoration with different meanings, the wearer may provoke unanticipated responses.

- Social status: in many societies, people of high rank reserve special items of clothing or decoration for themselves. Only Roman emperors could wear garments dyed with Tyrian purple; only high-ranking Hawaiian chiefs could wear feather cloaks and palaoa, or carved whale teeth. In many cases, elaborate systems of sumptuary laws regulated who could wear what. In other societies, no laws prohibit lower-status people from wearing high-status garments, but the high cost of status garments effectively limits their purchase and display. In current Western society, only the rich can afford haute couture. The threat of social ostracism may also limit garment choice.
- Occupation: military, police, and firefighters usually wear uniforms, as do workers in many industries. Schoolchildren often wear school uniforms; college and university students wear academic dress. Members of religious orders may wear uniforms known as "habits". Sometimes a single item of clothing or a single accessory can declare one's occupation and/or status – for example, the high toque or chef's hat worn by a chief cook.
- Ethnic, political, and religious affiliation: in many regions of the world, national costumes and styles in clothing and ornament declare membership in a certain village, caste, religion, etc. A Scotsman declares his clan with his tartan; an Orthodox Jew his religion with his (non-clothing) sidelocks; a French peasant woman her village with her cap or coif.
- Clothes can also proclaim dissent from cultural norms and mainstream beliefs, as well as personal independence. In 19th-century Europe, artists and writers lived la vie de Bohème and dressed to shock: George Sand in men's clothing, female emancipationists in bloomers, male artists in velvet waistcoats and gaudy neckcloths. Bohemians, beatniks, hippies, Goths, punks, and skinheads continued the (counter-cultural) tradition in the 20th-century West.
Now that haute couture plagiarises street fashion within a year or so, street fashion may have lost some of its power to shock, but it still motivates millions trying to look hip and cool.

- Marital status: Hindu women, once married, "wear" sindoor, a red powder, in the parting of their hair; if widowed, they abandon sindoor and jewelry and wear simple white clothing. Men and women of the Western world may wear wedding rings to indicate their marital status. See also Visual markers of marital status.
- Modesty and sexual availability: some clothing indicates the modesty of the wearer. For example, many Muslim women wear a head or body covering (hijab, bourqa or burka, chador, abaya) that proclaims their status as respectable women. Other clothing may indicate flirtatious intent. For example, a Western woman might wear extreme stiletto heels, close-fitting and body-revealing black or red clothing, exaggerated make-up, flashy jewelry, and perfume to show sexual availability. What constitutes modesty and allurement varies radically from culture to culture, within different contexts in the same culture, and over time as different fashions rise and fall. Moreover, a person may choose to display a mixed message. For example, a Saudi Arabian woman may wear an abaya to proclaim her respectability, but choose an abaya of luxurious material cut close to the body and then accessorize with high heels and a fashionable purse. All the details proclaim sexual desirability, despite the ostensible message of respectability.

Because clothing and adornment have such frequent links with sexual display, humans may develop clothing fetishes. They may strongly prefer to have sexual relations with other humans wearing clothing and accessories they consider arousing or sexy. In Western culture, such fetishes may include extremely high heels, lace, leather, or military clothing. Other cultures have different fetishes. For many centuries, Chinese men desired women with bound feet (see footbinding). The men of Heian Japan lusted after women with floor-sweeping hair and layers of silk robes. Fetishes vary as much as fashion. Sometimes the clothing itself becomes the object of the fetish, as in the case of used girls' panties in Japan.

Common clothing materials include:

Less common clothing materials include:

Clothing, once manufactured, suffers assault both from within and without. The human body inside sheds skin cells and body oils, and exudes sweat, urine, and feces. From the outside, sun damage, damp, abrasion, dirt, and other indignities afflict the garment. Fleas and lice take up residence in clothing seams. Well-worn clothing, if not cleaned and refurbished, will smell, itch, look scruffy, and lose functionality (as when buttons fall off and zippers fail). In some cases, people simply wear an item of clothing until it falls apart.

Cleaning leather presents difficulties; one cannot wash barkcloth (tapa) without dissolving it. Owners may patch tears and rips, and brush off surface dirt, but old leather and bark clothing will always look old.

Humans have developed many specialized methods for laundering, ranging from the earliest "pound clothes against rocks in a running stream" to the latest in electronic washing machines and dry cleaning (dissolving dirt in solvents other than water). Mending has become less common in these days of cheap mass-manufactured clothing – people may prefer to buy a new piece of clothing rather than spend their time mending old clothes.
But the thrifty still replace zippers and buttons and sew up ripped hems.

Early 21st-century clothing styles

Western fashion has to a certain extent become international fashion, as Western media and styles penetrate all parts of the world. Very few parts of the world remain where people do not wear items of cheap mass-produced Western clothing. Even people in poor countries can afford used clothing from richer Western countries. However, people may wear ethnic or national dress on special occasions, or when carrying out certain roles or occupations. For example, most Japanese women have adopted Western-style dress for daily wear, but will still wear expensive silk kimonos on special occasions. Items of Western dress may also be worn or accessorized in distinctive, non-Western ways. A Tongan man may combine a used T-shirt with a Tongan wrapped skirt, or tupenu.

Mainstream Western or international styles

- International standard business attire – global in influence, just as business functions globally.
- Haute couture
- Clothing of Europe and Russia
- Clothing in the Americas
- United States mainstream fashion
- United States alternative fashion – these fashions are often associated with fans of various musical styles.
- Clothing in Asia
- Clothing in Africa
- Clothing in Oceania

Religious habits and special religious clothing

- Christian religious dress
- Christian monastic habits
- Buddhist monastic dress
- Orthodox Jewish dress
- Hindu religious dress
- Muslim religious dress

History of clothing

Main article: History of Clothing

Prior to the invention of clothing, mankind existed in a state of nudity. The earliest clothing probably consisted of fur, leather, leaves, or grass, draped, wrapped, or tied about the body for protection from the elements. Knowledge of such clothing remains inferential, since clothing materials deteriorate quickly compared to stone, bone, shell, and metal artifacts. Archeologists have identified very early sewing needles of bone and ivory from about 30,000 B.C., found near Kostenki, Russia, in 1988.

Mark Stoneking, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, has conducted a genetic analysis of human body lice showing that they first evolved only 72,000 ± 42,000 years ago. Since most humans have very sparse body hair, body lice require clothing to survive, so this suggests a surprisingly recent date for the invention of clothing. Its invention may have coincided with the spread of modern Homo sapiens from the warm climate of Africa, thought to have begun between 50,000 and 100,000 years ago. Some human cultures, such as the various peoples of the Arctic Circle, until recently made their clothing entirely of furs and skins, cutting clothing to fit and decorating it lavishly.

Before the invention of the powered loom, weaving was a labor-intensive process. Weavers had to harvest fibres, then clean, spin, and weave them. When using cloth for clothing, people used every scrap of it. One approach simply involves draping the cloth. Many peoples wore, and still wear, garments consisting of rectangles of cloth wrapped to fit – for example, the Scottish kilt or the Javanese sarong. Pins or belts hold the garments in place. The precious cloth remains uncut, and people of various sizes can wear the garment.

Another approach involves cutting and sewing the cloth, but using every bit of the cloth rectangle in constructing the clothing. The tailor may cut triangular pieces from one corner of the cloth and then add them elsewhere as gussets.
Traditional European patterns for men's shirts and women's chemises take this approach. Modern European fashion treats cloth much more prodigally, typically cutting in such a way as to leave various odd-shaped cloth remnants. Industrial sewing operations sell these as waste; home sewers may turn them into quilts.

In the thousands of years that humans have spent constructing clothing, they have created an astonishing array of styles, many of which we can reconstruct from surviving garments, photos, paintings, mosaics, etc., as well as from written descriptions. Costume history serves as a source of inspiration to current fashion designers, as well as a topic of professional interest to costumers constructing for plays, films, television, and historical reenactment.

As technologies change, so will clothing.

- Man-made fibers such as nylon, polyester, lycra, and Gore-Tex already account for much of the clothing market. Many more types of fibers will certainly be developed, possibly using nanotechnology. For example, military uniforms may stiffen when hit by bullets, filter out poisonous chemicals, and treat wounds.
- "Smart" clothing will incorporate electronics. We will have wearable computers, flexible wearable displays (leading to fully animated clothing and some forms of invisibility cloaks), medical sensors, etc.
- Present-day ready-to-wear technologies will presumably give way to computer-aided custom manufacturing. Harmless laser beams will measure the customer; computers will draw up a custom pattern and execute it in the customer's choice of cloth.
Why the relational model is here to stay, and why absolutely everyone who has ever touched an Excel sheet can still benefit from knowing its most basic concepts.

2020 marked the 50th anniversary of Edgar F. Codd's revolutionary paper on relational database theory, which introduced many important concepts for handling what is now known as "Big Data" and laid the foundation of the Structured Query Language (SQL). Original paper: "A Relational Model of Data for Large Shared Data Banks". This theory spawned a whole industry focused on efficient and organized access to data, thereby enabling the modern computing age not unlike the advances in personal computing.

A solid foundation

The success of relational databases can be attributed to many mathematical and computational aspects which facilitated their widespread adoption in the decades following Codd's publication. Most notable was the relational model's foundation in mathematical set theory, which enables the relational join – in essence, a set operation that matches rows from two relations over shared values. However, in this article we want to emphasize the fact that relational database theory is uniquely suited to depict the many business relations in modern enterprises.

Benefits of familiarization with database systems and the relational model

Relational databases are a key foundation of modern business, since they drive all online and many offline commercial activities (at the very least). Contrary to some beliefs, not everyone in the modern workforce can be an IT expert or data scientist. However, we think that everyone should know some basic concepts of modern database theory. From CEOs to accounting clerks, risk managers, or sales personnel – everyone can benefit from either gaining a deeper understanding of modern IT architecture (and therefore the company itself) or simply creating better working documents, such as relational Excel tables (and YES – there is such a thing!). For this purpose, we will illustrate a couple of high-level database concepts, with a focus on the relational model's most essential rules.

The purpose of databases

A database does not necessarily have to be integrated into a complex database system, which also includes database management systems (DBMS), database applications, and, last but not least, the users. These components make up the skeleton of all modern database systems:

- Database – stores the actual data in relational tables.
- Database management system (DBMS) – acts as a gatekeeper to restrict users or programs from gaining direct access to the database. All information must travel through the DBMS to ensure data quality.
- Database application – for instance, a website with an online store.
- Users – generally people. However, a user does not necessarily have to be a human being; other programs can be users as well.

A database itself has three primary functions:

- To store data
- To provide an organizational structure for data
- To provide a mechanism for querying, creating, modifying, and deleting data

The latter verbs make up the four primary database operations, a concept widely known as C.R.U.D. (CREATE, READ, UPDATE, DELETE). Beyond simple storage, a database can also preserve relationships among data that are more complicated than a simple list. But what is a relation?

Business is all about relations…

As previously hinted at, in business there are many natural hierarchical relationships among data. For example:

- A customer can place many orders.
- Or put another way: many different orders can be associated with the same customer.

A relational database enables us to model and represent these relationships.

Information complexities and resulting data anomalies

Relations are depicted in relational tables. These follow a specific set of rules which allows them to avoid certain problems arising from information complexities, in contrast to traditional tables or data lists, which are notorious for getting messy and introducing various data integrity problems along the way. First, however, we need to define some of these concepts, work out the problems, and then introduce the solution in the form of relational thinking.

The traditional data list

A list is a simple two-dimensional table which stores data that is important to us for some reason. Let us take a list of projects as an example. At first glance, this data list might not look all that bad. The table already follows one very important concept:

- Each data field consists of only one value – a name, number, or date, etc.

Don't laugh – this is not always the case out there! [This concept is also known as normal form 1 in relational database theory.] A negative example would be a table whose fields (or cells) are not atomic, consisting of multiple values – names and numbers mixed together. With such a table, almost no further operations are possible.

Problem #1: List redundancy

However, each row in our projects table is intended to be self-contained. As a result, the same information may be entered several times. This redundancy might not be a problem on its own, except that we are using more space than necessary: if a particular person is currently managing 10 or 100 projects, all of his or her associated information would appear in this list 10 or even 100 times! Scaled to the needs of a big corporation, this approach would be very wasteful.

Problem #2: Multiple business concepts cause list anomalies

Additionally, in a list each row may contain information on more than one theme or business concept. As a result, certain information may appear in the list only if information about other business concepts is also present. In our list of projects, there is obviously certain project-related data, such as "project name", "start date" and "budget". However, in the centre of the table we find various information associated with the project managers themselves – a second concept. In a relational table, none of the manager's information would be present, except for the data that fully identifies the manager – ideally the employee ID, which is only one column in our case.

Resulting list modification issues

Aside from redundancy, there are typically three major problems, or anomalies, introduced by lists that mix multiple business concepts:

- Deletion anomalies
- Update anomalies
- Insertion anomalies

In our easy example, all three are lurking: deleting a manager's last project also deletes the only record of that manager's contact details (deletion anomaly); changing a manager's phone number requires updating every row in which it appears (update anomaly); and a newly hired manager cannot be entered at all until he or she is assigned a project (insertion anomaly). We can already see that this data list is a complete and utter mess – the short code sketch below makes these anomalies concrete.

The benefits of relational thinking

Relational databases both solve the problems associated with lists and enable us to model these natural business relationships effectively! A relational database also stores information in tables, but each informational theme or business concept is stored in its own table – a relational table!
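Here is a minimal, hypothetical sketch of those anomalies in code form – a flat list of rows mixing the two business concepts, just like the projects list above (the field names and sample values are my own illustrative choices, not the article's):

```python
# A flat "projects" list that mixes two business concepts per row.
projects = [
    {"project": "Website Relaunch", "start": "2021-01-15", "budget": 50_000,
     "manager_id": 1, "manager_name": "Jane Doe", "phone": "555-0100"},
    {"project": "ERP Migration", "start": "2021-03-01", "budget": 90_000,
     "manager_id": 1, "manager_name": "Jane Doe", "phone": "555-0100"},
]

# Update anomaly: the phone number must be corrected in every single row,
# or the list silently becomes inconsistent.
for row in projects:
    if row["manager_id"] == 1:
        row["phone"] = "555-0199"

# Deletion anomaly: dropping Jane's last project also erases the only
# record of her contact details.
projects = [row for row in projects if row["project"] != "ERP Migration"]

# Insertion anomaly: a newly hired manager cannot be stored at all
# until he or she has at least one project row.
```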
Going back to our easy example, a relational approach will break the data list into several parts until each part represents only one business concept – the projects and the project managers. If our company is tracking not only internal projects but also external ones, then another "customer" table would be needed and put in relation with our projects table – you can now see the general idea. [After separating the business concepts, the remaining tables fulfil all requirements for normal form 2.]

Relational tables or Excel on steroids

As mentioned earlier, a relational table also follows certain rules:

- Each column in a relational table represents a specific attribute of an entity.
- Each row represents an instance of an entity.

Our new and improved project managers' table follows these simple rules. The project manager is our entity, which can have various instances – the individual managers. We also defined the following attributes in accordance with our prior example: manager ID, manager name, and phone number.

Furthermore, the table above has one column which uniquely identifies any record (or row) of data inside – and it is not the name. Given a large enough company, a name could occur multiple times; therefore, we used an employee ID column. This column is called the primary key and will later help us establish a relationship to another table.

Please note that introducing such identification columns is not always needed. Keys can also consist of a group of attributes whose combination acts as a key. In our case, however, even the combination of a first name and a last name would not be sufficient to uniquely identify a manager, so another attribute would have been needed. In such cases, it is easier to introduce a new primary key.

Putting the pieces back together

Our data list is now broken apart into several tables, but these need to be linked or joined somehow. In relational database theory, these relations are modelled by linking relational tables together using matched pairs of values. These matching pairs of values are the keys from before. Ideally, we would also create a separate table for all stored phone numbers relating to a manager, in case the company tracks multiple numbers such as home, work, and mobile.

At first glance, this approach to tables and databases sounds like more work, and indeed it is more complicated than a simple list. However, it also offers many advantages when working with data, such as minimizing data redundancy, preserving complex relationships among different topics, and allowing for partial data (so-called null values).

SQL, NoSQL – there is more to it

In relational database systems, these tables and their relations are processed via SQL (Structured Query Language). As mentioned before, you can create new tables or entries, read, update, or delete them. This is where the mathematical foundation becomes important: SQL also allows us to interact with multiple tables by essentially using mathematical set operations, like the unions we all still know from school 😉

Let's assume we want to view the project names together with the manager names. Since we employed our matching pairs of values, we could quickly query such a view:

SELECT Projects.ProjectName, Employees.EmployeeName
FROM Projects, Employees
WHERE Projects.EmployeeID = Employees.EmployeeID
Most modern database systems are relational (RDBMS) and can therefore be accessed via SQL. However, the last decade has also brought several NoSQL database systems, such as MongoDB, which are document-based.

Obviously, we have only touched the surface in this article. So far, we have introduced two fundamental concepts (normal forms). For an efficient database system, one should employ at least two more. However, these are the ones I try to follow even when I am creating quick tables in Excel or other working documents.

How to use these concepts even within Excel

Assuming you have different tables in Excel for each business concept, how would you establish the relations there? Well, nowadays Excel actually offers a couple of ways to employ relational practices:

- The old-school method: using VLOOKUP to combine data from multiple tables to create a PivotTable (works for all Excel versions).
- The modern solution: Excel 2013 and newer versions come with an Excel data model which you can use with PivotTables to analyse multiple tables. Here you can find a small tutorial.
- For more complicated analyses, there is a new Power Pivot add-in which allows you to work with larger datasets, among other benefits. Note: you will have to activate it first.

For people interested in general database lessons, including their structure, the relational model, database design and SQL, I can recommend Dr. Daniel Soper's short series on YouTube. If you are interested in other topics related to Risk Management, Internal Controls or IT System scoping, be sure to check out the following articles:
“Virtually There” is a monthly column addressing the special challenges associated with designing, developing, and implementing virtual and blended learning.

The modern workplace learning environment is about creating experiences that are richly authentic and deliver content to learners in the right place—classroom, desktop computer, mobile device, or on the job—at the right time—formally scheduled or on-demand. (On-demand as a concept also has evolved. Jane Hart of the Centre for Learning & Performance Technologies discusses the concepts of learning in the flow of work and at the point of need as critical to modern workplace learning.)

Why Should You Care?

Learning design isn’t just dictated by where and when learning takes place; designers also need to consider and decide which technologies best fit the delivery of certain content—that means thinking about which implementation techniques and instructional practices make the most sense for the situation. Because these concepts can seem to overlap or contradict one another, and a lack of full understanding can result in creating training on a “trendy” platform that is nevertheless inappropriate, we need to distinguish between three distinct concepts:

- Instructional technologies
- Implementation techniques
- Instructional methods

1. Instructional technologies: Contrary to popular belief, traditional classrooms, desktop computers, and mobile devices are not considered instructional technologies. Though they are learning environments (as defined in the article Where and When Is the Modern Classroom?), the title “instructional technology” refers more precisely to a category of tools and tool sets that are utilized across learning environments, face-to-face or otherwise. Examples include online collaborative hubs (e.g., discussion boards, wikis), videoconferencing software, and live virtual classrooms.

Instructional technology selection is probably the most familiar part of program, course, and lesson preparation among today’s professional facilitators. For many, it comes down to a choice between e-learning environments and virtual classrooms (as dictated by scalability and cost). It’s easy to imagine, for instance, a context in which new product training has to be rolled out to a global salesforce within a two-week time frame. Under those constraints, the training staff might understandably default to a virtual classroom setup. Well designed, this can be powerful. Yet—as so often happens when designing for speed over quality—it’s likely that the virtual classroom simply will supply Webinars that offer little facilitator-learner or learner-learner interaction and lose much of the value afforded by the format.

That’s why, as a general rule of thumb, it’s always good to ask yourself: Is this the best tool to help my learners reach the learning objectives, or am I sacrificing my learning objectives just to use the tool I want? This can be difficult to answer, especially if your instructional technology options are limited by organizational policy or resource availability. As a precaution, we recommend making technology selection the last step of the design process, after confirming your learning objectives and assessment strategies. If you can assess your objectives in a self-paced format, you probably can deliver the content in the same way. If you need to assess your objectives in a live setting, then the content also should be delivered in a live setting (virtual or face-to-face).
When all is said and done, objectives should be the driving force behind instructional technology use—not the other way around. How and to what extent your trainees will learn all comes down to the wise integration of technology based on sound implementation techniques and instructional methods. (For more on the role of instructional design in the modern learning landscape, click here.)

2. Implementation techniques: Implementation techniques can involve any and all of the instructional technologies discussed in the previous section and can be deployed in a classroom, on a desktop computer, or via a mobile device. They do not emphasize which tool or toolset is used but instead focus on how content is conveyed to the learner. This includes:

- Stand-alone courses, the formal instructional modules designed to fully explore a particular piece of content. These are the business version of traditional school courses and typically come with clearly defined learning objectives, assessment measures, and an individual facilitator. One example might be a 30-minute e-learning module on the topic of sexual harassment awareness. The module is intended to fully address the required learning and includes some activity and/or evaluation measure. While some courses are brief, others might run one or more days (e.g., a two-day product training program delivered in a classroom). Importantly, once the course ends and assessments are complete, the learners are done.

- Blended learning, a formal instructional treatment that involves matching content to the most appropriate delivery technology (at the learning objective level) and sequencing the resulting lessons, activities, and assessments into a complete program of instruction. In academic settings, this often features a combination of face-to-face interaction and distance learning (i.e., the use of discussion boards, group wikis, and other online resources). In business, it usually involves a facilitator identifying individual learning objectives and pairing them with available technologies that optimally convey the content. As multiple instructional technologies and methods are woven together, the facilitator is able to deliver necessary information based on where and how learners will apply it in the workplace.

- Microlearning, a means of delivering content to learners in brief, specific bursts. In microlearning environments, learners control what and when they’re learning, which occurs on-demand at the point of need. (See also: John Eades.) Each “burst” addresses a single learning objective and is delivered in segments of (generally) less than three to four minutes. Popular formats include videos, podcasts, and infographics, and often embed a “call to action” that encourages learners to immediately rehearse what they have learned and/or complete a self-contained quiz that will reinforce content recollection. Most microlearning occurs informally, and assessments typically are not recorded.

3. Instructional methods: Instructional methods describe how learning is designed and moderated. As educator Karl Kapp says, “[Methodology] is a design choice, not a delivery method.” Examples of instructional methods include gamification, simulation, and social collaboration, as well as those to which most learners already are accustomed: lectures, case studies, and role-playing.
They may be incorporated as part of a formal instructional program or used informally within a given community of learners, employing a variety of technologies to design and moderate the overall learning experience. Here are three methods that have become increasingly popular over the last decade:

- Gamification: Karl Kapp defines gamification as “the concept of using game-based mechanics, aesthetics, and game thinking to engage people, motivate action, promote learning, and solve problems.” (For more, see Kapp’s video: https://www.youtube.com/watch?v=BqyvUvxOx0M) Game outcomes are rarely certain, and they enable learners to engage in what educational gaming researchers call “productive failure.” Games often involve competition and rewards (i.e., operant conditioning, positive reinforcement). These types of learning activities can occur in any learning environment (including a traditional classroom) and utilize a variety of technologies. Importantly, gamification doesn’t require the use of console or computer video games—it also can involve board or card games, role-playing, and other playful tools or mechanics.

- Simulation: Simulations are “instructional elements that help a learner explore, navigate, or obtain more information about [a] system or environment that generally cannot be acquired from mere experimentation.” Unlike games, which focus on “play,” simulations focus on emulating real-world activities and processes as accurately as possible. While familiar examples might include military (e.g., flight) or medical (e.g., surgical) applications, simulations run the gamut from pen-and-paper case studies to large-scale virtual reality environments. Generally speaking, there are no predefined outcomes in a simulation—the problems encountered are open-ended and complex, allowing for multiple solutions based on learner experience and problem-solving capability.

- Social collaborative learning: Social collaborative learning doesn’t rely on a specific technology, often emerging through typical day-to-day activities. By interacting with our personal learning networks—colleagues, friends, family, individuals met through the Internet—we come to rely on one another’s expertise to overcome the various challenges we’re facing at work, school, and elsewhere. Social collaboration can be formally structured or informally emergent, depending on the learner and learning community in question. Jane Hart describes this kind of self-directed social learning as “continuous, on-demand, unstructured, and autonomous,” as with a Web-based discussion group or wiki. While it’s possible and even advisable to include social collaborative learning technologies and platforms as part of formal training, it can be difficult—if not impossible—to force social interaction between individuals. The most effective approach tends to be the adoption of moderated communities that prove their value over time, so learners can see how participating (as writers, readers, or both) is time well spent.

Keep in mind that gamification, simulation, and social collaborative learning methods can require substantial planning and development to build and moderate. As with instructional technologies, be sure to consider which (if any) of these methods will make your training more effective and why it’s the best choice.

We will continue to see new technologies, techniques, and instructional methods enter the modern workplace landscape. That’s a good thing!
We just need to actualize their potential by making educated design choices and wisely integrating approaches. Don’t follow the latest trend for the sake of being trendy—do what’s best for your learners!

A thought leader in the field of virtual classrooms, Jennifer Hofmann is the president of InSync Training, LLC, a consulting firm that specializes in the design and delivery of virtual and blended learning. Featured in Forbes Most Powerful Women issue (June 16, 2014) as a New England Women Business Leader, she has led InSync Training to the Inc. 5000 as the 10th Fastest Growing Education Company in the U.S. (2013). Hofmann is the author of The Synchronous Trainer’s Survival Guide: Facilitating Successful Live and Online Courses, Meetings and Events (Pfeiffer, 2003), Live and Online! Tips, Techniques, and Ready-To-Use Activities for the Virtual Classroom (Pfeiffer, 2004), and How To Design For The Live Online Classroom: Creating Great Interactive and Collaborative Training Using Web Conferencing (Brandon Hall, 2005). She has co-authored, with Dr. Nanette Miner, Tailored Learning: Designing the Blend That Fits (ASTD, 2009), a book focused on taking advantage of distributed technologies to create the best blended training solution possible. Her most current projects include a monthly Training magazine online series titled “Virtually There” and her newest book, Body Language in the Bandwidth – How Facilitators, Producers, Designers, and Learners Connect, Collaborate & Succeed in the Virtual Classroom (InSync Training, 2015). Follow Jennifer Hofmann at her blog, Body Language In The Bandwidth at http://blog.insynctraining.com or on Twitter @InSyncJennifer.
Topic "Children's Health" in Macedonian - total 43 documentsTitle: Action taken form- student head lice Summary: This form is used to gain an understanding of the treatment used for headlice and when it commenced. Title: Bedwetting in childhood Summary: Bed-wetting is common. About one in every five children in Australia wets the bed. Bed-wetting can run in families and is more common in boys than girls before the age of nine years. It can be upsetting for the child and stressful for the whole family. The good news is that you can get help. Title: Bedwetting teenagers and young adults Summary: This translated brochure provides information for bedwetting teenagers and young adults including what causes bedwetting, how it can be helped, and chances of becoming dry. It also describes how common it is and whether there is help available. Title: Belonging, being and becoming – the early years framework Summary: Australia’s first national Early Years Learning Framework aims to extend and enrich children’s learning from birth to five years and through the transition to school. The Framework’s vision is that all children experience learning that is engaging and builds success for life. Your early childhood service can provide you with further information. Title: Bilingualism and languages learning Summary: This brochure explains the benefits of teaching a child more than one language. Title: Challenging behaviours Summary: Information about challenging or disruptive behaviours of children including tantrums and ADHD, that persist or become so severe that they cause major problems for families or communities. You find information on how to cope with it and where to get help. Title: Childhood pneumococcal vaccine Summary: This translated resource provides information on the free pneumococcal vaccine provided by the National Immunisation Program schedule to infants. It outlines what the disease is, the vaccines, who is eligible for the vaccines and also possible side effects. Title: Ciprofloxacin: an antibiotic for contacts of a person with meningococcal infection Summary: Information about ciprofloxacin, an antibiotic for close contacts of a person with a meningococcal infection. Title: Connecting with families Summary: Bringing the Early Years Learning Framework to life in your community. As early childhood educators relationships are at the heart of everything we do. Genuine, positive relationships with children, families and each other are essential if we want to achieve positive outcomes. When we think about relationships it is usually relationships with children that spring to mind. But the relationships and partnerships that we build with families are just as important. Working with parents is rewarding, challenging and always full of possibilities. Title: Consent to conduct head lice inspections Summary: This form asks parents to provide written consent for their children's school to conduct head lice inspections. Summary: A guide to cryptosporidiosis, an infection which causes diarrhoea. Includes information on causes, prevention and symptoms. Title: Diphtheria, tetanus, and pertussis (whooping cough) booster vaccine for 18 month old children Summary: This translated information resource provides information on the diphtheria, tetanus, and pertussis (whooping cough) booster vaccine given for free to children at 18 months old through the National Immunisation Program schedule. 
It describes what diphtheria, tetanus, and pertussis (whooping cough) are, describes the vaccines and possible side effects, and includes a short pre-immunisation checklist.

Title: Diphtheria, tetanus, pertussis (whooping cough) and poliomyelitis (polio) immunisation
Summary: The National Immunisation Program schedule provides free diphtheria, tetanus, whooping cough and polio vaccine to children at four years of age. The sheet lists possible side effects and provides a pre-immunisation checklist.

Title: Diphtheria, tetanus, whooping cough, hepatitis B, polio and Hib vaccine for infants
Summary: The National Immunisation Program provides free diphtheria, tetanus, whooping cough, hepatitis B, polio and Hib vaccine to infants at two, four and six months of age. This document provides a pre-immunisation checklist and information about side effects. Includes diphtheria, tetanus and whooping cough (pertussis) vaccine consent form.

Title: Fact sheet for parents of children at risk of anaphylaxis
Summary: Anaphylaxis is the most severe form of allergic reaction and is potentially life threatening. It usually occurs rapidly after exposure to a food, insect or medicine to which a person may already be allergic. Anaphylaxis must always be treated as a medical emergency and requires immediate treatment with adrenaline.

Summary: The document explains causes and prevention of giardiasis, an intestinal infection causing symptoms like diarrhoea, stomach cramps and nausea.

Title: Head lice alert notice
Summary: This alert notice explains that some children in a school have head lice and explains what parents need to do to prevent the spread of head lice.

Title: Infant hepatitis B immunisation information
Summary: Information about the hepatitis B vaccine given to newborn babies. Includes information about the disease, why babies should have the vaccine, and information about possible side effects. It is important to start the hepatitis B immunisation as soon as possible after birth.

Title: Measles - information for contacts
Summary: Information for 'contacts' of people with measles - meaning those who've shared the same air as someone with measles. Includes information about symptoms and treatment to prevent the disease.

Title: Measles (NSW Health)
Summary: Fact sheet about measles, including information about how the infection is spread, symptoms, treatment and immunisation.

Title: Measles, mumps and rubella immunisation information
Summary: This translated resource provides information about the measles, mumps, and rubella vaccines for children at 12 months and 18 months of age through the National Immunisation Program schedule. It includes information on what they are, the vaccine itself, possible side effects and a short pre-immunisation checklist.

Title: Measles, mumps, rubella and chickenpox immunisation information
Summary: This translated resource provides information about the measles, mumps, rubella and chickenpox vaccines for children at 18 months of age through the National Immunisation Program schedule. It includes information on what they are, the vaccine itself, possible side effects and a short pre-immunisation checklist.

Title: Meningococcal ACWY secondary school vaccine program: Information and consent form
Summary: This document provides information about the four-in-one combined vaccine for protection against meningococcal A, C, W, Y strains that is free to secondary school students in Victoria.
It must be signed by the parents/carers of eligible young people under 18 years old so they can receive the vaccine at secondary school.

Title: Meningococcal disease
Summary: Information about meningococcal disease, an uncommon but serious disease which is more likely to affect small children, adolescents and young adults. Includes symptoms, prevention and treatment.

Title: National assessment program - literacy and numeracy
Summary: This information sheet explains what the National Assessment Program - Literacy and Numeracy (NAPLAN) test is.

Title: NDIS pathway hearing stream from 0 to 6 years
Summary: This fact sheet explains how the National Disability Insurance Scheme (NDIS) can help if your child aged between 0 and 6 years has just been diagnosed with hearing loss.

Title: Parents count too - helping your child with measuring length and area
Summary: A numeracy resource for parents/carers which describes how children learn about measurement and offers practical and fun suggestions for developing this skill.

Title: Parents count too - helping your child with measuring temperature and time
Summary: A numeracy resource for parents and carers which outlines how children learn about temperature and time. It provides practical ideas for developing these concepts at home.

Title: Parents count too - helping your child with measuring volume and mass
Summary: A numeracy resource for parents and carers which describes how children learn about volume and mass and provides advice on games and activities that will develop skills.

Title: Parents count too - helping your child with mental calculations
Summary: A numeracy resource for parents/carers which describes how to foster the development of using mental strategies to solve problems in common daily activities.

Title: Parents count too - helping your child with patterns and algebra
Summary: A numeracy resource for parents and carers which outlines activities to help children to recognise, make, describe and continue repeating patterns.

Title: Parents count too - helping your child with representing and interpreting graphs and tables
Summary: A numeracy resource for parents and carers which describes activities to promote children's understanding of graphs and tables.

Title: Parents count too - helping your child with shapes and objects
Summary: This fact sheet explains what spatial mathematics is and how you can help your child to identify different shapes such as triangles and circles.

Title: Parents count too: helping your child with arithmetic: addition, subtraction, multiplication and division
Summary: This factsheet has information to help parents teach their children to count.

Title: Pertussis (whooping cough)
Summary: Information about pertussis (whooping cough), a disease that can be very serious in small children but is preventable by immunisation.

Title: Polio immunisation information
Summary: This translated information resource provides information on poliomyelitis (polio) and the polio vaccine. It includes what it is, the vaccine, possible side effects of the vaccine and a short pre-immunisation checklist.

Title: Reading with your child at home
Summary: Ideas for parents and carers to help their young children with reading.

Title: Rotavirus immunisation information
Summary: This translated resource provides information on rotavirus, the most common cause of severe gastroenteritis in infants and young children in Australia.
It includes information on the protection against rotavirus that is available free of charge under the National Immunisation Program schedule for babies, in two doses at two and four months of age. It also includes a pre-immunisation checklist and information on possible side effects.

Summary: Rubella (also known as German measles) is caused by infection with a virus. Infection is usually mild, but can cause serious damage to unborn babies. Immunisation is recommended and provided free for all children at 12 months and 18 months of age.

Title: Statewide Eyesight Preschooler Screening (StEPS) brochure
Summary: The StEPS brochure explains why children should have their vision screened before they start school and how children can access the StEPS program and have their vision screened for free before they start school.

Title: Why does my baby need a hearing check?
Summary: Information for parents on hearing checks for babies, and why they're important for detecting problems early.

Title: Why has my baby been referred directly for a diagnostic audiology assessment?
Summary: Information from the NSW Statewide Infant Screening-Hearing (SWISH) Program for parents whose baby was found to be not eligible for screening and referred directly for a diagnostic assessment.

Title: Year 7 secondary school vaccine program - information and consent form
Summary: This translated consent form is for the Human Papillomavirus (HPV) vaccine and the diphtheria-tetanus-whooping cough vaccine that are offered to all children in Year 7 at secondary school, or aged 12 to 13 years in the community setting. It firstly provides information on HPV and diphtheria-tetanus-whooping cough, and then has the consent form attached.

This resource has been reviewed in the last 3 years and complies with the Health Translation Directory editorial guidelines and collection policy.
While cereals such as rice and corn are the staple food in Asia, legumes are increasing in importance as sources of nutrition for both humans and the soil. With the growing demand for food, especially rice and corn, farmers are prone to resort to monocropping, a farming practice which not only favors pest infestation and soil degradation but also reduces the opportunity to generate better incomes. When the rice-rice or corn-corn cropping pattern becomes the norm, humans, particularly those in distant areas or those with much reduced purchasing power, may miss out on other essential and inexpensive sources of protein, vitamins, and minerals with a largely rice or corn diet. Hence, crop diversification is an attractive option that wise farmers could take advantage of, given its several benefits.

Ms. Rose Mary Aquino, a senior researcher at the Cagayan Valley Integrated Agricultural Research Center (CVIARC), is one staunch advocate of crop diversification. Specifically, she actively promotes grain legumes, such as groundnut (peanut), mungbean, and soybean, as a means to alleviate poverty in the face of climate change and malnutrition in marginalized rice- and corn-based farming communities. During the 7th National Agriculture and Fisheries Technology Forum and Product Exhibition organized by the Bureau of Agricultural Research (BAR) at the SM Mega Trade Hall 2, SM Mega Mall, Mandaluyong City on 11-14 August 2011, Aquino presented the enhancement of grain legumes productivity in predominantly rice- and corn-producing areas in Region 2.

Breaking the monocrop

Based on studies made by CVIARC researchers from 2006 to 2010, Aquino presented four possible cropping patterns for a cereals-legumes cropping system. For example, it is possible to adopt a rice-mungbean, corn-soybean, corn-peanut, or corn-mungbean-peanut intercrop depending on the average monthly rainfall and temperature. “Grain legumes production supports the agriculture sector’s climate-change mitigation measures in three ways: it requires minimum or zero tillage, it retains adequate levels of crop residues to protect the soil from erosion, and intercropping reduces pest and disease incidence,” explained Aquino.

Groundnut: A+ in nutrition

In her presentation, Aquino presented groundnut or peanut as the legume that could be graded A+ in nutrition for its low salt but high protein content, energy value, and dietary fiber. She presented data showing that groundnut contains unsaturated fat – the good fat – that helps to remove cholesterol from the blood. Likewise, it contains more protein than eggs, dairy products, and various cuts of meat and fish. As a source of dietary fiber, it also reduces the risk of some types of cancer and helps control blood sugar levels. “Of the 13 vitamins needed by the body, half are present in groundnut. Moreover, of the 20 minerals necessary for normal body growth, seven can be found in groundnut,” she added.

According to Aquino, the current climate change-ready varieties of groundnut are NSIC Pn 11 or Namnama-1, NSIC Pn 14 or Namnama-2, and NSIC Pn 15 or Asha. As the overall coordinating agency for agriculture and fisheries R&D, BAR has funded the adaptability trials and promotion of these promising peanut varieties, tapping CVIARC as one of the research proponents.

“The market potential of groundnut is great, particularly in our country, which imports 30,000 to 50,000 tons annually – more than 50 percent of our national requirement.
The demand of peanut processors in our country is 343 metric tons monthly in the shelled form, which sells at P50 to P60 per kilogram. In unshelled form, the farm gate price of peanut ranges from P25 to P28 per kilogram,” said Aquino. “We have submitted samples of these promising peanut varieties to five major processors in Manila and they all gave positive feedback, mainly due to their improved shells and bigger nut size,” she added.

Mung bean: The black gold

If you used to think twice about eating mung bean, you’d be better off thinking twice about not getting the rich protein and minerals that can be found in it. After all, it is not called ‘black gold’ for nothing. “Mung bean sprouts are rich in Vitamin C and iron. Iron in mung bean sprouts is twelve-fold higher compared to mung bean soup, while a four-fold increase in iron can be achieved when it is cooked with tomato,” said Aquino in her presentation.

Like groundnut, mung bean is a drought-tolerant, all-season crop that matures from 55 days to 74 days after emergence (DAE), depending on the variety planted. The climate change-ready varieties for mung bean are Pagasa 7, NSIC Mg 12 or Pagasa 19, and NSIC Mg 15 or Kinang. According to Aquino, mung bean’s high carbohydrate content, particularly found in Pagasa 19, makes it a good raw material for bread or noodle production. “The Pagasa 7 mung bean is sold at a farm gate price of P35 per kilogram. In terms of rat and frog infestation, Pagasa 19 is at an advantage because the plant grows taller. On the other hand, Kinang is a good choice if you want your plant to mature early,” she explained.

Soybean: The wonder crop

While soybean is currently classified as a legume in the country and in other parts of the world, the Food and Agriculture Organization (FAO) and the United States Department of Agriculture (USDA) classify this crop as an oilseed. Whether touted as the “king of beans” or the “wonder crop”, the powerhouse of health benefits present in soybeans is worth paying attention to. Aquino discussed soybean as a source of high-quality protein with numerous health benefits that can be gained from soy isoflavones, including the ability to lower the risk of several types of cancer, reduce menopausal symptoms such as hot flushes in women, and reduce the risk of osteoporosis and heart disease. “The tocopherol or Vitamin E found in soy is a very good anti-oxidant. Soybean sprouts also contain 12 times more iron than found in mung bean,” she added.

While the US remains the biggest producer of soybeans in the world, the Philippines is beginning to dip into the global opportunities for soybean. Through the Philippine Soybean Roadmap 2011-2014 crafted by the Department of Agriculture (DA), BAR is tasked to coordinate all R&D activities needed for the roadmap to take off. BAR, in coordination with regional research centers such as CVIARC, is looking into the potential to supply the huge soy demand in the Japanese market. Currently, the approved variety for soybean is PSB Sy 2 or Tiwala 6, which matures in 86 to 103 DAE.

Soil health and organic agriculture

The soil also deserves justice – if only it could have its own lawyers. Thus was the lament of Aquino about the severe problem of land degradation due to wrong farming practices. Thankfully, even with increasing cropping intensity, soil health can be maintained with the inclusion of grain legumes in the production system.
“Peanut, mung bean, and soybean are nitrogen-fixing crops because of Rhizobium, the bacteria that can be found abundantly in the root nodules of legumes,” said Aquino. The nitrogen compounds found in legumes are essential for the growth of plants. When the plant dies, the nitrogen that was fixed is released to other plants and also helps to fertilize the soil. Aquino said that legumes maintain soil fertility by serving as ‘green manure’ and soil conditioner, and act as a good substrate for organic fertilizer production.

According to Aquino, due to this nitrogen-fixing ability, the organic way of legumes production is highly possible and commendable. “Legumes are very responsive to residual nutrient utilization and require less fertilizer input. In Cagayan Valley, the farmers who adopted a peanut-white corn intercrop did not apply chemical fertilizers or pesticides. Farmers who organically planted mungbean and soybean after upland rice also found that on-time planting helped minimize pest damage,” said Aquino.

Development of a legume industry

A 2006 study from Central Luzon State University (CLSU) by Abon, et al. assessed the legumes industry and the current policy environment to help boost the industry. An inventory of government policies and programs was undertaken that included the enabling institutions and their outcomes. The researchers highly recommended an appropriate policy environment that will establish a national program for sustained legumes production, with provisions for adequate R&D support on its production, processing, and utilization. Improvements in government credit-marketing support services that will promote productivity and link and expand markets were also deemed necessary. Furthermore, the development of a stronger linkage between producers and processors of legumes, and enabling mechanisms for the establishment of rural-based legumes enterprises with the government providing market and price incentives, were recommended.

With the bright promise of legumes and the existence of government support, particularly from the DA including BAR, dedicated researchers and scientists, farmers’ interest, and increasing public awareness of nutritious and organic food, the aim of producing a quality source of food and sustainable livelihood is an exciting prospect that, hopefully, this generation would not miss out on this time.

For more information on legumes, you may contact Ms. Rose Mary Aquino or CVIARC at tel. no. (078) 622-0961 to 62, email: [email protected], [email protected].

Source: Miko Jazmine J. Mojica, BAR Chronicle, August 2011 Issue (Vol. 12 No. 8)
|Allegro CL version 10.1|
Unrevised from 10.0 to 10.1.

1.0 Environments introduction

An environment in Common Lisp is a Lisp object that contains, in some fashion, a set of bindings and also information about operators, variables, symbols, and so on. This information can be useful in various ways, particularly during evaluation, compilation and macro expansion. The macroexpand operator, for example, takes an optional environment argument, whose value is either nil or an environment object. The actual nature of an environment object is implementation dependent. In standard Common Lisp, there is no way for programmers to access or modify environment objects.

In the proposed Common Lisp standard specified by Guy Steele et al. in the second edition of Common Lisp: the Language, an interface to environment objects was proposed (in Section 8.5, pp 207-214), but the ANSI J13 committee decided not to include this in the ANS. Allegro CL now has implemented much of the CLtL-2 environments proposal, with some differences that we describe. We recommend that users read that section of CLtL-2, although this document is complete in itself.

Environments can be thought of as being specified in CLtL-2, Section 8.5, pp 207-214, with a number of differences. In the following points, we describe the differences between our implementation and the CLtL-2 specification.

- There are several kinds of environments: :interpreter, :compiler, :evaluation, :compilation, and :macros-only. These kinds are described in detail in Section 2.1 Kinds of environments. Note that :compilation assists in distinguishing between the compilation process, which wants to expand macros, and the walking process, which does not. :macros-only allows the creation of an environment which is appropriate for a macrolet lexical closure.

- A single name may require more than one query: one call to function-information might return :special-operator, one call with a :compilation environment might return :compiler-macro, and one with an :evaluation environment shows whether there is a real functional definition.

- If the kind returned is :special, then the second value is nil, because the value is dynamic and can be best accessed via symbol-value.

- If the kind returned is :special-operator, then the second value is nil, because the value of a special operator is opaque to the programmer (companion-macros are mandated by the spec in order for a non-compiler to "see" into special-operators, and so a functional value doesn't make sense).

- If the kind returned is :function or :macro, and the definition is in fact dynamic and thus accessible via fdefinition or macro-function, respectively, then the second returned value is nil, unless the third argument to function-information is non-nil, indicating that consing definitions and declarations is ok (see also the :reuse argument described below). A non-nil third argument also results in a non-nil second return value (the definition housed in a locative), although these return values are likely to be consed on the fly. This switch is added so that the interpreter, which almost never looks at declarations, doesn't need to cons as much for no good reason.

- Declarations are added to an environment via :declare, in order to reduce consing. In this case, the locative (if present) is sought in the :locative argument (see below).

- When the :reuse argument is nil (the default), a new environment object is consed, i.e. the environment object returned is not eq to the one given. But when reuse is non-nil, then the old environment object is returned, and any additions to the environment are added at the same level, as if they had all been added in the same call to augment-environment which created this environment object.

- The :locative argument is either nil or a cons cell, and can be used efficiently when only one name is being added to the environment.
When a :constant is being augmented, the locative argument is the actual value of the constant. The locative argument becomes the value which is returned as the second value from the *-information functions. For augmentation with many names at a time, a locative can be specified for each name: instead of a list of names for each keyword, the list may be an alist, each element of which specifies the name in the car and the locative in the cdr. The car of a non-nil locative cons is always mutable, unless it represents a :constant value. See also the function system:constant-value.

The functions that are described as being defined by the macro are in fact implemented, and work as specified, except that the first value returned may be one of the kinds described below.

2.1 Kinds of environments

The kind slot of an augmentable-environment object determines what behaviors the accessors have on it. Usually, accessors will return similar information, but for several reasons, including performance and the fact that namespaces are not pure mappings, the kind does play a part, along with optional arguments, in returning different levels of information. An environment object can be "modified" to become a different kind simply by performing a null augmentation on the original environment and by storing into the kind slot:

(setq e2 (sys:augment-environment e1))
(setf (sys::augmentable-environment-kind e2) :evaluation)

which results in an evaluation environment in e2, which has exactly the same information as e1 (which might be any kind of environment, but which in practice is probably a :compiler environment).

The environment kinds are:

:interpreter: for a lisp with an interpreter, such as Allegro CL, an interpreter environment can form the basis and implementation for the interpreter. Accesses on it will tend to generate no declaration information (with the special case of the special declaration), and global values will be left up to the caller to retrieve, rather than consing up a locative for each call. The by-words for an interpreter environment are speed and cons-free operation.

:compiler: a compiler environment is the normal environment which the compiler uses to establish contours and contexts for the compilation process. The by-word for a compilation environment is as much information as possible. (Prior to version 8.0, :compiler environments went by a different name.) A compiler environment might also have a global component, suitable for storing definitions temporarily during a compile-file. If so, it is generally stored into *compilation-unit-environment*, and any definitions it contains will shadow any global definitions stored directly in the lisp. When the global component is removed, the shadowing stops, and the original definitions are visible again.

:evaluation: an evaluation environment is a compilation environment posing as an interpreter environment. It has many of the same characteristics of a compilation environment, and in fact is created with the augment-environment/setf technique described above.

:compilation: a compilation environment is also similar to a compiler environment, except that macros and compiler-macros can recognize one and macroexpand differently. Note: it is a goal to eventually remove this kind of environment; the distinction should not be as useful as it currently is. (Prior to version 8.0, :compilation environments went by a different name.)

:macros-only: this environment kind serves a special purpose when making a lexical closure for a macrolet.
Because macrolet makes macro-functions in its own lexical environment, and because referencing a local variable within this environment is undefined, it is necessary that only macro definitions be copied when the lexical closure is created.

If one considers a namespace to be a one-to-one mapping of a name to a binding, then the function namespace is not a pure namespace in Common Lisp; consider that a name can simultaneously name both a special-operator and either a macro or a compiler-macro, or it can name a macro or function and a compiler-macro simultaneously. Of course, any lexical definition created for that name (such as an flet, labels, or macrolet) will shadow all of these potential combinations, but if no such shadowing occurs, there is a necessity for function-information to be able to make the distinctions between the various combinations of definition that are possible.

If the fourth argument (the special-operators argument) to function-information is true, and if the name is a non-shadowed special-operator, then :special-operator is returned, even if it has a macro or a compiler-macro definition as well. If the argument is nil, then for a special-operator which also has a compiler-macro, :compiler-macro is returned for :compilation environments (otherwise :special-operator is returned), and for a special-operator which also has a macro definition, :macro is returned only for the other environment kinds. We do not define what occurs if a special-operator has both a macro and a compiler-macro definition, because Allegro CL has none of these situations. There should be a normalized behavior for such a situation.

If a name defines a compiler-macro as well as either a macro or a function, then which is returned depends on the environment kind: a :compilation environment will cause the :compiler-macro to be returned, and an :interpreter or an :evaluation environment will result in the :macro being returned.

The following functions and variables are defined in our environments implementation:

*compile-file-environment* (now deprecated; use *compilation-unit-environment*)
*compilation-unit-environment* (the 8.0 replacement for *compile-file-environment*)

The optimize qualities (safety, space, speed, debug) are no longer represented by variables in the excl package (such as excl::.speed.). Also, the compilation-speed quality is added to the set, because it is specified by the ANS. These 5 qualities are now accessed by (sys:declaration-information 'optimize <env>), which returns an alist which will always include at least one of each of these qualities. This return value is constant -- never modify its contents. Also, there may be more than one entry for a quality; the first one encountered is the correct value for the specified environment.

The optimize declaration is defined by:

(sys:define-declaration optimize (declaration env)
  .optimize.
  :declare
  <body>)

This means, for debugging purposes, that sys::.optimize. is used as the holder of the global optimization quality list. The declaration-information function will return this list, possibly shadowed by local lexical optimize declarations, depending on the environment object passed to it.
cl-user(1): (sys:declaration-information 'optimize nil)
((safety 1) (space 1) (speed 1) (compilation-speed 1) (debug 2))
cl-user(2):

cl-user(1): (setq e1 (sys::make-augmentable-environment-boa :compilation))
#<Augmentable compilation environment @ #x71b0941a>
cl-user(2): (setq e2 (sys:augment-environment e1 :declare '((optimize speed (safety 0)))))
#<Augmentable compilation environment 1 @ #x71b0992a>
cl-user(3): (sys:declaration-information 'optimize e1)
((safety 1) (space 1) (speed 1) (compilation-speed 1) (debug 2))
cl-user(4): (sys:declaration-information 'optimize e2)
((speed 3) (safety 0) (safety 1) (space 1) (speed 1) (compilation-speed 1) (debug 2))
cl-user(5):

This is an example of a mutually exclusive declaration; one can declare the same symbol binding either inline or notinline, but not both at the same time. Therefore, these two declarations are combined into a single declaration class called inline. The entire definition for these two declarations is:

(sys:define-declaration (inline inline notinline) (declaration env)
  .inline.
  :function)

The first inline in the namespace is the class, and each of the two further names in the namespace are the declaration instances. The kind of declaration is :function, and the body is null and thus takes on the default action. Thus, function information for a function name that has been declared either inline or notinline will include as its third returned value an entry of the form (inline <value>), where value is one of inline, notinline, or nil (but this might usually be left off).

cl-user(1): (setq e1 (sys::make-augmentable-environment-boa :compilation))
#<Augmentable compilation environment @ #x71b0941a>
cl-user(2): (setq e2 (sys:augment-environment e1 :function 'foo
                       :locative (list #'(lambda (x) (1+ x)))
                       :declare '((notinline foo))))
#<Augmentable compilation environment 1 @ #x71b09c62>
cl-user(3): (setq e3 (sys:augment-environment e2 :declare '((inline foo))))
#<Augmentable compilation environment 1 1 @ #x71b09eba>
cl-user(4): (sys:function-information 'foo e1)
nil
cl-user(5): (sys:function-information 'foo e2)
:function
(#<Interpreted Function (unnamed) @ #x71b09c32>)
((inline notinline))
t
cl-user(6): (sys:function-information 'foo e3)
:function
(#<Interpreted Function (unnamed) @ #x71b09c32>)
((inline inline) (inline notinline))
t
cl-user(7):

cl-user(1): (sys:function-information 'bar nil)
nil
cl-user(2): (sys:augment-environment nil :declare '((notinline bar)) :reuse t)
nil
cl-user(3): (sys:function-information 'bar nil)
nil
cl-user(4): (sys:function-information 'bar nil t)
:free
(nil)
((inline notinline))
cl-user(5): (setq e1 (sys::make-augmentable-environment-boa :compilation))
#<Augmentable compilation environment @ #x71b09bd2>
cl-user(6): (setq e2 (sys:augment-environment e1 :declare '((inline bar))))
#<Augmentable compilation environment 1 @ #x71b09fba>
cl-user(7): (sys:function-information 'bar e1)
nil
cl-user(8): (sys:function-information 'bar e1 t)
:free
(nil)
((inline notinline))
cl-user(9): (sys:function-information 'bar e2 t)
:free
(nil)
((inline notinline) (inline inline) (inline notinline))
nil
cl-user(10):
The initial definition of these declarations is: (sys:define-declaration (ignore ignore ignorable excl::ignore-if-unused) (declaration env) .ignore. :variable) However, when the compiler is loaded, it overwrites this definiton with one which will It doesn't make sense to ignore a variable which hasn't been bound, but no error is generated when such a situation occurs. cl-user(1): (setq e1 (sys::make-augmentable-environment-boa :compilation)) #<Augmentable compilation environment @ #x71b0941a> cl-user(2): (setq e2 (sys:augment-environment e1 :variable 'foo :locative (list (comp::make-varrec-boa 'foo)) :declare '((ignore foo)))) #<Augmentable compilation environment 1 @ #x71b09ab2> cl-user(3): (multiple-value-list (sys:variable-information 'foo e2)) (:lexical (#<compiler::varrec foo DynamicExtent: maybe Used: ignore unknown-type>) ((ignore ignore)) t) cl-user(4): (comp::varrec-used (caadr *)) ignore cl-user(5): Example of maintenance of definitions in compile-file-environment, as opposed to the global-environment: Note that the two definitions of foo given here are maintained in different locations; one in the global environment, and one in the lexical environment: cl-user(1): (fboundp 'foo) nil cl-user(2): (sys:augment-environment nil :function 'foo :locative (list (compile nil (lambda (x y) (+ x y))))) nil cl-user(3): (sys:function-information 'foo) :function nil nil cl-user(4): (fdefinition 'foo) #<Function (:anonymous-lambda 9) @ #x71b0ae42> cl-user(5): (funcall * 10 20) 30 cl-user(6): (setq e1 (sys:make-compile-file-environment)) #<Augmentable compilation environment @ #x71b0e7b2> cl-user(7): (setq e2 (sys:augment-environment e1 :function 'foo :locative (list (compile nil (lambda (x y) (- x y)))))) #<Augmentable compilation environment 1 @ #x71b11d62> cl-user(8): (sys:function-information 'foo) :function nil nil cl-user(9): (fdefinition 'foo) #<Function (:anonymous-lambda 9) @ #x71b0ae42> cl-user(10): (sys:function-information 'foo e2) :function (#<Function (:anonymous-lambda 10) @ #x71b10bda>) nil t cl-user(11): (multiple-value-bind (ignore locative) (sys:function-information 'foo e2) (funcall (car locative) 10 20)) -10 cl-user(12): Copyright (c) 1998-2019, Franz Inc. Oakland, CA., USA. All rights reserved. This page was not revised from the 10.0 page. |Allegro CL version 10.1| Unrevised from 10.0 to 10.1.
For centuries, France and England have been like ambitious siblings, close in age, evenly matched in most things, competitive, living in a house that’s too small for there ever to be peace. They have repeatedly come into conflict over religion, territory, colonies, and anything else two countries might conceivably argue over… In 1754, that rivalry came to the American frontier and set into motion a chain of events that would ultimately culminate in an American revolution…

This lesson was reported from: A chapter of The United States: An Open Ended History, a free online textbook. Adapted in part from open sources.

For Your Consideration:

- How did the French and British differ in their approach to colonization of North America?
- What was the Albany Plan of Union? Did it work?
- What were the terms of the Treaty of Paris (1763)?
- What was Pontiac’s Rebellion?
- Why did the Proclamation of 1763 anger British colonists?

A European Rivalry

Throughout most of their mutual history, France and Britain have engaged in a succession of wars. During the 1700s, these European wars spilled over into the Caribbean and the Americas, drawing in settlers, African slaves, and native peoples. Though Britain secured certain advantages—primarily in the sugar-rich islands of the Caribbean—the struggles were generally indecisive, and France remained in a powerful position in North America.

By 1754, France still had a strong relationship with a number of Native American tribes in Canada and along the Great Lakes. It controlled the Mississippi River and, by establishing a line of forts and trading posts, had marked out a great crescent-shaped empire stretching from Quebec to New Orleans. The British remained confined to the narrow belt east of the Appalachian Mountains. Thus the French threatened not only the British Empire, but also the American colonists themselves, for in holding the Mississippi Valley, France could limit their westward expansion.

Large areas of North America had no colonial settlements. The French population numbered about 75,000 and was heavily concentrated along the St. Lawrence River valley. Fewer lived in New Orleans; Biloxi, Mississippi; Mobile, Alabama; and small settlements in the Illinois Country, hugging the east side of the Mississippi River and its tributaries. French fur traders and trappers traveled throughout the St. Lawrence and Mississippi watersheds, did business with local Indian tribes, and often married Indian women. Traders married daughters of chiefs, creating high-ranking unions. In this way, French colonial interests in North America meant coexistence, exchange, and commerce with native peoples.

In contrast, British settlers outnumbered the French 20 to 1, with a population of about 1.5 million ranged along the eastern coast of the continent from Nova Scotia and Newfoundland in the north to Georgia in the south. Many of the older colonies had land claims that extended arbitrarily far to the west, as the extent of the continent was unknown at the time when their provincial charters were granted. Their population centers were along the coast, yet the settlements were growing into the interior. Nova Scotia had been captured from France in 1713, and it still had a significant French-speaking population. Britain also claimed Rupert’s Land, where the Hudson’s Bay Company traded for furs with local Indian tribes.
However, in the more southern colonies – those that would one day become the United States – British interests were frequently at odds with those of the natives, as the large colonial population pressed ever westward, clearing ancestral native land for new English-style farms and towns.

War on the Frontier

Disputes over who would control the Ohio River Valley led to the deployment of military units and the construction of forts in the area by both the British and the French, even though the area was in fact already occupied by the Iroquois Confederacy. An armed clash took place in 1754 at the French Fort Duquesne, the site where Pittsburgh, Pennsylvania, is now located, between a band of French regulars and Virginia militiamen. The Virginians were under the command of 22-year-old George Washington, a Virginia planter and surveyor who had been sent on a mission to warn the French to leave the area. Following an intense exchange of fire in which approximately one third of his men died, Washington surrendered and negotiated a withdrawal under arms. This inauspicious battle is now regarded as the opening battle of a much larger war.

British colonial governments were used to operating independently of one another and of the government in London, a situation that complicated negotiations with Native American tribes, whose territories often encompassed land claimed by multiple colonies. The British government attempted to deal with the conflict by calling a meeting of representatives from New York, Pennsylvania, Maryland, and the New England colonies. From June 19 to July 10, 1754, the Albany Congress, as it came to be known, met with the Iroquois in Albany, New York, in order to improve relations with them and secure their loyalty to the British.

But the delegates also declared a union of the American colonies “absolutely necessary for their preservation” and adopted a proposal drafted by Benjamin Franklin. The Albany Plan of Union provided for a president appointed by the king and a grand council of delegates chosen by the assemblies, with each colony to be represented in proportion to its financial contributions to the general treasury. This body would have charge of defense, Native-American relations, and trade and settlement of the west. Most importantly, it would have independent authority to levy taxes. Franklin was a man of many inventions – his was the first serious proposal to organize and unite the colonies that would become the United States. But in the end, none of the colonial legislatures accepted the plan, since they were not prepared to surrender either the power of taxation or control over the development of the western lands to a central authority.

Britain’s superior strategic position and her competent leadership ultimately brought victory in the conflict with France, known as the French and Indian War in America (named for Britain’s enemies, though some natives fought on the British side, too) and the Seven Years’ War in Europe. Really the first true world war, with conflicts stretching from Europe to Asia, it saw only a modest portion of its fighting in the Western Hemisphere.

The Treaty of Paris (1763)

The war in North America officially ended with the signing of the Treaty of Paris in 1763. The British offered France the choice of surrendering either its continental North American possessions east of the Mississippi or the Caribbean islands of Guadeloupe and Martinique, which had been occupied by the British. France chose to cede its North American possessions.
The French viewed the economic value of the Caribbean islands' sugar cane as greater, and the islands as easier to defend, than the furs from the continent. The French philosopher Voltaire referred to Canada disparagingly as nothing more than a few acres of snow. The British, however, were happy to take New France: defense of their North American colonies would no longer be an issue, and they already had ample places from which to obtain sugar. Spain traded Florida to Britain in order to regain Cuba, but it also gained Louisiana from France, including New Orleans, in compensation for its losses. Great Britain and Spain also agreed that navigation on the Mississippi River was to be open to vessels of all nations.

In the aftermath of the French and Indian War, London saw a need for a new imperial design that would involve more centralized control, spread the costs of empire more equitably, and speak to the interests of both French Canadians and North American Indians, now subjects of the British Empire. The colonies, on the other hand, long accustomed to a large measure of independence, expected more, not less, freedom. And, with the French menace eliminated, they felt far less need for a strong British presence. A scarcely comprehending Crown and Parliament on the other side of the Atlantic found themselves contending with colonists trained in self-government and impatient with interference.

Furthermore, the French and Indian War nearly doubled Great Britain's national debt. The Crown would soon impose new taxes on its colonies in an attempt to pay off this debt. These attempts were met with increasingly stiff resistance, until troops were called in to enforce the Crown's authority. These acts ultimately led to the start of the American Revolutionary War.

The incorporation of Canada and the Ohio Valley into the empire necessitated policies that would not alienate the French and Indian inhabitants. Here London was in fundamental conflict with the interests of its American colonists. Fast increasing in population, and needing more land for settlement, they claimed the right to extend their boundaries as far west as the Mississippi River. Hadn't that been how this whole war started in the first place?

Proclamation of 1763

The British government, fearing a series of expensive and deadly Indian wars, believed that former French territory should be opened on a more gradual basis. Reinforcing this belief was Pontiac's Rebellion, a bitter conflict launched in 1763, on the heels of the Treaty of Paris, by a loose confederation of Native American tribes, primarily from the Great Lakes region. The conflict was named for Pontiac, the most prominent of its many native leaders; the members of the alliance were dissatisfied with British policies after the British victory in the French and Indian War (1754–1763).

While the French had long cultivated alliances among certain of the Native Americans, the British post-war approach was essentially to treat the Native Americans as a conquered people, eliminating the benefits and autonomy that the various tribes had enjoyed while the French claimed the region. And while French colonists, most of whom were farmers who seasonally engaged in the fur trade, had always been relatively few, there seemed to be no end of settlers in the British colonies, who wanted to clear the land of trees and occupy it. Shawnees and Delawares in the Ohio Country had been displaced by British colonists in the east, and this motivated their involvement in the war.
On the other hand, Native Americans in the Great Lakes region and the Illinois Country had not been greatly affected by white settlement, although they were aware of the experiences of tribes in the east. Before long, Native Americans who had been allies of the defeated French attacked a number of British forts and settlements. Eight forts were destroyed, and hundreds of colonists were killed or captured, with many more fleeing the region. Warfare on the North American frontier was brutal, and the killing of prisoners, the targeting of civilians, and other atrocities were widespread on both sides. The ruthlessness and treachery of the conflict reflected growing tensions between British colonists and Native Americans, who increasingly felt they were in a war for their very survival. Hostilities came to an end after British Army expeditions in 1764 led to peace negotiations over the next two years. The Native Americans were unable to drive away the British, but the uprising prompted the British government to modify the policies that had provoked the conflict.

The Royal Proclamation of 1763 reserved all the western territory between the Appalachian Mountains and the Mississippi River for use by Native Americans; no British settlers were allowed. With it, the Crown attempted to sweep away every western land claim of the thirteen colonies and to stop westward expansion. Although never effectively enforced, this measure, in the eyes of the colonists, constituted a betrayal. What had they been fighting for the last seven years, if not a right to occupy and settle western lands? Why was the King choosing Native Americans over his own loyal subjects? Thus the Proclamation of 1763 roused the latent suspicions of colonials, who would increasingly see Britain no longer as a protector of their rights, but rather as a danger to them.
This week we're looking at some very small animals, but not animals that we think of as small. Join us for a horrendously cute episode!

- The Barbados threadsnake will protecc your fingertip
- Parvulastra will decorate your thumbnail
- Berthe's mouse lemur will defend this twig
- The bumblebee bat will eat any bugs that come near your finger
- The vaquita, tiny critically endangered porpoise
- The long-tailed planigale is going to steal this ring and wear it as a belt
- A pygmy hippo and its mother will sample this grass
- This Virgin Islands dwarf gecko will spend this dime if it can just pick it up

Welcome to Strange Animals Podcast. I'm your host, Kate Shaw. I talk a lot about the biggest animals on this podcast, so maybe it's time to look at the very smallest animals. I don't mean algae or bacteria or things like that; I mean the smallest species of animals that aren't usually considered especially small.

We'll start with the smolest snek, the Barbados threadsnake. It only lives on a few islands in the Caribbean, notably Barbados. The very largest individual ever measured was only 4.09 inches long, or 10.4 cm, but most are under four inches long. But it's an extremely thin snake, not much thicker than a spaghetti noodle. The Barbados threadsnake mostly eats termites and ant larvae. It spends most of its time in leaf litter or under rocks, hunting for food. The female lays only a single egg, but the baby is relatively large, about half the mother's length when it hatches. That's so cute. Why are small things so cute?

Remember the starfish episode where we talked about the largest starfish? Well, what's the smallest starfish? That would be Parvulastra parvivipara, which is smaller than a fingernail decoration sticker. It grows to about ten millimeters across and is orangey-yellow in color. It lives on the coast of Tasmania in rock pools between low and high tide, called intertidal rock pools.

If you remember the mangrove killifish from a few episodes ago, you'll remember how killifish females are hermaphrodites that produce both eggs and sperm, and usually self-fertilize their eggs to produce tiny clones of themselves. Well, Parvulastra does that too, although like the killifish it probably doesn't always self-fertilize its eggs. But then it does something interesting for a starfish. Instead of releasing its eggs into the water to develop by themselves, Parvulastra keeps the eggs inside its body. And instead of hatching into larvae, the eggs hatch into impossibly tiny miniature baby starfish, which the parent keeps inside its body until the babies are big enough to survive safely on their own.

But what do the baby starfish eat while they're still inside the mother? Well, they eat their SIBLINGS. The larger babies eat the smaller ones, and eventually leave through one of the openings in the parent's body wall, called gonopores. Researchers theorize that one of the reasons the babies leave the parent is to escape being eaten by their siblings. And yes, occasionally a baby grows so big that it won't fit through the gonopores. So it just goes on living inside the parent.

Next, let's look at the smallest primate. The primate order includes humans, apes, monkeys, and a lot of other animals, including lemurs. And the very smallest one is Berthe's mouse lemur. Its body is only 3.6 inches long on average, or 9.2 cm, with a tail that more than doubles its length. Its fur is yellowish and brownish-red. Berthe's mouse lemur was only discovered in 1992.
It lives in one tiny area of western Madagascar, in trees, which means it's vulnerable to the deforestation going on all over Madagascar, and it's considered endangered. It mostly eats insects, but also fruit, flowers, and small animals of various kinds. Its habitat overlaps with another small primate, the gray mouse lemur, but they avoid each other. Madagascar has 24 known mouse lemur species, and they all seem to get along well by avoiding each other and eating slightly different diets. Researchers discover new species all the time, including three in 2016.

Last October we had an episode about bats, specifically macrobats that have wingspans as broad as eagles'. But the smallest bat is called the bumblebee bat. It's also called Kitti's hog-nosed bat, but bumblebee bat is way cuter. It's a microbat that lives in western Thailand and southeast Myanmar, and like other microbats it uses echolocation to find and catch flying insects. Its body is only about an inch long, or maybe 30 millimeters, although it has a respectable wingspan of about 6 ½ inches, or 17 cm. It's reddish-brown in color with a little pig-like snoot, and it only weighs two grams. That's just a tad more than a single Pringle chip weighs.

Because the bumblebee bat is so rare and lives in such remote areas, we don't know a whole lot about it. It was only discovered in 1974 and is increasingly endangered due to habitat loss, since it's only been found in 35 caves in Thailand and 8 in Myanmar, and those are often disturbed by people entering them. The land around the caves is burned every year to clear brush for farming, which affects the bats too. The bumblebee bat roosts in caves during the day and most of the night, only flying out at dawn and dusk to catch insects. It rarely flies more than about a kilometer from its cave, or a little over half a mile, but it does migrate from one cave to another seasonally. Females give birth to one tiny baby a year. Oh my gosh, tiny baby bats.

So what about whales and dolphins? You know, some of the biggest animals in Earth's history? Well, the vaquita is a species of porpoise that lives in the Gulf of California, and it only grows about four and a half feet long, or 1.4 meters. Like other porpoises, it uses echolocation to navigate and catch its prey. It eats small fish, squid, crustaceans, and other small animals. The vaquita is usually solitary and spends very little time at the surface of the water, so it's hard to spot and not a lot is known about it. It mostly lives in shallow water, and it especially likes lagoons with murky water, properly called turbid water, since it attracts more small animals.

Unfortunately, the vaquita is critically endangered, mostly because it often gets trapped in illegal gillnets and drowns. The gillnets are set to catch a different critically endangered animal, a fish called the totoaba. The totoaba is larger than the vaquita and is caught for its swim bladder, which is considered a delicacy in China and is exported on the black market. The vaquita's total population may be no more than ten animals at this point, fifteen at the most, and the illegal gillnets are still drowning them, so it may be extinct within a few years. A captive breeding plan was tried in 2017, but porpoises don't do well in captivity, and the individuals the group caught all died.
Hope isn't lost, though, because vaquita females are still having healthy babies, and there are conservation groups patrolling the part of the Gulf of California where they live to remove gillnets and chase off fishing boats trying to set more of the nets. If you want to learn a little more about the vaquita and how to help it, episode 75 of Corbin Maxey's excellent podcast Animals to the Max is an interview with a vaquita expert. I'll put a link in the show notes.

Next, let's talk about an animal that is not in danger of extinction. Please! The long-tailed planigale, a common marsupial from Australia, is doing just fine. So, if it's a marsupial, it must be pretty big, like kangaroos and wallabies. Right? Nope, the long-tailed planigale is the size of a mouse, which it somewhat resembles. It even has a long tail that's bare of fur. It grows to 2 ½ inches long not counting its tail, or 6.5 cm. It's brown, with longer hind legs than forelegs, so it often sits up like a tiny squirrel. Its nose is pointed and it has little round mouse-like ears. But it has a weird skull. The long-tailed planigale's skull is flattened; in fact, it's no more than 4 mm top to bottom. This helps it squeeze into cracks in the dry ground, where it hunts insects and other small animals, and hides from predators.

The pygmy hippopotamus is a real animal, which I did not know until recently. It grows to about half the height of the common hippo and only weighs about a quarter as much. It's just over three feet tall at the shoulder, or 100 cm. It's black or brown in color and spends most of its time in shallow water, usually rivers. It's sometimes seen resting in burrows along river banks, but no one's sure if it digs these burrows or makes use of burrows dug by other animals. It comes out of the water at night to find food. Its nostrils and eyes are smaller than the common hippo's. Unlike the common hippo, the pygmy hippo lives in deep forests and, as a result, mostly eats ferns, fruit, and various leaves. Common hippos eat more grass and water plants.

The pygmy hippo seems to be less aggressive than the common hippo, but it also shares some behaviors with its larger cousins. For instance, the pooping thing. If you haven't listened to the Varmints! episode about hippos, you owe it to yourself to do so, because it's hilarious. I'll put a link in the show notes to that one too. While the hippo poops, it wags its little tail really fast to spread the poop out across a larger distance. Also like the common hippo, the pygmy hippo secretes a reddish substance that looks like blood. It's actually called hipposudoric acid, which researchers think acts as a sunscreen and an antiseptic. Hippos have delicate skin with almost no hair, so their skin dries out and cracks when they're out of water too long.

The pygmy hippo is endangered in the wild due to habitat loss and poaching, but fortunately it breeds successfully in zoos and lives a long time, up to about 55 years in captivity. For some reason females are much more likely to be born in captivity, so when a male baby is born it's a big deal for the captive breeding program. I'll put a link in the show notes to a video where you can watch a baby pygmy hippo named Sapo and his mother. He's adorable.

Finally, let's finish where we started, with another reptile. The smallest lizard is a gecko, although there are a lot of small geckos out there and it's a toss-up which one is actually smallest on average.
Let's go with the Virgin Islands dwarf gecko, which lives on three of the British Virgin Islands. It's closely related to the other contender for smallest reptile, the dwarf sphaero from Puerto Rico, which is a nearby island, but while that gecko is just a shade shorter on average, it's much heavier. The Virgin Islands dwarf gecko is only 18 mm long not counting its tail, and it weighs 0.15 grams. A paperclip weighs more than this gecko. It's brown with darker speckles and a yellow stripe behind the eyes. Females are usually slightly larger than males. Like other geckos, it can lose its tail once and regrow a little stump of a tail. The Virgin Islands dwarf gecko lives in dry forests and especially likes rocky hills, where it spends a lot of its time hunting for tiny animals under rocks. We don't know a whole lot about it, but it does seem to be rare and only lives in a few places, so it's considered endangered.

In 2011 some rich guy decided he was going to release a bunch of lemurs from Madagascar onto Moskito Island, one of the islands where the dwarf gecko lives. Every conservationist ever told him oh NO you don't, rich man, what is your problem? Those lemurs will destroy the island's delicate ecosystem, drive the dwarf gecko and many other species to extinction, and then die because the habitat is all wrong for lemurs. So Mr. Rich Man said fine, whatever, I'll take my lemurs and go home. And he did, and the dwarf gecko was saved. Look, if you have so much money that you're making plans to move lemurs halfway across the world because you think it's a good idea, I can help take some of that money off your hands.

You can find Strange Animals Podcast online at strangeanimalspodcast.blubrry.net. That's blueberry without any E's. If you have questions, comments, or suggestions for future episodes, email us at [email protected]. We also have a Patreon if you'd like to support us that way. Thanks for listening!
Minimise the negative effects of stress with our support plan:
- Learn what happens to your body when you get stressed
- Understand why you put on weight around the middle
- Use these tactics to combat stress for a healthy body and mind

Stress is part of everyday life, and 75-90% of doctor visits are stress-related, so it's time to understand what's going on in your body and what you can do about it!

UNDERSTANDING THE PHYSICAL SIDE OF STRESS

The adrenal glands are designed to deal with and react to stress; they are small, pyramid-shaped, no larger than a walnut, and sit above your kidneys. When stress hits you, the adrenal glands produce the stress hormones adrenaline and cortisol, in what is known as the 'fight or flight' response.

This is what happens to your body when you get a surge of stress:
- Breathing rate increases – to help oxygenate your muscles and brain, so you can think and move faster, as well as getting rid of excess carbon dioxide.
- Heart pumps faster and blood pressure increases – blood flow diverts away from your organs (including the stomach, bowel, ovaries and testes) and towards your large muscles, to help you escape from danger.
- Blood becomes thicker – thicker, stickier blood is primed with immune cells and ready to clot in case you get wounded.
- You feel more alert – fat and sugar stores break down and release more energy into the bloodstream (= high blood sugar).
- Pupils dilate – to help your visual awareness.
- Urgency to go to the bathroom – your bladder and bowel muscles relax to let go of extra weight that could slow down your escape.

These physical changes are relatively useless in modern-day stress, because we don't tend to do much fighting or running away! You could be sitting in your car, at your desk or in a confined space like a queue or train carriage, and the excess energy that your body has helpfully produced has nowhere to go.

SHORT-TERM EFFECTS OF STRESS

The knock-on stress symptoms affect your whole body:
- Digestive issues like churning, diarrhoea or constipation
- Skin conditions like psoriasis and acne can get worse
- Disrupted sleep
- Feelings of depression
- Headaches and migraines
- Muscle aches/pains/twitching

Ideally, these symptoms should be temporary, because stress hormones should only stick around in your body for 2 hours.

*Tip*: If you have the opportunity (and you're fit enough), you can reduce the negative impact of stress with a short blast of exercise: run up a couple of flights of stairs, or do some squats and lunges to expend the extra energy in your bloodstream.

Stress hormones increase sugar in your blood and encourage the laying down of fat around your middle.

LONG-TERM EFFECTS OF STRESS

If you're feeling stressed every day, or even several times per day, and this is ongoing, then the physical changes (that should only be temporary) become damaging to your health.
- Fat around the middle – when you don't use up the extra sugar in your bloodstream, it causes weight gain, especially around your middle (your body's way of protecting your vital organs), and inflammation.
- Food cravings – craving sugar and fat can be a survival mechanism.
- Metabolism slows down – this is another one of your body's protective mechanisms.
- Painful digestive issues, including IBS – because your digestion has ground to a halt!
- Fertility and libido are reduced – your body is concentrating on survival, and believes that your environment is not suitable for a baby, so stressed individuals can have problems conceiving.
- Lowered immunity – cortisol switches off the immune system.
- Worse PMS or menopausal symptoms – a good balance of sex hormones isn't possible when your body is focused on producing stress hormones instead.
- Raised heart disease & stroke risk – your heart rate and blood pressure rise, and blood becomes thicker, all of which increase the chance of complications.
- Raised diabetes & cancer risk – due to the high blood sugar levels you experience when under stress.
- Acceleration of ageing – when your body spends all its energy on fight or flight, your repair and recovery functions (that keep your skin healthy and aches and pains at bay) are reduced.
- Adrenal fatigue – when your adrenal glands simply get worn out. This is a serious condition where sufferers can barely get out of bed in the mornings.

When you are under stress, your energy is diverted away from your repair and recovery functions, and this results in faster ageing!

If you're under chronic stress, you can take additional load off your body by avoiding the following choices:
- Sugar (chocolate, sweets, cake, cookies) – the sugar rush always ends in a sugar crash, which triggers more cortisol in the body.
- Refined carbohydrates (white flour, potatoes, cereals) – these are digested quickly, your body turns them to sugar, and they have the same effect as sugar.
- Tea & coffee – caffeine stimulates adrenaline (this is why you feel alert and energised after drinking coffee). But if you're already stressed, it will overwork the adrenals, putting the body under more strain.
- Smoking & alcohol – these are toxins; they rob your body of nutrients and overwork your detoxification organs.
- Hard-to-digest foods – foods that don't agree with you, such as those known to irritate the gut like wheat and milk, can put the body under more strain because they activate your immune system.

Stressed? Take an additional load off your system by avoiding stimulants, sugar and refined carbs.

RESPECT YOUR NERVOUS SYSTEM

There are two sides to your nervous system:
- The sympathetic nervous system, known as your 'fight or flight' stress response.
- The parasympathetic nervous system, known as 'rest and digest', the side of your nervous system that is required for repair, digestion and immunity.

These two sides of your nervous system operate in turn, not at the same time.
- Don't eat when you're stressed – if you're in fight or flight mode at meal times, you won't be digesting your food properly, leading to digestive issues. This is why eating at your desk is also a bad idea.
- Exercise when your stomach is empty – a workout taps into your fight or flight mode, so make sure your body isn't trying to digest a meal at the same time.

The anti-stress nutrition rules

1. Eat foods that release energy slowly – protein, fibre and fat.
- Protein foods: Organic Burst Spirulina (add 1 tsp to a green smoothie or to a glass of water with lemon juice), organic meat from grass-fed animals, game meats, organic poultry, wild-caught fish, eggs, nuts, seeds, natural yoghurt, sprouted beans/seeds.
- Fibre-rich foods: vegetables, fruit, soaked seeds such as Organic Burst Chia.
- Healthy fats: coconut oil, olive oil, nuts, oily fish, avocados, organic butter, eggs and meat from grass-fed animals.
- Focus on weaning off, reducing and replacing, because going cold turkey can put your body under even more strain!
- Don't start the day with caffeine; drink water/herbal tea/lemon water first.
- For caffeine-free energy, add 1 tsp Organic Burst Maca to a mug of nut milk.
- Replace cola or energy drinks with green juices and water.
- Eat energy-balancing snacks with protein.
- Magnesium – in dark green leafy vegetables, seeds, almonds.
- Essential fats – in oily fish, walnuts, olives.
- L-Carnitine – an amino acid found in red meat (go for organic and grass-fed 1-2 times per week) or Organic Burst Spirulina – take 1-2 tsp in water or smoothies per day.
- Vitamin C – in fruits, green leafy vegetables, sprouts, tomatoes, broccoli.
- B vitamins – in green vegetables, beans, peas and Spirulina.
- Zinc & iron – in seafood, meat, Brazil nuts.
- Water – to aid delivery of energy and nutrients to the cells.
- Organic Burst Maca is an adaptogen that supports healthy hormonal balance and provides nourishment, as well as improving your energy levels. Try 1 tsp in your breakfast bowl.
- Don't wait until you are starving to eat.
- Stop eating before you feel full.
- Never exercise on a full stomach.
- Take time out to enjoy food.
- Give each meal your full attention.
- Digestion starts in the mouth – chew your food to start the release of digestive juices. Remember, your stomach doesn't have teeth!

A FINAL NOTE

In this article the focus has been on managing the effects of stress with diet and lifestyle measures; we haven't looked at addressing the root causes of your stress, which of course you mustn't ignore! You may need to speak to your colleagues or family members to find support and solutions. Other great measures include yoga, having an Epsom salt bath, massage, sport, and putting your phone down! Good luck and stay chilled.
Paul Kling had already learned fifty-two violin concertos by the time the Nazis invaded his home in Brno, Moravia. By age seven he had performed Mozart's A Major Violin Concerto and Bach's A Minor Violin Concerto with the Vienna Symphony Orchestra. In 1941, several months before the thirteen-year-old Kling was to receive the equivalent of a Bachelor's degree, he was expelled from school for being an 'undesirable element'.

By the time Paul Kling arrived in Terezín on 9 April 1943, at the age of fifteen, the camp's musical activities were in full swing. Hidden in the bed sheets he had brought with him was his mute violin. He felt fortunate that it had not been confiscated upon arrival. He did not bring any scores with him since, as he said in an interview in 2002, 'I had everything memorised. And I wasn't thinking of, you know, staying there for a long vacation.' Upon arrival he was assigned to outdoor manual labour. He remembers only that it was purposeless work and not good for his hands. Having already experienced the Czech version of the Nuremberg laws for several years, he did not find the restrictions of Terezín much of a shock.

Soon after his arrival, Kling was moved to a building that housed only young people. This policy of the Council of Elders was intended to help the young people survive, and at Terezín the Jugendfürsorge (Youth Care Department) was established for this purpose. Karel Reiner, a composer assigned to the Youth Care Department, learned that Kling was a violinist and arranged for him to get involved in the camp's musical activities through the Freizeitgestaltung (Leisure Time Committee).

The SS established the Freizeitgestaltung during the autumn of 1942. It acted as a cultural department that developed programmes, provided instruments for musicians, scheduled concerts, recitals, cabarets and poetry readings, and even arranged practice sites and times for the performers. These cultural activities were permitted, but carefully supervised by Dr Siegfried Seidl, the SS commander of Terezín. At its peak, the Freizeitgestaltung had 276 members, a small minority of the camp's total population. Charlotte Opfermann, author of The Art of Darkness, was sceptical of the freedom given to the Jews by the Freizeitgestaltung, describing it as a mirage supposedly run by and for prisoners. Yet the establishment of the Freizeitgestaltung clearly allowed cultural life to continue and to receive a platform. To become members, prisoners were required to submit an application. If accepted, they became eligible for special housing, additional food rations and less arduous labour.

Kling's experience at Terezín prior to joining the Freizeitgestaltung was typical for prisoners of the camp. Once he was fortunate enough to be selected, however, he was able to spend his days practising, rehearsing and performing in different ensembles. He was among the few prisoners given the privilege of pursuing their artistic talent full-time, with no other work demanded of them.

Before he came to the camp, Kling's repertoire had been limited to showpieces, violin concertos and a few solo Bach works, including the Chaconne, the Fugue in G Minor and the Prelude in E Major. His sonata repertoire was limited to the Beethoven Sonata for Piano and Violin in A Major ('Kreutzer') and the Brahms Sonata for Violin and Piano in G Major. At Terezín, Kling split his time between practising on his own and rehearsing with chamber ensembles and orchestras.
He began to perform in the entertainment orchestra that played in the coffee house, as well as in the string orchestra, the opera orchestra and several chamber ensembles. When he was later asked why he continued to practise in such an environment, he responded, 'I was practising for an unknown future… I was practising to get better.'

Kling developed a special relationship with, and admiration for, the pianist Gideon Klein. Klein was tall, handsome and 'slightly demonic', according to Kling, who speaks of him as 'a fascinating figure and wonderful to work with – a Czech Bernstein. He had the gift of explaining things'. Kling was honoured to be invited to play in a piano trio with Klein and Friedrich Mark, a talented cellist. Kling had little chamber music experience, but Klein was committed to encouraging the growth of young musicians. Together they played the Brahms B Major, opus 8, and Beethoven's E Flat Major, opus 70 no. 2, piano trios. 'Still those trios are for me the highlight,' says Kling. 'I still love to play the B Major Brahms.'

Viktor Ullmann, a composer and music critic at Terezín, reviewed the performance of the two piano trios, saying:

The performance is noteworthy for its excellent preparation, done by Gideon Klein, who himself mastered the difficult piano part with élan and reliable feeling for the style. Paul Kling made his debut on the violin with a lot of success, and he is on the way up and very talented. Friedrich Mark has already proven himself often as a splendid chamber music player.

Kling was also a member of the Stadtkapelle (town orchestra), conducted by Peter Deutsch, former conductor of the Royal Orchestra in Copenhagen. Kling remembers playing 'entertainment music' with this orchestra in the gazebo of the main square. This concert was filmed and incorporated by the Germans into their propaganda film, Theresienstadt – ein Dokumentarfilm aus dem jüdischen Siedlungsgebiet (Theresienstadt: a documentary from the Jewish settlement). Kling played a medley of Dvořák tunes arranged for orchestra, including an excerpt from the Quintet in A Major and other 'salon' music. This orchestra was specifically set up as part of the beautification of the camp in preparation for the Red Cross visit. 'I remember, like today, that I felt it was beneath my dignity to have to play in such a thing.'

Kling was the violinist chosen to play in Viktor Ullmann's opera, The Kaiser from Atlantis. He was chosen, by his own reckoning, because he 'was in demand'. Perhaps other violinists did not think it virtuosic enough, or perhaps they wanted to 'let Kling suffer with that piece', he said on reflection. The opera was never performed at Terezín and turned out to be Kling's last ensemble work before he was sent to Auschwitz on 28 September 1944.

There is a popular notion that composers wrote music as a form of resistance against the Nazis. Kling does not feel that resistance was on the mind of any of the composers at Terezín. He admits that his youth may have made him too naïve to recognise this at the time, but even looking back, he does not believe this was the purpose behind the compositions.

David Bloch, a musicologist at Tel Aviv University who specialises in music from Terezín, asked Kling how he continued to rehearse and perform amidst the depraved conditions, the hunger and the threat of the transports to the East. Kling said that he preferred not to discuss the history and politics of the period. He would rather remember Terezín as a stage in his development as a violinist.
'Of course I was self-centered, as anybody would be, professionally speaking, so all that mattered to me was that I could practise. And I would practise in basements, and I had friends who made sure I had a place to practise.' Furthermore, Kling says:

I think that especially in the moment when you don't really know what the future is, you do the best to satisfy yourself or whatever you know… I was of course very young and optimistic so I would have, I think, fallen into the category of people who assumed there was a life after Terezín.

He continues by explaining that:

There was no happiness. It was survival, as you know. Culture is very often a survival mechanism for nations, as it is for smaller groups… Because, after all, everybody felt that there is perhaps more chance in surviving if you are unified at least in spirit if not in anything else…

In another interview he elaborates: 'People had to sustain a civilised life under the conditions and needed something other than language. Culture was needed.'

Paul Kling was among the lucky few to survive the Holocaust. Once he returned to Prague after the war, he entered the Music Academy of Prague and practised 'like mad'. In 1947, at the age of nineteen, he was asked at the last minute to substitute for the soloist with the Prague Symphony Orchestra in the Brahms Concerto in D Major. This highly successful performance took his career to a new level. He was soon invited to become concertmaster of the NHK Symphony in Tokyo, and later concertmaster of the Louisville Symphony. In 1977 he became a professor, and eventually Dean of Music, at the University of Victoria. Over the years he soloed with many orchestras in the United States and abroad, receiving rave reviews. A concert performed in 1961 at Town Hall in New York City was described in the New York Herald Tribune:

Beauty of tone, elegance of phrasing, and complete directness marked the recital by violinist Paul Kling at Town Hall. Mr Kling played Brahms with a distinct regard for its highly romantic character, without, however, the slightest suggestion of banality. It had an easy, flowing quality about it that attested to the violinist's affinity for savoring a melodic line while never losing sight of the various technical demands necessary to hold everything in place. Much the same can be said for Beethoven's 'Kreutzer' Sonata, except that the work is far more exacting in its emotional content. But Mr Kling was equal to all its needs, playing it with a sure hand for detail and the kind of insight that made one continually sit up and listen. A beautiful performance!

In 1998, the President of the Republic of Austria awarded Kling the Austrian Cross of Honour for Arts and Letters. Prior to his retirement, Kling taught at the University of British Columbia in Vancouver, Canada. Paul Kling died while resting in 2005. He is survived by his wife of five decades, Taka Kling, and their daughter, Karen Kling.

Kling's own understanding of his life understates his talent but emphasises, perhaps rightly, the role of good fortune in the career of any performer:

I was lucky with everything, let's face it. With all my unluck, I was lucky [in Terezín]. I was lucky in Auschwitz. I was lucky with my career after the war when I was pinch-hitting for an absent soloist in the Brahms Concerto at 36 hours' notice and didn't have time to get nervous and got rave reviews. Lucky. Lucky to get a Guarneri violin, lucky to get jobs I never really asked for.
You know I went to Vienna, when I left Czechoslovakia, actually stupidly, I didn't even think of that the whole world isn't waiting for Kling to come and play the Paganini concerto.

Sources:
Paul Kling, interview by author, 2 September 2002, Vancouver, Canada, mini-disk recording, in author's possession.
Paul Kling, interview by David Bloch, 12 October 1989, Victoria, British Columbia, in the possession of the interviewer.
Elena Makarova, Sergei Makarov and Victor Kuperman, University Over the Abyss.
Charlotte Opfermann, The Art of Darkness (Houston: University Trace Press, 2002).
Joža Karas, Music in Terezín 1941-1945 (New York: Beaufort Book Publishers, in association with Pendragon Press, 1985).
Judith Robinson, review of Paul Kling recital (Town Hall, New York), New York Herald Tribune, 1 January 1961, 6.
Friday, November 27, 2020

Ancient people relied on coastal environments to survive the Last Glacial Maximum

It has been clear to me that the first phase of human cultural evolution (45,000 BP through 10,000 BP) took place alongside the marine environment, for multiple reasons, not least the availability of ample food resources in the form of shellfish. The best possible proof of this comes from the Pacific Northwest, whose mountains were proof against interference by external tribes for over ten thousand years. That is exactly what every coastline looked like until the invention of agriculture itself. That the Northwest produced huge communities is actually impressive, but again a measure of the fishery.

This happened elsewhere but was ultimately overwritten by the productive power of deltaic and herd agriculture. It also made possible the actual rise of agriculture itself, by providing a working social template to support such an enterprise. After all, hunting bands lack the large demand for food of a coastal village and are not naturally up for settling, even if they plant beans here and there. They are also far too vulnerable. This conjecture has been badly missed by scholarship, not least because the evidence is now all deep underwater.

Ancient people relied on coastal environments to survive the Last Glacial Maximum
NOVEMBER 24, 2020

Excavations at Waterfall Bluff, South Africa. Credit: Erich Fisher

Humans have a longstanding relationship with the sea that spans nearly 200,000 years. Researchers have long hypothesized that places like coastlines helped people mediate global shifts between glacial and interglacial conditions and the impact that these changes had on local environments and resources needed for their survival. Coastlines were so important to early humans that they may even have provided key routes for the dispersal of people out of Africa and across the world.

Two new multidisciplinary studies published in the journals Quaternary Science Reviews and Quaternary Research document persistent human occupation along the South African eastern seaboard from 35,000 years ago to 10,000 years ago. In this remote and largely unstudied location, known as the "Wild Coast", researchers have used a suite of cutting-edge techniques to reconstruct what life was like during this inclement time and how people survived it.

The research is being conducted by an international and interdisciplinary collaboration of scientists studying the coastal adaptations, diets and mobility of hunter-gatherers across glacial and interglacial phases of the Quaternary in coastal South Africa. The research team is led by Erich Fisher, Institute of Human Origins at Arizona State University; Hayley Cawthra, South Africa Council for Geoscience and Nelson Mandela University; Irene Esteban, University of the Witwatersrand; and Justin Pargeter, New York University. Together, these scientists have been leading excavations at the Mpondoland coastal rock shelter site known as Waterfall Bluff for the last five years.

These excavations have uncovered evidence of human occupations from the end of the last ice age, approximately 35,000 years ago, through the complex transition to the modern time, known as the Holocene. Importantly, the researchers also found human occupations from the Last Glacial Maximum, which lasted from 26,000 to 19,000 years ago. The Last Glacial Maximum was the period of maximum global ice volume, and it affected people and places around the world.
It led to the formation of the Sahara desert and caused major reductions in the Amazonian rainforest. In Siberia, the expansion of polar ice caps led to drops in global sea levels, creating a land bridge that allowed people to cross into North America. In southern Africa, archaeological records from this globally cold and dry time are rare, because there were widespread movements of people as they abandoned increasingly inhospitable regions. Yet records of coastal occupation and foraging in southern Africa are even rarer.

The drops in sea level during the Last Glacial Maximum and earlier glacial periods exposed an area of the continental shelf across southern Africa nearly as large as the island of Ireland. Hunter-gatherers wanting to remain near coastlines during these times had to trek out onto the exposed continental shelf. Yet those records are gone now, either destroyed by rising sea levels during warmer interglacial periods or submerged under the sea.

The research team (the Mpondoland Paleoclimate, Paleoenvironment, Paleoecology, and Paleoanthropology Project, or P5 Project) has hypothesized that places with narrow continental shelves may preserve these missing records of glacial coastal occupation and foraging. "The narrow shelf in Mpondoland was carved when the supercontinent Gondwana broke up and the Indian Ocean opened. When this happened, places with narrow continental shelfs restricted how far and how much the coastline would have changed over time," said Hayley Cawthra.

Map of the Waterfall Bluff area in South Africa. Credit: Erich Fisher

In Mpondoland, a short section of the continental shelf is only 10 kilometers wide. "That distance is less than how far we know past people often traveled in a day to get sea foods, meaning that no matter how much the sea levels dropped anytime in the past, the coastline was always accessible from the archaeological sites we have found on the modern Mpondoland coastline. It means that past people always had access to the sea, and we can see what they were doing because the evidence is still preserved today," said Erich Fisher.

The oldest record of coastal foraging, also found in southern Africa, shows that people have relied on coastlines for food, water and more favorable living conditions over tens of thousands of years. In the study published in the journal Quaternary Research, led by Erich Fisher, a multidisciplinary team of researchers documents the first direct evidence of coastal foraging in Africa during a glacial maximum and across a glacial/interglacial transition.

According to Fisher, "The work we are doing in Mpondoland is the latest in a long line of international and multidisciplinary research in South Africa revealing fantastic insights into human adaptations that often occurred at or near coastlines. Yet until now, no one had any idea what people were doing at the coast during glacial periods in southern Africa. Our records finally start to fill in these longstanding gaps and reveal a rich, but not exclusive, focus on the sea. Interestingly, we think it may have been the centralized location between land and sea and their plant and animal resources that attracted people and supported them amid repeated climatic and environmental variability."
To date this evidence, P5 researchers collaborated with South Africa's iThemba LABS and researchers at the Centre for Archaeological Science of the University of Wollongong to develop one of the highest-resolution chronologies at a southern African Late Pleistocene site, showing persistent human occupation and coastal resource use at Waterfall Bluff from 35,000 years ago to 10,000 years ago. This evidence, in the form of marine fish and shellfish remains, shows that prehistoric people repeatedly sought out dense and predictable seafoods.

This finding complements the results of a companion study published in the journal Quaternary Science Reviews, in which paleobotanists and paleoclimatologists, led by Irene Esteban, used different lines of evidence to investigate interactions between prehistoric people's plant-gathering strategies and climate and environmental changes over the last glacial/interglacial phase. It is the first multiproxy study in South Africa to combine preserved plant pollen, plant phytoliths, macrobotanical remains (charcoal and plant fragments) and plant wax carbon and hydrogen isotopes from the same archaeological archive. According to Irene Esteban, "It is not common to find such good preservation of different botanical remains, both of organic and inorganic origin, in the archaeological record."

Waterfall Bluff view from the ocean. Credit: Erich Fisher

Each one of these records preserves a slightly different window onto the past. This let the researchers compare the different records to study how each one formed and what it represented, both individually and together. "Ultimately," said Esteban, "it allowed us to study interactions between hunter-gatherer plant-gathering strategies and environmental changes across a glacial-interglacial transition."

Today, Mpondoland is characterized by afrotemperate and coastal forests, as well as open woodlands interspersed with grasslands and wetlands. Each of these vegetation types supports different plant and animal resources. One of the key findings of this study is that these vegetation types persisted across glacial and interglacial periods, albeit in varying amounts due to changes in sea levels, rainfall and temperature. The implication is that people living in Mpondoland in the past had access to an ever-present and diverse suite of resources that let them survive there when they couldn't in many other places across Africa.

Importantly, this study showed that people who lived at Waterfall Bluff collected wood from coastal vegetation communities during both glacial and interglacial phases. It is another link to the coastline for the people living at Waterfall Bluff during the Last Glacial Maximum. In fact, the exceptional quality of the archaeological and paleoenvironmental records demonstrates that those hunter-gatherers targeted different, but specific, coastal ecological niches, all the while collecting terrestrial plant and animal resources from throughout the broader landscape and maintaining links to highland locales inland.

"The rich and diverse resource bases targeted by Mpondoland's prehistoric hunter-gatherers speak to our species' unique generalist-specialist adaptations," said Justin Pargeter. "These adaptations were key to our species' ability to survive wide climate and environmental fluctuations while maintaining long-distance cultural and genetic connections."

Together, these papers enrich our understanding of the adaptive strategies of people facing widespread climatic and environmental changes.
They also provide a complement to perspectives on hunter-gatherer behavioral responses to environmental shifts, which are often biased by ethnographic research on African hunter-gatherers living in more marginal environments. In the case of Mpondoland, it is now evident that at least some people sought out the coast, probably because it provided centralized access to fresh water as well as to both terrestrial and marine plant and animal resources, which supported their daily survival.

According to Esteban and Fisher, "These studies are just a drop in the ocean compared to the richness of the archaeological record we already know is preserved in Mpondoland. We have high expectations about what else we will discover there with our colleagues in South Africa and abroad when we can get back to the field safely in this post-COVID world."
Culture, Power and Politics in Treaty-Port Japan, 1854-1899
Renaissance Books, 2018

Compared with their counterparts in China, the Japanese treaty ports cast a small shadow. They were far fewer – only four really mattered – and lasted for just under fifty years, while the Chinese ports made their centenary. Yet the Japanese ports were important. The thriving modern cities of Yokohama and Kobe had their origins as treaty ports. Nagasaki, a major centre of foreign trade since at least the sixteenth century, may not have owed so much to its treaty-port status, but that status was a factor in its modern development.

Korean Art from the 19th Century to the Present

From artists' first encounters with oil painting in the late nineteenth century to the varied and vibrant creative outputs of the 2000s, the book covers a critical and, from a cultural perspective, revolutionary period, signified by the breakdown of earlier artistic conventions and the rise of new art forms. Within this historical trajectory, Charlotte Horlyck explores artists' interpretations of new and traditional art forms, ranging from oil and ink paintings to video art, multimedia installations, ready-mades and performance, and their questions about the role of art and the artist's position within society. This book will appeal equally to general and specialist readers wanting to explore this rich and fascinating epoch in Korea's cultural history.

Speed Up Your Korean: Strategies to Avoid Common Errors (Speed Up Your Language Skills)
Brown, Lucien; Yeon, Jaehoon

Word order, honorifics, terms of address and idiomatic expressions are just some of the areas that cause confusion for students of Korean. Learning how to avoid the common errors that arise repeatedly in these areas is an essential step in successful language learning. Speed Up Your Korean is a unique and innovative resource that identifies and explains these errors, enabling students to learn from their mistakes while enhancing their understanding of the Korean language.

SamulNori is a percussion quartet which has given rise to a genre of the same name that is arguably Korea's most successful 'traditional' music of recent times. Today there are dozens of amateur and professional samulnori groups. There is a canon of samulnori pieces, closely associated with the founding quartet but played by all, and many creative evolutions on the basic themes, made by the rapidly growing number of virtuosic percussionists. And the genre is the focus of an abundance of workshops, festivals and contests. Samulnori is taught in primary and middle schools; it is part of Korea's national education curriculum. It has dedicated institutes, and there are a number of workbooks devoted to helping wannabe 'samulnorians'. It is a familiar part of Korean performance culture, at home and abroad, in concerts but also in films and theatre productions.

SamulNori uses four instruments: the kkwaenggwari and ching (small and large gongs) and the changgo and puk (drums). These are the instruments of local percussion bands and itinerant troupes that trace back many centuries, but samulnori is a recent development of these older traditions: it was first performed in February 1978. This volume explores this vibrant percussion genre, charting its origins and development, the formation of the canon of pieces, teaching and learning strategies, new evolutions, and current questions relating to maintaining, developing and sustaining samulnori in the future.
A guide to Korean instruments, focussing on seven key string, percussion and wind instruments: kayagum, komun'go, haegum, changgo, p'iri, taegum and tanso. Each instrument is discussed historically and in its regional context. Different versions of each, and related instruments, are described. Playing methods and techniques are given, coupled with photographs and other illustrations; notations are then introduced, building to sets of exercises and pieces given in both Korean mensural and staff notations. Two additional chapters give an historical overview, a broad consideration of the different notation systems used in Korea, and an organological account of all traditional Korean instruments. This is a rewritten, expanded and newly illustrated edition of a book originally published in 1988.

Under the Ancestors' Eyes presents a new approach to Korean social history by focusing on the origin and development of the indigenous descent group. Martina Deuchler maintains that the surprising continuity of the descent-group model gave the ruling elite cohesion and stability and enabled it to retain power from the early Silla (fifth century) to the late nineteenth century. This argument, underpinned by a fresh interpretation of the late-fourteenth-century Koryŏ-Chosŏn transition, illuminates the role of Neo-Confucianism as an ideological and political device through which the elite regained and maintained dominance during the Chosŏn period. Neo-Confucianism as espoused in Korea did not level the social hierarchy but instead tended to sustain the status system. In the late Chosŏn, it also provided ritual models for the lineage-building with which local elites sustained their preeminence vis-à-vis an intrusive state. Though Neo-Confucianism has often been blamed for the rigidity of late Chosŏn society, it was actually the enduring native kinship ideology that preserved the strict social-status system. By utilizing historical and social-anthropological methodology and analyzing a wealth of diverse materials, Deuchler highlights Korea's distinctive elevation of the social over the political.

The Korean Peninsula lies at the strategic heart of East Asia, between China, Russia and Japan, and has been influenced in different ways and at different times by all three of them. Across the Pacific lies the United States, which has also had a major influence on the peninsula since the first encounters in the mid-nineteenth century. Faced by such powerful neighbors, the Koreans have had to struggle hard to maintain their political and cultural identity. The result has been to create a fiercely independent people. If they have from time to time been divided, the pressures towards unification have always proved strong. This third edition of Historical Dictionary of the Republic of Korea covers the country's history through a chronology, an introductory essay, appendixes, and an extensive bibliography. The dictionary section has over 500 cross-referenced entries on important personalities, politics, economy, foreign relations, religion, and culture. This book is an excellent access point for students, researchers, and anyone wanting to know more about the Republic of Korea.

The Handbook of Korean Linguistics
Editors: Brown, Lucien; Yeon, Jaehoon

The Handbook of Korean Linguistics presents state-of-the-art overviews of linguistic research into the Korean language. The volume is divided into six sections: The Sounds of Korean, Korean Morphology and Syntax, the Syntax-Semantics Interface, Discourse and Pragmatics, Language Acquisition, and Varieties of Korean.
The editors have brought together contributions from a wide range of international authors, allowing for a variety of theoretical viewpoints as well as coverage of topics such as proto-Korean, present-day language policies in North and South Korea, social aspects of Korean as a heritage language, honorifics, and an in-depth study of syntactic phenomena of the language. The first authoritative reference work of its kind in the field, this Handbook is certain to become a key resource for researchers, graduate students, and advanced undergraduates studying Korean linguistics or linguistic typology.

Editors: Lee, Hyunseon; Segal, Naomi
Peter Lang International Academic Publishers, 2015
As a uniquely hybrid form of artistic output, straddling music and theatre and high and popular culture, opera offers vast research possibilities not only in the field of music studies but also in the fields of media and cultural studies. Using the exotic legacy of the fin-de-siècle as its primary lens, this volume explores the shifting relationships between the multimedia genre of opera and the rapidly changing world of visual cultures. It also examines the changing aesthetics of opera in composition and performance and historical (dis)continuity, including the postcolonial era. The book comprises eleven interdisciplinary essays by scholars from eight countries, researching in music, theatre, literature, film and media studies, as well as a special contribution by opera director Sir Jonathan Miller. The book begins with an examination of operatic exoticism in various cultural contexts, such as French, Latin American and Arabic culture. The next sections focus on the most beloved figures in opera performance – Salome, Madame Butterfly and Aida – and performances of these operas through history. Further interpretations of the operas in film and new media are then considered. In the final section, Sir Jonathan Miller reflects on the 'afterlife' of opera.

Editors: Horlyck, Charlotte; Pettid, Michael J.
University of Hawai'i Press, 2014
Death and the activities and beliefs surrounding it can teach us much about the ideals and cultures of the living. While biologically death is an end to physical life, this break is not quite so apparent in its mental and spiritual aspects. Indeed, the influence of the dead over the living is sometimes much greater than before death. This volume takes a multidisciplinary approach in an effort to provide a fuller understanding of both historic and contemporary practices linked with death in Korea. Contributors from Korea and the West incorporate the approaches of archaeology, history, literature, religion, and anthropology in addressing a number of topics organized around issues of the body, disposal of remains, ancestor worship and rites, and the afterlife. The first two chapters explore the ways in which bodies of the dying and the dead were dealt with from the Greater Silla Kingdom (668–935) to the mid-twentieth century. Grave construction and goods, cemeteries, and memorial monuments in the Koryŏ (918–1392) and the twentieth century are then discussed, followed by a consideration of ancestral rites and worship, which have formed an inseparable part of Korean mortuary customs since premodern times. Chapters address the need to appease the dead in both shamanic and Confucian contexts. The final section of the book examines the treatment of the dead and how the state of death has been perceived.
Ghost stories provide important insight into how death was interpreted by common people in the Koryŏ and Chosŏn (1392–1910) periods, while nonconformist narratives of death such as the seventeenth-century romantic novel Kuunmong point to a clear conflict between Buddhist thought and practice and official Neo-Confucian doctrine. In keeping with unendorsed views on death, the final chapter explores how death and the afterlife were understood by early Korean Catholics of the eighteenth and nineteenth centuries.

Key Papers on Korea: Essays Celebrating 25 Years of the Centre of Korean Studies, SOAS, University of London
Edited and Introduced by Jackson, Andrew David
Key Papers on Korea is a commemorative collection of papers celebrating 25 years of the Centre of Korean Studies (CKS), SOAS, University of London, written by senior academics and emerging scholars. The subjects covered in this collection reflect the different research interests and strengths of the CKS and include historical perceptions of ancient kingdoms in Manchuria, North Korean propaganda literature, the problematic history of Sino-North Korean borderlands, the millenarian aspects of Won Buddhism, and the importance of the years 1910-11 in the development of Korean music. The collection is framed by two pieces on SOAS, which have been commissioned exclusively for this publication: an introduction that examines the 60-year history of Korean studies at SOAS, and a closing paper that sheds light on the rare collections of Korean art held at SOAS.
If your group is interested in introducing poetry to your reading list, but isn't sure how, take a look at the books below for a variety of approaches. Beyond just reading collections of poems, you can also explore essays about poetry, novels written in verse, and memoirs that feature lyric, poetic language. Here are our recommendations for where to start! If you'd also like some suggestions for fun ways to integrate poems into your group or for approaches to discussing poetry, check out our post that addresses these very questions.

by Mary Oliver
In Upstream, a collection of essays, revered poet Mary Oliver reflects on her willingness, as a young child and as an adult, to lose herself within the beauty and mysteries of both the natural world and the world of literature. Emphasizing the significance of her childhood "friend" Walt Whitman, through whose work she first understood that a poem is a temple, "a place to enter, and in which to feel," and who encouraged her to vanish into the world of her writing, Oliver meditates on the forces that allowed her to create a life for herself out of work and love. Upstream follows Oliver as she contemplates the pleasure of artistic labor, her boundless curiosity for the flora and fauna that surround her, and the responsibility she has inherited from Shelley, Wordsworth, Emerson, Poe, and Frost, the great thinkers and writers of the past, to live thoughtfully, intelligently, and to observe with passion. Throughout this collection, Oliver positions not just herself upstream but us as well as she encourages us all to keep moving, to lose ourselves in the awe of the unknown, and to give power and time to the creative and whimsical urges that live within us.

by Jane Hirshfield
"Poetry," Jane Hirshfield has said, "is language that foments revolutions of being." In ten eloquent and highly original explorations, she unfolds and explores some of the ways this is done—by the inclusion of hiddenness, paradox, and surprise; by a perennial awareness of the place of uncertainty in our lives; by language's own acts of discovery; by the powers of image, statement, music, and feeling to enlarge in every direction. The lucid understandings presented here are gripping and transformative in themselves. Investigating the power of poetry to move and change us becomes in these pages an equal investigation into the inhabitance and navigation of our human lives.

by Tony Hoagland
"Live American poetry is absent from our public schools. The teaching of poetry languishes, and that region of youthful neurological terrain capable of being ignited only by poetry is largely dark, unpopulated, and silent, like a classroom whose shades are drawn." —Tony Hoagland
Twenty Poems That Could Save America presents insightful essays on the craft of poetry and a bold conversation about the role of poetry in contemporary culture. At the heart of this book is an honesty and curiosity about the ways poetry can influence America at both the private and public levels.

by Tracy K. Smith
The youngest of five children, Tracy K. Smith was raised with limitless affection and a firm belief in God by a stay-at-home mother and an engineer father. But just as Tracy is about to leave home for college, her mother is diagnosed with cancer, a condition she accepts as part of God's plan. Ordinary Light is the story of a young woman struggling to fashion her own understanding of belief, loss, history, and what it means to be black in America.
In lucid, clear prose, Smith interrogates her childhood in suburban California, her first collision with independence at Harvard, and her Alabama-born parents' recollections of their own youth in the Civil Rights era. Here is a universal story of being and becoming, a classic portrait of the ways we find and lose ourselves amid the places we call home.

by Seth Greenland
A modern love story, I Regret Everything confronts the oceanic uncertainty of what it means to be alive, and in love. Jeremy Best, a Manhattan-based trusts and estates lawyer, leads a second life as published poet Jinx Bell. To his boss's daughter, Spaulding Simonson, at 33 years old Jeremy is already halfway to dead. When Spaulding, an aspiring 19-year-old writer, discovers Mr. Best's poetic alter ego, the two become bound by a devotion to poetry, and an awareness that time in this world is limited. Their budding relationship strikes at the universality of love and loss, as Jeremy and Spaulding confront their vulnerabilities, revealing themselves to one another and the world for the very first time.

by Jane Hirshfield
The Beauty, an incandescent new collection from one of American poetry's most distinctive and essential voices, opens with a series of dappled, ranging "My" poems—"My Skeleton," "My Corkboard," "My Species," "My Weather"—using materials sometimes familiar, sometimes unexpected, to explore the magnitude, singularity, and permeability of our shared existence. With a pen faithful to the actual yet dipped at times in the ink of the surreal, Hirshfield considers the inner and outer worlds we live in yet are not confined by; reflecting on advice given her long ago—to avoid the word "or"—she concludes, "Now I too am sixty. / There was no other life."

by Peter Akinlabi, Viola Allo, Inua Ellams
This elegant, limited-edition box set features nine chapbooks: eight volumes of poetry, plus an introduction chapbook by editors Kwame Dawes and Chris Abani. The eight African poets included are Peter Akinlabi, Viola Allo, Inua Ellams, Janet Kofi-Tsekpo, Liyou Mesfin Libsekal, Amy Lukau, Vuyelwa Maluleke, and Blessing Musariri. The box set is an annual project of the African Poetry Book Fund, in collaboration with Akashic Books, which seeks to identify the best poetry written by African authors working today, with a special focus on those who have not yet published their first full-length book of poetry.

by Jason Carney; Kaylie Jones (Editor)
"Jaunty, frank, and compelling, Carney shares his instructive story with generosity and insight." —Booklist
"Carney will easily win sympathy for his life, in which he has persevered to show others the hard work of his salvation." —Kirkus Reviews
A lyrical, mesmerizing debut from Jason Carney, who overcomes his own racism, homophobia, drug addiction, and harrowing brushes with death to find redemption and unlikely fame on the national performance poetry circuit. Woven into Carney's path to recovery is a powerful family story, depicting the roots of prejudice and dysfunction through several generations.

by Brian Laidlaw
The Stuntman relocates the myth of Echo and Narcissus to the mining town of Hibbing, Minnesota, and draws inspiration from the high-school relationship between Bob Dylan and Echo Helstrom – a.k.a. "The Girl From the North Country" – that took place there.
At once whimsical and refreshingly earnest, playful and yet richly grounded in one of the founding myths of Western civilization, The Stuntman deploys images that are often as quirky as they are illuminating, and explores the protean nature of the self and the challenges of being a self in social and intimate relationships.

by Machi Tawara and Juliet Winters Carpenter (Translator)
This internationally bestselling book took the world by storm on its first publication, selling 3 million copies in Japan and 9 million copies worldwide. Covering the discovery of new love, first heartache and the end of an affair, these poems mix the ancient grace and musicality of the tanka form with a modern insight and wit. With a light, fresh touch and a cool eye, Machi Tawara celebrates the small events in a life fully lived and one that is wonderfully touched by humor and beauty. This book will stay with you through the day, and long after you have finished it.

by Eliza Griswold (Translator) and Seamus Murphy (Photographer)
The landay, a folk couplet, is an ancient oral and anonymous form created by and for mostly illiterate people: the more than 20 million Pashtun women who span the border between Afghanistan and Pakistan. War, separation, homeland, love—these are the subjects of landays, which are brutal and spare, can be remixed like rap, and are powerful in that they make no attempts to be literary. From Facebook to drone strikes to the songs of the ancient caravans that first brought these poems to Afghanistan thousands of years ago, landays reflect contemporary Pashtun life and the impact of three decades of war. With the U.S. withdrawal in 2014 looming, these are the voices of protest most at risk of being lost when the Americans leave. After learning the story of a teenage girl who was forbidden to write poems and set herself on fire in protest, the poet Eliza Griswold and the photographer Seamus Murphy journeyed to Afghanistan to learn about these women and to collect their landays. The poems gathered here express a collective rage, a lament, a filthy joke, a love of homeland, an aching longing, a call to arms, all of which belie any facile image of a Pashtun woman as nothing but a mute ghost beneath a blue burqa.

by Kevin Powers
The award-winning author of The Yellow Birds returns with an extraordinary debut poetry collection. National Book Award finalist, Iraq war veteran, novelist and poet Kevin Powers creates a deeply affecting portrait of a life shaped by war. Letter Composed During a Lull in the Fighting captures the many moments that comprise a soldier's life: driving down the Texas highway; waiting for the unknown in the dry Iraq heat; writing a love letter; listening to a mother recount her dreams. Written with evocative language and discernment, Powers's poetry strives to make sense of the war and its echoes through human experience.

by Carl Adamshick
Saint Friend is that rare book that speaks in the voice of a generation. The voice comes from an acclaimed young poet who, after working years in obscurity, was fêted with the prestigious Walt Whitman Award for his first collection. This, his second book, is a freewheeling explosion of celebrations, elegies, narratives, psychologically raw persona pieces (one in the voice of Amelia Earhart), and a handful of punchy lyric poems with a desperate humor. It is, as the title suggests, a book exalting love among friends in our scattered times.
by Parneshia Jones
The imagination of a girl, the retelling of family stories, and the unfolding of a rich and often painful history: Parneshia Jones explores the intersections of these elements of experience with refreshing candor and metaphorical purpose. A child of the South speaking in the rhythms of Chicago, Jones writes across time and place, knitting "a human quilt" with her own identity at the center. She tells of the men and women she grew up with, from the awkward trip to Marshall Field's with her mother to buy her first bra to the late whiskey-infused nights of her father's world. Vibrant descriptions unlock the smells and sounds of a place: in the South, "lard sizzles a sermon from the stove"; in Chicago, we feast on an "opera of peppers and pimento." Jones also reaches into a shared history of struggle and growth, and the stories of her own family intertwine with those of historical Black figures, including Marvin Gaye and Josephine Baker.

by Clive James
With his customary wit, delightfully lucid prose style and wide-ranging knowledge, James explains the difference between the innocuous stuff so prevalent today and a real poem: the latter being a work of unity that insists on being heard entire and threatens never to leave the memory. A committed formalist and an astute commentator, James examines the poems and legacies of a panorama of 20th-century poets. In some cases he includes second readings or re-readings from later in life just to be sure he wasn't wrong the first time. Whether demanding that poetry must be heard beyond the world of poetry or opining on his five favorite poets, James captures the whole truth of life's transience in this unforgettably eloquent book on how to read and appreciate modern poetry.

by Terrance Hayes
A dazzling new collection of poetry by Terrance Hayes, the National Book Award–winning author of Lighthead. In How to Be Drawn, his daring fifth collection, Hayes explores how we see and are seen. While many of these poems bear the clearest imprint yet of Hayes's background as a visual artist, they do not strive to describe art so much as inhabit it. Thus, one poem contemplates the principle of blind contour drawing while others are inspired by maps, graphs, and assorted artists. The formal and emotional versatilities that distinguish Hayes's award-winning poetry are unified by existential focus. Simultaneously complex and transparent, urgent and composed, How to Be Drawn is a mesmerizing achievement.

by Skila Brown
A novel in verse inspired by actual events during Guatemala's civil war, Caminar is the moving story of a boy who loses nearly everything before discovering who he really is. Carlos knows that when the soldiers arrive with warnings about the Communist rebels, it is time to be a man and defend the village, keep everyone safe. But Mama tells him not yet — he's still her quiet moonfaced boy. The soldiers laugh at the villagers, and before they move on, a neighbor is found dangling from a tree, a sign on his neck: Communist. Mama tells Carlos to run and hide, then try to find her. . . . Numb and alone, he must join a band of guerrillas as they trek to the top of the mountain where Carlos's abuela lives. Will he be in time, and brave enough, to warn them about the soldiers? What will he do then?

by Lesléa Newman
Winner of a 2013 Stonewall Honor
On the night of October 6, 1998, a gay 21-year-old college student, Matthew Shepard, was lured from a Wyoming bar by two young men, savagely beaten, tied to a remote fence, and left to die.
Gay Awareness Week was beginning at the University of Wyoming, and the keynote speaker was Lesléa Newman, discussing her book Heather Has Two Mommies. October Mourning, a novel in verse, is Newman's deeply felt response to the events of that tragic day. Using her poetic imagination, the author creates fictitious monologues from various points of view, including the fence Matthew was tied to, the stars that watched over him, the deer that kept him company, and Matthew himself. More than a decade later, this stunning cycle of sixty-eight poems serves as an illumination for readers too young to remember, and as a powerful, enduring tribute.

by Joohee Yoon
Poetry and children belong together, and for a long time, the music and playfulness of verse wove itself through children's days and lives. Beastly Verse aims to help return the wonder of poetry to children's lives through sixteen exquisitely illustrated poems, four of which have the surprise and pleasure of being foldouts. Consisting of playful as well as powerfully memorable poems, the book transports the reader to a richly worded world of tigers, hummingbirds, owls, elephants, pelicans, yaks, snails, and even telephones. A playful romp through verse, rhyme, and gorgeous images, this book carries children into the poetic realm in a way that is not only fun and inviting, but inspiring as well.

by Matthew Burgess and Kris Di Giacomo (Illustrator)
Enormous Smallness is a nonfiction picture book about the poet E. E. Cummings. Here e.e.'s life is presented in a way that will make children curious about him and will lead them to play with words and ask plenty of questions as well. Lively and informative, the book also presents some of Cummings's most wonderful poems, integrating them seamlessly into the story to give the reader the music of his voice and a spirited, sensitive introduction to his poetry. In keeping with the epigraph of the book: "It takes courage to grow up and become who you really are," Matthew Burgess's narrative emphasizes the bravery it takes to follow one's own vision and the encouragement e.e. received to do just that.

by Jean-Pierre Simeon and Olivier Tallec (Illustrator)
What is a poem? This is what Arthur must figure out. And then, will he be able to find one in time to save his fish? He asks the baker, his grandparents, old Mahmoud, even his canary, but each of them has something different to say, and no one seems able to explain just what a poem is. Clearly "poem" is a mysterious word, but it is a mystery that needs to be understood if Arthur is to save Leon's life. Playful, funny, lyrical and yes…poetic.
Mickey Mouse first debuted in 1928, with an 8-minute film called 'Steamboat Willie,' the first Disney project to utilize synchronized sound. While the new sound technique captivated audiences, it was the film's lovable main character that truly stole hearts. Mickey and his fixed world of anthropomorphic animals and objects became an instant sensation, and he quickly became the official mascot of the Walt Disney Company. The world of Mickey Mouse acted as a welcome distraction from the heated international conflicts that arose throughout the subsequent decades. Somewhat inspired by silent film star Charlie Chaplin, Mickey's positive attitude and unique ability to ignite cheer made him an enduring image for a hopeful future. Today, Mickey Mouse is an internationally recognized symbol, plastered across animated films, theme park rides, ships, shops, and merchandise worldwide, as well as fine art, architecture and high fashion.

The king of the commercial art world, Andy Warhol, created an array of prints, sketches, and drawings throughout his career honoring the iconic Mickey Mouse. It makes sense that Warhol would gravitate towards a symbol that — quite literally — illustrates the merging of art and the commercial world. In his 1981 "Myths" series, Warhol created screen prints of iconic pop culture figures from 1950s television and old Hollywood films. Mickey, of course, shows up prominently in several prints and paintings throughout this body of work.

Similarly, in 1961 American pop artist Roy Lichtenstein created a work titled 'Look Mickey,' marking the first time Lichtenstein directly appropriated an existing pop culture image in his oeuvre. Yet the original graphic, borrowed from the children's book 'Donald Duck: Lost and Found,' was delicately amended by Lichtenstein's trademark style, primary color palette, and choosy omissions. Unsurprisingly, this wasn't the first time the artist flirted with the use of Mickey Mouse caricatures in his work; a 1958 pastel sketch renders an abstract image of Mickey transposed in movement.

Like Lichtenstein's sketch, other artists have taken to looser interpretations of the Mickey Mouse motif, further demonstrating the enduring legacy of the image: even manipulated into various versions of itself, the symbol of Mickey Mouse is always recognizable. One artist whose work perfectly embodies this phenomenon is sculptor Claes Oldenburg, who has returned to the mouse theme numerous times throughout his career, the symbol becoming as attached to Oldenburg as the lobster is to Salvador Dalí. The artist addressed this with a simple statement, remarking, "I am the Mouse." The first appearance of Mickey Mouse in Oldenburg's work was in a nefarious-looking sketch on a 1963 poster promoting the artist's show at Dwan Gallery, while numerous notebook pages throughout his career show various sketches of the mouse symbol. In the 1970s, Oldenburg's geometric mouse figure escalated further, culminating in the Mouse Museum, an exhibit installation shaped in the geometric form of Mickey Mouse, for which Oldenburg became well known. Inside the exhibit were 400 colorful objects, knick-knacks, and memorabilia, arranged in vivid stories and visual puns.

Some artists have collaborated directly with Disney to create representations of the iconic mouse. In 2012, at the invitation of The Walt Disney Company, Damien Hirst created "Mickey," to be auctioned at Christie's London in aid of the Kids Company.
The painting, made entirely of colored circles – a figurative interpretation of the artist's iconic dot paintings – shows the whimsical spirit of both Hirst's work and Mickey Mouse as a representation of all of Disney. Hirst went on to create more images of Mickey, including a 2013 spin painting titled "Beautiful Mickey," as well as "Mickey" and "Minnie" in 2016, which were new versions of his 2012 image, updated with glitter. Most recently, Hirst's 2017 "Treasures from the Wreck of the Unbelievable" show featured a bronze statue of Mickey Mouse covered in coral that appeared to have been "rescued" from the bottom of the sea.

Other artists who have used the Mickey Mouse symbol as a subject in their work include Wayne Thiebaud, Peter Max, and Romero Britto. Similarly, artist Brian Donnelly, better known by his street name 'KAWS,' took inspiration from Disney and the Mouse motif for a series of iconic works honoring the fictional character. In fact, Donnelly was an animator at Disney in the early years of his career, before abandoning corporate culture and taking his art to the streets. KAWS' work quickly became synonymous with his cast of cartoonish characters. Beyond Mickey's iconic ears, many of the artist's paintings depict figures outfitted in Mickey Mouse's signature white gloves and trousers. Like Andy Warhol, KAWS bridges the gap between art and commerce, spinning his creative web to also include toys, clothing, and merchandise.

The status of Mickey Mouse as an iconic and enduring image throughout popular culture makes him the perfect motif to incorporate in modern fashion. The most obvious collaboration between fashion and Disney is in the world of branded merchandise. Creating products that combine the spirit of a brand with Mickey Mouse's legendary image is a surefire way to sell. In 2016 Coach debuted their Mickey Mouse handbag, a bright red leather crossbody satchel whose silhouette mimicked Mickey's instantly recognizable ears. Coach's version of Disney merchandise was understated yet obvious, with sneaky tags adding illusions of Mickey's eyes to the design. In a more traditional ode to the fictional character, Jeremy Scott's Mickey Mouse collaboration from Fall/Winter 2009 featured a limited edition high-top sneaker with Mickey's likeness as the shoe's tongue. The growing demand and PR success of Jeremy Scott's design made a direct collaboration with Disney an attractive proposition for even the most established brands. One such example is Gucci, which celebrated the 90th birthday of Mickey Mouse with a Spring/Summer 2019 handbag honoring Disney's token mascot. Since Alessandro Michele took the helm of the brand in 2015, the designer's love for all things Disney has permeated numerous collections. As such, the Gucci x Walt Disney handbag, which begs comparison to a child's lunchbox, used a 3-D render of Mickey's face to form a bulbous shell into which the wearer could stash their belongings, with the bag's handle connecting his ears at the top.

Some designers have gone so far as to incorporate the Mickey Mouse motif into entire collections. For Fall/Winter 2007, Comme des Garçons featured Mickey Mouse ear hats, superimposed baby dresses, Minnie Mouse bows and 3-dimensional glove decals. The collection focused on the development of the female psyche and, as such, the role that Minnie Mouse plays in developing femininity in popular culture and the public conscience.
Rei Kawakubo cleverly examines the familiar children's cartoon through a theoretical lens and allows her work to speak for itself. Similarly, Jean-Charles de Castelbajac incorporated his own sensibilities into his Spring/Summer 2012 Mickey Mouse collaboration. Castelbajac's trademark tongue-in-cheek style is present in the collection's silhouettes, fabric choices and color scheme, combined with cartoony images of the show's central character; he had also created various pieces featuring Mickey Mouse motifs in the 1980s. Marc Jacobs has shown his appreciation for the whimsical world of Disney in past collections as well as through his numerous cartoon tattoos. For Spring/Summer 2013, Jacobs presented a mod collection honoring the 1960s, pairing a low-slung black skirt with a cropped logo sweater embroidered with a Mickey Mouse caricature. Numerous other designers and brands have dipped their respective toes into the world of Mickey Mouse-branded merchandise, including Rag and Bone, Vans, Kate Spade, and even skate-wear brand Supreme. Perhaps it is Mickey Mouse's everlasting power as a symbol of whimsy — or a symbol of corporate power — that draws designers to his image again and again.

Notwithstanding Mickey's enduring star power, there is also something to be said for the simplicity of the character's design: three combined circles instantly become his head and ears, now a universally recognized symbol of happiness. As such, it is no surprise that individuals working in the fields of graphic and industrial design would also choose to reference Mickey in their work. Javier Mariscal's 'Garriris' chair combines the artist's background in illustration and love for cartoons with his crafting skills as a furniture designer. The simple black chair features two circular 'ears' emerging from the back, as well as cartoonish metal shoes fitted to each of the chair's four legs. The Italian furniture company Cappellini collaborated with Walt Disney on the "Mickey Mouse Ribbon Stool," a red stool with a back made of the aforementioned three circles that form Mickey's head and ears. The legs of the stool resemble looped ribbons, curving in a whimsical manner that screams Disney. Perhaps most notably, designer Ettore Sottsass reimagined Mickey Mouse through an abstract lens for a 1971 series of furniture. Inspired by the cartoon's simple design, Sottsass's table and chairs set combines bright colors with half-bubbles that hold the legs of the table and stool.

Given Disney's international presence, the company's most iconic motif has of course inspired architecture throughout their many parks and headquarters. Disney's administrative center in Florida, built by Japanese architect Arata Isozaki, is a primary example. Built in 1990, the building's eclectic, post-modern style is a playful rendering of the Disney spirit onto a physical structure. The Disney Building is composed mostly of cubic wings skewed at contrasting angles in an array of bright colors and patterned finishes, while grids of reflective glass decorate the outside. The focal point, a conical tower that hosts a sundial inside, resembles a futuristic UFO landing platform. The building is harmoniously balanced despite its busy facade, and Mickey Mouse ears appear abstracted as an entrance canopy. A short drive away is another "hidden Mickey": Disney World's solar facility, a 22-acre solar farm that, when seen from a bird's-eye view, is instantly recognizable as Mickey Mouse.
Consisting of over 500,000 solar panels, the farm provides solar power to Disney World as part of an initiative to make Disney's Florida parks more energy efficient. In less than a hundred years, the simple design of a cartoon mouse has spawned a legacy that permeates nearly every aspect of modern-day culture. Mickey Mouse has become an iconic image, recreated, remodeled, and manipulated throughout the decades. From an illustration to a symbolic motif, the enduring power of simple lines on paper will forever be remembered in Mickey Mouse.
The effect of illegal 'resources' on conflict depends on the country context, group competition, and the government's capacity to enforce laws

The resource curse refers to a situation in which countries endowed with an abundance of resources nevertheless lag behind in development. One important reason for this is that resources are often linked to a higher probability of conflict (Bazzi and Blattman 2014, Berman et al. 2017). Some resources, like specific minerals, mahogany, cocaine and opium, which are often illegally extracted, produced and traded, are particularly important for understanding ongoing conflicts in the often-poor producing countries. In Mexico alone, for example, estimates suggest that drug-related violence has caused 150,000 intentional homicides since 2006 [1]. While producer countries like Colombia, Mexico and Afghanistan share some common features, the degree to which laws are enforced and the extent of group competition over resource control are decisive in determining the net effect of illegal resources on conflict.

Opportunity costs versus contest effects

The literature on the economics of conflict describes two main mechanisms that link resource-related income shocks – caused by changing prices or new discoveries – to conflict:
- According to the opportunity costs effect, conflict becomes less desirable if there are better outside options (Grossman 1991).
- The contest model (Collier and Hoeffler 2004, Hirshleifer 1995), in contrast, posits that when resources become more profitable, group contests over resource control intensify and conflict increases.

Theoretical framework: The role of law enforcement and group competition

Based on these two main mechanisms (opportunity cost versus contest effects), we develop a new theoretical framework in Gehring et al. (2020) that helps explain the effect of illegal resources on conflict. We distinguish four scenarios that differ along two dimensions: whether laws against the illegal production and trading of resources are enforced, and the degree to which groups fight for resource control. We argue and show empirically that these distinctions are crucial.

Consider the production of an officially illegal drug. In many developing countries, the government's capacity to enforce bans on production is limited to selected areas. If laws are enforced – for instance through eradication campaigns against drug production – producers profit little or not at all from higher drug prices. Hence, the conflict-reducing opportunity cost effects are small. In contrast, if there is no enforcement, de jure illegality matters little, producers benefit from higher prices, and strong opportunity cost effects help reduce conflict. The extent of contest effects depends on the degree of competition amongst different groups competing for control over lucrative production grounds. In many areas of Mexico or Colombia, for instance, several drug cartels are – often violently – fighting for control. Higher prices, meaning more profits, set an incentive for more fighting. In contrast, if one non-state or insurgent group controls an area, there is no reason to expect more fighting due to higher prices.

Figure 1 Scenarios for illegal resources

The first scenario, which we call the resource-conflict curse, seems to reflect the relationship between drugs and conflict in many regions of Mexico and Colombia rather well.
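The logic of the two channels can be illustrated with a stylised textbook setup (a sketch for intuition only, not the formal model in Gehring et al. 2020). For the contest channel, consider two symmetric groups fighting over a prize V(p) that rises with the drug price p, with success probability x_i/(x_1 + x_2) and linear effort costs. Equilibrium fighting effort then rises with the prize:

\[
x_i^{*} = \frac{V(p)}{4}, \qquad \frac{\partial x_i^{*}}{\partial p} = \frac{V'(p)}{4} > 0 .
\]

For the opportunity cost channel, an individual joins an armed group only if the fighting wage w_f exceeds the production wage w(p); where producers actually capture price increases (no enforcement), w'(p) > 0, so higher prices shrink the pool of potential recruits. Which force dominates depends on exactly the two dimensions of the framework: enforcement (does w(p) respond to p?) and group competition (is there a contested prize V(p) at all?).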
In line with this, two existing studies suggest that higher prices of cocaine are indeed linked to more conflict in Colombia (Angrist and Kugler 2008, Mejía and Restrepo 2015). Our own study considers a different country, Afghanistan, and uses our framework to explain why higher drug prices do not necessarily lead to more conflict.

Opium and conflict in Afghanistan

Since 2002, more than 100,000 people have died as a consequence of the ongoing conflict in Afghanistan. The lack of stability and limited state capacity have detrimental effects on development. Most people work in the informal sector, with opium being one of the few booming sectors. Estimates suggest that up to one-seventh of the workforce depends on the production of this illegal crop. The main alternative crop that can feasibly be produced across Afghanistan is wheat. If neither opium nor wheat is profitable enough, an alternative source of income is to work for pro-government forces or the Taliban, who are reported to pay a wage of about US$10 a day.

Simply put, if the opium price decreases relative to the wheat price, landowners switch from opium to wheat production. Since the production of opium is about four times as labour-intensive as wheat, labour demand would decrease. In such a situation, risky options like supporting the Taliban – either by fighting or by providing shelter – become more lucrative; this is the opportunity cost channel. At the same time, depending on the degree of group competition, higher prices coincide with more intense fighting over control of districts that are well suited to producing opium; this is the contest channel.

To identify the causal effect of opium prices on conflict, we combined temporal variation in international drug prices with spatial variation in the suitability to grow opium across districts over the 2002-2014 period. We exploited the fact that drug prices have a greater effect on districts that are more suitable for growing opium. To capture this, we interact the international price for opium (heroin) with a suitability index developed by Kienberger et al. (2016). Our approach can be illustrated using two maps that depict the regional variation in conflict in two different years.

Figure 2 Opium suitability, opium prices and changes in conflict intensity
a) Conflict in 2004: High opium prices
b) Conflict in 2009: Low opium prices

Figure 2 plots each district's extent of conflict along with its suitability to grow opium. Districts in darker green have a higher suitability. We therefore expect a stronger effect of price changes in these regions. The red dots depict the conflict intensity: the larger the dot, the higher the number of battle-related deaths in a district and year. Comparing a year with high prices on the left to a year with lower prices on the right suggests two things:
- Lower prices seem to be related to more conflict. However, this could be due to many other factors that differ between those two years.
- The increase in conflict due to lower prices was stronger in the districts with a higher suitability in the Northwest, the Northeast, and the East.

Key finding: Higher opium prices increase household living standards and reduce conflict

The results from our main empirical analysis are in line with the graphical evidence from the two maps. They consistently show that higher opium profitability leads to a reduction in conflict in Afghanistan, which corresponds to the actual conflict trends in Afghanistan over the 2002-2014 period.
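The price-times-suitability design can be sketched in a few lines of regression code. This is a minimal illustration only: the file and variable names are hypothetical, and the authors' actual estimator, control set and clustering choices may differ.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical district-year panel, 2002-2014:
#   deaths    - battle-related deaths in a district-year (outcome)
#   log_price - log international opium price (varies over time only)
#   suit      - time-invariant opium suitability index (district level)
df = pd.read_csv("afghan_district_panel.csv")

# The shock is the price x suitability interaction. District fixed
# effects absorb suitability itself and year fixed effects absorb the
# price itself, so only the interaction term is identified.
fit = smf.ols(
    "deaths ~ log_price:suit + C(district) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["district"]})

# A negative coefficient means that when opium prices rise, conflict
# falls by more in districts better suited to growing opium.
print(fit.params["log_price:suit"])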
This effect is sizeable: in a district with the highest possible suitability, a 10% increase in opium prices would reduce the number of conflict deaths by about 6.75%. We used data at the province level to verify that higher opium revenues do not, on average, spill over and fuel conflict in other districts.

- The first explanation of our main finding is that a household's living standard increases when opium prices go up. The figure below, based on the National Risk and Vulnerability Assessment household survey, shows that food consumption, assets, as well as self-assessed economic well-being increase with higher prices.

Figure 3 Empirical test of opportunity cost channel at household level
a) Food consumption
b) Assets + Economic improvement

- In a second step, we empirically tested the predictions from our model. To do so, we focused on scenarios B, C, and D (in Figure 1), given that scenario A does not reflect the Afghan context well. Using georeferenced data on the drug production network, and Taliban versus pro-government control, highlights the importance of opportunity cost effects and reveals heterogeneous effects in line with our theory. We categorised all 398 districts of Afghanistan according to the scenarios in Figure 1. In line with our predictions, the strongest conflict-reducing effect is found in scenario D, where there is no law enforcement and only one group (the Taliban) is likely to control production.

Figure 4 Empirical test of scenarios B, C and D from Figure 1
a) Scenarios B, C and D across districts
b) Marginal effect of opium profitability

Our results highlight that the effect of illegal 'resources' on conflict depends on the country context. In particular, group competition and the government's capacity to enforce bans on production matter. In an environment with weak labour markets and few outside opportunities, enforcement through eradication measures is unlikely to lead to the desired outcomes, and potentially leads to more conflict or a strengthening of insurgent groups.

Editors' note: We would like to encourage readers to read columns on the rise of violent crimes in Mexico and the effect of exposure to illegal markets on the life of children alongside this column.

References

Angrist, J D and A D Kugler (2008), "Rural windfall or a new resource curse? Coca, income, and civil conflict in Colombia", The Review of Economics and Statistics 90(2): 191–215.
Bazzi, S and C Blattman (2014), "Economic shocks and conflict: Evidence from commodity prices", American Economic Journal: Macroeconomics 6(4): 1–38.
Berman, N, M Couttenier, D Rohner and M Thoenig (2017), "This mine is mine! How minerals fuel conflicts in Africa", American Economic Review 107(6): 1564–1610.
Collier, P and A Hoeffler (2004), "Greed and grievance in civil war", Oxford Economic Papers 56(4): 563–595.
Gehring, K, S Langlotz and S Kienberger (2020), "Stimulant or depressant? Resource-related income shocks and conflict", CRC-PEG Discussion Paper No. 269.
Grossman, H I (1991), "A general equilibrium model of insurrections", American Economic Review 81(4): 912–921.
Mejía, D and P Restrepo (2015), "Bushes and bullets: Illegal cocaine markets and violence in Colombia", Documento CEDE Working Paper No. 2013-53.

[1] See https://edition.cnn.com/2013/09/02/world/americas/mexico-drug-war-fast-facts/index.html (last accessed 04/21/2020).
Early this morning the world learned that the 2015 Nobel Prize in Physics has been awarded to Takaaki Kajita and Arthur B. McDonald for discovering that neutrinos can change from one type to another, evidence that—contrary to prior scientific consensus—they must have mass. Just what are neutrinos? They are ultralight subatomic particles, and they've been behind two previous Nobel prizes in physics for good measure. That's how interesting they are. The poet John Updike paid humorous tribute to their best-known properties in his 1959 poem "Cosmic Gall":

Neutrinos, they are very small.
They have no charge and have no mass
And barely interact at all.

One of those three properties has since been proved incorrect. Why do physicists care so much about neutrinos? For starters, solar neutrinos could shed light on the inner workings of our sun and other stars, because they don't interact much with other particles as they travel from the sun's core out into space. (Isaac Asimov dubbed them "ghost particles.") So the information they carry is less warped by interference. Neutrinos may also provide clues about the nature of dark matter and dark energy—two of the biggest challenges facing physics in the 21st century. However, for much of the 20th century, they were the source of one of the most frustrating puzzles in particle physics. The sun produces trillions of the little devils every day, yet experiment after experiment revealed far fewer solar neutrinos than physicists expected. The work that led to the solution of that puzzle lies behind this year's Nobel Prize in Physics.

Wolfgang Pauli first proposed the existence of an unseen particle in a 1930 letter to colleagues, trying to explain conservation of energy in a type of radioactive decay (beta decay) in atomic nuclei. (Energy appeared to be missing in some early experimental results, and he contended this was simply not possible. He was correct.) Addressing them as "Dear Radioactive Ladies and Gentlemen," he suggested the culprit was a very light particle that carried away some of the energy. Enrico Fermi later dubbed it a "neutrino." Pauli thought such a particle might never be detected; he claimed he only proposed it in desperation to find a good theoretical explanation for the beta decay problem. It took 25 years to do so. In 1956 Clyde Cowan and Frederick Reines observed the first neutrinos, thanks to the rise of nuclear reactors capable of producing the necessary fission reactions. They sent Pauli a telegram about their discovery, to which Pauli replied, "Everything comes to him who knows how to wait."

More waiting lay ahead. It would be another 10 years still before anyone detected neutrinos that hailed from the sun. Here's the difficulty: neutrinos only interact with other particles via the weak nuclear force, which means the neutrinos have to be so close to the other particles as to nearly be touching their nuclei. That's when the weak force kicks in. It's what makes neutrinos so devilishly hard to detect. That's why physicists started building neutrino observatories underground, to avoid interference from things like cosmic rays hitting the Earth's atmosphere. ("It's ironic; in order to observe the sun you have to go kilometers underground," newly crowned Nobel Laureate McDonald said in an interview early this morning.) They buried huge tanks of liquid in the earth, to increase the likelihood that a neutrino traveling to earth from the sun would strike one of the atoms in the fluid.
This would initiate a decay process, changing the atom into a different chemical element. When that happened, an electron would be released, which could be easily detected. It was still like trying to find one particular grain of sand in the Sahara, but physicists are plucky that way.

It was a physicist named Ray Davis Jr. who built the first underground neutrino observatory in 1967, in an abandoned mine in South Dakota called Homestake. (That's Davis in the photo at right, taking a refreshing dip underground.) He filled his tank with 600 tons of dry-cleaning fluid, and waited for a neutrino to collide with one of the chlorine atoms, thereby changing it into an argon atom. And it worked! Davis was the first to detect solar neutrinos, snagging himself a Nobel Prize (shared with Masatoshi Koshiba and Riccardo Giacconi in 2002) in the bargain.

There was just one problem. Homestake should have detected around one neutrino per day, per the theoretical calculations, but Davis only detected one neutrino every three days. Two-thirds of the expected solar neutrinos were missing. Physicists battled over why this was so for the next three decades.

A vital clue appeared in 1962, when physicists discovered there was a second type (or flavor) of neutrino, the muon neutrino, a partner to the muon (the particle whose own surprise discovery had prompted I.I. Rabi's famous quip, "Who ordered that?"). A third type of neutrino, the tau neutrino, wasn't directly observed until 2000, although its existence had long been suspected (since 1975). And therein lay the key to solving the case of the missing solar neutrinos. What if neutrinos could change their flavors?

That hypothesis turned out to be right. Housed deep underground in a former zinc mine, it was Japan's Super-Kamiokande collaboration—led by Kajita—that uncovered the first hint that neutrinos could oscillate (change flavor) in 1998. But their detectors couldn't detect that switch directly. That's where the folks at Sudbury came in, led by McDonald. The sun only produces one type of neutrino (electron, or solar neutrinos), and most detectors looked just for those. Sudbury designed its detector to hunt for the other flavors as well, by spiking their heavy water with extra salt when they wanted to detect tau or muon neutrinos. They added up all three flavors of neutrinos they detected and voilà—that number exactly matched the theoretical predictions for how many solar neutrinos there should be. Something was happening to the solar neutrinos on their way to earth from the sun.

In 2002, the folks at Sudbury announced that they had "found" the missing neutrinos. It turns out the solar neutrinos weren't missing at all; they were just in disguise, changing flavors as they traveled from the Sun to the Earth, thereby escaping detection for decades. I like to compare the different flavors of neutrinos to piano strings, which are tuned to different notes, e.g., G, B and C. But just because a neutrino is born a "G", that doesn't mean it stays a G forever. Neutrinos can "de-tune" over time, such that a G gradually becomes a B or a C.

This is really, really important, because if neutrinos can change flavor, that means they must have some tiny bit of mass. And that contradicts the Standard Model of particle physics. (That doesn't mean the model is wrong and should be discarded, just that it's incomplete. It doesn't account for gravity or dark matter either.) Why is mass to blame? Well, it's complicated.
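The role of mass can be made concrete with the standard two-flavor oscillation formula found in any particle physics textbook (general background, not a result specific to the experiments described here). For a mixing angle θ, a mass-squared splitting Δm² in eV², a distance travelled L in km and a neutrino energy E in GeV, the probability that a neutrino born in flavor α is later detected as flavor β is

\[
P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^{2}\,L}{E}\right).
\]

If the two mass states were identical (Δm² = 0), the second factor would vanish and no flavor change could ever occur, however far the neutrino travelled. Observing oscillation therefore requires unequal, and hence non-zero, neutrino masses.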
But back in 2007, I was present at a demo by physicist Janet Conrad, who explained how mass could de-tune neutrinos with the help of a couple of simple tuning forks. The forks were tuned to the same frequency, except she stuck a tiny bit of mass to one of the forks. She struck one fork, then the other, and a "wah-wah-wah" sound rang through the room. Conrad explained that, like most subatomic particles, neutrinos have both particle and wavelike natures. Waves oscillate back and forth and can combine in interesting ways. Play two very similar notes together, and you'll get an interference effect: the sound will wobble between loud and soft. Neutrinos oscillate in the same way. Their waves combine in different ways as they travel through space, and it is those minuscule differences in their masses that give rise to telltale interference effects, causing a flavor change over time.

Since then, more and more evidence has come in that neutrino oscillation is a very real phenomenon. In 2010, scientists with the OPERA experiment at Gran Sasso National Laboratory found their first tau neutrino in a stream of billions of muon neutrinos generated at the CERN laboratory, some 730 km away—it had clearly changed flavor en route. And in 2011, the Japanese Tokai to Kamioka (T2K) experiment found the first evidence of muon neutrinos turning into electron neutrinos on their journey between the two laboratories. Neutrinos continue to surprise us, even now. Chances are this won't be their last Nobel win.

Fukuda, Y. et al. (1998), "Evidence for Oscillation of Atmospheric Neutrinos," Physical Review Letters 81: 1562. (Super-Kamiokande collaboration)
Ahmad, Q.R. et al. (2001), "Measurement of the rate of νₑ + d → p + p + e⁻ interactions produced by ⁸B solar neutrinos at the Sudbury Neutrino Observatory," Physical Review Letters 87: 071301. (Sudbury collaboration)
Ahmad, Q.R. et al. (2002), "Direct evidence for neutrino flavor transformation from neutral-current interactions in the Sudbury Neutrino Observatory," Physical Review Letters 89: 011301.
Jayawardhana, R. (2013), Neutrino Hunters: The Thrilling Chase for a Ghostly Particle to Unlock the Secrets of the Universe. New York: Scientific American/Farrar, Straus and Giroux.
A review of the week's plant-based nutrition news
27th June 2021

This week I cover an important study on how red meat damages DNA, fatty liver and severity of COVID-19, the impact of healthier lifestyles on inflammatory bowel disease, timing of meals and health outcomes, and the diet of early humans.

RED MEAT CONSUMPTION DAMAGES DNA: This is a really important study. We have known for several years that processed and unprocessed red meat increases the risk of cancer, particularly bowel cancer, but the actual mechanism has been less certain, with many theories proposed. In this study, DNA was analysed from matched normal and colorectal tumour tissues from 900 patients with colorectal cancer who had participated in one of three prospective cohort studies from the US, the Nurses' Health Studies and the Health Professionals Follow-Up Study. All participants had previously provided information on their diet, lifestyle, and other factors over the course of several years prior to their colorectal cancer diagnoses. DNA sequencing data showed different mutation signatures in the different tissues. One particular mutation signature found in the colorectal cancer tissue is associated with alkylation of DNA, indicating a type of DNA damage. This alkylating signature was associated with high intakes (more than 150g per day) of processed and red meat prior to the diagnosis of colorectal cancer. However, other dietary factors, such as poultry and fish consumption, and lifestyle factors, such as body mass index, alcohol, smoking and physical activity, were not associated with this alkylating signature. The tumours that displayed the alkylating DNA signature were more likely to have mutations in genes associated with driving the development of colorectal cancer. In addition, higher levels of alkylating damage were associated with a 47% greater risk of dying from colorectal cancer compared with patients with lower levels of damage.

The lead author was quoted by the American Association for Cancer Research as saying: 'Our study identified for the first time an alkylating mutational signature in colon cells and linked it to red meat consumption and cancer driver mutations. Our data further support red meat intake as a risk factor for colorectal cancer and also provide opportunities to prevent, detect, and treat this disease.'

The level of red meat consumption considered high in this study really was pretty high at 150g per day, but with all risk factors there is a dose effect, and when it comes to red and processed meat, any consumption above 0g is considered to increase the risk of chronic disease. The average level of intake in the UK is around 85g (mostly processed red meat) and in the US 100g per day. It's interesting to note that many chemotherapy drugs also work by alkylating and thus damaging DNA, with the hope that the impact is greatest on tumour rather than normal tissue. We have so many better food choices to make that act to protect our DNA and prevent cancer. These foods are all the healthy whole plant foods. Red meat is best left off the plate.

FATTY LIVER DISEASE AND COVID-19: Early on in the pandemic it was clear that people with underlying health conditions were more likely to be hospitalised and ultimately die of COVID-19. Overweight and obesity are risk factors for more severe illness with the SARS-CoV-2 virus. This study investigated whether fatty liver disease, often, but not exclusively, associated with obesity, is an independent risk factor for severe COVID-19.
The retrospective study included participants from the UK Biobank study, of whom 41,791 (aged 50–83) underwent MRI for assessment of liver fat, liver fibro-inflammatory disease, and liver iron. Positive COVID-19 tests were determined from UK testing data, starting in March 2020 and censored in January 2021. 4,458 participants had a COVID-19 test result available, with 1,043 testing positive and 3,415 testing negative. Thirty-two (3.1%) patients who tested positive were hospitalized either 1 week or 1 month after the positive test result. These patients were mostly male and had significantly higher BMI and liver fat compared to other positive patients who did not require hospitalization. The 8 patients who were admitted to intensive care had significantly higher liver fat, liver inflammation and BMI compared to all other positive participants who did not require intensive care. Even after a multivariate analysis with other risk factors, including age, male gender, non-Caucasian ethnicity, lower socio-economic class and high BMI, fatty liver disease remained a risk factor for more severe disease. Overall, severe fatty liver disease (liver fat greater than or equal to 10%) was found to be a significant risk factor for testing positive for COVID-19 and for hospitalisation. Those participants with fatty liver disease defined by liver fat ≥10% and BMI ≥30 kg/m² were 5.14 times more at risk of being hospitalised with severe disease. This equates to around 11% of the UK population being at risk. The mechanisms by which fatty liver leads to more severe disease still need to be elucidated, and of course the numbers with severe COVID-19 in this study were very small. In the meantime, the authors conclude that the results highlight the 'importance of "de-fatting" the liver to reduce susceptibility'.

The good news is that healthy diet and lifestyle habits are very effective at preventing and reversing fatty liver. I have summarised this topic frequently. Below is the evidence-based approach for prevention and reversal of fatty liver.
- Calorie restriction with a 500–1,000 kcal daily deficit is an extremely effective lifestyle intervention for both the prevention of NAFLD and histological improvement in patients with established disease. The goal of calorie reduction should be to achieve ≥10% overall body weight loss. (As a rough worked example, using the common approximation of about 7,700 kcal per kg of body fat, a 1,000 kcal daily deficit corresponds to roughly 0.9 kg lost per week, so a 90 kg person would reach the 10% target in around 10 weeks.)
- Reduce intake of red and processed meats
- Reduce/eliminate refined carbohydrates and especially fructose
- Increase fibre intake through the consumption of fruits, vegetables, whole grains and legumes
- Replace dietary saturated fatty acids with mono-unsaturated and poly-unsaturated fatty acids
- Coffee consumption is protective against the development of NAFLD and disease progression.
- Moderate to heavy alcohol consumption should be avoided in the presence of obesity, NAFLD, and other metabolic risk factors. Abstinence is advised for patients with advanced fibrosis.

HEALTHY LIFESTYLES REDUCE THE RISK OF DEATH IN PEOPLE WITH INFLAMMATORY BOWEL DISEASE (IBD): This study highlights how important a healthy lifestyle is even if you already have a chronic illness. The paper reports data from the Nurses' Health Study and the Health Professionals Follow-up Study and assessed the impact of 5 healthy lifestyle factors on risk of death in people with IBD, both Crohn's disease (CD) and ulcerative colitis (UC). The 5 healthy lifestyle factors were never smoking, normal body mass index, vigorous physical activity, adherence to a Mediterranean diet and light alcohol consumption (0.1–5.0 g/d).
The study included 363 patients with CD and 465 with UC; during follow-up, 83 and 80 deaths occurred in CD and UC respectively. The results showed that the main causes of death were cardiovascular disease and cancer. There was an inverse relationship between healthy lifestyle factors and risk of death. Compared to those with no healthy factors, those with 3–5 healthy lifestyle factors had a 71% reduced risk of death during the follow-up. This positive impact of healthy lifestyle factors was not related to severity of the IBD, because the relationship held true when taking into account use of immunosuppressive treatment or need for surgery as markers of disease severity. A healthy Mediterranean diet, which emphasises whole plant-based foods and fish whilst limiting red and processed meat and processed foods, reduced the risk of dying by about one third. Maintaining a healthy body mass index, exercising regularly and light alcohol use of up to 5g per day were also beneficial. Not smoking improved survival by a factor of four.

Although people with IBD don’t often die of the disease itself, they are at increased risk of dying from cardiovascular disease and cancer when compared with the general population. At a very basic level, these diseases all share in common increased levels of inflammation. Healthy diet and lifestyles are very effective at reducing inflammation and addressing other mechanisms of chronic illness including oxidative stress, an unhealthy gut microbiome, insulin resistance, unhealthy body weight, dyslipidaemia, endothelial dysfunction, altered gene expression and shortened telomeres. Any one of these mechanisms could be at play here. The topic of IBD and cardiovascular disease has been highlighted recently by the American College of Cardiology with this excellent review article. Risk factor modification through adopting healthy lifestyles is their first and foremost recommendation. Of course, these healthy habits can be difficult for people with IBD, but that should not mean we don’t do our best to support patients in adopting the healthiest lifestyle possible. It is never too late to make a positive impact on health outcomes. The authors conclude ‘Assessment of healthy lifestyle behaviors should be routinely performed in IBD patients and adherence to such behaviors should be encouraged to improve longevity and promote healthy aging’.

IMPACT OF TYPE AND TIMING OF FOOD ON RISK OF DEATH: This is an interesting hypothesis-generating analysis. It examines the impact of dietary pattern and timing of consumption on risk of death from all causes, cardiovascular disease and cancer. Prior studies have suggested that consuming more calories earlier in the day and less as the day progresses, to match our circadian rhythm, is associated with better health outcomes. The study used data from 21,503 participants in the National Health and Nutrition Examination Survey from the US between 2003 and 2014.
Dietary data were collected and food patterns were grouped into the following: Western breakfast (high in refined grains, legumes, added sugar, solid fats, cheese and red meat), starchy breakfast (high in white potato, other starchy foods, milk and eggs), fruit breakfast (high in fruits, whole grains, yogurt and nuts), Western lunch (refined grains, solid fats, cheese, added sugar, cured meats), vegetable lunch (total vegetables, red and orange vegetables, tomato and dark vegetables), fruit lunch (fruit and yogurt), Western dinner (refined grains, cheese, solid fats, added sugars and eggs), vegetable dinner (total vegetables, red and orange vegetables, tomato and dark vegetables), and fruit dinner (fruits and yogurt). For the snacks, a grain snack (refined grains, whole grains, added sugars, cheese and eggs), starchy snack (white potato and other starchy foods), fruit snack, and dairy snack (dairy products, milk and cheese) were identified as the main snack patterns after main meals.

There were not too many surprises in the results. For main meals, the study found that the fruit lunch and vegetable dinner patterns were associated with decreased risks of cancer, CVD and all-cause mortality, whereas the Western lunch was associated with elevated risks of CVD and all-cause mortality. For snack patterns, the study found that fruit after breakfast and dairy products after dinner were associated with decreased risks of cancer, CVD and all-cause mortality, whereas the starchy snack pattern (mainly a reflection of white potato consumption) after main meals was associated with elevated risks of CVD and all-cause mortality. However, the impact of meal timing was greatest in those with the lowest quality diet.

Interestingly, vegetables at dinner were significantly associated with lower risks of cancer, CVD and all-cause mortality, whereas vegetables consumed at lunch did not have these beneficial effects. The authors hypothesise that this may be due to the circadian pattern of metabolism and the gut microbiota. For example, the abundance of bacteria that use dietary fibre from vegetables to generate short-chain fatty acids is frequently highest at night, and it gradually decreases in the daytime. It may also be due to the fact that vegetable-based meals are lower in calories, which is beneficial for meals later in the day. The authors hypothesise that the association between dairy consumption after dinner and reduced mortality could be due to better sleep quality because of the high levels of tryptophan, which is the precursor of serotonin and melatonin. Overall, the authors conclude ‘In conclusion, higher intake of fruit at lunch, and higher intake of vegetables and dairy products in the evening were associated with lower mortality risks of CVD, cancer, and all-cause; whereas higher intake of refined grain, cheese, added sugars, and cured meat at lunch, and higher intake of potato and starchy foods after main meals were associated with greater CVD and all-cause mortalities’.

These are interesting findings, but I am not sure they are going to change the way I eat. The analysis found a number of associations in a population that has one of the worst diet qualities globally and where more than 60% of food consumed is ultra-processed. Concentrating on diet quality first and foremost is more important. Then eating in tune with your circadian rhythm can be addressed by avoiding large meals late at night.
Of course, those of you who follow me know I am not suddenly going to start recommending dairy products as an evening snack. When dairy consumption in the evening is compared with a more typical American post-dinner snack of cake or ice cream, for example, dairy will definitely appear better. If you want to boost your tryptophan levels you can easily do so by consuming plant foods such as pineapple, tofu, nuts and seeds. Foods high in melatonin include cherries, goji berries, pistachios and almonds.

TURNS OUT WE WERE EATING CARBS WELL BEFORE THE DOMESTICATION OF CROPS: This is a really interesting article dispelling some myths about our ancestral diet prior to the domestication of crops. Rather than our ancestors being hunters reliant on meat, there is evidence from more than 10,000 years ago that they were cooking porridge and stews made from grain. By examining residues on ancient tools, such as grinding tools, and dental plaque, it has become apparent that even as far back as 100,000 years ago people were consuming starchy vegetables and cereal grains, with evidence even for the preparation of bread. It now seems clear that plant-strong diets have been the norm for most of human history and ‘that early humans were cooking and eating carbs almost as soon as they could light fires’. This puts into question the Paleo way of eating, which excludes grains, legumes and starchy vegetables, some of the healthiest foods humans can eat. I highly recommend this talk by Dr Christina Warinner, PhD, from the University of Oklahoma.
McDADE HISTORICAL WALKING TOUR

The History of McDade, Texas

A view of Main Street in downtown McDade.

The Houston and Texas Central Railroad brought about many changes in Bastrop County. A new town sprang up along the railroad. Early writings note the town was called “Tie Town,” most likely because ties for the railroad were stored here. At some point the name changed to McDade, named for James W. McDade, who at that time lived in Brenham. The town was formally established in 1869 and officially platted two years later, becoming a center for shipping cotton and other freight traveling to and from Bastrop and Travis counties. It was incorporated in 1873.

McDade became a thriving railroad town with a saloon, post office and a cotton gin; however, in the years after the Civil War it had also become an outlaw hotspot, specifically for a gang the locals referred to as “the notch cutters.” The first known incident took place in 1875, when vigilantes hanged two men who were suspected of being outlaws. Then came the murder of two vigilantes, possibly in retaliation. In turn, a third outlaw was hanged. A year later, in 1876, two men were shot and killed when they were found in possession of a cowhide carrying the brand of a local ranch family, the Olives. Once again, retaliation may have been behind the murder of two men at the Olive ranch five months later, after which the Olive family home was set on fire. A year after that incident, in 1877, vigilantes removed four men from a local dance and hanged them.

In 1883 came perhaps the most well-known incident. It began with the murder of two men in Fedor that November and, later, the beating of another man. Deputy Sheriff Heffington was shot while investigating the two crimes. Four men, suspected of having taken part, were hanged. That Christmas Eve, three local men were removed from the saloon, taken to another location and hanged. Said Melba McLemore, a descendant of those hanged and of those who retaliated the next day, “The story most often told about Christmas Eve has 40-50 men standing guard at Oscar Nash's Saloon while masked vigilantes led three men out of the bar room. Brothers Thad and Wright McLemore, along with family friend Henry Pfeiffer, were mounted on their horses, hands tied behind their backs, and led into the woods. There, vigilantes left all three men swinging on a low-lying limb of a hickory tree.” The Nash Saloon is now the McDade Historical Museum.

McDade grew with the addition of a broom factory and a pottery. The business eventually became known as McDade Pottery. There were also coal mines. Blacksmiths, milliners and physicians came to the area. Schools and churches were established. In 1890, the same year the Elgin Courier began, the McDade Mentor, a weekly newspaper, was founded. The town grew from 150 in the early days to 600 in 1925. Then came World War II. The population began declining. McDade Pottery, along with other businesses, closed. McDade became known for its watermelons. In the 1940s, residents created a popular county event which became the McDade Watermelon Festival, which celebrated 75 years in 2018.

In this tour, you’ll learn the history of the historic buildings comprising the downtown of McDade.

1. Guaranty State Bank/McDade Post Office, Lot 1, Built 1913; George Milton family home, Circa 1876

Lot 1 was owned by the George Milton family from 1876 until 1913. Before the building on Hwy 20 was built, the bank opened in the Miltons’ home. A sign, McDade Guaranty State Bank, was painted on the side and is still visible today.
In 1913, George Milton and his wife, Emaline Petty Milton, sold to A. C. Harvey, president of the new bank, 60 feet on the front (east) end of the property. A charter was issued in 1913 and would continue for 50 years. The bank opened for business in 1913 and remained open for 20 years, closing in 1933. All who had accounts received their money when the bank closed. In 1935, J. F. Metcalfe, bank president, and his wife, Louise Taylor Metcalfe, deeded the space to J. H. Watson. Watson rented the building to the U.S. Government as a post office until 1945.

2. & 3. George Milton’s Store, Lots 2 & 3, Built 1883

George Milton’s store was located here in 1883. Milton’s store is mentioned in the story, “Shoot-out on Christmas Day,” in the Frontier Times Magazine. It was where Haywood Batey kept his money in Mr. Milton’s safe, and it was in the street in front of Lot 2 where a fight broke out on Christmas morning in 1883, killing the two Batey brothers, Jack and Asbury, and wounding several others. George Milton and Thomas Bishop were engaged in this affray. The next building on this lot was the Julius Kastner General Store and, in recent years, Seigmund’s General Store.

4. Building erected for the movie True Women, Lot 4, Built 1996

Lot 4 was left vacant after the Julius Kastner Store burned in 1935. This wooden building was erected by a movie company in 1996 to serve as a blacksmith shop in the movie True Women.

5. S. W. Billingsley & Co, aka DeGlandon Barber Shop, 565 Old Hwy 20, Lot 5, Circa 1870s, Rebuilt 1909

The earliest owner of Lot 5 was J. P. Billingsley. In 1878, he and S. W. Billingsley made an agreement and the store became known as S. W. Billingsley and Co. Billingsley sold to Felix H. McLemore in 1880. Sheriff H. N. Bell seized the estate of McLemore in 1887, and it was sold in Bastrop for $25 to J. W. Westbrook. Sales of the property continued. In 1909, Otto Ehlo and his second wife, Caroline, sold the property for $150 to E. F. Brown of Harris County. (Lucy Ehlo died in 1895 due to labor following a fire that burned the Ehlo Saloon.) Brown built the present brick building as his private bank. The Brown Bank folded four years later. This building served as the DeGlandon Barber Shop in 1913. Albert DeGlandon was known as “Bud” and was elected to represent Bastrop County in the Texas House of Representatives in Austin, serving in the 45th Legislature.

6. Ehlo & Wynn Saloon, aka Dungan Drugs, 561 Old Hwy 20, Lot 6, Circa 1871

If this building could talk, you would hear more scary stories here than from any other. It is in the center of the block and has housed many businesses, including the Ehlo Saloon, Otto Ehlo’s store, the Ehlo & Wynn Saloon, the Herman Klemm Saloon, Sam Walker Grocery, S. T. Hillman’s Confectionery, the Dungan Drug and possibly others. A refreshing soda, ice cream and other goodies could be purchased here. The McDade News was written here by Mrs. Sam Dungan, known to all as “Ms. Emma.” Everyone took their news to “Ms. Emma” for her weekly column in the Elgin Courier. The drug store has been closed since 1992.

7. Dr. E. S. McMullen Pharmacy, aka Southern Pharmacy, Lot 7, Circa 1872

Lot 7 is very popular. It has sold at least 25 times to date. In 1881 King Henry Barbee sold Lot 7 to Henry M. Green for $120; Green deeded it to John W. Kennedy the next year for $50. Kennedy and his wife, Fannie, sold the lot in 1885 for $300. Five years later Lots 6, 7, 15, 16 and 17 were sold to Otto Ehlo for $2,000. Ehlo sold to A. E. Wynn in 1896, who sold to T. P. Bishop in 1902.
Dr. Ezra Smith McMullen came to McDade sometime after 1916. He worked as a border guard and then as a medical doctor in McDade. A sign painted over the windows and on the front of the building once read “The Underwood Pharmacy” over one window and “Dr. E. S. McMullen” over the other window. Prescriptions, patent medicines, stationery, candies and cigars were sold there.

8. Nancy Boswell Hodges Townhouse, Lot 8, Rebuilt circa 1907

This building has a long history of ownership. The lot completely burned due to arson and was rebuilt by Mr. R. L. Williams in about 1907 or 1908. In about 1985 it was rebuilt by Nancy Boswell Hodges, an interior decorator. She loved her home and filled it with beautiful furnishings. The yard was a private “Garden of Eden.”

9. Koppel & Bro., aka “Barroom,” 559 Old Hwy 20, Lot 9, Circa 1873

In 1869, before there was a McDade, the firm of Koppel & Bro. had a store in Bastrop. The building it occupied belonged to McDade’s John D. Nash. This lot was sold by A. Groesbeck and F. A. Rice, trustees of the Houston & Texas Central Railway Company of Houston, to Koppel & Bro. for $300. Jacob Koppel sold his interest in the Bastrop and McDade stores to Henry and Samuel Koppel in 1873. The Koppels were offered a “trade-out” by John W. Brown, who owned Lot 3 in Block 9, and they agreed. In 1874, John W. Brown moved his business to Lot 9. On February 15, 1881, a fire consumed Lots 7, 8, 9 and 10. The fire was believed to be arson. The Browns went bankrupt. Apparently John Hancock and Charles S. West held the lien, as they sold Lot 9 to John D. Nash. The McDade Historical Society currently owns Lots 9 and 10 in Block 9.

10. Captain John Nash Saloon, aka Rock Front Saloon, 557 Old Hwy 20, Lot 10, Built 1874

This historic sandstone rock building, with its two-foot-thick walls, was built by Captain John Dempsey Nash in 1874. It is the oldest building in town, but before it was built, two wooden structures, both saloons, burned here. This building was once gutted by fire as well. Mr. R. L. Williams restored the rock building in about 1907 to serve as the U.S. Post Office. It has been the infamous Nash Saloon, better known as the Rock Front Saloon, a freight office, stage stop, U.S. Post Office, drugstore, Dr. D. C. Atkinson’s office, telephone office, Quinton Allen’s Café, the Royston Grocery, T. E. Dungan Grocery and storage, and is now the McDade Historical Museum.

Many thanks to Audrey Rother of the McDade Historical Museum.
This book arose from Nicolas Lamare’s doctoral dissertation, submitted to the Université Paris-Sorbonne in 2014. A specialist in ancient water supply and display, the author has been engaging at length with issues related to Roman-period fountains across North Africa, as his range of publications shows. This volume collects the results of painstaking bibliographic, archival, and field research that Lamare carried out during his doctoral and subsequent studies. The outcome of his work — it must be stressed straightaway — is a valuable academic contribution, which will be of use to archaeologists and historians of North Africa and beyond. Lamare’s book represents a major update of the evidence of North African nymphaea discussed by Pierre Aupert in the 1970s, supplementing other recent works, such as that of Francesco Tomasello on the fountains and smaller nymphaea of Lepcis Magna. This field of research continues to be of current scientific relevance, as demonstrated by the collection of studies on water distribution across the ancient and medieval Maghreb and, more recently, by the Agence Nationale de la Recherche (ANR) – EauMaghreb programme, which has brought together interdisciplinary essays on water management in North African cities and their territory under the Roman Empire. The volume features a discussion of the monumental fountains of North Africa (pp. 1-291), followed by a catalogue of the recorded monuments (pp. 293-384) and the associated epigraphic corpus (pp. 387-405). The discussion is broken down into three parts: (1) “L’étude des fontaines: histoire des recherches et méthodes” (pp. 7-83); (2) “L’archéologie des fontaines: architecture et hydraulique” (pp. 85-205); (3) “Les fontaines au quotidien: histoire et fonctions” (pp. 207-291). The book includes a useful index of ancient sources and sites (pp. 453-461), as well as two out-of-text plates illustrating Roman coins with depictions of water monuments, and a set of plans of the principal North African fountains. Part 1 opens with an assessment of North African archaeology and the study of fountains (Chapter 1, pp. 11-31). It starts with the works of Arab geographers as early as the tenth century, continuing with the notes and observations made by European travellers, especially from the nineteenth century onwards. The majority of ancient sites and monuments were brought to light during excavations carried out under western colonization in Libya, Tunisia, Algeria and Morocco. Lamare carefully reviews the activity of those colonial investigations, particularly the French ones, while also examining the impact and development of post-colonial scholarship (pp. 13-31). The role of colonialism in the study of North Africa and the subsequent reactions to it are topics that have been extensively treated by modern scholars; in the Anglophone world, the works of David Mattingly are fundamental in this regard. Chapter 2 reviews previous research and approaches to fountains in the ancient world (pp. 33-56), engaging with the terminology used in the written sources to refer to these buildings (fons, lacus, salientes, and nymphaeum). An important point raised by Lamare, which defines his own approach to this topic, is the critique of the use of typological classifications to attempt to delineate a chronological evolution of fountains (pp. 51-56).
Modern, conventional typologies do not reflect the complexity of reality in antiquity — an observation which is also valid for other categories of North African buildings, such as the so-called “Romano-African” or “Eastern-type” temples, just to cite one example. Chapter 3 concludes this first section of the book with a broad overview of the development of private and public fountains across the Mediterranean, from the sixth century BCE to the sixth century CE (pp. 57-83). In Part 2, the archaeological and architectural features of the North African fountains are assessed. Chapter 4 is concerned with construction techniques (pp. 89-119). Overall, builders opted for the use of locally sourced materials, while marble was preferred for the decorative elements of fountains. The evidence reveals a wide range of masonry types employed in the construction of these monuments, which depended on both regional and interregional patterns. The author remarks, however, that a complete study of building techniques attested in the North African provinces remains a major desideratum. This technological variability is hardly surprising for a territory as vast as North Africa. Modern studies are also revealing that some techniques traditionally believed to have a North African origin, like the opus africanum masonry documented in some of these fountains, actually had a pan-Mediterranean occurrence from the moment of their first appearance, with a broad range of types and variants. Chapter 5 looks at the elevation and architectural sculpture of fountains (pp. 121-170). As Lamare rightly observes, reconstructing the elevation of ancient monuments is a task that is subject to a high degree of speculation, thus requiring a cautious approach. The first part of the chapter is dedicated to the analysis of the iconographic, literary and epigraphic sources that can provide elements of support for attempting a reconstruction (pp. 125-134). With regard to their architectural layout, North African fountains are divided into four broad groups: (1) Fontaines de plan centré, as exemplified by the model of the meta sudans in Rome; (2) Fontaines-édicules, such as the Fountain of the Tetrarchy at Cuicul; (3) Fontaines à niche semi-circulaire, like the lacus of the theatre at Lepcis Magna; (4) Fontaines “à façade”, the most renowned example being the Great Nymphaeum of Lepcis Magna. As already pointed out, these subdivisions do not have any chronological implications. The preserved architectural sculpture associated with fountains is not abundant in North Africa. For this reason, the author looks in parallel at the more conspicuous evidence from Asia Minor to reconstruct the iconographic programmes, which featured mythological subjects, deities, portraits of local citizens and emperors (pp. 152-156). The architectural analysis of the fountains is supplemented by the examination of water supply and hydraulic technology in Chapter 6 (pp. 171-205). This points to the identification of a system that made use of different methods for sourcing and distributing water (aqueduct branches, cisterns, and pits), some of which must have been in use already in pre-Roman times. Part 3 further expands the discussion by placing the North African fountains within a broader socio-historical context. This is particularly welcome, as it complements recent works on other regions of the Roman world, such as Brenda Longfellow’s study of monumental fountains in imperial Rome and the Eastern Mediterranean.
Chapter 7 looks at the relationship between fountains and urban history (pp. 211-247). Lamare addresses the issue of the visibility of these monuments in antiquity; the author explains that unlike fora, theatres and other enclosed spaces, the full iconographic details of fountains were immediately visible to passers-by on the street. For this reason, they must have constituted a focal gathering point for the local communities. While the absence of precise epigraphic data is problematic for establishing secure chronologies, the architectural and archaeological evidence seems to show that fountains and other related buildings were still built and rebuilt at least until the fifth century CE (pp. 233-247). This adds another piece of information to our picture of the development of the North African cityscapes in Late Antiquity — a crucial topic that was often neglected in past studies, but is now being addressed in a more systematic way by modern scholarship. Chapter 8 takes into account the economics of fountains, especially issues of civic patronage and euergetism (pp. 249-261). In the first three centuries CE, the construction of fountains was achieved mainly through private patronage; municipal financing, on the other hand, witnessed a marked rise from the Diocletianic period onwards. Finally, the religious connotation of some water monuments is explored in Chapter 9 (pp. 263-291). The analysis of the buildings referred to as septizodia, or septizonia, would suggest that the monuments of this type attested in the Roman provinces (not only in North Africa) might have been built in honour of Septimius Severus. The septizodium erected by the emperor as a majestic façade of the Palatine palace in Rome must have influenced the layout of the provincial buildings, although some aspects of its plan and elevation remain conjectural. The catalogue of monumental fountains is detailed and well organized. It includes 51 monuments, which are grouped geographically, from Mauretania Tingitana to Tripolitania (sites are listed alphabetically within each province). This is supplemented by a useful epigraphic corpus of 49 known inscriptions mentioning water monuments; full concordances with other corpora and previous publications are indicated (sometimes even epigraphers overlook this aspect) and a good commentary on the texts is provided. Given the abundance of data presented in the catalogue, each entry should be seen more correctly as a mini-essay on the respective monument. The history of the monument’s discovery is outlined, alongside a thorough description, the analysis of the hydraulic systems and building techniques, the sculptural decoration, the hypothetical reconstruction of the elevation (when at all possible), and any chronological information available. Understandably, future research on the individual monuments will update or revise the content of some of these entries. For example, the nymphaeum enclosed in the enigmatic “curia Ulpia” at Sala in Mauretania Tingitana (catalogue no. 1) is now being investigated in the context of broader research led by the University of Siena. The building’s architectural stratigraphy and construction phases were recently recorded, and 3D digital models of the proposed elevation were created, in view of the monument’s final publication. With regard to the Arch of Caracalla at Volubilis (catalogue no. 5), if one accepts Lamare’s hypothesis that the annexed fountains were a later addition, it would be interesting to carry out a new comprehensive study.
This should look at the arch’s architectural and construction features, all of the decorative and sculptural elements that were associated with it, as well as the structural modifications that the water supply would have required. In conclusion, Lamare is to be congratulated for producing an important book that, while engaging with a specialized topic, also features various sections accessible to a broader readership, particularly in regard to Chapters 7-9. The analysis of the North African monuments pays due attention to the historical, architectural, and archaeological evidence available for other contexts across the ancient Mediterranean, thus making this publication relevant also to scholars who work outside of North Africa. The exhaustive catalogue of fountains is undoubtedly one of the principal strengths of this volume; its richness of information will make it an indispensable resource for any subsequent study of these monuments. P. Aupert, Le nymphée de Tipasa et les nymphées et « septizonia » nord-africains, Collection de l’École française de Rome 16. Rome, 1974: École française de Rome. F. Tomasello, Fontane e ninfei minori di Leptis Magna, Monografie di archeologia libica 27. Rome, 2005: “L’Erma” di Bretschneider. Contrôle et distribution de l’eau dans le Maghreb antique et médiéval, Collection de l’École française de Rome 426. Rome, 2009: École française de Rome. V. Brouquier-Reddé and F. Hurlet (eds.), L’eau dans les villes du Maghreb et leur territoire à l’époque romaine, Collection Mémoires 54. Bordeaux, 2018: Éditions Ausonius. For example: D.J. Mattingly, Imperialism, Power, and Identity: Experiencing the Roman Empire. Princeton, 2011: Princeton University Press (in particular pp. 43-72, 146-166). See S. Camporeale, “Merging technologies in North African ancient architecture: opus quadratum and opus africanum from the Phoenicians to the Romans”. In N. Mugnai, J. Nikolaus and N. Ray (eds.), De Africa Romaque: Merging Cultures across North Africa. London, 2016: Society for Libyan Studies, 57-71. B. Longfellow, Roman Imperialism and Civic Patronage: Form, Meaning, and Ideology in Monumental Fountain Complexes. Cambridge, 2011: Cambridge University Press. See especially A. Leone, The End of the Pagan City: Religion, Economy, and Urbanism in Late Antique North Africa. Oxford, 2013: Oxford University Press. This study was undertaken by Rossella Pansini and forms part of her recently submitted doctoral dissertation, “Le aree pubbliche e monumentali africane in età romana. Il foro di Sala (Chellah/Rabat, Marocco)”, Universities of Pisa and Siena, 2019.
Why We Divide the Day Into Seconds, Minutes, and Hours

Today I found out why we divide the day into seconds, minutes, and hours. The concept of needing to divide up the day seems second nature to even the smallest kid who asks, “Is it snack time?” The reality is, even though we’ve decided that there is a need to divide up time, the actual process and the way we go about it has been changing for millennia. The cruel irony is that even though we know we need to measure time, there has never been a consensus on what time really is.

Throughout all of history there have been two main schools of thought on what time is, and even more opinions on how we should measure it. The first concept of time is one that most current physicists tend to subscribe to, and that is that time is a fundamental dimension of the universe: the fourth dimension, through which the other three dimensions of space (length, width and height) move in sequence. The second concept argues against the idea that time is a dimension, holding instead that it is an intellectual construct that allows people to sequence and compare events. On this view, time does not exist on its own, but is a way in which we represent things.

While many physicists tend to view time as a dimension, I assume because they are trying to hold fast to Einstein’s theories on space-time, I prefer to view it as a tool. This is because our universe is constantly changing. From one moment to the next, it is always in motion. From electrons moving around atomic nuclei, to the basketball player trying to get their shot off before the game clock runs out, everything in our universe is in motion. To be able to understand it, we need a tool. If you view the universe as a car and time as a very important tool in a toolkit, you can see how time would not be a dimension. You need tools to take apart a car, and just as the socket set is needed to take apart and understand all the inner workings of that automobile, so too time is needed to take apart and understand the change in our universe from one moment to the next. But just as the socket set will never be a part of the car, so too time will never be a part of the universe, just a needed tool to understand it.

Whatever your position on what time actually is, one question has always remained: how do you measure it? In chronometry (the science of the measurement of time) there are two distinct forms of measurement, the calendar and the clock. The calendar is used to measure the passage of extensive periods of time, and the clock is used to count the ongoing passage of time and is consulted for periods of less than a day. We will obviously focus on periods of less than a day, because if we go into the calendar debate, we would inevitably decide our world was ending in 2012!!

Today the most widely used numerical system is a base 10 system (decimal). This seems appropriate given we all have 10 fingers and toes, so grade-schoolers and myself, after a few beers, can do math easily! Unfortunately for us, the pre-Dewey Decimal civilizations either never tried to count their sheep drunk, or just plain hated their kids, but all seemed to use other, more complicated systems, like base 12 (duodecimal) or base 60 (sexagesimal).

The first society credited with separating the day out into smaller parts was the Egyptians. They divided a day into two twelve-hour sections: night and day. The clock they used to measure time was the sundial.
The first sundials were just stakes in the ground, and you knew what time it was by the length and direction of the sun’s shadow. Advances in technology, namely a t-shaped bar placed into the ground, allowed them to measure the day more accurately in 12 distinct parts. (Damn duodecimal system!!) One explanation proposed for this base system is that one could get to twelve easily by counting the knuckles on all four fingers with the thumb. (Apparently they did not have DUI patrols for drunken camel driving and ancient cops performing field sobriety tests having folks touch their thumbs to their fingers; otherwise, they would have realized that this method for counting was not a good idea!)

The drawback to this early clock was that at night there was no real way to measure time. Egyptians, like us, still needed to measure time after dark. After all, how else would we know when the bars close? So their early astronomers observed a set of 36 stars, 18 of which they used to mark the passage of time after the sun was down. Six of them would be used to mark the 3 hours of twilight on either side of the night, and twelve would then be used to divide the darkness into 12 equal parts. Later on, somewhere between 1550 and 1070 BC, this system was simplified to use just a set of 24 stars, of which 12 were used to mark the passage of time.

There were many other methods, in ancient times, for measuring the passage of time after dark. The most accurate known clock was a water clock, called a clepsydra. Dating back to approximately 1400-1500 BC, this device was able to mark the passage of time during various months, despite the seasons. It used a slanting interior surface inscribed with scales that allowed for the decrease in water pressure as the water flowed out of a hole at the bottom of the vessel. Since the day and night could now each be divided into 12 equal parts, the concept of a 24-hour day was born.

Interestingly enough, it wasn’t until about 150 BC that the Greek astronomer Hipparchus suggested that a fixed length of time for each hour was needed. He proposed dividing the day into 24 equinoctial hours, as observed on equinox days. Unfortunately for the bean-counters in charge of overtime hours, most laypeople continued to use seasonally varying hours for several centuries to come. It wasn’t until about the 14th century, when mechanical clocks were commonplace, that a fixed length for an hour became widely accepted.

Hipparchus himself, and other astronomers, used astronomical techniques they borrowed from the Babylonians, who made calculations using a base 60 system. It’s unknown why the Babylonians, who inherited it from the Sumerians, originally chose 60 as a base for a calculation system. However, it is extremely convenient for expressing fractions, since 60 divides evenly by 10, 12, 15, 20 and 30. The idea of using this base 60 system as a means of dividing up the hour grew out of the effort to devise a geographical system to mark the Earth’s geometry. The Greek astronomer Eratosthenes, who lived between 276-194 B.C., used this sexagesimal system to divide a circle into 60 parts. These lines of latitude were horizontal and ran through well-known places on the Earth at the time. Later, Hipparchus devised longitudinal lines that encompassed 360 degrees. Even later, the astronomer Claudius Ptolemy expanded on Hipparchus’ work and divided each of the 360 degrees of latitude and longitude into 60 equal parts. These parts were further subdivided into 60 smaller parts.
He called the first division “partes minutae primae”, or first minute. The subdivided smaller parts he called “partes minutae secundae”, or second minute, which became known as the second. Once again, these measuring techniques were lost on the general public until around the 16th century. The first mechanical clocks would divide the hour into halves, quarters, or thirds. It wasn’t practical for the layperson to need the hour divided up into minutes.

Advances in technology and science over the centuries have required a more precisely defined value for the measurement of a second. Currently, in the International System of Units (SI), the second is the base unit for time. This is then multiplied out to get a minute, an hour, a day, etc. The first accurately measurable means of defining a second came with the advent of the pendulum. This method was commonly used as a means of counting time in early mechanical clocks.

In 1956, the second was defined in terms of the period of revolution of the Earth around the Sun for a particular epoch. Since it was already known that the Earth’s rotation on its axis was not a sufficiently uniform standard of measurement, the second became defined as: “The fraction 1/31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time.”

With the development of the atomic clock, it was decided that it was more practical and accurate to use it as a means to define a second, rather than the revolution of the Earth around the Sun. Using a common-view measurement method based on the received signals from radio stations, scientists were able to determine that a second of ephemeris time was 9,192,631,770 ± 20 cycles of the chosen cesium frequency. So in 1967 the Thirteenth General Conference on Weights and Measures defined the second of atomic time in the International System of Units as: “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom.”

Unfortunately for laypeople, scientists, with their constant need to be correct and absolutely accurate, found that the effects of gravitational forces cause the second to differ depending on the altitude at which it is measured. A uniform second was produced by correcting the output of each atomic clock to mean sea level, which lengthened the second by about 1 part in 10^10; this correction was applied at the beginning of 1977.

Today, there are atomic clocks that operate in several different frequency and optical regions. While state-of-the-art cesium fountain atomic clocks are the most accurate in widespread use, optical clocks have become increasingly competitive in their performance against their microwave counterparts. What seems to remain true is that as technology becomes more and more advanced, the need to measure time more accurately will continue to evolve. What remains true for most of us, however, is that we get to use easy ghetto math and simply know that there are 60 seconds in a minute, 60 minutes in an hour, and 24 hours in a day!
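Those layered divisions (base 60 within base 60, inside a 24-hour day) are easy to see in a short calculation. Below is a minimal Python sketch (the function names are mine, for illustration only) that decomposes a raw count of SI seconds into days, hours, minutes and seconds, and applies the same sexagesimal split Ptolemy used for degrees of arc:

```python
def split_time(total_seconds):
    """Decompose a count of SI seconds into days, hours, minutes and
    seconds, using the 24-hour day and base-60 subdivisions described above."""
    days, rem = divmod(total_seconds, 24 * 60 * 60)
    hours, rem = divmod(rem, 60 * 60)
    minutes, seconds = divmod(rem, 60)
    return days, hours, minutes, seconds

def split_angle(decimal_degrees):
    """Split decimal degrees into degrees, 'first minutes' (arcminutes)
    and 'second minutes' (arcseconds), as Ptolemy did."""
    degrees = int(decimal_degrees)
    frac_minutes = (decimal_degrees - degrees) * 60
    minutes = int(frac_minutes)
    seconds = (frac_minutes - minutes) * 60
    return degrees, minutes, seconds

print(split_time(100_000))   # (1, 3, 46, 40): 1 day, 3 h, 46 min, 40 s
print(split_angle(30.25))    # (30, 15, 0.0): 30 degrees, 15 arcminutes
```

The same divmod-by-60 step appears twice for time and twice for angles, which is exactly why the words “minute” and “second” ended up shared between the two systems.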
- Since 1972 to 2006 there have been 23 leap seconds added, ranging from one every 6 months to 1 every 7 years. - The International Earth Rotation and Reference Systems Service (IERS) is the organization which monitors the difference in the two timescales and calls for leap seconds to be inserted or removed when necessary. - Although it is not a standard defined by the International System of Units, the hour is a unit accepted for use with SI, represented by the symbol h. - In astronomy, the Julian year is a unit of time, defined as 365.25 days of 86400 SI seconds each. - It is though that the moon was used to calculate time as early as 10,000-28,000 BC. Lunar calendars were among the first to appear, either 12 or 13 lunar months (either 346 or 364 days). Lunisolar calendars often have a thirteenth month added to some years to make up for the difference between a full year (now known to be about 365.24 days) and a year of just twelve lunar months. The numbers twelve and thirteen came to feature prominently in many cultures, at least partly due to this relationship of months to years. |Share the Knowledge!|
One of the biggest advantages commercial brewers have over homebrewers is an ample and ready supply of yeast (Saccharomyces cerevisiae). They routinely “harvest” yeast from a recent batch and pitch it onto a new batch of beer. It’s common practice to do this multiple times and then reculture the yeast from a pure stock in order to minimize the possibility of contamination and mutations that can cause the properties of a yeast strain to “drift” and change the character of the beer. As a homebrewer, the easiest way to reuse yeast is to time your brewing schedule so that you are brewing a new batch at about the same time you are racking the beer from a previous batch. This can be either the transfer to a secondary fermenter or the racking for bottling or kegging. At that point, you can harvest the yeast and repitch it into the fresh wort. Furthermore, yeast sediment can be stored under beer or distilled water and refrigerated, to be revived anywhere from a day to a year later, depending on the storage technique and the health of the yeast itself. With reasonable sanitation, these methods will allow reuse of yeast at least several times before it needs to be discarded. If you can’t reuse your yeast in a timely fashion, there are a number of ways to store it, then grow it up to pitchable amounts later. It’s seldom worth the effort to reuse dry yeast, which is relatively inexpensive and convenient, and the packages can be stored in the refrigerator for a very long time. Commercially available “liquid” yeasts have a shorter storage life, but are still relatively cheap. Conversely, if you have cultured yeast from a bottle of your favorite beer or otherwise obtained a yeast strain that is not commercially available, knowing how to store and propagate this yeast can be very valuable. Long-term maintenance of your stock of yeast strains demands more stringent quality control and greater involvement in the process. If you want to continue to reuse yeast over an extended period and through repeated pitchings, you need to become more scientific about it. Current procedures for yeast culturing are adapted from the biomedical and microbiology fields, which ironically have their origins in brewing science. (Early microbiological pioneers Hansen and Pasteur studied beer and wine.) When using these techniques, sanitation is more critical than with ordinary brewing. It’s important to have a clean space in which to work, one that is relatively free of airborne contaminants. Commercial labs employ laminar flow hoods and partial vacuums. This is not necessary for the homebrewer — but, in general, avoid areas such as kitchens and basements that may have a high level of bacteria or humidity. Close nearby windows, especially during warm weather. A very clean bench or table surface washed and rinsed with sanitizer before being allowed to dry is desirable. A flame source for sterilizing is also a good idea. This can be a small alcohol lamp, or alternatively a butane lighter for a gas grill or fireplace. You can also use small propane bottles with fan-style burners. A spray bottle of alcohol or sanitizer is handy for quick tasks. Obviously, keep any alcohol sanitizer away from the flame. Some people wear surgical gloves, which may be overly compulsive, but at least wash your hands well with an antibacterial soap. When using yeast from long-term storage, the yeast population is relatively small to begin with and increases many-fold as it grows and multiplies.
Any other living microorganisms contaminating your culture — which can include wild yeast, molds and various bacteria — will multiply along with your yeast, and sometimes more quickly. It’s not unusual to pitch a total yeast population into your wort that is hundreds or thousands of times greater than that of the initial stock. You certainly want to minimize the presence of any organisms other than pure brewing yeast. Some of the required materials you likely already have or can improvise, but in other cases you will need laboratory equipment and supplies. (See the “materials” sidebar for a list of useful equipment.) If you’re lucky, you may have contacts at a university, medical or biotech lab. If not, there are several scientific suppliers that continue to sell to individuals; among them are Fisher Scientific and Cynmar. It’s a little more difficult since September 11, 2001, though, and you may be asked to explain the purpose of your order. Also check the yellow pages and Internet search engines for “laboratory equipment and supplies.” Another source for certain items is your local full-service pharmacy. A comment is in order about agar, one of the supplies you will need. This is available from scientific supply houses, but also at Asian grocery stores. It comes in small sticks or sheets. An alternative is unflavored gelatin, but gelatin begins to melt at a temperature of about 78 °F (25 °C), while agar will not do so below 122 °F (50 °C).

Going to the source

The first step in yeast culturing is to start with a relatively pure source of the yeast itself. For most homebrewers this is a vial, tube or smack pack of liquid yeast, but it may also be the yeast sediment from a bottle-conditioned beer or a container of yeast from your local brewpub. The objective is to “borrow” a small amount of this yeast for growth, storage and later use. Whether you are making a starter (recommended for larger batches and moderate to high gravity beers) a couple of days before a brewing session, or pitching the yeast directly from the package into your chilled wort, this is also the time to make a yeast culture. Save a small amount of the yeast and do the culturing very soon after pitching the rest into the starter or your batch of beer. At this point you have two options. You can culture the yeast on agar plates for refrigerated storage for a few months, or prepare the yeast for freezing and store it for a year or longer. Plates are also preferred if your yeast source is sediment from a previous batch or a bottle-conditioned beer, commercial or homebrew. This will allow you later to isolate, select and propagate from a single yeast colony, virtually ensuring that you have an uncontaminated form of the strain. For culturing on plates, you will have to prepare a growth medium. Start by heating 1 cup (a little less than 250 mL) of tap water in a Pyrex flask or saucepan on the burner. Dissolve one-quarter cup (20 g) of dried malt extract into the hot water and bring this wort to a boil for about 15 minutes (be careful about boilovers). Then turn down the heat and stir in one-half teaspoon of agar (or unflavored gelatin powder) until it is completely dissolved. Again bring to a boil, watching carefully so that it doesn’t boil over, for another 15 minutes. Remove the flask from the burner and allow the flask or pan to cool in the air rather than in a cold water bath. The mixture will thicken — but not solidify — as it cools below 122 °F (50 °C).
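As a rough check on the medium recipe above: 20 g of dried malt extract in about 250 mL of water works out to a starter-strength wort of roughly 1.030 specific gravity. The sketch below assumes the common rule of thumb that DME yields around 44 gravity points per pound per gallon (the exact figure varies by brand), and the function name is mine:

```python
def starter_gravity(dme_grams, water_liters, ppg=44):
    """Rough specific gravity of a DME starter wort.

    ppg is gravity points per pound per gallon for dry malt extract;
    ~44 is a common rule-of-thumb figure and varies by brand."""
    pounds = dme_grams / 453.592       # grams -> pounds
    gallons = water_liters / 3.78541   # liters -> US gallons
    points = ppg * pounds / gallons
    return 1.0 + points / 1000.0

# The plate-medium recipe above: ~20 g DME in ~0.25 L of water
print(round(starter_gravity(20, 0.25), 3))   # ~1.029
```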
Sterilize from three to six Petri dishes, vials or clean baby food jars and lids by steaming them in a pressure cooker for 10-12 minutes. As a less sanitized alternative, they can be immersed in boiling water for 30 minutes. Sterile, plastic Petri dishes are also available, although obviously these are single use items. When the Petri dishes (or other containers) are cool enough to touch, sterilize the mouth of the flask or lip of the saucepan with the flame source and pour the medium into each, filling it to about one-fourth of its capacity. Put the lids on the Petri dishes, or cover the container with plastic wrap, and let them cool a little longer, perhaps 30 minutes. Eventually the medium will solidify to the point where the color lightens somewhat and the plate can be tilted without running. To save time, the covered plates can be prepared ahead of time and stored in sanitized plastic bags. (Sterile Petri dishes come in plastic sleeves.) It’s best to store the plates upside down. Otherwise, condensation may form on the lids and drip into the agar. You can store poured plates in a cool, dry place for up to several weeks. If the medium turns hard and brittle it has been stored for too long and dried out. To be useful, it should remain somewhat soft and pliable.

The next step in culturing is to inoculate the agar plate with yeast. Sterilize the inoculation loop by heating it in a flame until it glows red. Then, cool the loop by dunking it in a shallow dish of alcohol. As an alternative, you can wipe it with a paper towel or cotton ball moistened in sanitizer or alcohol. Take a deep breath and draw the loop through the yeast sediment, collecting some of it on the surface. (You don’t need — and in fact don’t want — a visible amount of yeast on the loop. Just touch the yeast lightly and the loop will have enough yeast on it.) While holding the loop in one hand, remove the cover from one of the agar plates with the other hand. Quickly streak the plate by lightly drawing the loop across the agar surface. Quickly close the cover when you are done and once again turn the plate upside down. Resterilize the loop and repeat the process for however many plates you plan to streak. The purpose of inoculating multiple plates is to avoid problems with infection or failure of the yeast to grow on one or more of them. It also provides more than a single yeast source for later reculturing.

Keep the plates covered, upside down and in a somewhat warm (70–80 °F/21–27 °C) undisturbed location. Within several days, the yeast should multiply and grow. A milky layer will develop on the surface of the medium, and you may notice trails of small “dots,” which are individual yeast colonies. Contamination by molds, which can occur, will be obvious by the appearance of “fuzz” or “balls.” Discard any such plates. You now have successfully cultured the yeast on agar plates. Seal the covers or lids of the plates with electrical tape (in labs, they use the stretchy paraffin sealing film Parafilm), label them with masking tape and a permanent marker, and store them in a sealed plastic bag in the refrigerator. They will survive for several months or a little longer.

Baby, it’s cold inside

The other method of serious yeast storage is in the freezer. Merely freezing the yeast in water, beer or wort will rupture the cells and kill them. However, if glycerin is added to the yeast in the proper proportion, it will inhibit the formation of ice crystals and minimize damage during freezing.
When yeast is frozen in glycerin, a large amount of yeast is stored (relative to the amount present on the surface of a Petri dish). As a consequence, the potential for contamination is higher when yeast is stored this way. It is recommended to first prepare a plate from the frozen yeast if there have been more than a few repitchings since the last culture was performed. Careful sanitation of the work area and all utensils, tools and materials for freezing is just as important as when preparing agar plates.

The first step is to treat the yeast sediment. If you are using sediment from a previous batch or a bottle of commercial beer, it is a very good idea to wash the yeast. This is accomplished by stirring the yeast into boiled and cooled distilled water in a sterilized container, covering it with a sanitized lid or plastic wrap and letting the sediment settle before pouring off the liquid. In some cases it may be desirable to do this more than once. It is not necessary to wash the yeast from a new package.

Next prepare a 30% solution of glycerin and distilled water. Use a graduated cylinder or beaker to measure 250 mL (8.5 fl. oz.) of distilled water and 100 mL (3.5 fl. oz.) of glycerin (a quick arithmetic check of this mix appears at the end of this article). Stir until mixed well, then boil for about 10 minutes. Cover with sanitized plastic wrap and cool to room temperature. Pour the cooled glycerin/water solution into sterilized test tubes or small vials until they are about one-third full. Just as when culturing on agar plates, it’s best to use several tubes or vials as insurance against contamination or non-survival during storage. Then carefully add the yeast slurry, again about one-third of the total volume of each tube or vial, using a sterilized pipette or eyedropper. Screw the sterilized caps or lids on tightly, shake well to distribute the yeast and mark each one with a masking tape label. Once prepared in this manner, the yeast is ready to be frozen and stored.

The problem with most home freezers is that they are frost-free. A heater periodically warms the refrigerant lines to melt frost on the freezer walls. This has a minimal effect on frozen foods, but will greatly shorten the storage life of yeast. There are two methods of preventing this from occurring. The first is to find a way to disable the defrost cycle, which requires some knowledge of refrigeration and electrical expertise. The other method is to place the test tubes or vials of yeast inside a small, covered Styrofoam cooler between frozen packs of “blue ice.” Store the cooler in the freezer. This will greatly minimize the temperature changes that occur during the defrost cycle and prevent damage to the frozen yeast. Set the freezer to its lowest temperature setting; most home freezers can reach temperatures of about -4 °F (-20 °C). The frozen yeast can be stored in a home freezer for up to several years. This assumes there are no power outages that would allow it to thaw. In a laboratory freezer at -80 °C (-112 °F), frozen yeast has successfully been stored for decades.

Waking the dead

Of course, at some point in the future, you will want to revive the refrigerated or frozen yeast and use it again. For yeast that has been cultured and stored on agar plates, the procedure is to find a single colony of pure yeast and use it as the starting point, growing it up until you have a “pitchable” population for your brewing session. With frozen yeast, you may have a similar purpose, or you may wish to culture a plate from the frozen sample in order to ensure that it is pure.
If frozen, remove the test tube or vial from the freezer and first place it in the refrigerator, allowing three to five days for it to thaw. At that point, the thawed yeast is treated the same as a refrigerated agar plate: remove it from the refrigerator and keep it at room temperature overnight.

Now the process is one of increasing the population by making yeast starters while maintaining good sanitation throughout. If you have experience with making a starter, you should be familiar with the instructions for preparing sterile wort. A frozen tube or vial can be stepped up first to about 70 mL (2 oz.) of wort, then to about 500 mL (one pint) and finally to as much as a gallon (3.8 L), if desired. Allow each starter to incubate for about 48 hours in a somewhat warm (70–80 °F/21–27 °C), undisturbed location between steps.

Reviving yeast from agar plates requires an additional step. In that case, start with a single colony on the surface of the plate. Select a round and relatively uniform "bump" or "dot" that is physically isolated from the others. Prepare a sterilized test tube with about 10 mL (0.33 fl. oz.) of sterile wort. Use the sterilized inoculation loop to gather the single yeast colony from the plate and immerse it in the starter wort, swirling it until the yeast is mixed well. Place the cap loosely on the tube, but do not seal it. Set it in a warm location for about 48 hours. You may see bubbles during that time, but the only definitive indication of activity will be yeast sediment at the bottom of the tube. This is used as the source for successive starters in the same manner as frozen yeast.

Yeast culturing is somewhat involved and may make you seem like a "lab rat." But it is also the key to a ready supply of yeast at lower cost and greater flexibility. If you are a conscientious, detail-oriented brewer, you can become an experienced and committed yeast rancher and successfully maintain your own relatively pure source of this most valuable of brewing ingredients.
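A closing note on the step-up schedule described under "Waking the dead": each starter above is roughly five to seven times the volume of the one before it. If you want to plan steps toward a different final volume, here is a minimal sketch; the fixed step factor is a common rule of thumb, not a figure from the article:

```python
def starter_steps(start_ml: float, final_ml: float, factor: float = 7.0):
    """List successive starter volumes (mL), multiplying by `factor`
    each step until the target volume is reached."""
    steps, vol = [], start_ml
    while vol < final_ml:
        vol = min(vol * factor, final_ml)
        steps.append(vol)
    return steps

# From a 10 mL colony tube toward a one-gallon (3800 mL) starter:
print(starter_steps(10, 3800))  # [70.0, 490.0, 3430.0, 3800]
# Close to the 70 mL / 500 mL / 1 gallon steps described above.
```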
Ever heard of methylation? It's okay if you never have…most doctors haven't either! It is a big subject and connects the dots between your genes, the environment, and why some people get sick with chronic diseases and others don't. Methylation controls how your body removes toxins, grows and repairs, and functions properly. Methylation affects essentially every cell in your body from the moment you are conceived until the moment of death.

This process has to do with genes – how they turn on and what processes they control. Current research is now pointing to certain genes as the underlying cause of a wide variety of chronic diseases such as heart disease, stroke, cancer, depression and many, many more. Most of these genes have a connection to the methylation cycle, so understanding how to treat patients with these genes becomes an important tool for creating health naturally. Dr. Rostenberg is one of a handful of natural doctors in the country who specialize in this important area of study.

The philosophy behind natural health is to treat the underlying cause of disease – not cover it up with band-aids. By treating methylation problems, Dr. Rostenberg is helping to not only fix complex health problems, but also to prevent them from ever starting. Keep reading and you will see that the topic of methylation is far from boring. This pathway and the genes associated with it are at the very heart of many of the modern health challenges facing our society. In order to be truly healthy in the 21st century, we must have a balance between our genes and our environment. Only by working holistically through diet and alternative health care can this balance ever be achieved.

An Overview of Methylation

While it can be a very confusing subject, one way to view the methylation cycle is to see it like the flow of traffic through a city. Instead of cars on paved roads we have molecules moving along biochemical pathways – but the same rules apply. Just as a city can have excellent streets with free-flowing traffic, so too can a body have balanced biochemical pathways with high levels of nutrients available.

Taking this example further, we can see that the same issues which would decrease the flow of traffic in a city would cause problems for our own biochemistry. If a city had something blocking the main roads, then drivers would spend more time just trying to get from point A to point B. Important deliveries would get behind schedule, and pollution would increase as all those cars sat bumper-to-bumper waiting. It's the same inside the body. If a person's body has slowed methylation pathways, then it cannot run important cellular processes at full speed. When the supply of methyl groups is somehow reduced, either by internal problems or dietary deficiency, toxic molecules build up and dysfunction soon results. Anything that slows the delivery or internal production of methyl groups will cause the body to function at a lower level. And who wants that?

Some readers may recognize that methylation and the methyl cycle are also referred to in some studies as "one-carbon metabolism," and this is no accident. The word methylation is named according to the rules of organic chemistry, where different compounds are named based on how many carbon atoms are attached. The prefix methyl- means one carbon, ethyl- means two carbons, propyl- means three, butyl- means four, etc. So when we talk about methylation we are talking about a one-carbon group with three hydrogens attached to it – CH3.
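Since the naming scheme is just a count of carbons, it can be written down as a tiny lookup table. A minimal, purely illustrative sketch in Python:

```python
# Carbon counts behind the standard organic-chemistry prefixes.
ALKYL_CARBONS = {
    "methyl": 1,  # CH3 -- the one-carbon group behind "methylation"
    "ethyl": 2,
    "propyl": 3,
    "butyl": 4,
}

for prefix, carbons in ALKYL_CARBONS.items():
    print(f"{prefix}- = {carbons} carbon(s)")
```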
This simple molecule is like the currency that runs the economy of our body. Without money, or methyl groups, in circulation the commerce of the body can slow or come to a halt altogether, causing widespread problems. You might be wondering just how important the methylation cycle is. These methyl groups are absolutely critical for life inside the human body. As we examine the various systems of the body, it will become clear that methylation is a key component of healthy living at any age. The methylation pathway must deliver methyl groups to every cell in the body, otherwise our health will suffer. In fact, methylation is so important that it plays a role in everything from miscarriage, Down's Syndrome and autism to cardiovascular disease, stroke, depression, cancer and more.

MTHFR – A Very Common Problem

Methylation problems are not well known. But that doesn't mean they aren't common. For example, approximately 44% of all North American Caucasians have one copy of the gene variant MTHFR C677T, while 47% of all Caucasians have one copy of the variant MTHFR A1298C.1 What this means is that about half of all European-Americans have one MTHFR C677T mutation, and if you happen to be in the lucky half that doesn't have a C677T mutation, you still have roughly a 50% chance of having the A1298C mutation. Just looking at these two genes by themselves, we can see that on the order of 75% of the target population would be affected. This doesn't take into account all the other methylation genes besides MTHFR, like COMT, PEMT, MTR, MTRR, etc. I think it's safe to say that methylation problems are a bigger issue than the standard healthcare model has recognized.

Now that we know the MTHFR variants are common, we need to look at how they can slow down human biochemistry. Both C677T and A1298C are polymorphisms that change the shape of the enzyme MTHFR. The enzyme depends on a specific shape in order to perform a specific function. If the shape of an enzyme changes, it may slow down or speed up, which can alter its function with grave consequences for human health. A single copy of an MTHFR polymorphism (heterozygous) is often less damaging than inheriting both copies (homozygous). For example, if a person is born with one copy of C677T (heterozygous), the enzyme is slowed down roughly 30-40%, while inheriting both copies (homozygous) reduces enzyme speed 60-70%.2 Inheriting a single copy of both MTHFR C677T and A1298C (compound heterozygous) reduces the speed of the MTHFR enzyme around 60%.3 We don't all have to be genetic scientists to know that a 30% reduction in biochemical pathways is better than a 60 or 70% reduction. This reduction in enzyme speed means homozygous or compound heterozygous carriers are more sensitive to toxins, susceptible to digestive problems, and likely to experience perplexing health challenges. Does that sound like anyone you know?

When scientists use this information about MTHFR variants and then look at the prevalence of disorders such as autism, the evidence is even more alarming. Since methylation pathways are involved in the removal of toxins and the growth and development of the brain and nervous system, genes such as MTHFR have a big effect on autism.
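Before going further, it is worth checking the prevalence arithmetic above. The ~75% figure follows from treating the two variants as roughly independent coin flips; that independence assumption belongs to this sketch, not to the article's cited sources. A quick check in Python:

```python
# P(carrying at least one of two variants), assuming independent inheritance.
def at_least_one(p_a: float, p_b: float) -> float:
    return 1 - (1 - p_a) * (1 - p_b)

print(at_least_one(0.50, 0.50))  # 0.75  -- the article's rounded 50/50 framing
print(at_least_one(0.44, 0.47))  # ~0.70 -- using the cited 44% / 47% figures
```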
It has been discovered that 98% of all children with autism have at least one polymorphism of the MTHFR gene, a much higher ratio than in the general population.1 Glutathione is the body's main antioxidant, and it is necessary to protect the body from internal and external toxins that may damage cells and slow development. It is known that autistic children have less glutathione in their cells than healthy children of the same age.4 Lower glutathione levels would hurt autistic children because they would be less able to detoxify, and high inflammation would interfere with the development of their brain, their personality and ultimately their social interactions.

Even though autism isn't recognized until early childhood, the problem often begins before birth. Mothers are like many people in our society – they are living in a toxic world that exposes them to high stress, terrible food, strong hormones and dangerous toxins. It makes sense, then, that MTHFR-related genes make mothers more likely to give birth to autistic children. For example, mothers of autistic children often carry a gene, RFC1, which slows uptake of folate from the small intestine, and often have higher levels of homocysteine and other markers of poor methylation status.5 It should come as no surprise that slower uptake of folate makes it more likely the mother will become deficient and unable to provide enough for the baby. That makes it more likely she will be methylation deficient and have a child that doesn't develop properly, which, sadly, is a phenomenon that is on the rise.

But it's not all bad news. The same research that is connecting the dots between methylation genes and autism is highlighting the way for us to prevent autism altogether. It makes more sense, both morally and financially, to prevent a problem from happening than to try and clean it up after it has already happened. Interestingly, the science is in complete agreement with the idea that autism can be prevented with nutrition. Giving mothers adequate levels of folate before and during early pregnancy can reduce the risk of having a child with autism, especially if the mother has methylation-related gene polymorphisms.6

Methylation gene variants are very common. Up to 75% of the population has at least one copy of a gene variant that can slow down this important part of human biochemistry. When these pathways slow down, the methyl account dwindles and may actually be depleted. If this happens, then the methyl groups in the body cannot keep up with all the body's needs. This creates inflammation, lowers detoxification, and may even lead to developmental problems like autism. The solution is found in the problem itself: methylation-related problems like these are both treated and prevented by the right combination of methyl nutrients, especially folate.

References

1. Boris M, Goldblatt A, Galanko J, et al. Association of MTHFR gene variants with autism. Journal of American Physicians and Surgeons. 2004;9:106–8.
2. Gokcen C, Kocak N, Pekgor A. Methylenetetrahydrofolate reductase gene polymorphisms in children with attention deficit hyperactivity disorder. Int J Med Sci. 2011;8(7):523–528.
3. Weisberg IS, Jacques PF, Selhub J, et al. The 1298A->C polymorphism in methylenetetrahydrofolate reductase (MTHFR): in vitro expression and association with homocysteine. Atherosclerosis. 2001 Jun;156(2):409–15.
4. James SJ, Rose S, Melnyk S, et al. Cellular and mitochondrial glutathione redox imbalance in lymphoblastoid cells derived from children with autism. FASEB J. 2009 Aug;23(8):2374–83. Epub 2009 Mar 23.
5. James SJ, Melnyk S, Jernigan S, et al. A functional polymorphism in the reduced folate carrier gene and DNA hypomethylation in mothers of children with autism. Am J Med Genet B Neuropsychiatr Genet. 2010 Sep;153B(6):1209–20.
6. Schmidt RJ, Hansen RL, Hartiala J, et al. Prenatal vitamins, one-carbon metabolism gene variants, and risk for autism. Epidemiology. 2011 Jul;22(4):476–85.
Xiangtan University, Online Postgraduate (PhD Doctors) English Lesson Plans, Lesson Material and Ideas...

Speaking Lesson: Business Communications and Telephone

The purpose of this lesson is to increase the vocabulary of students using a business communication theme. The lesson also aims to increase the students' knowledge of how British business works, through discussion about the methods of communication available for business.

Business Communication: Many businesses have separate departments or divisions of the company; in order for the business to function correctly, there has to be effective communication between these departments. The internal communication of a company can be in many forms (spoken, written etc.), but whatever method is used, the employees must be able to communicate with each other effectively. The external communication of a business must also be efficient: communicating with customers and suppliers must be a top priority for any company. The main methods of communication for any business are as follows:

- Written communication (internal)
- Letters to suppliers / customers and the annual report
- Graphs / charts
- Intranet / extranet
- Word processors (Office 2000 etc.)
- The telephone, fax machines and email (discussed below)

The telephone is the most common method of business communication. External telephone calls are used by a business to contact customers, suppliers etc.; a business could not function without a telephone. A business may also have an internal telephone system so that employees can communicate with each other; this system is known as an intercom system or telephone network. The advantages of the telephone are that it is faster and more flexible than letters or memos, and you can be sure that the message gets to the correct person. However, the telephone does not keep a permanent record of the communication, so sometimes it may be necessary to use letters. (For more information on Telephone English see the bottom of this page.)

Fax machines: A fax machine requires a telephone line in order to be used for communication. It uses the telephone line to transmit pictures from one fax machine to another fax machine or a computer. Like a phone call, the communication is instant. Fax can be used to transmit graphs, charts, diagrams etc. The advantage is that a fax can be kept as a permanent record.

Email: The first e-mail message was sent in 1971. In the beginning, and even today, e-mail messages tend to be short pieces of text, although the ability to add attachments now makes many e-mail messages quite long. Email has many advantages for business, including the following:

- Email is instant
- You can ask for a read receipt
- Files can be sent or received as attachments (pictures, sound etc.)
- Messages can be kept confidential (using encryption)
- It can be used to send a message to any computer anywhere in the world
- It can save time spent on writing letters etc.
- Email can be accessed from any machine (at home or work)
- Email is very cheap

There is no doubt that the Internet has changed the way businesses communicate. For many companies e-mail has virtually replaced traditional letters and even telephone calls as the choice for correspondence. Every day, billions of e-mail messages are sent out. E-mail has been the most rapidly adopted form of communication ever known. In less than two decades, it has gone from obscurity to mainstream dominance. The newest development is Instant Messaging, a form of email; however, instant messaging is interactive, so you can chat in live time.
Most of the popular instant-messaging programs provide a variety of features:

- Instant messages - Send notes back and forth with a friend who is online
- Chat - Create your own custom chat room with friends or co-workers
- Web links - Share links to your favorite Web sites
- Images - Look at an image stored on your friend's computer
- Sounds - Play sounds for your friends
- Files - Share files by sending them directly to your friends
- Talk - Use the Internet instead of a phone to actually talk with friends
- Streaming content - Real-time or near-real-time stock quotes and news

Businesses are increasingly introducing customised versions of Instant Messaging programs to allow employees to communicate with one another.

Telephone English: There are a number of phrases and idioms that are only used when telephoning. Here is an example phone call:

Secretary: Hello, Xiangtan Normal University, how can I help you?
Caller: This is Paul Sparks. Can I speak to Mr Xiang?
Secretary: Certainly, hold on a minute, I'll put you through...
Mr Xiang's office: Mr Xiang's office, how can I help?
Caller: This is Paul Sparks calling, is Mr Xiang in?
Mr Xiang's office: I'm afraid he's out at the moment. Can I take a message?
Caller: Yes, could you ask him to call me? I need to talk to him.
Mr Xiang's office: Does Mr Xiang have your number?
Caller: Yes, he does.
Mr Xiang's office: Thank you Mr Sparks, I'll make sure Mr Xiang gets your message.
Caller: Thanks, bye.
Mr Xiang's office: Bye.

As you can see, the language is rather informal and there are some important differences to everyday English. See below for key language and phrases used in telephone English:

Introducing yourself - "This is Paul" or "Paul speaking"
Asking who is on the telephone - "Excuse me, who is this?" or "Can I ask who is calling, please?"
Asking for Someone - "Can I have extension 321?" or "Could I speak to...?" (Can I - more informal / May I - more formal)
Connecting Someone - "I'll put you through" (put through - phrasal verb meaning 'connect') or "Can you hold the line?"
How to reply when someone is not available - "I'm afraid he is not available at the moment" or "He isn't in at the moment"
Taking a Message - "Can I take a message?" or "Could I tell him who is calling?" or "Would you like to leave a message?"

Exercises for Practicing Speaking on the Telephone:

Real life situations - Businesses are always interested in telling you about their products. Find a product you are interested in and research it over the telephone. You can:

- call a store to find out the prices and specifications
- call a company representative to find out details on how the product works
- call a consumer agency to find out if the product has any defects
- call customer service to find out about replacement parts, etc.

Leaving a Message: Sometimes, there may not be anyone to answer the telephone and you will need to leave a message. Follow this outline to make sure that the person who should receive your message has all the important information:

Introduce yourself - "Hello, this is Paul." OR "Hello, my name is ...."
State the time of day and your reason for calling - "It's ten in the morning. I'm calling to let you know that ....."
Make a request - "Could you ring me back?"
Leave your telephone number - "My number is ...." OR "You can reach me at ...."
Finish - "Thanks a lot, bye." OR "I'll talk to you later, bye."

Here's an example:

Telephone: (Ring... Ring...) Hello, this is Paul. I'm afraid I'm not in at the moment. Please leave a message after the beep..... (beep)
Mr Xiang: Hello Paul, this is Mr Xiang. It's about noon and I'm calling to see if you are busy this afternoon. Could you call me back? You can reach me at 123-45467 until five this afternoon.
I'll talk to you later, bye.

As you can see, leaving a message is pretty simple. You only need to make sure that you have stated all the most important information: your name, the time, the reason for calling, and your telephone number.

Role Playing using the Telephone:

Student A: Choose a city in your country. You are going to travel to this city for a business meeting over the next weekend. Telephone a travel agency and reserve the following:

- Round-trip flight: prices and departure times
- Hotel room for two nights
- Restaurant recommendation

Student B: You work in a travel agency. Listen to student A and offer him/her the following:

- Round-trip flight: Air JW $450 Coach, $790 First Class
- Hotel room for two nights: Hotel City $120 a night in the downtown area, Hotel Relax $110 a night near the airport
- Restaurant Recommendation: Chez Marceau - downtown - average price $70 a person

Student A: You need to purchase six new computers for your office. Call JA's Computer World and ask for the following information:

- Current special offers on computers
- Computer configuration (RAM, Hard Drive, CPU)
- Possibility of discount for an order of six computers

Student B: You work at JA's Computer World. Answer student A's questions using the following information:

- Two special offers: Multimedia Monster - with latest Pentium CPU, 256 RAM, 40 GB Hard Drive, Monitor included - $2,500 AND Office Taskmaster - cheaper CPU, 64 RAM, 10 GB Hard Drive, Monitor not included - $1,200
- 1-year guarantee on all computers
- Discount of 5% for orders of more than five computers

Leaving a Message:

Student A: You want to speak to Ms Braun about your account with her company, W&W. If Ms Braun isn't in the office, leave the following message:

- Telephone number: 347-8910 (or use your own)
- Calling about changing conditions of your contract with W&W
- You can be reached until 5 o'clock at the above number
- If Ms Braun calls after 5 o'clock, she should call 458-2416

Student B: You are a receptionist at W&W. Student A would like to speak to Ms Braun, but she is out of the office. Take a message and make sure you get the following information:

- Name and telephone number - ask student A to spell the surname
- Message student A would like to leave for Ms Braun
- How late Ms Braun can call student A at the given telephone number
The Pennsylvanian flora from the Italian Carnic Alps, stored in the Museo Friulano di Storia Naturale in Udine, Italy, was revised taxonomically. Plant fossils come from the Bombaso Formation and Pramollo (Auernig) Group (Late Pennsylvanian) that correspond to the lower part of the paralic to shallow marine Carboniferous–Permian Pramollo (or Pramollo-Nasfeld) Basin succession. Most of the ~2500 studied plant fossils, and also the highest number of species, come from the Bombaso, Meledis and Pizzul formations, whereas the middle and upper part of the Pramollo Group in the Italian Carnic Alps yielded only a few plant remains, in contrast to the successions on the Austrian side. In total 73 plant taxa have been identified, representing about 59 biological species. The ferns, especially marattialean ferns, are the most diverse plant group (33 species in total), followed by pteridosperms (15 species). The stratigraphic ranges of the Bombaso Formation and the Pramollo Group have been re-evaluated based on the presence of stratigraphically important species from both the Italian and Austrian parts of the Carnic Alps. The studied interval ranges from the middle Barruelian to the middle Stephanian B sensu Wagner and Álvarez-Vázquez (2010a) and spans about 3.5 Ma. The diversity of the Carnic Alps flora is comparable with well-documented contemporaneous floras in NW Spain, the French Massif Central and the Czech Republic. Floral richness and diversity, together with intercalations of plant-rich horizons with fossiliferous marine limestone bands, make the Carnic Alps a potential candidate for a stratigraphically important reference section for non-marine to marine correlations.

Stoneworts (Characeae) have never been the subject of a systematic survey in South Tyrol. While there was a certain interest in this group of plants in the 19th century, further study failed to take place for almost a whole century. Since Characeae now play an important role in nature conservation, due in part to the Fauna-Flora-Habitat Directive of the European Union, it was necessary to compile an inventory of this plant group in South Tyrol. In this work we present a checklist of all Characeae species hitherto known from South Tyrol, discussing each species and its distribution.

The ninth article in the series again presents taxa that are new to the flora of South Tyrol or whose status has changed since the publication of the catalogue of the vascular plants in 2006. Due to the increase in the number of members of the "Flora of South Tyrol" working group, a comparatively large number of new records has been obtained in the last few years. Among the new finds are the adventitious and most likely established species Cotoneaster dielsianus, Elodea nutallii, Erigeron bonariensis, Oenothera adriatica, Oe. deflexa, Oe. cf. latipetala, Oe. oakesiana, Oe. royfraseri, Oe. stucchii, Verbascum sinuatum, the locally established cultural relics Cistus albidus and C. laurifolius, as well as the casual garden refugees Allium tuberosum, Aloë maculata, Carex muskingumensis, Chaenostoma cordatum, Eranthis hyemalis and Hyacinthoides non-scripta. Amsinckia menziesii, Ornithopus sativus and Sesamum indicum derived from seed mixtures or their impurities and are also unstable, while the mode of introduction appears unclear in the case of Scrophularia scopolii.
The casuals Dracocephalum moldavica and Plantago coronopus have already been historically proven. The status of Sisymbrium austriacum and Delosperma cooperi, also classified as adventitious, and of Juncus capitatus is unclear for the time being. Among the new finds to be classified as native are Sorbus austriaca and Ranunculus peltatus, the latter recently being proven to have historically occurred in South Tyrol. After many decades, the indigenous or archaeophytic species Calamagrostis canescens, Centunculus minimus, Lathyrus aphaca, Orobanche minor, Papaver argemone, Plantago holosteum, Ranunculus sardous, Rorippa amphibia, Rumex aquaticus, R. pulcher and Scirpoides holoschoenus were found and reconfirmed, respectively. New occurrences of Crepis rhaetica, Plantago atrata, Potentilla multifida, Saxifraga cuneifolia and Trichophorum pumilum have been discovered, some of them far outside the previously known South Tyrolean distribution area.

Red lists evaluate the short-term extinction risk of given taxa, a very important piece of information for conservation. The IUCN Red List Categories and Criteria represent a widely recognized and highly objective procedure to evaluate extinction risk at both global and sub-global levels. In this work, we assessed the extinction risk of birds breeding in South Tyrol, an inner Alpine area in Italy, based on the IUCN guidelines for regional assessments. Out of 143 evaluated species, 59 (41%) were classified as Least Concern (LC), 10 (7%) as Near Threatened (NT), 25 (17%) as Vulnerable (VU), 16 (11%) as Endangered (EN), 14 (10%) as Critically Endangered (CR) and 2 (1%) as Regionally Extinct (RE), while for 17 species (12%) the data were not sufficient to perform the assessment (Data Deficient – DD). In many cases, our local assessments were consistent with the species' conservation status at larger scales. We strongly encourage wider, long-term and properly designed local bird monitoring to improve the information available for conservation.

The genus Aeropedellus (Hebard 1935) currently comprises 22 nominal species, all of them typical elements of the Holarctic (Orthoptera Species File, accession date 12th November 2020). Most of these 22 species occur in the Asian part of the Palearctic (20 species), while only two species are native to the Nearctic. The region harboring the most Aeropedellus species worldwide is Northern China and Mongolia (15 species). Only two species, Aeropedellus variegatus (Fischer von Waldheim, 1846) and Ae. volgensis (Predtechenskii, 1928), occur in Europe. While the latter is a xerophilic endemic of the steppe grasslands of the lower Volga basin (Bey-Bienko & Mishchenko 1951), Ae. variegatus has the widest distribution of all Palearctic Aeropedellus species. As such, Ae. variegatus occurs from Northeastern Russia to Western Europe (Ebner 1951). Ebner (1951) critically evaluated the distribution of Ae. variegatus and found that the species occupies a more diverse set of habitats in its northern distribution than would be expected for a purely arcto-boreal species. Given this, he concluded that the attribute "arcto-boreal distribution" largely oversimplifies the species' complex ecology and distribution in Asia, and he emphasized that Ae. variegatus has very strong ties to the xeric steppes of Asia. The species' European distribution, on the other hand, reflects a classic arctic-alpine disjunction pattern (Schmitt et al. 2010).
Biodiversity Day 2019 in Altprags (municipality of Prags/Braies, South Tyrol, Italy)

The 20th South Tyrol Biodiversity Day took place in Altprags in the municipality of Braies in the Puster Valley and yielded a total of 884 identified taxa, four of which are new for South Tyrol.

The freshwater jellyfish Craspedacusta sowerbii Lankester 1880 is a cryptic, cosmopolitan invasive species which occurs on all continents except Antarctica. Recent molecular studies suggest the existence of at least three very different genetic lineages of Craspedacusta: the "sowerbii", the "kiatingi" and the "sinensis" lineages. We report the presence of both medusae and polyps of this alien taxon in the Large Lake of Monticolo / Montiggl, a meso-eutrophic natural lake in the Province of Bolzano / Bozen in Northern Italy. Molecular analyses of mitochondrial 16S sequences showed that this population belongs to a different lineage than that recently described for Sicily (Southern Italy); there are therefore two different genetic lineages of C. sowerbii in Italy. In the Large Lake of Monticolo / Montiggl, medusae were observed in six consecutive summers (2015–2020), from July to September. All the examined medusae were male. Stomach content analyses showed that zooplanktonic copepods and cladocerans in the size range of 0.3 to 0.8 mm were the preferred prey of the medusae. Polyps of C. sowerbii were recorded in the lake on the zebra mussel Dreissena polymorpha in shallow water and on the underside of artificial substrates. Examining zebra mussels would therefore be a simple method to check for the presence of the polyp stage of C. sowerbii in various aquatic environments.
The traditions of wedding celebration in Russia are incomparable with those of any other country. A wedding in Russia always resembled a real theatrical spectacle, each action of which was filled with a certain meaning. Any deviation from the accepted order was considered a bad omen. At all times, the wedding ceremony in Russia was divided into several parts. Each stage required not only certain words and actions, but also other mandatory attributes – clothes, gifts, decoration of premises or vehicles. The whole process took from one week to three months. This article will tell you about the main episodes of weddings in Russia: matchmaking, hand-making, collusion, the bachelorette party, the bath, the morning before the wedding, paying the ransom, the wedding, the crowning and the wedding tables.

Wedding Clothes in Russia: How It Was.

An important role in the Russian wedding was given to the clothes of the participants of the ceremonies. The main colors were red and white: red symbolized male power and riches, and white – women's purity, innocence and beauty. Woven items were decorated with fancy embroidery with symbolic patterns. By the way, only very rich people could afford red clothing in Ancient Rome and Medieval Europe, where the dye was extracted from Mediterranean sea snails and was expensive. The Russians made red dye from carmine (a substance extracted from cochineal insects). Therefore, even a poor Russian bride could afford a chic outfit of a beautiful, dark red color.

Russian Wedding Suit of the Groom.

The main element of a man's wedding suit was a red shirt. In the cold season, it could be replaced by a caftan of the same color. Masters often used thin and elegant linen fabric for sewing the suit. The groom's shirt was also decorated with embroidery, but in smaller quantities than the bride's. Rich people wore fur coats in winter. The groom often wore black pants and boots; the lower part of a man's suit didn't really matter. The groom's headdress was necessarily a hat, regardless of the season. Furs were always expensive and were a sign of wealth, so the groom could wear a fur hat decorated with velvet or pearls even in summer. Ordinary people wore hats made of felt.

Russian Wedding Outfit of the Bride.

The Russian bride wore a shirt made of homespun cloth under the main dress. There was no underwear in those days; its functions were performed by this part of the wardrobe. The bride began to decorate and embroider her outfits even before the date of the wedding was determined. Red and yellow threads were often used. A Russian wedding sarafan (a dress with straps, without sleeves) was worn over the shirt. The color was almost always red, in rare cases white or black with an abundance of colorful embroidery. An apron was worn over the sarafan. It served as a kind of "business card" of the bride, and girls spent years decorating it with embroidery. The whole costume was fastened with one or more belts. A Russian woman put on sandals, bast shoes or woolen felt boots, depending on the season. Closer to the beginning of the twentieth century, leather boots were often used. The headdress of the bride is worth noting separately. In almost all regions of Russia, women wore kokoshniks; only the shape or decorative elements could differ. According to tradition, the bride had to remove the kokoshnik only in front of her future husband, at the wedding ceremony. The priest placed crowns on the bowed heads of the newlyweds and began the ceremony.

Matchmaking. Combat reconnaissance.
Every Russian wedding started with the matchmaking. Matchmakers came to the bride's parents and agreed on the upcoming wedding. If the parties made a joint decision, the groom's parents came to the bride's house. They brought a collusive cake – a kind of parting word to the future bride. The bride show ended with hand-making: the fathers of the bride and groom put the hands of the newlyweds into each other's and struck them with a mitten. During this ritual the parents sometimes beat their hands on the cake or broke the cake in half. After that, the wedding could not be canceled.

Then came the period of preparing the gifts. The bride usually sewed or knitted some clothes for the groom: a scarf and gloves, underwear or even the entire wedding suit. A "chest" (a dowry) was also prepared for the celebration. Towels, dresses and bed linen were sewn and decorated with embroidery by the bride herself with close relatives.

Traditions of the Russian Bachelorette Party.

The Russian bachelorette party was arranged a few days before the wedding. On this day a woman said goodbye to youth and parental care. The bride gathered her friends, and the girls sang traditional songs. At the bachelorette party they also sang "thanks" to the bride's parents. The culmination of the bachelorette party was the ritual of losing beauty (the "will"). Singing a long song, the friends took off the bride's headscarf, worn during the conspiracy, and unwound the girl's braid, taking out the "will" – a ribbon that symbolized the girl's beauty and freedom. As a rule, the "will" was passed on to the bride's younger sister, or if she had none, to an unmarried girlfriend. After this ritual, the bride, surrounded by her friends, went to the heated bathhouse.

Wedding Divination. Who is the Master of the House?

Before the wedding, the bride, the groom and their relatives told fortunes to find out how the newlyweds' life together would turn out. Some wedding ceremonies in Russia helped to create a psychological portrait of the chosen one better than a horoscope. Relatives of the bride brought a chicken decorated with ribbons to the groom's house on the morning of the wedding day. It ran under the table and ate millet. If the hen clucked loudly and flapped its wings, the groom should be prepared: his wife would be grumpy. With the help of loaves, they "decided" who would be the master of the house: whichever matchmaker – the bride's or the groom's – raised the loaf higher, that side would dominate the family.

Wedding Train and Paying the Ransom.

Someone from the groom's side (a brother or friend) came to the bride on the morning of the wedding day. He brought a gift with him and left with a gift in return. Then the groom appeared on the doorstep, having traveled to the bride on the wedding "train." The first horse in the team was the most elegant – decorated with ribbons, bells and flowers. Relatives of the bride tried to block the path of the procession and "sold the road"; the groom paid them off with liquor or gifts. This tradition is called the "ransom." Then the girlfriends did not let the groom into the house to the bride until he gave them gifts or performed certain tasks (contests, games). Once the groom had given the money and completed all the tasks, he was allowed in. On the wedding day, a special ceremony awaited the bride: her hair was combed, and she changed her headdress for a new, married one. The matchmaker covered her from the evil eye with a veil, braided two braids and laid them in a bun or "horns."
On top of the hairstyle, a headdress was worn, denoting her new social status.

The wedding in Russia consisted of two parts: the betrothal and the crowning. The crowning is the ceremony of marriage according to the rite of the Orthodox Church, and the ritual is the same nowadays. During the crowning, the chants of the church choir sound, the priest blesses the common cup, and then the young people taste the wine diluted with water from it three times. The holy veil and crowns are placed on the heads of the bride and groom, the priest joins their hands and circles them three times around the lectern, and then the young people exchange wedding rings. A marriage sealed by the Church is considered indissoluble in Russia.

Arriving from the church at the groom's house, the guests took their places at the wedding tables. Relatives were seated in two rows (on one side men, on the other women). Food was brought to the tables, women sang songs of praise, and after the celebration the young people were escorted to the bedroom.

Wedding Traditions in the USSR.

The Soviet period created another wedding tradition – the rite of visiting attractions. After the registration of marriage at the registry office, the bride and groom, in the company of guests, go for a ride in cars. Stops are made and ritual actions are performed at certain "special" points of the route (usually local attractions). Usually only the young people participate in this activity; older guests meet them at the wedding table, where they arrive after the ride.

Wedding Traditions in Russia Today.

Today, some elements of the traditional wedding ceremony (matchmaking, the ransom) are preserved, but they are played out and perceived as wedding entertainment. Modern weddings are registered in the department of public services (known as ZAGS) by civil servants in a hall, to solemn music (Mendelssohn's march). The wedding is preceded by a betrothal, when those who wish to marry announce their intention, after which they are considered the bride and groom. Today the wedding train consists of pre-hired vehicles, decorated with ribbons and balloons bearing the image of crossed rings (a wedding symbol in Russia).

After the marriage registration ceremony, the bride and groom are considered newlyweds. They accept congratulations, drink champagne and sometimes break the glasses "for luck." When leaving the place of the ceremony, the newlyweds throw coins under their feet and scatter rice, flower petals or something else that, according to beliefs, brings happiness or gives strength to the union of the young people. The young wife usually throws the bride's bouquet over her shoulder, turning her back to the guests. After the ceremony, the newlyweds arrange a photo session for the wedding album and then arrive at a pre-booked cafe or restaurant for the wedding dinner. During the wedding dinner, the newlyweds sit at the head of the table, and the feast is accompanied by periodic exclamations of "Gorko!" ("Bitter!"), urging the young couple to kiss.

A special part of a Russian wedding is the presentation of gifts. Guests look for unique and prestigious wedding gifts; such gifts are often porcelain sets produced by the best porcelain factories in Russia: the Gzhel, Imperial (Lomonosov) and Dulevo porcelain factories. A wedding is the first family holiday. Today, there are no specific rules for celebrating this day.
Depending on their preferences, circumstances, desires, social status and financial capabilities, each couple chooses the option that suits them best. However, many traditions of wedding celebrations have been preserved to this day. And in no other country is a wedding celebrated as boldly and brightly as in Russia.
I have seen pictures of strawberries grown in gutters. I think people are growing strawberries in gutters to keep them off the ground so the rain and soil don't rot them. They are cleaner and look beautiful. Do you have any information on this method? I want to transplant my strawberries into this system. They don't do as well when they are on the ground. Also, the wood of my old raised beds has rotted, and I need to move them soon. Plus, I have the strawberries with the runners. If I do it in an "A" structure, I would cut the runners off, put one gutter on the top of the "A," two gutters down the sides of the "A," and so forth. Have you seen this done? I would really like to try it this summer. How would I deal with the plants through the winter? Take the structure apart and store the plants in my basement? I would want to save them somehow for the next year. Thank you for any advice or articles you can send me on the subject. Also, I have seen grapes grown over an old dog kennel. It worked very slick: the grapes grew on top and were inside the kennel where the birds couldn't get them. It looked great. I am trying to get more strawberries, and with all the rain we got through the summer last year, they didn't do so well.

Answer to: Growing Strawberry Plants in Gutters?

Growing strawberries in gutters is definitely a viable way to grow strawberries. There are specific challenges, however. There are a lot of people posting pictures of their gutter systems for growing strawberries on the internet, and you have probably seen some of these. There are several benefits to using gutters, and several drawbacks. In order to best answer your question, several areas should be discussed.

Benefits of Growing Strawberry Plants in Gutters

There are definitely benefits to growing strawberries in gutters. Like you mentioned, having the strawberry plants raised off the ground helps keep them clean, which is a major benefit. Here are the primary benefits of growing strawberry plants in gutters:

1. Minimize soil contact

Strawberry plants are small. They don't have woody stems, so their growth usually tops out at about 12 to 18 inches. The fruit trusses that carry the flowers (and eventually strawberries) are often a bit shorter and shaded by the canopy of strawberry leaves. These fruit trusses are also non-woody. When the strawberries begin to grow and ripen, they are too heavy for the fruit trusses to support; the trusses bend until the berries rest on the soil. This is one of the reasons straw is used in traditional strawberry beds: to keep the berries off the soil. The soil can harbor pathogenic fungi and other creepy-crawlies that can infect or infest your strawberry planting, so keeping strawberries off the dirt is important. Additionally, rain can splash muddy soil up onto strawberries and contaminate them that way. Using gutter systems minimizes this risk.

2. Make strawberry picking easier

Since strawberry plants are short, regular picking requires either a lot of bending or kneeling. That is a recipe for sore backs and sore knees. Most gutter systems place the height of the gutters to minimize the strain on the body during picking. So, growing strawberries in gutters is a great idea for people with back and knee problems. It is important, however, to keep this in mind when designing or choosing a gutter system.

3. Strawberries grown in gutters are easy to protect

Humans aren't the only creatures that love to nibble on strawberries.
Rabbits, raccoons, possums, and especially birds of all kinds (turkeys and crows can do a number on them) all pose a threat. While no protective system short of a locked greenhouse will keep a starving animal at bay, growing strawberries in gutters does make protecting strawberry plants easier in some ways. Birds pose the greatest threat to strawberries in most places. Once they find a strawberry bed, they come back over and over as if it were their own personal bird feeder. A-frame gutter systems, wall-mounted systems, PVC systems, and other trough-based systems are usually very amenable to covering with bird netting. Bird netting will generally keep birds away and makes it much more unpleasant for rabbits and other critters to try to get at your strawberries, especially when they are off the ground.

4. Growing strawberries in gutters makes some maintenance easy

Just like picking is easier with most gutter systems, growing strawberries in gutters allows for ease during certain maintenance tasks. Snipping first-year blooms is easier, snipping runners is easier, and the renovation tasks that are still applicable for gutter systems are usually easier as well.

5. Gutter strawberries are very flexible systems

Probably the greatest benefit of gutter systems is their ability to adapt to virtually any situation or environment. Because of the increasing utilization of gutter systems, I've devoted an entire section to the flexibility of gutter strawberry systems below. But first: the drawbacks…

Drawbacks of Growing Strawberries in Gutters

There are a lot of benefits to growing strawberries in gutters. But it isn't all wine and roses. There are some significant drawbacks to growing strawberries in gutters also. Here are the major challenges you will likely face if you decide to "go gutter."

1. Water problems

People often plant strawberries in gutters that are level and have end-caps. This will often result in strawberry death due to a lack of sufficient drainage and/or pathogenic fungal infection. To minimize this, a slope of approximately 7% should be built into the construction, and/or drain holes should be placed to prevent the gutters from filling up with water to saturation (a quick way to work out the actual drop is sketched at the end of this answer). Additionally, it can be more difficult to maintain adequate water levels, as the gutters are usually more exposed. This can result in the soil drying out more rapidly. And not enough moisture will kill the plants just as surely as too much will.

2. Insulation problems

Ground-planted strawberry plants have well-insulated roots. Many feet of contiguous ground, horizontally and down, protect the plants from both rapid shifts in temperature and the extremes of winter and summer. Gutter-planted strawberries have no such natural insulation. Consequently, the soil can get too hot in the summer in many places and too cold in the winter. So, keeping the roots cool enough to produce a good crop of strawberries can pose a problem for strawberries planted in gutters. And, especially in zones colder than Zone 6, extra insulation will almost certainly be needed during the winter months to keep the roots and crowns from freezing through and dying.

During the winters, you can bring the gutters into an unheated garage for extra protection. An unheated basement would probably work during the coldest months as well, but it needs to be cold enough to keep the plants in dormancy. You must water the plants regularly to ensure that the soil doesn't dry out completely as long as they are under cover, however.
Another option for those with yards or ground space is to dig gutter-deep and gutter-wide trenches in the dirt, place the gutters in them, replace soil around the gutters for insulation, and then mulch with clean straw. In the springtime, simply remove the straw, take out the gutters, and re-assemble them wherever they were during the previous growing season. If neither of those is an option, the gutters should be wrapped with an insulating material to protect them. The material will likely have to be removed periodically to water the plants and then reapplied.

3. Growth/root problems

Strawberry roots can grow downward up to 12 inches in the right conditions. Even in poor or heavy soils, they will usually grow downward 6 inches. Many types of gutters are less than 6 inches deep and less than 6 inches wide. While the majority of strawberry roots grow in the top 3-4 inches of soil, the rest of the roots contribute significantly to both plant growth and strawberry production. By constricting the root area to the size of the gutter used, many gutter systems inadvertently limit both plant growth and strawberry production.

4. Temperature problems

This is similar to the insulation problem mentioned above. Strawberries are temperate by nature and need cooler roots to produce well. Elevating plants in gutters can make them overheat in the summers and die or produce no strawberries. Likewise, soil warmed prematurely in the late winter or early spring can induce plants to leave dormancy too early and then suffer cold injury with sharp, rapid temperature shifts downward.

5. Perennial problems

Strawberry plants are perennial by nature. The primary problems above affect the ease with which a gardener can enjoy them year after year. Additionally, strawberry plants have a productive span of about 4 years. In order to maximize production each year, utilizing the runner plants produced each year is a good idea. Rooting and then transplanting them helps keep the strawberries coming each year. This process is simply more difficult when you grow strawberries in gutters.

Flexibility of Gutter Strawberry Systems

As mentioned above, growing strawberries in gutters is a great option for people with limited space or no available soil. Gutter systems have proliferated in urban settings, rooftop gardens, and deck/patio/porch gardens all over the world. Literally, the locations for gutter gardens are almost limitless. Anywhere you can affix a gutter can be a strawberry-growing location. Gutters can be nailed to sunny barn walls, built into vertical or A-frames, or placed almost anywhere else. The systems can be adapted and modified to fit virtually any vision or desire!

Growing Strawberries in Gutters: Conclusion

Obviously, growing strawberries in gutters is a great option for a lot of people. Otherwise, they wouldn't do it. But it isn't without challenges, either. With proper planning and execution of a good gutter system, great harvests can be obtained. But these aren't just plant-and-forget systems. They require monitoring just like in-ground strawberry beds do. And the specific challenges you might run into when growing strawberries in gutters typically mean that the monitoring process is a bit more involved and frequent. But, if you are up for the challenge, it can be very rewarding to pick ripe strawberries with minimal picking effort! So, good luck! And, if you want more tips, see this: 4 Secrets to Growing Loads of Organic Strawberries.
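As promised under "Water problems" above, here is the drainage arithmetic. A 7% slope simply means the low end of the gutter sits 7% of the gutter's length lower than the high end. A minimal sketch, with the function name and unit choices as illustrative assumptions:

```python
def gutter_drop_inches(length_ft: float, slope: float = 0.07) -> float:
    """Drop (in inches) needed at the low end of a gutter of the given
    length (in feet) to achieve the given slope; 0.07 is the ~7% grade
    suggested above for drainage."""
    return length_ft * 12 * slope

# A 10-foot gutter at a 7% grade needs its outlet end about
# 10 ft * 12 in/ft * 0.07 = 8.4 inches lower than the high end.
print(gutter_drop_inches(10))  # ~8.4
```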
This is a question submitted to StrawberryPlants.org by a reader. See the Strawberry FAQ for more questions and answers.
In The Age Of Smartphones, Parents Are Encouraged To Be Media Mentors, Not Gatekeepers

AUDIE CORNISH, HOST: Parenting in the age of smartphones can be really stressful. Health experts from the World Health Organization on down say we should limit kids' screen time to a, quote, "healthy level." But infants aside, that doesn't mean zero. There's a growing push to encourage parents to be media mentors rather than gatekeepers. NPR's Anya Kamenetz has looked into this for our Life Kit parenting podcast. She's here to talk us through it. Welcome back to the program, Anya.

ANYA KAMENETZ, BYLINE: Thanks, Audie.

CORNISH: Define media mentoring for us.

KAMENETZ: So the philosophy behind this is pretty simple, right? It's trying to use digital devices together with your children as much as you can and assisting them in understanding what it is that they're doing on those devices. So one proponent of this is Mimi Ito. She's a researcher at the University of California, Irvine. And she says we need to face the facts that media, especially things like video games, are a major source of fun for kids.

MIMI ITO: Unless parents can find a way to somehow understand and engage with that in a positive way, video games can often become a source of tension between parents and kids. And so we see time and time again that parents aren't engaged in the kind of mentoring and guidance around video games that they do for other parts of kids' play and growing up and friendship relationships.

KAMENETZ: So she says you need to get in there and play video games with your kids. And she also says that this is fun.

ITO: It's a lot more fun than clocking screen time and, you know, doing the finger-wagging thing.

CORNISH: I thought it was pediatricians who told us to do the finger-wagging thing. (Laughter) I'm a little bit offended by this.

KAMENETZ: I know, right?

CORNISH: So how does this work out in real life?

KAMENETZ: So I visited a family in Washington, D.C. - Chris Wallace, Latoya Peterson and their son Gavin, who's 5. This is Gavin.

(SOUNDBITE OF KOJI KONDO'S "OVERWORLD THEME")

CORNISH: Good to know that game is still popular. I recognize the tune (laughter).

KAMENETZ: Oh, my gosh. Nintendo's having this incredible comeback. And that's his favorite stuffed animal ever - that somehow matches up the "Mario Bros." game plus Captain Marvel. Anyway, almost every night after dinner, this family jumps on the couch and plays big, complex PlayStation video games.

GAVIN: Oh, my gosh. You want to see what happens? Hit X, and it'll make a sound. See. They're trying to fake. He's a darkness guy.

CORNISH: What are they playing there?

KAMENETZ: OK, so it's a big game called "Kingdom Hearts" that has all these different Disney characters kind of on one universe. And they play other games together, too, even some that are not necessarily meant for young kids, like one called "Persona." Latoya Peterson says it's just certain parts of that game that are age-appropriate. She stresses this.

LATOYA PETERSON: Normally, he's playing with me. Normally, we play together.

KAMENETZ: And I should say, you know, all of this comes really naturally to Peterson. You know, she grew up playing video games, even though her dad didn't necessarily want her using his system.

PETERSON: I would just wait until dad wasn't home, sneak into the room (laughter) and play.

KAMENETZ: And today, she's been really successful in new media. And she's the co-founder of an all-women-of-color-led video game company called Glow Up Games.
CORNISH: Can I ask something, Anya, here? Essentially, are they arguing that you can play video games along with your kids the same way you would read along with your kids and get some kind of benefit from it? KAMENETZ: That's exactly right. When you are sharing media time with your kids, you're giving them the chance to understand better the messages that are coming across. You can learn social and emotional skills from this, just as you would from a story. CORNISH: How does this square with health recommendations that kids should actually limit screen time, especially before bed? KAMENETZ: So being a media mentor doesn't mean that you say yes all the time, and you're always handing out candy. The American Academy of Pediatrics says parents should keep a schedule, prioritize kids' sleep, outdoor play and family meals. And Latoya Peterson and Chris Wallace actually do all of this. CORNISH: Are there certain things parents should be doing when they're using screens with their kids? KAMENETZ: So consistently having conversations about what they're playing or watching is what experts call active mediation. And Latoya Peterson sees video games as an opportunity. She sees them as a way that Gavin can get comfortable with technology, to pick up new skills, not just tech skills, either. PETERSON: One of the big things we're working on right now is the concept of resiliency and not quitting when something is hard. And games are great with that because the whole idea - like, I think we were in some castle. And he's like, Mom, this castle - 'cause I died, like, twice in this castle, like, immediately. And Gavin's like, Mom, this castle's too hard. We should stop. And I was like, Gavin, this is the point. Like, sometimes, things are hard, and you have to go back and try again, or you try something different. And I've noticed he does that in his real life. GAVIN: Sometimes, you lose and lose and lose. And in "Persona," sometimes, when a monster kills us and gets our blue heart, we die. We lost. And that means our battle game is over. CORNISH: Gavin sounds amazingly sweet. There are parents, though, who, let's say, use screens to occupy their kids so that they can get some stuff done. KAMENETZ: I don't know what you're talking about. I've never done that. CORNISH: I don't know parents like this. I know they're out there. So what if you can't make time to have this kind of hands-on interaction the way Chris Wallace and Latoya Peterson are doing? KAMENETZ: So this is a key point. I'm glad you brought it up. Dr. Jenny Radesky - she is the pediatrician who lead-authored the American Academy of Pediatrics guidelines on kids and media. So she's the rule maker. And she says that, yes, sometimes, kids are going to use screens by themselves. And what happens after that is you try to have a dialogue with them and ask them questions about what they're watching, what they're playing. JENNY RADESKY: What do you like about this? And what seems annoying or creepy about it, too? KAMENETZ: And Dr. Radesky says through these conversations, we can help our kids develop a bit of self-regulation around screen time, also. RADESKY: Do you think it's OK to sit and watch slime videos for an hour? Like, what's good about that? What's not good about that? CORNISH: This all makes sense when they're very young. As kids get older, they can be less interested in hanging out with their parents. Does this media mentoring idea work at older ages? KAMENETZ: Absolutely, it can, according to Mimi Ito.
She's the researcher at UC Irvine. Her children are now 18 and 21. But when her son was a teen, she saw her role shifting. ITO: To me, asking a lot of questions and observing my son's gameplay and being more of an interested observer, supporter, cheerleader rather than somebody that was actually playing the same games. KAMENETZ: So being that cheerleader and supporting her kids' interests - she credits that with kind of leading to both of her children now studying computer science, for example. CORNISH: Finally, Anya, we've been talking about this in the context of video games. But for many parents, it's more likely to involve our smartphones and our tablets. How should we be mentoring our behavior with those? KAMENETZ: That's a great point. So the point here is - our kids are watching and learning from us 24 hours a day, even when we're not being exemplars. So if you are constantly kind of pulled into your smartphone, they're going to absorb that that's an OK way to treat your family members. On the other hand, on the positive side, you know, most of us use technology in the course of our work, our personal passions to learn about the world, to discover new music, to keep in touch with friends and family. And those are all positive things that we can share with our kids by modeling that, as well. CORNISH: That's NPR's Anya Kamenetz. Anya, thanks so much. KAMENETZ: Thanks, Audie. CORNISH: And Anya hosts NPR's Life Kit parenting podcast. The Life Kit series has practical tips on all sorts of things. You can find it at npr.org/lifekit. Transcript provided by NPR, Copyright NPR.
Equus Forma Mechanica: The Parmigiani Fleurier Hippologia The horse, an imposing, majestic creature, is our earliest form of long-distance transport. Equus ferus caballus is inextricably linked to all kinds of history and has played an extremely important role in the development of modern civilization. Only in the last century has the horse lost its status as the primary mode of transportation, except in the most remote areas of the world. While dogs, cats, sheep, cattle, goats, and pigs were domesticated first, domestication of the horse around 3000 BCE was a big turning point, one that allowed humankind to travel quickly. Horses allowed for much faster migration and the eventual clashing of cultures thanks to territories overlapping with previously unknown peoples. As well as providing transportation, horses became ever more familiar to humans, and the animals and their owners formed lifelong bonds. This added to the growing reverence for the horse, one that lasts to this day in much of the world. Horses are symbols of freedom, power, and wisdom to people around the world. And as such, depictions of horses remain a mainstay of artistic endeavors. Nothing about this animal is more intriguing than the magnificent sight of a horse in motion, and that's why recreating that movement in a machine has been a goal of many skilled automata builders throughout history (see Why Independent Russian Watchmaker Konstantin Chaykin Is A Movie Star). One of the most fantastical examples of a horse automaton I have ever seen was at SIHH 2016, at none other than the Parmigiani Fleurier stand. The study of horses While that might sound like a lofty name for a simple automaton, it really gets to the heart of the piece. This automaton is the culmination of extensive study into the horse and its movements. One viewing of the Hippologia in action is enough to confirm that the horses' gaits seem completely fluid and natural. While this may seem like a relatively simple task, creating this movement takes a huge amount of analysis and testing. Multiple articulated armatures are finicky, requiring slight adjustments to pivot points and linkage lengths to achieve markedly different motions. The Hippologia displays two horses, a mare and a foal, taking a stroll around a Lalique glassware cabinet enclosing the highly complicated automaton and eight-day clock movement. The mare makes its rounds on the outside track, taking an oval path at a leisurely trot; the foal moves in a circle on the inside path. And since the little foal is smaller and needs to catch up with its mother, it runs along in a short gallop. I draw your attention to their chosen paths and their differing movement styles because making them happen required a huge amount of work that barely overlaps. The secret is in the gait Making two automaton horses move with two different types of gaits − trot and gallop − along two different-shaped paths requires almost completely independent solutions. That is because, with linkages driven by gears and pinions, the approach for each must be individually calculated, tested, and adjusted for completely different goals. Equine nerd side note: gait, as it refers to horses, is the pattern of movement of the limbs specific to different paces of locomotion over the ground. Basically, how a horse moves its legs differently to move more or less quickly or carefully.
When walking, a horse has only one hoof in the air at a time in a one-beat movement; when trotting, a horse has two hooves in the air in a two-beat movement; when loping or cantering it has three hooves in the air in a three-beat movement; and when galloping all four hooves are in the air for a time in a four-beat movement. Creating two very different types of motion (trot and gallop) requires a separate drive train for each, and the linkage design of the legs is modified because the motion of each leg differs from a trot to a gallop. I'm sure there were many discussions about whether or not to have both horses use the same gait, as this would have made development much easier, but in the end realism won out over simplicity. The lengths of the linkages, the sizes of the off-center cranks, and the gear ratios of the mare's and foal's drive trains all needed to be calculated separately to achieve the very realistic, but very different, gaits of each horse. Each leg can have up to six linkage sections with over a dozen pivot points that are extremely critical to the desired path and smoothness of movement. If a linkage were a little too long, or a pivot in the wrong spot – or if a pivot were to bind at just the wrong moment – the entire movement of the leg might be thrown out of whack. More than meets the eye This critical movement is only made more difficult by the pattern in which each leg needs to move relative to the others, which is also different for each horse based on the desired gait. This is managed by the carefully calculated gear train that resides in each horse's body. That gear train also creates movement for the solid silver head and tail, adding to the realism of each horse's gait. But how are these horses mechanically powered if they are mounted on solid pillars that carry them around the clock? This is probably one of the defining features of complex automata throughout history: the hidden axle that powers the seemingly disconnected mechanical marvel. Inside the pillars on which the horses are mounted, an axle rotates, with a bevel gear mating to the horse mechanism and a gear or sprocket on the other end mating with something to drive the motion. If the horses were stationary, the motion would be driven by a gear meshing with an external drive train. But since the horses both race around the glass cabinet, they needed to mesh with stationary racks past which they are driven, activating the automaton. Again, it would have been much easier if the same system could be used for both horses but, alas, since they take different paths in different shapes, that was not an option. The solution brings us back to my mention of a gear OR sprocket attached to the bottom end of the drive axles. The foal in the center rotates in a circular path, which makes driving it relatively easy (compared to the mare). The bottom of each axle features a gear that meshes with an internal ring gear rigidly mounted to the upper plate of the base mechanism. The horse's path of rotation is pivoted around a central axis, and the ring gear drives the axles, transferring the motion into the foal's internal mechanism. The solution for the mare is less straightforward. You had to make it difficult, didn't you? Since the mare needs to take an oval path around the Hippologia, a couple of different solutions could be used.
If a gear-only solution were to be utilized, a hypotrochoidal configuration could provide an oval path: one gear rotates around the interior of an internal ring gear, and the object being driven sits at a position away from the center of the smaller gear. This system could create oval movement, but it would interfere with the need to also drive two axles separated by a large distance, not to mention that it would intersect with the foal's mechanism as well. So that solution was out. The solution Parmigiani used is so much more interesting with its outside-the-box thinking. To create an oval movement, drive the separate axles, and not interfere with the inset mechanism of the foal, a pulley-driven carriage slides around a curved track that doubles as a rack driving two sprockets attached to the axles of the mare. What? I know, let's break it apart. The larger trotting horse is mounted to twin sliding carriages that are connected to maintain proper spacing, but allowed to rotate separately and follow the curve of the track. The carriages wrap around the track in the shape of a C because the track is mounted to posts on the exterior surface. The track is a very thin strip of metal that is nonetheless rather tall: probably a few centimeters (an inch or so) of height against a thickness that is most likely 0.5 mm or less. The track is pierced on the top and bottom with evenly spaced rectangular holes, perfect for the teeth of a sprocket to mesh with. These holes provide the rotation for the axles as the sprockets roll around the ellipse, driven by meshing with the thin rack. The ratio of these holes to the perimeter of the ellipse is the same calculation as any other gear train; it just takes a different form. The most difficult aspect of this solution is actually making sure the carriages are rigid enough to maintain smooth and steady motion around the track, while being free-moving enough to eliminate any binding issues and keep the energy required to drive them at a minimum. The action is completed by a pulley system that continuously pulls the carriages around the track while not interfering with the action of the mechanism. The entire solution is an astonishing success and provides two horses with very different yet extremely beautiful movements. Even more incredibly, the automaton mechanism is almost completely separate from the eight-day clock movement and uses twice as many components: more than 2,200 for the pair of equines. Style and grace The eight-day clock is almost overshadowed by the awesomazing automata above it, but it shines when you understand what it does as well. Providing two sliding scales for the elegant passing of hours and minutes, the clock display is plated in gold and surrounded by glittering diamonds. The automaton can be set to run at a given time or on command. Before the automaton runs, a gong chimes to signal the start of the show. Since the automaton can be set to run at a given time based on the clock movement, the entire thing can technically be considered a chiming alarm with an automaton display. The automaton can run three times, for 40 seconds each, when fully wound, so without additional manual activation the automaton must be wound three times before the clock needs to be wound again. Knowing how much I would want to play with it, I would need to rewind the automaton every couple of minutes!
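For readers who want to see the rack-and-sprocket ratio as arithmetic, here is a toy calculation. Every dimension in it (track size, hole pitch, tooth count) is invented purely for illustration; Parmigiani has not published the real figures:

```python
import math

# Hypothetical dimensions -- invented for illustration only.
semi_major = 220.0   # mm, half the long axis of the oval track
semi_minor = 140.0   # mm, half the short axis
hole_pitch = 2.0     # mm between the rectangular holes in the rack
sprocket_teeth = 12  # teeth on each drive sprocket

# Ramanujan's approximation for the perimeter of an ellipse.
h = ((semi_major - semi_minor) / (semi_major + semi_minor)) ** 2
perimeter = math.pi * (semi_major + semi_minor) * (
    1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

holes_per_lap = perimeter / hole_pitch        # rack "teeth" engaged per lap
axle_turns_per_lap = holes_per_lap / sprocket_teeth

print(f"track perimeter: about {perimeter:.0f} mm")
print(f"axle rotations per lap: about {axle_turns_per_lap:.1f}")
```

Just as with a conventional gear pair, shrinking the hole pitch or the tooth count increases how many times the mare's drive axles turn per lap, which is the sense in which the perforated track is an ordinary gear-train calculation in a different form.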
That urge to replay it exists because the Hippologia inspires awe in the grace of the horses' movements and adoration for the style of the presentation. The Lalique glass cabinet is a swirl of gold and translucent shapes. The mother-of-pearl cabochons, white and champagne-colored diamonds, anthracite, gold, white gold, polished silver, nickel-palladium, and rhodium plating make the Hippologia a dazzling beauty to behold, even if it were purely a visual object. But once you consider everything that the mechanisms can do and how beautiful the sounds and displays are, this object becomes a piece of magic and a window to beauty. It is a mixture of the mechanical and natural worlds, something that deserves to be heralded as a wonder of the art world, of horology, and of our scientific culture. If these types of objects continue to be made, our world will only get better. For more information, please visit www.parmigiani.ch/hippologia.
Presentation: 300 x 550 x 350 mm, Lalique glass cabinet, diamonds, mother-of-pearl, white gold, nickel-palladium, rhodium, silver
Movement: manual winding Caliber PF239
Functions: hours, minutes; twin automata, manually or time activated; gong indication
Limitation: one piece unique
Price: 2.4 million Swiss francs
Marxism and Evolution: 150 years since publication of Darwin's "The Descent of Man" February 24th is the 150th anniversary of Charles Darwin's ground-breaking 1871 book, "The Descent of Man", which situated humanity firmly within the natural world and not as some kind of special divine creation. Marx and Engels greeted Darwin's work enthusiastically as confirmation of their materialist outlook, even though Darwin's version of evolution, one of countless incremental steps producing gradual change, was not fully dialectical or dynamic. After the publication of "Origin of Species" Engels wrote to Marx: "Darwin, by the way, whom I'm reading just now, is absolutely splendid. There was one aspect of teleology that had yet to be demolished, and that has now been done. Never before has so grandiose an attempt been made to demonstrate historical evolution in Nature, and certainly never to such good effect." Marx was just as enthusiastic, calling it "the book which contains the basis in natural history for our view." In a letter to the German socialist Ferdinand Lassalle, Marx wrote: "Darwin's work is most important and suits my purpose in that it provides a basis in natural science for the historical class struggle." Wilhelm Liebknecht, a friend and comrade who often visited the Marx family in London, later recalled: "When Darwin drew the conclusions from his research work and brought them to the knowledge of the public, we spoke of nothing else for months but Darwin and the enormous significance of his scientific discoveries." Stephen Jay Gould: Stability and revolutionary change Despite the conservatism of most mainstream biologists, who preferred to see gradual evolution both in the natural world and in society as a whole, there were always scientists who pointed out that the fossil record was best explained by long periods of stability with occasional rapid changes and the emergence of new forms. Stephen Jay Gould, who was at least influenced by the Marxist philosophy of dialectical materialism, set down the idea of 'punctuated equilibrium' with Niles Eldredge in their key 1972 paper "Punctuated equilibria: an alternative to phyletic gradualism", and they did it so well that it has more or less become mainstream. Gould laid the groundwork for perhaps the best application of Marxist ideas to Darwinism, by the British biologist John Maynard Smith. John Maynard Smith The book "Major Transitions in Evolution" (1997) by Maynard Smith and Hungarian biochemist Eors Szathmary gives a startling confirmation of the theories of development outlined by Marx and Engels, and set down in outline by Engels in his "Dialectics of Nature", written in the 1870s and early 1880s and first published in 1925. John Maynard Smith in interviews even referred to his ideas as a theory of "revolutionary development" in evolution, as opposed to a gradual evolution purely by the accumulation of small changes. The authors see the evolution of life proceeding by periods of gradual change, adaptation and variation, punctuated by huge transformations giving rise to new and more complex forms. Smith and Szathmary identify nine major transitions, or, in the language of dialectics, qualitative changes that have taken place in the evolution of life. None of these changes can be explained by gradual evolution from one state to another – a new level of biological organisation had to come through a dramatic re-organisation of living material and of the information required to transmit and reproduce life.
Nine key transitions in evolution of life on earth In order, these transitions are: (1) the origin of life itself as self-replicating chemicals, (2) the grouping of single replicating molecules or genes into chromosomes which contain many self-replicators, (3) the transition from life coded by RNA acting as an enzyme to DNA as a purely information-storing molecule; then (4) the origin of bacteria as cells containing chromosomes, (5) the transition from bacteria to single-celled organisms and then (6) the transition from single-celled organisms to multi-cellular animals, plants and fungi. Once these groups had evolved, many species, though not all, moved from reproduction by asexual budding or cloning to (7) reproduction by sex. The authors also see two further transitions within the animal kingdom – (8) the rise of social insects; and (9) the development of human societies. None of these qualitative changes can be explained by a simple evolution from what went before. 1) The origin of life The first transition, from chemicals to self-replicators, is one of the knottiest problems in biology. The simple organic compounds which are the building blocks of life – amino acids, aldehydes and fatty acids and even sugars – were shown in the 1950s to be a likely outcome of the interaction of lightning with methane, ammonia and nitrogen in the atmosphere of the early earth. Some of these compounds have also been found in comets, asteroids and other planets. However, it is still unclear how or where the resulting organic chemicals first became able to reproduce themselves – most scientists favour deep sea vents, but alternatives include volcanic mud pools or metallic surfaces. This mystery remains unsolved. Nevertheless, we are here, so it must have been solved. 2) Chromosomes as templates The next step was for the self-replicating chemicals to be able to organise themselves into templates ready for use in protein replication, what we see today as chromosomes. This step is constrained by what is termed 'Eigen's Paradox': for effective replication enzymes are required, and yet the genetic code required to make enzymes is too long to be made without enzymes. Another mystery for science to solve. 3) RNA to DNA The third transition is from an 'RNA world', where RNA carries out both the functions of enzyme activity and that of storing genetic information, to a 'DNA world', where DNA stores the genetic code and RNA is confined to translating the genetic code into sequences of amino acids which then combine to form proteins. RNA can work as both code and enzyme, although it does each inadequately, whereas DNA is a much better information store, being more resistant to mutation and damage. We can see the traces of this early division of labour when we examine a modern cell – DNA in genes binds to RNA molecules in the cell nucleus and the RNA then moves off into the cytoplasm where it binds to amino acids in the order originally coded by the DNA. The resulting sequences of amino acids form proteins which fold into particular shapes to form enzymes and structural proteins. It is these enzymes which create and destroy chemicals for cell metabolism, which pump harmful ions out of the cell and useful ones in, and which build cell structures and even repair the DNA itself. They are the very basis of life. 4) The origin of the cell Once this step had been accomplished the stage was set for the first forms of life observable today – the bacteria.
A bacterium has a DNA chromosome which synthesises proteins with the help of RNA, and a cell membrane and wall to keep the living material separate from the environment. The cell wall and membrane have to be created and maintained by proteins coded for by DNA. By some criteria, bacteria are the most successful life forms on the planet, inhabiting almost all known environments from the edge of space to the Antarctic, volcanic pools, the deep sea and, as confirmed recently, the deep earth, kilometres below the surface. Fossils of bacterial colonies first appear in the record just less than 4 billion years ago. They were the dominant life form for the next 2 billion years. 5) Emergence of animals, plants and fungi The next stage, around 2 billion years ago, was the emergence of more complex, but still single-celled, organisms such as amoebae, algae and yeasts. It is now accepted that this happened through the co-operation and interpenetration of different forms of bacteria. The idea was first put forward by American biologist Lynn Margulis in the 1970s, based on earlier Russian work. Margulis' theory was that a large bacterial cell had formed a partnership with a small energy-producing bacterium, akin to modern purple non-sulphur bacteria, such that eventually neither could live without the other. Eventually the small cell took up residence in the larger cell, and we see the descendants of these small energy-producing cells inside each cell of modern animals, plants and fungi in the form of mitochondria. In the case of plants these two partners were joined by a third type of bacterium, closely related to modern blue-green algae, which was capable of photosynthesis. Proof of this partnership, or symbiosis, is the fact that both mitochondria and chloroplasts, the photosynthetic components of plant cells, retain their own separate DNA. Also, at the present time, the lichens are composed of at least two organisms – a fungus and an alga – living together, interpenetrated, and each unable to survive without the other. 6) Multicelled animals and plants For a further billion years life on earth was restricted to bacteria and single-celled plants, animals and fungi. However at a certain stage the single-celled organisms began to associate with each other into colonies and strands, as we see in modern jellyfish, sponges and sea-weeds, and around 500 million years ago true multi-cellular animals and plants can be seen in the fossil record. The exact mechanism is still obscure, but in what is termed the 'Cambrian Explosion' most of the modern families of plants and animals burst onto the scene. Darwin himself admitted that the Cambrian Explosion posed the greatest difficulty for his theory of gradual evolution – the theory simply does not fit with the facts. Smith and Szathmary speculate that the drastic change in earth's atmosphere due to the production of oxygen by algae may have allowed larger animals to survive, or else that the 'division of labour' by the different tissues of multi-cellular animals drove their explosive evolution. Other scientists believe that a process of hybridisation between the different animal groups lay behind the sudden appearance of insects, worms, molluscs, crustaceans and vertebrates. One thing is certain – it was no gradual process. 7) Sexual reproduction Once multi-celled organisms had evolved there was a further transition from reproduction by simple budding or cloning to reproduction by sexual means.
This may have evolved once but more likely several times, as not all animals, plants and fungi are sexual, and many can use either method. For instance, amongst aphids some species are asexual, some are sexual, and some reproduce by cloning in summer when food is plentiful and turn to sexual reproduction as autumn approaches. Turkeys can occasionally reproduce asexually, and recently a lonely Komodo Dragon in Chester Zoo produced young with no male present. Asexual reproduction is seemingly the less complex and difficult process, but the population will lack genetic diversity, be vulnerable to disease and be less adaptable to changes in the habitat. 8) The social insects Once multi-cellular life had evolved, Smith and Szathmary trace two further major transitions. The first is that of the social insects, which evolved separately on at least three occasions – termites, bees and wasps, and ants. In many ways the insect colony behaves like one super-organism, with sterile workers dividing the work of gathering food, defence and care of the young for the benefit of a single queen. 9) Human society The final transition they trace is the transition to human society. It is here that I part company with their analysis. They see speech and language as the crucial development, but I would see the development of a 'mode of production' as the unique characteristic of human society. The use of fire and cooking may have been the gateway to living through a mode of production rather than just existing in nature. In 'The Role of Labour in the Transition from Ape to Man', Engels describes animals as having a "mode of being", contrasting with the human "mode of production". Of course speech will have been closely linked with the adoption of a mode of production, but I think here Engels was closer to the reality than even Smith and Szathmary. Rough timeline of life on earth Dialectics confirmed in nature It seems strange to think that for three quarters of earth's history, billions of years, there was no life-form on the planet more advanced than slime. But even slime has cells with an amazing and interlinked cycle of metabolism and self-sustaining biochemical activity which is still only superficially understood by modern science. "Major Transitions in Evolution" explains this very well and strikingly verifies the theoretical framework outlined by Frederick Engels in "Dialectics of Nature" over 140 years ago, but on a much higher plane of facts and evidence. Each of the transitions is a dialectical transformation of "quantity into quality"; we see an "interdependence of opposites" in the derivation of single-celled animals, plants and fungi from simpler bacteria; also "the whole is greater than the sum of its parts" in the case of multi-cellular animals and plants, and in the social insects and human societies. "Major Transitions" is well worth reading by any socialist with an interest in dialectics or biology. The authors also wrote a shorter book, "The Origins of Life", which covers much of the same ground in a more accessible format.
New product design and development is often a crucial factor in the survival of a company. In a global industrial landscape that is changing fast, firms must continually revise their design and range of products, both because of fierce competition and because of the evolving preferences of consumers. A system driven by marketing is one that puts the customer's needs first, and produces goods that are known to sell. In general, research and development activities are conducted by specialized units or centers belonging to a company, or can be out-sourced to a contract research organization, universities, or state agencies. Bank ratios are one of the best measures, because they are continuously maintained, public, and reflect risk. In the United States, a typical ratio of research and development spending for an industrial company is about 3.5 per cent of revenue, a measure known as R&D intensity. Generally such firms prosper only in markets whose customers have extremely high technology needs, like certain prescription drugs or special chemicals, scientific instruments, and safety-critical systems in medicine, aeronautics or military weapons. On a technical level, high-tech organizations explore ways to re-purpose and repackage advanced technologies as a way of amortizing the high overhead. Research and development activities are very difficult to manage, since the defining feature of research is that the researchers do not know in advance exactly how to accomplish the desired result. In general, it has been found that there is a positive correlation between research and development and firm productivity across all sectors, but that this positive correlation is much stronger in high-tech firms than in low-tech firms. Research and innovation in Europe are financially supported by the programme Horizon 2020, which is open to participation worldwide. A notable example is the European environmental research and innovation policy, based on the Europe 2020 strategy, a multidisciplinary effort to provide safe, economically feasible, environmentally sound and socially acceptable solutions along the entire value chain of human activities. Globally, research and development spending constitutes an average of roughly 2 per cent of GDP.
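The R&D intensity ratio mentioned above is simple enough to state as code. The sketch below is illustrative only, and the company figures in it are invented rather than drawn from any real firm:

```python
# A minimal sketch of the "R&D intensity" ratio discussed above.
# The spending and revenue figures are invented for illustration.

def rd_intensity(rd_spending: float, revenue: float) -> float:
    """Return R&D spending as a fraction of revenue."""
    if revenue <= 0:
        raise ValueError("revenue must be positive")
    return rd_spending / revenue

# A hypothetical industrial firm near the typical ~3.5% mark:
spend = 35_000_000       # annual R&D budget, USD
sales = 1_000_000_000    # annual revenue, USD
print(f"R&D intensity: {rd_intensity(spend, sales):.1%}")  # -> 3.5%
```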
The concept of globalization accelerates the transfer of trade from the local to the international dimension. In today's information age, besides obtaining information, it is important to use that information effectively and to create value. This context increases the value of innovation, meaning the renewal of science and technology in ways that provide economic and social benefits. The goal of innovation is positive change, to make someone or something better. Innovation leading to increased productivity is the fundamental source of increasing wealth in an economy. One joint research effort involving the Brookings Institution makes recommendations for U.S. policy that include:
- Bolster institutions supporting tech transfer, commercialization, and innovation.
- Expand technology transfer and commercialization-related programs and investments.
Innovation is the product of intentional human action, and, to have more of it, we must enact public policies that connect research and development investments to firms and inventors in the communities where they are located. Productivity growth, as per the standard statistical agency definition, is the ratio of output growth to input growth; that is, the amount of growth in output that cannot be explained by the growth in measured inputs. Tax incentives like the Research and Experimentation tax credit are one tool to encourage investment, and experts say policymakers should do more to support U.S. research and development. Basic or pure research does not have an immediate commercial objective, but is rather focused on developing new principles and theories that explain the natural world.
Except for moves that deal direct damage, the damage dealt when a Pokémon uses a damaging move depends on its level, its effective Attack or Special Attack stat, the opponent's effective Defense or Special Defense stat, and the move's effective power. In addition, various factors of damage modification may also affect the damage dealt. More precisely, damage is calculated as

Damage = ((((2 × Level) / 5 + 2) × Power × A / D) / 50 + 2) × Targets × Weather × Badge × Critical × random × STAB × Type × Burn × other

where:
- Level is the level of the attacking Pokémon (or twice the level for a critical hit in Generation I).
- A is the effective Attack stat of the attacking Pokémon if the used move is a physical move, or the effective Special Attack stat of the attacking Pokémon if the used move is a special move (for a critical hit, ignoring all stat stages in Generation II, and only negative stat stages from Generation III onward).
- D is the effective Defense stat of the target if the used move is a physical move or a special move that uses the target's Defense stat, or the effective Special Defense of the target for any other special move (for a critical hit, ignoring all stat stages in Generation II, and only positive stat stages from Generation III onward).
- Power is the effective power of the used move.
- Targets is 0.75 if the move has more than one target (except in Battle Royals), 0.5 in Battle Royals if the move has more than one target, and 1 otherwise. (In Generation III, it is 0.5 for moves that target all adjacent foes with more than one target, and 1 otherwise.)
- Weather is 1.5 if a Water-type move is being used during rain or a Fire-type move during harsh sunlight, 0.5 if a Water-type move is used during harsh sunlight or a Fire-type move during rain, and 1 otherwise.
- Badge is applied in Generation II only. It is 1.25 if the attacking Pokémon is controlled by the player and the player has obtained the Badge corresponding to the used move's type, and 1 otherwise.
- Critical is applied starting in Generation II. It is 2 for a critical hit in Generations II-V, 1.5 for a critical hit from Generation VI onward, and 1 otherwise. In Generation II, it is instead applied before the +2 of the base damage formula.
- random is a random factor between 0.85 and 1.00 (inclusive):
- From Generation III onward, it is a random integer percentage between 0.85 and 1.00 (inclusive).
- In Generations I and II, it is realized as a multiplication by a random uniformly distributed integer between 217 and 255 (inclusive), followed by an integer division by 255. Flail and Reversal are exempt from this factor.
- STAB is the same-type attack bonus. This is equal to 1.5 if the move's type matches any of the user's types, 2 if the user of the move additionally has Adaptability, and 1 otherwise.
- Type is the type effectiveness. This can be 0 (ineffective); 0.25 or 0.5 (not very effective); 1 (normally effective); or 2 or 4 (super effective), depending on both the move's and the target's types.
- Burn is 0.5 (from Generation III onward) if the attacker is burned, its Ability is not Guts, and the used move is a physical move (other than Facade from Generation VI onward), and 1 otherwise.
- other is 1 in most cases, and a different multiplier when specific interactions of moves, Abilities, or items take effect:

| Source | Multiplier | Condition |
|---|---|---|
| Moves interacting with Minimize | 2 | If this is the used move and the target had previously used Minimize. Applies to up to 9 moves per generation, 14 moves in total (see this list). (Note: applies to Power rather than other in some generations.) |
| Aurora Veil | 0.5 * | If in effect on the target's side, the move is not a critical hit, and the user's Ability is not Infiltrator (does not stack with Light Screen and Reflect) |
| Earthquake and Magnitude | 2 | From Generation V onward, if this is the used move and the target is in the semi-invulnerable turn of Dig* |
| Light Screen | 0.5 * | From Generation III onward, if in effect on the target's side, the move used is special, not a critical hit, and the user's Ability is not Infiltrator (does not stack with Aurora Veil) |
| Reflect | 0.5 * | From Generation III onward, if in effect on the target's side, the move used is physical, not a critical hit, and the user's Ability is not Infiltrator (does not stack with Aurora Veil) |
| Surf and Whirlpool | 2 | If this is the used move and the target is in the semi-invulnerable turn of Dive* |
| Protecting moves except Max Guard | 0.25 | If the used move is a damaging Z-Move (Gen VII) or Max Move (Gen VIII) that the target has protected against |
| Fluffy | 0.5 | If the target has this Ability, and the used move makes contact and is not Fire-type |
| Fluffy | 2 | If the target has this Ability, and the used move is Fire-type and does not make contact |
| Filter, Prism Armor and Solid Rock | 0.75 | If the target has this Ability and the used move is super effective (Type > 1) |
| Friend Guard | 0.75 | If an ally of the target has this Ability |
| Ice Scales | 0.5 | If the target has this Ability and the used move is special |
| Multiscale, Shadow Shield | 0.5 | If the target has this Ability and is at full health |
| Neuroforce | 1.25 | If the user has this Ability and the used move is super effective (Type > 1) |
| Punk Rock | 0.5 | If the target has this Ability and the used move is sound-based |
| Sniper | 1.5 | If the attacker has this Ability and the move lands a critical hit |
| Tinted Lens | 2 | If the attacker has this Ability and the used move is not very effective (Type < 1) |
| Chilan Berry | 0.5 | If held by the target, and the used move is Normal-type |
| Expert Belt | ~1.2 | If held by the attacker and the move is super effective (Type > 1) |
| Life Orb | ~1.3 | If held by the attacker |
| Metronome | >1 | 1 + (~0.2 per successful consecutive use of the same move) if held by the attacker, but no more than 2 |
| Type-resist Berries | 0.5 | If held by the target, and the move is of the resisted type and super effective (Type > 1) |

- If multiple effects influence the other value, their values stack multiplicatively. For example, if both Multiscale and a Chilan Berry take effect, other is 0.5 × 0.5 = 0.25.

During the calculation, any operations are carried out on integers internally, such that effectively each division is a truncated integer division (rounding towards zero, cutting off any decimals), and any decimals are cut off after each multiplication operation.
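To make the order of operations and the truncation rule concrete, here is a small Python sketch of the Generation VI calculation. It illustrates the formula described above rather than being a byte-exact reimplementation of the games' internals, and it models only a handful of the modifiers; the rest default to 1.

```python
# A sketch of the Generation VI damage formula described above, truncating
# after each step as the text specifies. Unused modifiers default to 1.

def damage(level, power, attack, defense,
           targets=1.0, weather=1.0, critical=1.0,
           rand=1.0, stab=1.0, type_eff=1.0, burn=1.0, other=1.0):
    base = (2 * level) // 5 + 2
    base = (base * power * attack // defense) // 50 + 2
    result = base
    # Modifiers are applied one at a time, truncating after each one.
    for mod in (targets, weather, critical, rand, stab, type_eff, burn, other):
        result = int(result * mod)
    return max(result, 1)  # a damaging move deals at least 1 HP

# The Glaceon vs. Garchomp example worked through below:
print(damage(75, 65, 123, 163, rand=0.85, stab=1.5, type_eff=4))  # -> 168
print(damage(75, 65, 123, 163, rand=1.00, stab=1.5, type_eff=4))  # -> 196
```

With the Muscle Band and a critical hit from the second worked example (power 71, critical=1.5), the same function returns the 268 and 324 endpoints given below.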
If the calculation yields 0, the move will deal 1 HP damage instead (unless Type is equal to 0); however, in Generations I and V, different behavior may occur due to apparent oversights:
- In Generation I, if the calculation yields 0 because the target has two types that both resist the move's type, the move will miss as if it were ineffective;
- In Generation V, a move may deal 0 HP damage when other is less than 1, because the routine to prevent 0 HP damage is erroneously performed before applying the other factor.

Imagine a level 75 Glaceon with an effective Attack stat of 123 that does not suffer a burn and holds no item. In Generation VI it uses Ice Fang (an Ice-type physical move with a power of 65) against a Garchomp with an effective Defense stat of 163, without landing a critical hit. Then, the move will receive STAB, because Glaceon's Ice type matches the move's: STAB = 1.5. Additionally, Garchomp is Dragon/Ground, and therefore has a double weakness to the move's Ice type: Type = 4. All other (non-random) modifiers will be 1. This effectively gives

Damage = (((2 × 75 / 5 + 2) × 65 × 123 / 163) / 50 + 2) × random × 1.5 × 4 = 33 × random × 1.5 × 4

That means Ice Fang will do between 168 and 196 HP damage, depending on luck. If the same Glaceon holds a Muscle Band and its Ice Fang lands a critical hit against Garchomp, Ice Fang's effective power will be boosted by the Muscle Band by (approximately) 10% to become 71, and Critical = 1.5 will also apply:

Damage = (((2 × 75 / 5 + 2) × 71 × 123 / 163) / 50 + 2) × 1.5 × random × 1.5 × 4 = 36 × 1.5 × random × 1.5 × 4

That means Ice Fang will now do between 268 and 324 HP damage, depending on luck.

In Pokémon GO, damage is calculated differently because a different set of variables exists in that game:

Damage = ⌊0.5 × Power × Attack / Defense × Multipliers⌋ + 1

where:
- Power is the power of the move used.
- Attack is the Attack stat of the attacking Pokémon.
- Defense is the Defense stat of the Pokémon being attacked.
- For Shadow Pokémon, a multiplier of 1.2 is applied to Attack and a multiplier of 5/6 is applied to Defense.
- Type is the type effectiveness, which is calculated differently in GO, using multipliers of base 1.6 instead of 2.
- STAB is the same-type attack bonus. This is equal to 1.2 if the move's type matches any of the user's types, and 1 otherwise.
- The following multipliers are applied in Gym and Raid Battles only, and are 1 otherwise:
- Weather is 1.2 if the move used has a weather-boosted type, and 1 otherwise.
- A Friendship multiplier is applied when battling with Friends and varies depending on the Friendship level:
- 1.03 if Good Friends
- 1.05 if Great Friends
- 1.07 if Ultra Friends
- 1.1 if Best Friends
- 1 otherwise
- A Dodge multiplier of 0.25 applies if the attack was successfully dodged, and 1 otherwise.
- Gym defenders and Raid Bosses will never dodge a player's attacks.
- A Mega multiplier is greater than 1 when one or more Mega-Evolved Pokémon are on the battlefield:
- 1.1 if none of the Mega-Evolved Pokémon have the same type as the move
- 1.3 if one or more Mega-Evolved Pokémon have the same type as the move
- The following multipliers are applied in Trainer Battles only, and are 1 otherwise:
- A flat 1.3 multiplier applies to all attacks used in a Trainer Battle.
- A Charge multiplier is applied only for Charged Attacks; its value depends on the player's score during the minigame, with "Excellent!" yielding the highest multiplier, followed by "Great!" and then "Nice!".
- In Pokémon Ruby and Sapphire, if the player's Pokémon deals over 33037 HP damage, the Pokémon will faint, but the HP bar will not be drained; if it deals exactly 33037 HP, the HP bar will be drained automatically.
- In Generations V through VIII, the amount of damage that can be dealt in a single attack is capped at 65535. In addition, an overflow can occur during the calculation of very high damage amounts, causing the actual damage dealt to be much lower than expected.
- In Pokémon Battle Revolution, the HP bar changes with a different animation depending on the move's type (recovery, recoil damage and indirect damage use the Normal-type animation).
A review of Adaptation and Invention During the Spread of Agriculture to Southwest China, by Jade D'Alpoim Guedes. The research presented in this dissertation focuses on the ways in which humans adapt to novel environments and how they modify agricultural strategies to suit ecological niches not previously amenable to agricultural production. To address these issues, this research seeks to explain why it took more than 3000 years for agricultural food production to spread from the Middle and Lower Yangzi River valley to the foothills and highlands of Southwest China. This study therefore explores the details – the ecological, technological, and social variables – that made agricultural food production possible in some settings but utterly impractical in others, and how the nature of these possibilities changed through time. Methodologically and theoretically, the author frames this approach in the logic of decisions about subsistence strategies, specifically the evolutionary and economic logic of human behavioral ecology (in particular the relationship between agricultural risk and human survival). To establish the cost structure behind these decisions, the author uses ecological niche modeling. This practice makes clear the differences in the costs and constraints of agricultural food production in different areas using different crops. In particular, the author pays considerable attention to differences in the nature of agricultural subsistence between lowlands and highlands, and how the ecological attributes of different areas either promoted or prevented the use of different crops. The model is considerably more complex than this short explanation suggests, but its details are far too numerous and nuanced to recount in a single review. The method applied in this dissertation focuses on the insights generated through constraints on plant phenology, specifically the cumulative heat requirements of plant growth, much of which the author establishes with data collected through controlled agronomic experimentation from around the world. The resulting models of spatial variation in growing potential provide insights into the barriers and the possibilities of prehistoric food production, and illustrate how both external changes (e.g. hemispheric climate change) and internal changes (e.g. habitat alteration or technological innovation) in the controlling variables might promote or constrain the diffusion of agricultural products by altering the economic risks associated with producing them, and therefore the decision to adopt them or not. The author argues that it is "necessary to construct models…capable of dealing with the constraints faced by ancient agriculturalists if we are to understand how and why humans adapted and invested in agricultural systems during the spread of agriculture" (p. 37). The models of agricultural potential are then compared with detailed archaeobotanical data on plant use throughout the spread zone. The modeling process Guedes employs focuses on the constraints of practicing agriculture (specifically plant-based agriculture) in novel environments. To do this she uses "ecological niche modeling" to evaluate the productive capacity of different settings, asking whether or not these areas possess the requisite attributes for stable or predictable growth and maturation of specific plant taxa.
Questions about stability and predictability are framed in the context of risk, meaning inter-annual variation in the expected productivity of the harvest, which is assessed using different measures of climate. Utility (or rather, the economic feasibility) of plant-based agriculture is further assessed by an examination of the labor investments required for plant cultivation in different settings. This approach to ecological niche modeling evaluates the relationship between climate variables and species distributions, and in so doing harnesses a body of literature that explores the effects of past and present climate change on range shifts in different taxa. The author is careful to distinguish between correlative and mechanistic models for understanding species distributions. While the former may be useful for understanding extant taxa in contemporary settings, she argues this method is generally unsuitable for studies of prehistoric agriculture because the archives of the past are simply too spotty to correlate with prehistoric environmental variables (which are also spotty) for any predictive power. Instead the author focuses on mechanistic models tuned to an understanding of the factors (such as temperature, precipitation, frost injury, and competition) that limit or promote the physiological dimensions of plant growth, survival, and reproduction. From such an understanding, the modeler establishes the "fundamental niche" for the existence of the plant (following G.E. Hutchinson's 1957 notion of an n-dimensional hypervolume that provides for the existence of any organism), which can be altered by processes such as human niche construction to create a "realized niche", which is then used to illustrate the potential distribution of the agro-economic species. Though the author does admit that the mechanistic approach to ecological niche modeling has its complications, it may be the only way to model the prehistoric distributions of agriculturally feasible ecosystems. As more is learned about the phenology of ancient domesticates, and as the resolution of the paleoclimate reconstructions increases and improves, it is possible that some of the fundamental niches established for the plants in this study, and therefore the hypothetical spatial distribution of their cultivation, might change. But as a means for marshaling current data, and for generating archaeologically testable hypotheses, this approach is both valuable and valorous. In this study of agricultural feasibility in southwest China, the mechanistic modeling focuses on constructing a thermal niche, based primarily on an ecological metric known as the Growing Degree Day (GDD), itself a function of the minimum and maximum temperatures under which an organism will grow, and a cumulative account of mean daily temperatures over any duration relevant to the growth of that organism in a given location. The GDD is a measure of heat accumulation sometimes used to establish rates of plant growth and development. If an annual account of maximum and minimum daily temperatures is available for a given area, and the internal constraints of the plant are known (or experimentally derived), one can establish the GDD for it in a place where prehistoric use is purported. In this study, calculation of GDD for individual plant species is based solely on contemporary weather data interpolated over a three-dimensional landscape.
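As a way of making the GDD logic tangible, the following toy sketch accumulates growing degree days and estimates a crop-failure frequency of the kind the review describes. Every number in it, including the base temperature, the heat requirement, and the simulated weather, is invented for illustration and comes from neither the dissertation nor any real station record:

```python
# A toy illustration of growing-degree-day (GDD) accumulation and
# crop-failure frequency. All values are invented for illustration.
import random

T_BASE = 5.0         # hypothetical base temperature for the crop (deg C)
GDD_REQUIRED = 1350  # hypothetical cumulative heat needed to reach maturity

def daily_gdd(t_min, t_max, t_base=T_BASE):
    """Heat accumulated in one day: mean temperature above the base, else 0."""
    return max(0.0, (t_min + t_max) / 2 - t_base)

def season_gdd(days):
    """Sum daily GDD over a growing season of (t_min, t_max) pairs."""
    return sum(daily_gdd(lo, hi) for lo, hi in days)

def failure_rate(years):
    """Share of years in which accumulated heat falls short of the requirement."""
    failures = sum(1 for season in years if season_gdd(season) < GDD_REQUIRED)
    return failures / len(years)

# Simulate 30 years of a 150-day season at a marginal, high-elevation spot.
random.seed(1)
years = [[(random.gauss(8, 3), random.gauss(20, 3)) for _ in range(150)]
         for _ in range(30)]
print(f"simulated crop-failure rate: {failure_rate(years):.0%}")
```

In a marginal niche like this simulated one, small shifts in mean temperature move the failure rate sharply, which is the kind of sensitivity to elevation and climate that the review describes.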
This interpolated surface was then combined with a simple method for evaluating "risk", which looked at the probability of crop failure, based on the annual GDD for each crop in each area, again calculated from contemporary weather data, interpolated, and draped over a three-dimensional landscape model. The author rightly notes that in the absence of daily temperature data from prehistoric settings, or models capable of retrodicting them, the data collected from the mid-to-late 20th century are sufficient for a first-order analysis, and that is exactly what this is. To test the modeled species distributions based on the mechanistically modeled thermal niches for each taxon, the author collected prehistoric plant remains from archaeological sites in different settings, and at vastly different elevations, throughout southwest China. The following is an outline of the conclusions derived from a comparison of the archaeobotanical data to the expectations of the ecological niche modeling. While most research in southern China has focused on the origins and diffusion of rice-based agriculture, this research suggests that rice AND millet were essential features of the spread of agricultural subsistence to southwest China. This leads both the author and the reader to wonder about the role of millets in the spread of the essentially rice-based agricultural program to Southeast Asia more generally. The first form of agriculture to appear in southwest China was based on millets. The relative importance of each type of millet is, in part, determined by the constraints of temperature and precipitation on growth, with foxtail millet tolerant of a wider range of higher-elevation habitats. For the most part, the relative importance of foxtail (in the archaeological record) corresponds to the GIS model of risk assessment, with foxtail millet dominating in colder, high-altitude environments where the likelihood of crop failure is less than that of broomcorn (which does better at lower elevation). Though the appearance of millet agriculture is contemporaneous with the appearance of the Majiayao Neolithic ceramic tradition in western Sichuan, ca. 5500 BP (a tradition usually linked to intensive, sedentary agriculturalists), the author suggests that millets may have been added to the subsistence economy of pre-existing hunter-gatherers in the region because the short growing season of these summer grasses may have been easily incorporated into a residentially mobile seasonal cycle organized around the procurement of spatially segregated wild resources. In suggesting this, she echoes previous assertions about the early adoption of millets by hunter-gatherers further north (e.g. L. Barton, C. Morgan and R. L. Bettinger, "Harvests for the hunters: the origins of food production in arid northern China." The SAA Archaeological Record 9 (2009), pp. 28-31; idem, "The origins of food production in North China: a different kind of agricultural revolution." Evolutionary Anthropology 19, pp. 9-21). Both the seasonality of this pattern and the nature of human mobility throughout the region are ripe for future research, as is the nature of the contact (co-existence, competition) between foragers and farmers, from the lowlands of Sichuan to the highlands of the Tibetan Plateau.
Furthermore, and perhaps for similar reasons, millets were likely the first agricultural package to move into highland Yunnan and Guizhou and ultimately into mainland Southeast Asia (though the author admits that these final assertions are based more on the implications of the models than on current archaeobotanical data). Though millets constitute the earliest archaeological evidence for agriculture in some places (e.g., the Three Gorges area), the earliest evidence for agricultural subsistence on the Chengdu Plain is rice. While rice may have been domesticated as early as 8000 BP in the middle Yangzi drainage, rice-based agriculture did not take hold in southwest China until after 5000 BP. The earliest evidence for this is at Baodun, and the rice appearing here is fully domesticated (suggesting the arrival of a well-developed agricultural economy). Importantly, the author contends that the spread of rice to the Chengdu Plain was delayed until the evolution of a phenotype adapted to cooler conditions. This assertion stems both from the expectations of the ecological niche modeling (which suggests that the region has the requisite growing degree days to support production of temperate rice varieties) and from the recovery of charred rice remains with measurements consistent with those of temperate varieties of rice found in contemporary and prehistoric northern China.

The importance of millet goes beyond its early appearance and its potential relevance to mobile hunter-gatherers. Foxtail millet (in particular) is far better suited to cold, arid conditions, grows much faster, and demands less labor than does rice. The author suggests these characteristics may have enabled lowland rice farmers (or mixed rice-millet farmers) to expand into the uplands of southwest China, namely the foothills of the Three Gorges, Yunnan, and Guizhou. Furthermore, she links the mixed rice-and-millet strategy to increasing annual yields, reduced agricultural risk, human population growth, and increasing social complexity in the lowlands of southwest China. Archaeobotany suggests that the prehistoric subsistence strategies of the Chengdu Plain were relatively stable, but the archaeological records of the highlands (e.g., Yunnan-Guizhou) are marked by change, perhaps a result of the recurrent adjustments and modifications required to meet the amplitude of environmental volatility at higher elevations and the effect it had on the availability and productivity of food. In such places, the relative importance of the two millets, rice, wild plants, and eventually wheat and barley fluctuated in response to change in local environments and the caloric demands of the human groups inhabiting them.

In the long run, while millets, which are more tolerant of water stress and low temperature than rice, may have enabled the early expansion of agricultural subsistence into upland areas, it was barley (and to a lesser degree, wheat), with its short growing season, low GDD requirements, and high tolerance of low temperature and frost, that enabled agricultural expansion, and relative stability, in the highlands of Tibet. This is to say that the ecological niche modeling suggests that wheat and barley were better suited to the highlands of southwest China than were millets (and certainly rice). Both the phenology of the plants and the ecological niches in which people attempted to grow them dictated where and when people could adopt them. And the archaeobotanical record tentatively supports this.
The bread and butter of archaeological investigation hinges on observations about change in the spatial and temporal distribution of scant remains, which are then employed to illustrate change in the behavior of people. Archaeologists differ on the causes of change, and the past century and a half has seen considerable variation in the popularity of using mass migration or the diffusion of ideas as explanations for change itself. This tradition of scholarship ranges from one extreme that refuses to see change as a function of either (migration or diffusion) in favor of in-situ developments, to another extreme that explains almost everything we see in the world as a product arriving (in one way or another) from some glorious epicenter of origin (imagine the "Garden of Eden"). Both extremes are preposterous. This dissertation represents a concerted effort to understand the processes of change with a novel approach to modeling possibilities, and by tuning the modeling effort to the ever-expanding catalog of carefully collected archaeological data. It is a first-rate exploration of how the practice and products of agriculture diversified and evolved in unlikely settings. This careful, modeled approach, backed by meticulous collection of archaeological data, may prove profitable in other parts of the world where agricultural subsistence appeared considerably later than it did in neighboring regions.

Loukas Barton, PhD
Department of Anthropology
University of Pittsburgh

Primary archaeobotanical data from Southwest China
Department of Anthropology, Harvard University, Cambridge, MA. 558 pp. March 2013.
Primary Advisors: Rowan K. Flad, Richard Meadow.

Image: Photograph by Jade D'Alpoim Guedes, with millet.
Today, the Nepal-India border issue is being discussed as if it were being raised for the first time in history. And the main narrative within India is that the issue is not coming solely from Nepal but at the behest of China. The main source for such a story was the initial statement made by Indian Army Chief Naravane on the issue. According to him, "there was no dispute…there has never been any problem in the past…they (Nepal) might have raised the issues at the behest of someone else and that is very much a possibility". It seems most of the media and even academia of the largest democracy in the world subscribed to the view of an army chief without being critical and analytical. In doing so, they have not recognized that Nepal and India are the closest of neighbors, with "special" relations.

It is true that Nepal and India have some unique provisions which make the relations "special". The semi-open nature of border management; the equal treatment of citizens of both countries in matters of residence, ownership of property, participation in trade and commerce, movement, and other privileges of a similar nature, ensured by Article 7 of the 1950 Peace and Friendship Treaty; and a long-standing tradition of accepting the chiefs of the two armies as honorary chiefs of the other country's national defense force are some of the evidence that the relationship is distinct from that of any other two neighbors.

Nepal's Border Settlement

Nepal's border was outlined through a series of treaties and agreements. The Sugauli Treaty of 1816, the 1860s agreement between the East India Company and Nepal to return some parts of the plain lands of West Nepal, and the 1875 agreement renegotiating a small area of land in Dang (Duduwa) are among those that demarcated Nepal's eastern, western, and southern borders. The northern border, however, was outlined by the Betrawati treaty, a tripartite treaty signed by Tibet, China, and Nepal in 1792. Documents show that those treaties and agreements were not sufficient to finally demarcate Nepal's border with either neighbor; it took almost 170 years for Nepal to sign a border protocol on the northern border with China, and it has not yet been able to sign a similar agreement with its other neighbor, India. That is why Nepal and India are still struggling to address claims and counterclaims before signing the border protocol.

As far as the current controversy is concerned, the Limpiyadhura area was in debate just after the 1816 treaty; the then administrator Chautaria Bom Shah claimed in 1817 that the territory belonged to Nepal. Then the acting chief secretary of the East India Company, J. Adam, wrote to his representative in Kathmandu, Edward Gardner, agreeing to the assertion made by Bom Shah. But the same East India Company came up with a new map showing the river coming not all the way from Limpiyadhura but from Lipulekh. Interestingly, it was around the same period that the East India Company was returning a big chunk of fertile land to Nepal for Nepal's support in suppressing the "sepoy" mutiny and uprising against British rule in India in 1857. Nepal's then prime minister helped the British by deploying 9,000 Nepal Army personnel under his own command, and in return the British gave back some plain land of Nepal's western Tarai which had been taken away from Nepal by the 1816 treaty. But the same British regime also shifted the border point of the Kali River in its map from the original Kuti Yangti, coming from Limpiyadhura, to the Tinker River starting at Lipulekh (shown in Figure 1).
However, no document can be traced that explains the reasoning behind the British decision. Available documents tell that the territories, i.e., Nabi, Kuti, and Gunji, which were taken away from Nepal with the revised map, were not under the full control of British India. That is why residents of those areas were taking part in Nepal's 1959 parliamentary election and were enumerated in the Nepali census conducted in 1961. However, the area was slowly left behind by Nepal in recent years, and Nepal has no idea how many of its residents still identify themselves as Nepali.

As for the Kalapani area, which is situated to the east of the Tinker River coming all the way from Lipulekh, the then home minister of Nepal, Bishwo Bandhu Thapa, recalls a letter from Indian Prime Minister Nehru to King Mahendra once the former realized the geopolitical importance of the land. But after some years, India constructed a temple replicating the Kali temple in Limpiyadhura and changed the alignment from the Tinker River to the Pankha Gad, the north-east stream, thus arguing that the border lies just below Kalapani.

Border Dispute in Bilateral Forums

Though India was not at the fore in discussing the Nepal-India border issue, Nepal has been raising it; the issue, which was treated informally before 1990, started to become a major agenda item at formal bilateral tables and two-country forums. For example, the issue, which was officially discussed during Prime Minister I.K. Gujral's Nepal visit in 1997, was debated again while Foreign Minister Jaswant Singh was in Kathmandu in 1999. In between, Indian Ambassador K.V. Rajan was given a protest letter on June 8, 1998, by the then G.P. Koirala government, publicly expressing Nepal's displeasure over Rajan's remarks, which, according to Koirala, could "nullify the whole process" of border negotiations. It was Rajan who had issued a statement against the charges that India was occupying Nepali land at Kalapani, stressing that successive British-Indian and Nepali governments had acknowledged Indian sovereignty over Kalapani. Following Nepal's protest, Rajan had to issue another statement saying his remarks had been "misinterpreted".

The same issue can be seen in the important documents, i.e., the joint communiqués of subsequent high-level visits between the two countries. First, the issue was covered by Point 27 of the India-Nepal Joint Press Statement issued on March 23, 2002, after an official goodwill visit of Prime Minister Sher Bahadur Deuba to India. Through that document, the two prime ministers not only noted the importance of a scientifically demarcated alignment of the international boundary between India and Nepal, but also directed the Joint Technical Level Boundary Committee to complete its task by 2003. It was this joint statement that mentioned the Kalapani area, acknowledging that "there were differences in perceptions of the two sides". Likewise, Point 13 of the Joint Press Statement issued on the official visit of Minister of External Affairs of India Pranab Mukherjee to Nepal from 24-26 November 2008 reads, "the two Ministers noted that Joint Technical Committee on the boundary had completed scientific strip mapping of about 98% of Nepal-India border and agreed to take further necessary steps for signature of the agreed strip maps at an early date. They also directed the officials concerned to expeditiously resolve the outstanding issues relating to the boundary".
Also, Point 12 of the Joint Press Statement issued on August 4, 2014, after Indian Prime Minister Narendra Modi's official visit to Nepal states, "The two Prime Ministers also underlined the need to resolve pending Nepal-India boundary issues once and for all". In between, the border issue was handed over to the Joint Commission, a foreign minister-level bilateral mechanism between the two countries established in 1987 with a mandate to review the entire gamut of bilateral matters. Just before Modi's visit to Nepal in August 2014, the commission had directed the foreign secretaries to work on it. That is why the Joint Press Statement reads that the two prime ministers "welcomed the Joint Commission's decision to direct the Foreign Secretaries to work on the outstanding boundary issues, including Kalapani and Susta, receiving required technical inputs from the BWG as necessary. The Indian side stressed on early signing of the agreed and initialed strip maps of about 98% of the boundary. The Nepalese side expressed its desire to resolve all outstanding boundary issues". Since all these documents are in the MEA's archive, Indian academia and intelligentsia could have suggested that their own army chief read those papers before making such controversial remarks.

Is China behind Nepal in raising the issue?

Another question to be engaged here is whether China was behind Nepal on the issue. A couple of occasions and events suggest that India and China seek each other's consent when they have to deal with Nepal on major issues, and that was the case with Kalapani and Lipulekh. Nepal is well aware of the India-China Agreement on Tibet, signed on April 29, 1954, which first acknowledged Lipulekh as one of six border passes open to Indian pilgrims. The road which was inaugurated by the Indian minister of defense on May 8, 2020, was also part of an India-China agreement to expand a trade route via Lipulekh, agreed in 2015 during Modi's China visit. Once the Nepal government learned of the agreement on Lipulekh, the then prime minister, Sushil Koirala, lodged protests with both Delhi and Beijing by sending diplomatic notes.

Beijing also came up with a statement on May 19, 2020, on the Kalapani border issue between India and Nepal, in which China suggested properly resolving the disputes through friendly consultations. It was Foreign Ministry spokesman Zhao Lijian's remark at a media briefing, replying to questions on India-Nepal differences over the border, which stated that Nepal and India should also "refrain from unilateral actions that might complicate the situation". The statement itself is seen as an ambiguous one, as it did not clearly point toward any particular development: neither the inauguration of the road by India nor Nepal's decision to issue a new map adding the territory of the Limpiyadhura area. The confusion arose from the fact that it was China which was consulted at the prime-ministerial level by India in 2015 while constructing the road. If that is the case, the question here could be whether China's recent statement to "refrain from unilateral actions" is a message only to Nepal not to go "unilateral" with its new map. These outstanding border problems have complicated Nepal-India relations in a big way.
Border-related studies, including one I was a part of, have recommended that both governments realize that most of the problems, including anti-India rhetoric, are outcomes of the border dispute, and that many issues would automatically be resolved once the border-related issues were addressed. Nepal is very much aware of the fact that there is no way except negotiation to resolve it; we are also familiar with using the "give and take" method to resolve claims and counterclaims over territory, as was practiced while resolving the border dispute with the northern neighbor, China, in the 1960s. But the problem here is that Indian bureaucrats seem to have briefed New Delhi that the border issue, including Kalapani, is not a serious bilateral issue but a political tool to be consumed domestically in Nepal. In saying so, they misled the high-level leadership of India, forgetting the small-country syndrome on the issue of territory. It is high time for India to learn from past mistakes and resolve the issue through high-level political dialogue. Instead of blaming a friendly neighbor for tilting elsewhere, a quick arrangement of such dialogue is the way out for India to show its smartness, so that Nepal will no longer be able to use the issue domestically.

Uddhab Pyakurel is an Assistant Professor and Coordinator of the Master Program at the School of Arts of Kathmandu University, Nepal.
Throughout history, the status and importance of women varied by culture and period. Some groups maintained a highly matriarchal culture during certain times, while at other times they were predominantly patriarchal. Likewise, the roles of women in ancient Egypt and their ability to ascend to positions of power varied throughout history. Little is known about female status during the Early Dynastic Period (c. 3000 BCE). However, during the First and Second Intermediate Periods (2100 BCE–1550 BCE), the New Kingdom (1550 BCE–1200 BCE), and certainly during the Ptolemaic Period (300 BCE–30 BCE), Egyptians had a unique attitude about women.

The Rise and Fall of Women in Egypt

Not only were women in ancient Egypt responsible for the nurturance and admonition of children, but they could also work at a trade, own and operate a business, inherit property, and come out well in divorce proceedings. Some women of the working class even became prosperous. They trained in medicine as well as in other highly skilled endeavors. There were female religious leaders in the priesthood, but in this instance, they were not equal to the men. In ancient Egypt, women could buy jewelry and fine linens. At times, they ruled as revered queens or pharaohs. The role of women in ancient Egypt diminished during the late dynastic period but reappeared within the Ptolemaic dynasty. Both Ptolemy I and II put the portraits of their wives on the coins. Cleopatra VII became a very powerful figure internationally. However, after her death, the role of women receded markedly and remained virtually subservient until the 20th century.

How the Moon Shaped the Role of Women in Ancient Egypt

Throughout history, strong patriarchal societies tended to exist where the sun was worshiped, while matriarchal societies arose where the moon was worshiped. During much of Egyptian history, people worshiped both the moon and the sun, which gave rise to both matriarchal and patriarchal elements in society. For the most part, both the sun, Ra, and the moon, Konsu, were a vital part of the religion of ancient Egypt. It might be that the main objection to Amenhotep IV was that he stressed worship only of the sun disk at the expense of the moon god. Much of traditional Egyptian society rejected this new concept and wanted a balance between the sun and the moon.

Examples of Powerful Egyptian Women

In the middle of the 15th century BC, one of the most important people to appear on the Egyptian scene was a woman. Her name was Hatshepsut. She came to power during a very critical time in Egyptian history. For many years Egypt was ruled by the Hyksos, foreigners who conquered Egypt and attempted to destroy many important aspects of Egyptian society. In 1549 BCE, a strong leader emerged by the name of Ahmose I, founder of the 18th Dynasty. He drove out the invaders. Egypt was once more restored to its glory by the time his successor, Amenhotep I, became pharaoh. His granddaughter, Hatshepsut, became the fifth pharaoh of the 18th Dynasty in c. 1478 BCE after her sickly husband, the pharaoh Thutmose II, died. The female ruler was a builder; she directed expeditions, built ships, enlarged the army, and presented Egypt as having a major presence in the international arena. She also utilized the services of other skilled women in various governmental capacities. Interestingly, she ruled Egypt as a queen and as a king, and her statues often portray her as a man wearing a beard.
After her death, Thutmose III built upon Hatshepsut's strong foundation, which resulted in the largest Egyptian empire the world had ever seen. Amenhotep III continued to advance the cause of Egypt and to provide for its people a better life than they had ever known in the past. During this time, several women of great talent appeared and were able to make many contributions. His queen was named Tiye. She was perhaps the first in this hierarchy of counselors to the king. She presumably molded the pharaoh's thinking in matters of state and religion and provided him with strong support. It was during this time that another famous and important woman appeared. Her name was Nefertiti, and she became the wife of the son of Amenhotep III and Queen Tiye, a man known in history as Amenhotep IV and later as Akhenaten. We are now being told that Nefertiti may have been a more powerful and influential person than her husband.

The status of women in ancient Egyptian society was of such importance that the right to the crown itself passed through the royal women and not the men. The daughters of kings were all important. During the reign of Ramesses II (c. 1279–1213 BCE), his favorite wife and queen, Nefertari, was raised to the status of Royal Wife and Royal Mother. At the Abu Simbel temple in southern Egypt, her statue is as large as the pharaoh's statue. Thus, we see her portrayed as an important person during the reign of the pharaoh. Often the name of his queen Auset-nefert would appear along with his own. Thus, pharaohs such as Ramesses II, who esteemed their queens and gave them equal status, also helped to bolster the role and stature of women in ancient Egypt. It is also of interest to note that Ramesses II restored the temple of Hatshepsut at Deir el Bahri. In so many other instances, he either destroyed evidence of the very existence of his predecessors or usurped their creations, but with this famous woman, he went to great lengths to acknowledge her existence and to protect her memory.

Cleopatra VII was the seventh Cleopatra and the last of the Greek, or Ptolemaic, rulers of Egypt. Her son, Ptolemy XV, possibly reigned for a few weeks after her death; however, she was the last of the significant Egyptian rulers. She was the last of the powerful women in ancient Egypt, and after her death, Egypt fell to the Romans. Cleopatra was schooled in science, politics, and diplomacy, and she was a proponent of merging the cultures of Greece and Egypt. She could also read and write the ancient Egyptian language.

Egypt's Class Society

From the beginning, Egypt was a class society. There was a marked line of distinction that was maintained between the different ranks of society. Although sons tended to follow the trade or profession of their fathers, this was not always the case, and there were even some instances where people were able to advance themselves regardless of their birth status. Women in ancient Egypt were, like their male counterparts, subject to a rank system. The highest of them was the queen, followed by the wives and daughters of the high priest. Their duties were very specific and equally as important as those of the men. Women within the royal family performed duties much like those we see today in the role of ladies-in-waiting to the Queen of England. Additionally, the role of women as teachers and guides for their children was very prominent in ancient Egypt.

Priesthood and Non-Traditional Roles

There were holy women who possessed both dignity and importance.
As to the priesthood, and perhaps other professions, only the women of a higher rank trained in these endeavors. Both male and female priests enjoyed great privileges. They were exempt from taxes, they used no part of their own income in any of the expenses related to their office, and they were permitted to own land in their own right. Women in ancient Egypt had the authority to manage affairs in the absence of their husbands. They had traditional duties such as needlework, drawing water, spinning, weaving, attending to the animals, and a variety of domestic tasks. However, they also took on some non-traditional roles. Diodorus reported seeing images depicting some women making furniture and tents and engaging in other pursuits that may seem more suitable to men. It seems that women on every socioeconomic level could do pretty much what a man could do, with perhaps the exception of being a part of the military. This was evident when a husband died; the wife would take over and attend to whatever business or trade he may have been doing.

Marriage and Family

Both men and women could decide whom they would marry. However, elders helped to introduce suitable males and females to each other. After the wedding, the husband and wife registered the marriage. A woman could own property that she had inherited from her family, and if her marriage ended in divorce, she could keep her own property and the children and was free to marry again. Women held the extremely important role of wife and mother. In fact, Egyptian society held high regard for women with many children. A man could take other women to live in his family, but the primary wife would have ultimate responsibility. Children from other wives would have equal status to those of the first wife.

The Wisdom of the Ages

The high points for women in ancient Egypt came to a screeching halt after Cleopatra. The Greek-Macedonian Ptolemies ascended Egypt's throne beginning in 323 BCE, after Alexander the Great died. This marked a permanent and profound change from an Egyptian culture to one of Graeco-Egyptian influence. As a result of non-native Egyptian sentiments, the roles of women continued to wane during this time and into the Roman period. The well-known fact that Cleopatra VII became such a strong ruler is a testament to the tenacity of native Egyptians in maintaining their cultural views. Additionally, her shrewd intellect, wily relationship-building skills, and desire to support the Egyptian people won them over. Today, Cleopatra is remembered as the last pharaoh and, more importantly, the last female ever to be elevated to that stature by the Egyptians.
Mises Daily Articles

The Transformation of the American Party System

[A History of Money and Banking in the United States (2002)]

"William Jennings Bryan and his pietist coalition seized control of the Democratic Party at the momentous convention of 1896. The Democratic Party was never to be the same again."

Orthodox economic historians attribute the triumph of William Jennings Bryan in the Democratic convention of 1896, and his later renominations for president, to a righteous rising up of the "people" demanding inflation over the "interests" holding out for gold. Friedman and Schwartz attribute the rise of Bryanism to the price contraction of the last three decades of the 19th century, and the triumph of gold and disappearance of the "money" issue to the price rise after 1896.

This conventional analysis overlooks several problems. First, if Bryan represented the "people" versus the "interests," why did Bryan lose and lose soundly, not once but three times? Why did gold triumph long before any price inflation became obvious, in fact at the depths of price contraction in 1896? But the main neglect of the conventional analysis is the disregard of the highly illuminating insights provided in the past 15 years by the "new political history" of 19th-century American politics and its political culture. The new political history began by going beyond national political issues (largely economic) and investigating state and local political contests. It also dug into the actual voting records of individual parishes, wards, and counties, and discovered how people voted and why they voted the way they did. The work of the new political history is truly interdisciplinary, for its methods range from sophisticated techniques for voting analysis to illuminating insights into American ethnic religious history. In the following pages, we shall present a summary of the findings of the new political history on the American party structure of the late 19th century and after, and on the transformation of 1896 in particular.

First, the history of American political parties is one of successive "party systems." Each party system lasts several decades, with each particular party having a certain central character; in many cases, the name of the party can remain the same but its essential character can drastically change — in the so-called "critical elections." In the 19th century the nation's second party system (Whigs v. Democrats), lasting from about 1832 to 1854, was succeeded by the third system (Republicans v. Democrats), lasting from 1854 to 1896.

Characteristic of both party systems was that each party was committed to a distinctive ideology clashing with the other, and these conflicting worldviews made for fierce and close contests. Elections were particularly hard fought. Interest was high since the parties offered a "choice, not an echo," and so the turnout rate was remarkably high, often reaching 80 to 90 percent of eligible voters. More remarkably, candidates did not, as we are used to in the 20th century, fuzz their ideology during campaigns in order to appeal to a floating, ideologically indifferent, "independent voter." There were very few independent voters. The way to win elections, therefore, was to bring out your vote, and the way to do that was to intensify and strengthen your ideology during campaigns. Any fuzzing over would lead the Republican or Democratic constituents to stay home in disgust, and the election would be lost. Very rarely would there be a crossover to the other, hated party.
One problem that strikes anyone interested in 19th-century political history is: how come the average person exhibited such great and intense interest in such arcane economic topics as banking, gold and silver, and tariffs? Thousands of half-literate people wrote embattled tracts on these topics, and voters were intensely interested. Attributing the answer to inflation or depression — to seemingly economic interests, as do Marxists and other economic determinists — simply won't do. The far-greater depressions and inflations of the 20th century have not educed nearly as much mass interest in economics as did the milder economic crises of the past century.

Only the findings of the new political historians have cleared up this puzzle. It turns out that the mass of the public was not necessarily interested in what the elites, or national politicians, were talking about. The most intense and direct interest of the voters was applied to local and state issues, and on these local levels the two parties waged an intense and furious political struggle that lasted from the 1830s to the 1890s.

The century-long struggle began with the profound transformation of American Protestantism in the 1830s. This transformation swept like wildfire across the Northern states, particularly Yankee territory, during the 1830s, leaving the South virtually untouched. The transformation found particular root in Yankee culture, with its aggressive and domineering spirit. This new Protestantism — called "pietism" — was born in the fires of Charles Finney and the great revival movement of the 1830s. Its credo was roughly as follows: Each individual is responsible for his own salvation, and it must come in an emotional moment of being "born again." Each person can achieve salvation; each person must do his best to save everyone else. This compulsion to save others was more than simple missionary work; it meant that one would go to hell unless he did his best to save others. But since each person is alone and facing the temptation to sin, this role can only be done by the use of the State. The role of the State was to stamp out sin and create a new Jerusalem on Earth.

The pietists defined sin very broadly. In particular, the most important politically was "demon rum," which clouded men's minds and therefore robbed them of their theological free will. In the 1830s, the evangelical pietists launched a determined and indefatigable prohibitionist crusade on the state and local level that lasted a century. Second was any activity on Sunday except going to church, which led to a drive for sabbatarian blue laws. Drinking on Sunday was of course a double sin, and hence was particularly heinous.

Another vital thrust of the new Yankee pietism was to try to extirpate Roman Catholicism, which robs communicants of their theological free will by subjecting them to the dictates of priests who are agents of the Vatican. If Roman Catholics could not be prohibited per se, their immigration could be slowed down or stopped. And since their adults were irrevocably steeped in sin, it became vital for crusading pietists to try to establish public schools as compulsory forces for Protestantizing society or, as the pietists liked to put it, to "Christianize the Catholics." If the adults are hopeless, the children must be saved by the public school and compulsory-attendance laws.

Such was the political program of Yankee pietism. Not all immigrants were scorned.
British, Norwegian, or other immigrants who belonged to pietist churches (whether nominally Calvinist or Lutheran or not) were welcomed as "true Americans." The Northern pietists found their home, almost to a man, first in the Whig Party, and then in the Republican Party. And they did so, too, among the Greenback and Populist parties, as we shall see further below.

There came to this country during the century an increasing number of Catholic and Lutheran immigrants, especially from Ireland and Germany. The Catholics and High Lutherans, who have been called "ritualists" or "liturgicals," had a very different kind of religious culture. Each person is not responsible for his own salvation directly; if he is to be saved, he joins the church and obeys its liturgy and sacraments. In a profound sense, then, the church is responsible for one's salvation, and there was no need for the State to stamp out temptation. These churches, then, especially the Lutheran, had a laissez-faire attitude toward the State and morality. Furthermore, their definitions of "sin" were not nearly as broad as the pietists'. Liquor is fine in moderation; and drinking beer with the family in beer parlors on Sunday after church was a cherished German (Catholic and Lutheran) tradition; and parochial schools were vital in transmitting religious values to their children in a country where they were in a minority.

Virtually to a man, Catholics and High Lutherans found their home during the 19th century in the Democratic Party. It is no wonder that the Republicans gloried in calling themselves throughout this period "the party of great moral ideas," while the Democrats declared themselves to be "the party of personal liberty." For nearly a century, the bemused liturgical Democrats fought a defensive struggle against people whom they considered "pietist-fanatics" constantly swooping down trying to outlaw their liquor, their Sunday beer parlors, and their parochial schools.

How did all this relate to the economic issues of the day? Simply that the leaders of each party went to their voting constituents and "raised their consciousness" to get them vitally interested in national economic questions. Thus, the Republican leaders would go to their rank and file and say, "Just as we need Big Paternalistic Government on the local and state level to stamp out sin and compel morality, so we need Big Government on the national level to increase everyone's purchasing power through inflation, keeping out cheap foreign goods (tariffs), or keeping out cheap foreign labor (immigration restrictions)." And for their part, the Democratic leaders would go to their constituents and say, "Just as the Republican fanatics are trying to take away your liquor, your beer parlors, and your parochial schools, so the same people are trying to keep out cheap foreign goods (tariffs), and trying to destroy the value of your savings through inflation. Paternalistic government on the federal level is just as evil as it is at home." So statism and libertarianism were expanded to other issues and other levels. Each side infused its economic issues with a moral fervor and passion stemming from deeply held religious values. The mystery of the passionate interest of Americans in economic issues in the epoch is solved.

Both in the second and third party systems, however, the Whigs and then the Republicans had a grave problem.
Partly because of demographics — greater immigration and higher birth rates — the Democratic-liturgicals were slowly but surely becoming the majority party in the country. The Democrats were split asunder by the slavery question in the 1840s and '50s. But now, by 1890, the Republicans saw the handwriting on the wall. The Democratic victory in the congressional races in 1890, followed by the unprecedented landslide victory of Grover Cleveland carrying both houses of Congress in 1892, indicated to the Republicans that they were becoming doomed to be a permanent minority.

To remedy the problem, the Republicans, in the early 1890s, led by Ohio Republicans William McKinley and Mark Hanna, launched a shrewd campaign of reconstruction. In particular, in state after state, they ditched the prohibitionists, who were becoming an embarrassment and losing the Republicans large numbers of German Lutheran votes. Also, they modified their hostility to immigration. By the mid-1890s, the Republicans had moved rapidly toward the center, toward fuzzing over their political pietism.

In the meanwhile, an upheaval was beginning to occur in the Democratic Party. The South, by now a one-party Democratic region, was having its own pietism transformed by the 1890s. Quiet pietists were now becoming evangelical, and Southern Protestant organizations began to call for prohibition. Then the new, sparsely settled Mountain States, many of them with silver mines, were also largely pietist. Moreover, a power vacuum, which would ordinarily have been temporary, had been created in the national Democratic Party. Poor Grover Cleveland — a hard-money, laissez-faire Democrat — was blamed for the panic of 1893, and many leading Cleveland Democrats lost their gubernatorial and senatorial posts in the 1894 elections. The Cleveland Democrats were temporarily weak, and the Southern-Mountain coalition was ready to hand.

Seeing this opportunity, William Jennings Bryan and his pietist coalition seized control of the Democratic Party at the momentous convention of 1896. The Democratic Party was never to be the same again. The Catholics, Lutherans, and laissez-faire Cleveland Democrats were in mortal shock. The "party of our fathers" was lost.

The Republicans, who had been moderating their stance anyway, saw the opportunity of a lifetime. At the Republican convention, Representative Henry Cabot Lodge, representing the Morgans and the pro-gold-standard Boston financial interests, told McKinley and Hanna: pledge yourself to the gold standard — the basic Cleveland economic issue — and drop your silverite and greenback tendencies, and we will all back you; refuse, and we will support Bryan or a third party. McKinley struck the deal, and from then on, the Republicans, in 19th-century terms, were a centrist party. Their principles were now high tariffs and the gold standard, and prohibition was quietly forgotten.

What would the poor liturgicals do? Many of them stayed home in droves, and indeed the election of 1896 marks the beginning of the great slide downward in voter turnout rates that continues to the present day. Some of them, in anguish at the pietist, inflationist, and prohibitionist Bryanites, actually conquered their anguish and voted Republican for the first time in their lives. The Republicans, after all, had dropped the hated prohibitionists and adopted gold. The election of 1896 inaugurated the fourth party system in America.
From a third party system of closely fought, seesawing races between a pietist-statist Republican Party and a liturgical-libertarian Democratic Party, the fourth party system consisted of a majority centrist Republican Party as against a minority pietist Democratic Party. After a few years, the Democrats lost their pietist nature, and they too became a centrist (though usually minority) party, with a moderately statist ideology scarcely distinguishable from the Republicans. So went the fourth party system until 1932. A charming anecdote, told us by Richard Jensen, sums up much of the 1896 election. The heavily German city of Milwaukee had been mainly Democratic for years. The German Lutherans and Catholics in America were devoted, in particular, to the gold standard and were bitter enemies of inflation. The Democratic nomination for Congress in Milwaukee had been obtained by a Populist-Democrat, Richard Schilling. Sounding for all the world like modern monetarists or Keynesians, Schilling tried to explain to the assembled Germans of Milwaukee in a campaign speech that it didn't really matter what commodity was chosen as money, that "gold, silver, copper, paper, sauerkraut, or sausages" would do equally well as money. At that point, the German masses of Milwaukee laughed Schilling off the stage, and the shrewdly opportunistic Republicans adopted as their campaign slogan, "Schilling and Sauerkraut" and swept Milwaukee. The Greenbackers and later the pro-silver, inflationist, Bryanite Populist Party were not "agrarian parties"; they were collections of pietists aiming to stamp out personal and political sin. Thus, as Kleppner points out, The Greenback Party was less an amalgamation of economic pressure groups than an ad hoc coalition of "True Believers," "ideologues," who launched their party as a "quasi-religious" movement that bore the indelible hallmark of "a transfiguring faith." The Greenbackers perceived their movement as the "religion of the Master in motion among men." And the Populists described their 1890 free-silver contest in Kansas not as a "political campaign," but as "a religious revival, a crusade, a pentecost of politics in which a tongue of flame sat upon every man, and each spake as the spirit gave him utterance." The people had "heard the word and could preach the gospel of Populism." It was no accident, we see now, that the Greenbackers almost invariably endorsed prohibition, compulsory public schooling, and crushing of parochial schools. Or that Populists in many states "declared unequivocally for prohibition" or entered various forms of fusion with the Prohibition Party. The Transformation of 1896 and the death of the third party system meant the end of America's great laissez-faire, hard-money libertarian party. The Democratic Party was no longer the party of Jefferson, Jackson, and Cleveland. With no further political embodiment for laissez-faire in existence, and with both parties offering "an echo not a choice," public interest in politics steadily declined. A power vacuum was left in American politics for the new corporate statist ideology of progressivism, which swept both parties (and created a short-lived Progressive Party) in America after 1900. The Progressive Era of 1900–1918 fastened a welfare-warfare state on America that has set the mold for the rest of the 20th century. 
Statism arrived after 1900 not because of inflation or deflation, but because a unique set of conditions had destroyed the Democrats as a laissez-faire party and left a power vacuum for the triumph of the new ideology of compulsory cartelization through a partnership of big government, business, unions, technocrats, and intellectuals.
President Franklin D. Roosevelt signed Executive Order 9066 on Feb. 19, 1942, setting in motion the rounding up and incarceration of more than 120,000 Japanese-Americans. Florence Daté Smith was one of those put into internment camps during World War II. Here is her story, originally featured in the November 1988 issue of Messenger:

On December 7, 1941, I was in the library at the University of California. There was a sudden disruption in that customarily muted and somber sanctuary. Someone had brought in a radio. Whispered words swept through the halls: "Japan has attacked Pearl Harbor!" It seemed at that moment that the entire campus community came to an abrupt halt. My world as I knew it halted also, and a new one began.

I was a 21-year-old student, majoring in Far Eastern studies there in Berkeley. My parents had come to the United States from Hiroshima, Japan, in the early 1900s. I was born in San Francisco and so was a "Nisei," or second-generation American, a US citizen. My parents, by US laws then in effect, could never become citizens, only permanent resident aliens. The parents of us Niseis were concerned too. But, confident in the ways of democracy, they said that whatever happened to them now, we were to carry on in their places at home and at work. They never dreamed that their children — solid American citizens — would be affected.

For us Niseis on campus, changes occurred rapidly. One by one, students from out of town were called home. My own college support group quickly disappeared. Soon a curfew for all persons of Japanese descent — aliens and American citizens alike — was proclaimed. I felt as if I were under "house arrest," since I usually spent my days and most of my evenings in the library or in class. Now we were confined to our homes between the hours of 8 p.m. and 6 a.m. Furthermore, we were restricted in travel to a 5-mile radius from our home. I wanted to shout, "Why us? What about persons of German and Italian descent?"

Then came another order: Turn in all cameras, flashlights, phonograph records, short-wave radios, chisels, saws, anything longer than a paring knife, even some items that were family heirlooms. Newspapers and radios daily blared headlines about the dangerous presence and activities of the Japanese. Commentators such as Westbrook Pegler wrote, "Herd them up, sterilize them, and then ship them back to Japan, and then blow up the island!" Then followed another order. Each family was to register and thereby receive a family number. We were now No. 13533. Our country had made us mere numbers!

In April 1942, Civilian Exclusion Order No. 5 was announced by the Western Defense Command, addressed to all persons of Japanese ancestry. This order was posted publicly and conspicuously everywhere. Everyone in town could see it. I felt like a branded criminal, innocent, yet guilty of something. I was totally devastated. Did everyone have to know? I just wanted to disappear quietly, right then and there, like a ghost. Parents had accepted our being denied entry to public swimming pools, restaurants, and hotels, as well as being restricted from land ownership or immigration quotas. But criminal accusations sufficient to warrant incarceration of citizens was another story. Obviously I could not sink quietly under the waters without a ripple.

One afternoon, while I was on my way home from my last day at the university, a group of young school children with long sticks in their hands converged about me, shouting, "A Jap! A Jap! A Jap!" I was uneasy, but not afraid.
Very Asian thoughts went through my mind. How was it that these youngsters had no respect for an adult? But my second thought was, "Well, I am only No. 13533."

The date of our departure for internment was announced. Four days later we reported dutifully to the Civilian Control Center. We had, in those few days, hurriedly disposed of our entire household goods. Rapacious, bargain-hunting neighbors and strangers descended upon us. We were at their mercy, and constrained by the urgency of time. They would say, "How about giving me your piano for $5, or your refrigerator for a couple of dollars?" We were helpless. We could only say, "Take it." I saw my father give away my mother's prized possessions. We were instructed to go with our bedding, a tin plate, cup, knife, fork, and spoon, and "only what we could carry." With these things we waited at the center to be sent to some mysterious "reception center" somewhere out there. I thought, "This is it. I am now an object."

At the Civilian Control Center I was at first shocked to see armed guards. For the first time I felt extreme anger. Uniformed men with guns were stationed everywhere. "Why?" I wondered. We had presented ourselves peacefully and certainly we would continue to do so. Towering guards herded us toward the buses. We quietly boarded, not because of the bayonets and guns, but in spite of them.

Perhaps you wonder why and how thousands of persons of Japanese ancestry, over 70 percent of them American citizens, so willingly and nonviolently left their homes in haste and entered into 10 concentration camps located in the barren, unproductive areas of the United States. All through my childhood, my parents encouraged me to integrate American values. I learned them well in the public schools — the beliefs and concepts of democracy, equality, the Bill of Rights, and the Constitution. Yet, simply by observing my parents' responses and behavior, I inherited their communication and relationship values, which were a mixture of Buddhist, Shinto, and Christian religious concepts. I felt enriched for I was a product of two worlds. I do not remember ever wishing I were other than Japanese and American.

Now I was confronted by this near impossible balancing of two different viewpoints — 1) belief in liberty and the freedoms guaranteed by the US Constitution and 2) the precept that respects authority, offers subservience, and accepts "what will be will be." This was difficult to face at that point in my life. I was deeply affected and agitated, more than I was able to acknowledge…until decades later.

Recent studies have proven helpful to me. Japanese and Western cultural values were compared in the areas of communication, personal relationships, and perception. In contrast to Westerners, the Japanese generally are more receptive than expressive, listen more than confront, show emotional restraint, exhibit humility and self-sacrifice, favor harmony and conformity, and have an unusually high respect for authority. I was the product of a typical Western educational system, but I held many Asian cultural values. Thus there had been a war waging within me. One side said, "Be assertive, verbally expressive, believe in equality, exercise the freedom to be an individual." The other side said, "Be in unity, be humble, remember harmony and conformity, respect authority first, consider the welfare of the group and community rather than that of the individual. In this is your strength." In this struggle the second side won, but at a heavy price.
We followed all the proclamations and orders issued by both civilian and military authorities. At the "reception center" I experienced added insults to my psyche. I could hardly believe that my new home was Horse Stall No. 48 at the Tanforan Race Track, in San Bruno. Manure had been shoveled out, hay removed, and the remaining debris — including spider webs — was whitewashed over. There was a semblance of cleanliness. We slept on mattresses that we filled with straw. Up in the grandstand there were functioning flush toilets with signs posted that proclaimed, "For whites only!" We had latrines. We had to go out in the weather for everything. We ate in mess halls. I wondered if anyone could imagine the depth of my pain.

We were there at the race track, behind barbed-wire fences, watched day and night by armed guards in sentry towers. There was roll call twice a day, at 6 a.m. and 6 p.m. I refused to be counted at 6 a.m. All our mail was opened and censored. Edible gifts brought in by outside friends were cut in half, in search of smuggled weapons. Under armed guard, there were two unannounced, unexpected raids to uncover subversive materials and weapons. None were found. Indeed, we had become simply prisoners.

By the fall of 1942, children, youth, young people, and the elderly were located in one of 10 camps in bleak, isolated desert lands. No one was accused of any crime, and yet no one was able to call upon the protection guaranteed us by our country's constitution. Relocated in Topaz, Utah, out in the desert, I taught in the upper elementary grades for $19 a month. My "appointive" Caucasian colleague told me she made $300, plus living expenses, for the same work. I had repressed feelings about that situation too. One day I strolled over to see how my colleague lived. A large sign was posted boldly in her block, "For appointive staff only." I wondered what would happen to me if I were apprehended. I even stopped and used their restroom before leaving. I confess that my resentment was showing.

It jarred my very personhood and integrity to be:
- accused unjustly of being a dangerous citizen, forcibly moved to this remote area of the United States, while hundreds of thousands of Hawaiian-Americans of Japanese descent, as well as German- and Italian-Americans, were not;
- confined behind barbed-wire fences, together with 10,000 persons in one square mile, with families living in accommodations meant for single men, in military barracks with mess halls and latrines;
- watched day and night by armed guards who were ordered to shoot on sight anyone appearing or attempting to leave the area (it did happen in Topaz: a guard shot an elderly man who thoughtlessly stepped too near a fence to pick up an arrowhead);
- incarcerated as a potential saboteur and then, nine months later, have the armed services begin to recruit volunteers from these camps;
- asked to swear unqualified allegiance to the United States and at the same time foreswear any form of allegiance to the Japanese emperor or any other foreign power.

Feelings ran high at this point. How could loyalty to the United States be questioned when at the same time the government was seeking among us volunteers for military service? Over a thousand volunteers joined from these internment camps to become part of the most highly decorated American combat unit in the entire history of our country. These men were determined to demonstrate their loyalty to the United States.

In another area I was hurt to the quick.
As a teacher, I saw the effects of this internment life upon the children of the camp community. They roamed about, no longer responsible to their own parents. Why should they be? These parents could not even provide their own children with protection or even support them. In the classrooms I was saddened to see children exhibit discourtesy and disrespect toward teachers, authority, and each other. They seemed lost, indeed. My task was to educate them academically and, in addition, help them regain self-respect.

My mother, a former teacher and an observant person, said that during those years I appeared rather grim. I was. I was unable to confide to her the fact that I was depressed, lonely, overwhelmed, and was facing a frightening future. Suddenly I had become the "head of the family," for I was the sole American in the family in a country that was treating us hostilely. To make matters worse, my father was hospitalized with tuberculosis. I was told by the unsympathetic Caucasian hospital administrator that my father would never leave the hospital and that furthermore the doctor did not care about this case. When I reported this incident to my minister, all the evacuee ministers in camp dressed in their Sunday best and made a "call" upon this medical officer. Misdiagnosed, my father lived for 13 years after being released from camp. But my mother died four years after entering internment. She needed medical care and surgery that neither the camp personnel nor the hospital could provide. For us, Father's hospitalization marked a permanent separation for us as a family.

After we had been interned about a year and a half, the government realized its mistake and began to encourage us to leave. It saw that there was no good reason to keep us interned. The original reason for interning us was no longer valid, as there was no proof that we had done anything to undermine the US war effort. We were not potential saboteurs. But, more important for the government, keeping us in the camps was expensive.

Eventually I went to Chicago, through the Quakers, to work at a Presbyterian settlement house. From the 1950s to the late 1970s, I lived in Lombard, Ill., near the York Center Church of the Brethren. My husband and I were pacifists and we also believed in simple living and in outreach, so we were drawn to York Center church, while Lee Whipple was pastor. In 1978 we moved to Eugene, Ore., and became part of the Springfield congregation.

For over 35 years I did not talk to anyone about my internment years and the scandal of it. And I refused all speaking invitations. The reason I now go to schools to give presentations is that we former internees are a dying generation, and when I look at the school textbooks I see nothing about the internment. So I realized that if I didn't speak out it would become secondary information; the primary sources soon would be gone. I have created a slide presentation, and dug out pictures from books and old records, relying on the Armed Services and the government archives. We were not allowed to have cameras in the camps, of course.

Not even my children had known my story earlier. They complained that they did not hear about it. They heard their father talk and joke about his prison experiences as a World War II conscientious objector, but I did not make one peep. Of course our children saw this contrast between their parents. But I just could not talk about it.
I know now that it would have been emotionally and psychologically healthy to talk, and that I should have done it 30 or 40 years ago. But we were such zombies then. We thought it was violent or disrespectful to react like that. The experience was too traumatic; it devastated our personhood. This happened to all of us. Through the years individuals such as the late Min Yasui and agencies such as the Japanese American Citizens League have worked to obtain redress for the victims of the internment. The Church of the Brethren Annual Conference and the General Board, over the years, petitioned Congress to acknowledge the wrongness of the internment and to make just redress. In 1976 President Gerald R. Ford rescinded President Franklin D. Roosevelt's infamous Executive Order 9066 of 1942, which had sent over 100,000 Japanese-Americans to concentration camps. This past August 10, President Ronald Reagan signed H.R. 442, which offers restitution of $20,000 to each surviving victim of the internment and an official government apology. This is my story. I tell it now to help people know about and understand the pain that the internment caused, so that such an atrocity will never happen in this country again. First published in the November 1988 issue of the Church of the Brethren magazine "Messenger." Florence Daté Smith lives in Eugene, Ore. She has been a long-term member of Springfield Church of the Brethren.
Kapalabhati (Frontal Brain Cleansing Breathing)
By Dr. Rita Khanna
"Kapal" is a Sanskrit word meaning forehead or skull, and "Bhati" means shine or light. Thus, Kapalabhati means an exercise that makes the skull shine. It also means that Kapalabhati is the practice which brings a state of light and clarity to the frontal region of the brain, with an inner radiance. Kapalabhati is a highly energizing abdominal breathing exercise. It is actually a cleansing exercise: it purifies the blood, removes toxins from the body, cleans the nasal passages and removes bronchial congestion. In this Pranayama, complete attention is given to exhalation, and hardly any effort is applied to inhalation. Inhalation is mild, slow and longer than the exhalation. Quick exhalations and natural inhalations follow each other. Kapalabhati should be practiced on an empty stomach, or three to four hours after a meal. Sit on the floor in any comfortable and simple posture or, if you prefer, sit in a chair. Keep your spine and head erect and the hands on the knees. Close the eyes. Make sure the body and mind are relaxed.
Step 1. After taking a comfortable sitting position, take a few deep breaths consciously. See that the diaphragm is moving properly. During inhalation, the diaphragm descends and the abdomen is pushed out. During exhalation, the diaphragm pushes the lungs up and the abdomen goes toward the spine. This constant up-and-down movement of the diaphragm draws the air in and out.
Step 2. Inhale slowly and comfortably, relaxing the abdomen and allowing the air to return gently to the lungs.
Step 3. At the end of each inhalation, exhale rapidly and forcefully through the nostrils by contracting the abdominal muscles quickly with a backward push, causing the diaphragm to rise and force the air out of the lungs. This completes one cycle of Kapalabhati.
Repeat this several times slowly (up to 5 rounds of 10 breaths). When you are comfortable, increase this to 20 breaths. Under the guidance of a teacher, you can extend the number of rounds each week. But under no condition should one go beyond one's capacity.
- It is important that the rapid breathing used in this technique be from the abdomen and not from the chest.
- If you start experiencing any pain or dizziness, stop the practice immediately and sit quietly or just lie down in Shavasana for some time.
- When the sensation has passed, recommence the practice with more awareness and less force. If the problem continues, consult a yoga teacher.
Benefits of Kapalabhati:
- Kapalabhati is the best exercise to stimulate every tissue of the body. It activates, energizes, revitalizes and recharges the entire body. During and after the practice, a peculiar vibration and joy can be felt, especially in the spinal centres. When the vital nerve current is stimulated through this exercise, the entire spine feels like a live wire and one can feel the movement of the nerve current.
- The constant up-and-down movement of the diaphragm acts as a stimulant to the liver, spleen and the abdominal muscles. It balances and strengthens the nervous system, tones the digestive organs and improves digestion. It develops strength and stamina and coordinates the abdominal muscles.
- Kapalabhati Pranayama is especially effective in lowering carbon dioxide (CO2) levels in the lower parts of the lungs. It cleanses the lungs and the entire respiratory system. The blood is purified and the body gets an increased supply of oxygen to all cells.
- This practice is highly effective in controlling illness, allergies, obesity, constipation, diabetes, kidney / prostate / uterus diseases, lung problems and many other diseases. This technique increases the glow on the face of the practitioner.
- On a mental level, it energizes and prepares the mind for meditation by removing laziness and sensory distraction from the mind. It also brings mental clarity and alertness.
- While exhaling, do not use force from the rest of the body, and there should not be any strain or jerk in the muscles of the face. Keep your facial muscles relaxed, especially the corners of the lips and the muscles of the nose and eyes. Ensure that the inhalation is very, very slow, while the release of the breath comes at great speed. Tensing these muscles may aggravate conditions such as epilepsy, high blood pressure and paralysis.
- In this process, the shoulders and other parts of the body should not move up and down. Normally when people breathe out, they bend the body forward from the waist and give a jerk, or shake the shoulders and head violently, which is highly incorrect. The body must be steady and quite peaceful, and the face calm.
Although Kapalabhati has tremendous benefits for a practitioner, there are some health conditions in which this breathing technique should not be practiced without supervision. Those suffering from heart disease, high blood pressure, vertigo, epilepsy, hernia, gastric ulcer or recent surgery should take advice from a yoga expert. If any of the above diseases are in an acute form, I would recommend abstaining from Kapalabhati. Now a question arises in most of our minds. Since this form of pranayama is taboo for so many physical conditions, and can also have many side effects, how should one go about it? And is it advisable and safe to learn and practice this form from books or from watching certain TV programmes? Pranayama is a science and needs to be done with accuracy as well as with precautions. As we all have different body structures and varying fitness levels, conducting these exercises without first checking medical conditions can be quite counterproductive and even risky. I have come across scores of people who have been doing this pranayama in a very wrong manner after learning it from TV / DVD / books. Seeing them, one really felt scared. Not only are they harming themselves, they are also propagating the same to others. My earnest request to everyone is that this pranayama should be done with guidance from qualified and experienced yoga experts. Always check with your doctor if you have any doubts or concerns regarding the suitability of this breathing technique for you.
Dr. Rita Khanna is a well-known name in the field of Yoga and Naturopathy. She was initiated into this discipline over two decades ago by the world-famous Swami Adyatmananda of Sivananda Ashram in Rishikesh. She believes firmly that Yoga is a scientific process which helps us to lead a healthy and disease-free life. She is also actively involved in practicing alternative medicines like Naturopathy. Over the years, she has been successfully practicing these therapies and providing succour to several chronic and terminally ill patients. At present, Dr. Rita Khanna is teaching Yoga in Secunderabad. She has been treating and curing various diseases and disorders through Yoga, Diet and Naturopathy and has been achieving tremendous satisfaction in disseminating this virtue.
From Joseph McNulty
A good set of reading as usual this month (January). Just a few words about High Blood Pressure (HBP), or Hypertension.
I should like to add to the list of advice points. Of course meditation is a great help. The groundbreaking clinical studies by Wallace and Benson on the efficacy of meditation show clearly that, amongst other effects, HBP drops measurably after meditation. Another point to note is that total abstention from alcohol is essential, and of course from smoking. It may be of interest to note that after a lifetime of yoga and meditation my blood pressure is measured every year and I have had no medication on this account whatsoever. My blood pressure is 119/68 on average over the year, and my doctor concludes it is excellent. Hence my advice: Yoga and meditation every day of your life, and no alcohol or smoking. Keep up the good work.
Teaching Yoga for Stress Management – Stress Relief for Teenagers
By Dr. Paul Jerard, E-RYT 500
Yoga practice has realistic solutions for stressed-out teenagers. Young people need to take time out for non-competitive and wholesome activities, such as Yoga. When teens have a chance to explore themselves from within, this is time well spent. There has never been a time when teenagers were subjected to more stress than right now. Reuters Health reported, "One third of US teens say they feel stressed-out on a daily basis." This was based upon a study of over 8,000 teens and young adults at the University of Michigan, Ann Arbor. For adults who lack compassion for young people trying to cope in our society, consider this: the leading cause of death in teens and youths, ages 10 to 19 years old, is "teenage suicide." Stress can place young people at risk. According to the US Department of Justice, "It is estimated that 500,000 teenagers try to kill themselves each year." The sources of and reasons for teenage stress on such a massive scale are subject to theory, but let's take a look at some of the reasons why so many young adults and teenagers are at risk. Family units are challenged, because many teens live in single-parent families. Parents work so much that "bonding time" is compromised. Peer pressure has always been part of the backdrop of finding one's self as a teenager. Technology also plays a role in pushing teens further than ever before. Sure, they are privileged to have access to so much information, but they also suffer from information overload. On top of this, high expectations are placed on teens for social status, academic performance, athletic performance, performance in the entertainment industry, and so on. So how can Yoga help teens to cope with stress? Regular teen Yoga sessions or classes should contain physical posturing (asanas), Yogic breathing (pranayama), laughing, positive affirmations, and learning to create an automatic relaxation response on a daily basis. Teens must learn to reserve regular "Yoga time" for themselves. Working part-time, studying for SATs, getting a date for a prom, and preparing for college are part of becoming a young adult, but there needs to be time to constructively "unplug" from all of it. Yoga delivers mental clarity to all practitioners, regardless of age. Teens can learn to pursue one short-term goal at a time. This will make daily life much more manageable. Teens should learn various Yogic relaxation techniques, such as body scanning, stage-by-stage relaxation, and progressive muscle relaxation. Physical, mental, spiritual, and emotional health can be restored by learning to accept oneself, as is.
Teens can condition and prepare themselves to realize that they will not be in control of every situation life throws at them. Open discussions with their peers after a Yoga session, in support groups, teen meetings, after-school activities, or a public speaking class will strengthen teen social skills and character. There is a huge demand for teen stress management services, and Yoga teachers are sitting on a multitude of solutions for teens and their families. The reason is simple: teens are at risk because of internal and external pressure. This may seem like nothing new to most parents, but according to a survey conducted by The National Center on Addiction and Substance Abuse (CASA) at Columbia University, teens are more likely to resort to illegal drugs or alcohol due to high levels of stress. Again, this should come as no surprise to adults, as the adult behavior is identical. Many adults use illegal drugs or alcohol due to excessive stress. Teens will naturally copy the familiar adult examples which they have observed over time. On another note: if young celebrities and professional athletes are abusing themselves, why should we expect teens to be any different? These are the people our children perceive to be role models. The television is no longer a reliable "babysitter" for young children or teens. Parents are challenged to censor entertainment and to become better examples than traditional role models. This comes at a time when many middle-class parents may be working two jobs each, just to make ends meet. What difference can Yoga make in the daily lives of teens? One major difference is bonding time with family and parents. Many families do not eat their meals together. This turns contemporary families into strangers who live in the same home. From the time a child is born, there is a need to develop solid relationships with the rest of the family unit. If relationships within the family have become strained due to divorce, separation, death, fighting, or illness, there is still time for mending family ties. Professional counseling should be a consideration, as well as participation in non-competitive activities. This is where Yoga can fit into the family's weekly schedule. When families make an appointment to practice Yoga together, this will solidify the individual relationships within. Yoga teachers and studios should run workshops or surveys to monitor local demand for family, teen, kids, or "mommy and me" Yoga classes. These classes make a difference in your community and will save the lives of at-risk teens. Parents who are seeking family-oriented classes but cannot find them in their area can learn what they need to know from local Yoga teachers. If this is not possible, learn to develop a safe practice from Yoga books, videos, and courses. Your children can learn with you, as there are a number of videos and books designed for their age. Make sure that safety is your primary concern, and you will enjoy your bonding time.
© Copyright 2008 – Paul Jerard / Aura Publications
I work with a lot of folks who are new to information architecture, and I've answered a lot of "but where do the controls go?" questions lately. There is a definite sense of directionality to interfaces, and understanding the expected direction that the user moves through the page helps you determine how to control the display of, and changes to, the content on the page. It will also help you understand why a poorly-placed control causes confusion or frustration in your users. The controls to display or interact with a piece of content should be above and/or to the left of whatever they're controlling. Caveats and preconditions: First, everything in this article applies to left-to-right (LTR) languages. If you're designing in a right-to-left (RTL) language, an up-down language, or a down-up language, your mileage may vary, but flipping the rule is probably a good start. Second, this pattern requires two things: containers you want to manipulate, and controls to manipulate them. Containers may refer to:
- Forms (particularly forms with subsections or embedded tables)
- Charts and sets of charts
In this case we're also going to take specific types of encapsulated containers out of the pattern, including:
- Video / video controls
- Audio / audio controls
- Search boxes
We'll explain why in the Exceptions section. These lists are not all-inclusive. The design principles Let's start with the obvious: left-to-right (LTR) languages are read left to right. So at the highest level of generalities, readers are more likely to notice the thing at the top left first and the thing at the bottom right last. When a user is scanning a page to get a sense of place, we call this specific pattern F-Shaped Scanning because of the roughly f-shaped eye-tracking patterns it produces. A sense of place is critical for a user to acquire; knowing where they are, what the state of the place is, and what they can do allows them to move forward (or backward) toward their intended goals. Because "Where the hell am I?" is the first question anyone has in any situation, and because people read left to right, we answer the question "where the hell am I?" by putting the wayfinding tools (the site title, logo, and global navigation) as far up and to the left as we can get. Consider this the functional equivalent of the "Welcome to Pennsylvania!" sign you get when you cross a state border. Even assuming you knew you were going to Pennsylvania, it's good to know when you've crossed the border, and if you didn't know you were visiting the Keystone State today it's really important to find out that you have. Almost as important as answering "Where am I?" is answering "What can I do here?" You may have also heard this referred to in terms of questions like "Can the user perceive the purpose of the page?" or "Does the user identify the call to action?" (To answer "What can I do here?" we also have to answer "And what state am I in right now?" Most of the visual components that we use to present information display both state and affordance simultaneously through a visual language, but if we're presenting information through auditory means via a screen reader or speech-controlled interface, we have to specify both.) In general, it's best to present interaction controls either in conjunction with the wayfinding points or directly after them. So for example, a tabset is both wayfinding (answering "Which content set am I on?") and interaction (answering "How can I switch to other related content?"). A minimal sketch of this arrangement follows.
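To make that concrete, here's a minimal sketch of the top-left-first ordering. Everything in it is invented for illustration (the site name, classes, and IDs), and a production tabset would also need keyboard handling; the point is only the sequence: wayfinding first, then the interaction controls, then the content they control.

```html
<!-- Wayfinding first: "Where am I?" is answered at the top left. -->
<header>
  <a class="logo" href="/">Acme Reports</a>
  <nav class="global-nav">
    <a href="/dashboards">Dashboards</a>
    <a href="/settings">Settings</a>
  </nav>
</header>

<main>
  <!-- The tabset is both wayfinding and interaction,
       so it sits above the content it switches. -->
  <div role="tablist">
    <button role="tab" aria-selected="true"  aria-controls="charts">Charts</button>
    <button role="tab" aria-selected="false" aria-controls="tables">Tables</button>
  </div>
  <section id="charts" role="tabpanel">…chart content…</section>
  <section id="tables" role="tabpanel" hidden>…table content…</section>
</main>
```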
By contrast, a call to action to "buy it now!" should be to the right of (or below) the name of the product, because nobody wants someone shouting "BUY THIS THING!" when they don't know what the thing they're being asked to buy is yet. You may have noticed that the direction in which we read and the containers that we digest convey more than just wayfinding and interaction, however. They also convey a sense of time. For example, as the reader you assume that if I were reading this paragraph out loud to you, I would read it after the content above this paragraph, and before the content below this paragraph. Events lower on the page are perceived as taking place after events higher on the page, even though somewhere in our brains we know that the entire page is present all at a single moment. The reason that we put the wayfinding above or to the left of interactions is because we want to know "where am I?" before we know "what can I do?", and in a left-to-right language, earlier events get put to the top or the left of later elements. The time component is true of any reading, but is most obvious in serial art or comics. In Understanding Comics, Scott McCloud explains (emphasis his): In learning to read comics we all learned to perceive time spatially, for in the world of comics, time and space are one and the same… so as readers, we're left with only a vague sense that as our eyes are moving through space they're also moving through time — we just don't know by how much! Obviously, if a perception of time is present in straight written words and also in something as visually-oriented as comics, we can assume it also applies to our less-visual-than-comics, more-visual-than-a-novel websites. When using a web interface, users hate moving backwards in time. Think about the last time you filled out a long form on the web. Maybe it was shipping information. Maybe it was a mortgage. Whatever it was, I'm fairly confident that if the form designers required you to get information from the top of the form to continue something at the bottom of the form, it made you grouchy. We expect that if we've provided something at the top of an experience, it's remembered during the time it takes us to get to the bottom. The most frequent information architecture mistake I see in form design is form fields that don't respect the perceived time directionality of the form. For example, I was recently shown a form where checking a box in row 6 would change a value in row 2. When you're filling out that form, and you've reached row 6, row 2 is in the past. You're neither going to expect to have to recheck your past nor look for changes — especially if you're on a small screen and row 2 is no longer visible! On the other hand, if row 2 affects the values (or fields) available in row 6, well, you haven't even gotten to row 6 yet… it's in the future. Or to put it another way, "If X then Y" has to run in the order "first ask X, then ask Y". Even if you can't quite grok that for visual interfaces, think of the impact on users who are listening to the screen… if you update something that's already been read, it's like interrupting yourself to say "nope, changed my mind", and then you have to tell the user about the whole form all over again or risk confusing them. The sketch after this paragraph shows what this "downstream only" ordering looks like in a form.
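Here's one hedged way to honor that ordering in markup. The fields and data are hypothetical; the thing to notice is that the country selector sits earlier in the document than the region field it repopulates, so every change lands in the user's "future", never their past.

```html
<form id="checkout">
  <label>Country
    <select id="country">
      <option value="US">United States</option>
      <option value="CA">Canada</option>
    </select>
  </label>

  <!-- This field is BELOW the control that changes it; nothing
       the user has already passed is rewritten behind their back. -->
  <label>State / Province
    <select id="region"></select>
  </label>
</form>

<script>
  // Illustrative data only; a real form would carry complete lists.
  const regions = { US: ["California", "New York", "Pennsylvania"],
                    CA: ["British Columbia", "Ontario", "Quebec"] };
  const country = document.getElementById("country");
  const region  = document.getElementById("region");

  function fillRegions() {
    region.innerHTML = "";
    for (const name of regions[country.value]) {
      region.append(new Option(name));
    }
  }
  country.addEventListener("change", fillRegions);
  fillRegions(); // populate for the default country
</script>
```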
In addition to wayfinding, interaction-finding, and the perception of time, our controls need to be visually associated with the things they affect. The Gestalt Principle of Proximity states that things that are grouped together appear to be related. That means that if you want a control to be associated with a container, they have to be near each other. The further apart they are, the less connected they look. Fitts's Law strengthens this principle by pointing out that the further apart two things are, the bigger you have to make the second one to make it quickly usable (the classic formulation models the time to acquire a target as T = a + b·log2(1 + D/W), where D is the distance to the target and W is its width). The Law of Uniform Connectedness helps us strengthen the connections between controls and the things they affect by stating that elements that are visually connected are perceived as more related than elements with no connections… which is a fancy way of saying "draw a line between two things or put a box around them, and they look like they work together." A popular mistake is putting a control at the top of a page that doesn't affect anything except at the bottom of the page. Another popular mistake is putting a control inside a child container that actually controls the parent. So here's a quick test: if you can draw a box around the thing being changed and the control, and everything in the box is being controlled by the control, then they're probably (but not always) associated correctly. I see this mistake most often with tabsets and charts, so here's an example. If we put the controls above and to the right of the tabset's content body, they're perceived to control not just the current tab's information but also the information on the other tabs. If we put the controls within the body of the Charts tab, but outside of the body of any of the charts on the page, the controls are perceived to control all of the charts on the page. For heaven's sake, don't put in exceptions! Nothing drives a user crazy like having three charts that will switch between day, week, and month with a single control and one chart that's like "nope, I'm not even time-related, I need different controls". So what if we do have four charts that require at least two different controls? At that point the best thing we can do is give each chart its own control, so that the user can see explicitly which controls affect which charts. Oh, and if you have to group your charts into two sub-containers for "affected by time" and, say, "affected by amount spent" or something, each of those containers only needs one control… as long as all its children use the same scale. One exception to this rule occurs when you can use the Principle of Proximity and containerize the controls to be very obviously related to the thing to their direct left or directly above them. Remember when we mentioned that we weren't going to talk about video players? That's because the controls for a video player are generally below the video player, but there's such a strong sense of containership, thanks to the Principle of Proximity and the Principle of Uniform Connectedness, that we can get away with it. Containers with encapsulated controls such as video players, audio players, and carousels tend to display mostly visual or visual/auditory content (as compared to written or mixed types), have specialized controls such as play/pause which are different from most of our other use cases, and don't change the content of anything other than their own container. (Also, fuck carousels.) Another exception can be found in micro interactions for form fields. Search fields, select menus, and similar elements may have controls or affordances at the right or bottom, but they are so tightly visually coupled with their content that they're viewed by the user as a single element.
That isn’t to say that with some creative destruction you can’t break their usability through CSS styling, but at least you’ll have to put some effort into it. Web pages have directionality. Assuming you’re working in a LTR language, you should ensure that you’re answering wayfinding questions to the top or left and interactivity questions directly after that, also to the top or left of the content container you’re affecting. Users don’t like to go backwards in time, so make sure that your conditional logic never asks a user to move back up or to the left of the page because of something they completed further down or to the right. Users also don’t understand when controls are far away from the things they’re controlling, or are perceived to control a larger set of things than they actually affect, so follow the Principle of Proximity and put your controls closest to the border of the container they’re affecting. Finally Gestalt principles, especially around proximity and uniform connectedness, can help you strengthen relationships between controls and content when you need to break the rules, but avoid breaking the rules whenever you can, so that your sites will make more sense.
- 2013-2015 British Journal of Nutrition Meta-Analyses
- 2013 PLOS ONE Milk Study
- 2012 Annals of Internal Medicine analysis by Smith-Spangler et al.
- 2009 American Journal of Clinical Nutrition study by Dangour et al.
- The 2008 Organic Center Report
- Introduction to Nutritional Quality
- Organic Vs. Conventional Foods
- Key Nutritional Quality Studies: Food Comparison
This page is a portal for important, large-scale meta-analyses that have taken a comprehensive comparative look at the nutritional quality of organically and conventionally grown foods. Meta-analysis is a statistical tool used to extract more robust insights from several different studies focused on the same basic question, such as "does organic milk offer nutritional advantages for consumers, compared to milk from cows on conventionally managed farms?" or "do drugs of class X prevent cardiovascular disease?" Each study is discussed briefly below, with links to the full text, FAQs, and other related content.
British Journal of Nutrition Meta-Analyses
An international team of scientists was convened in 2013 by Newcastle University in the U.K. and carried out the most sophisticated large-scale analysis yet of the differences between organic and conventional food. Their work produced three meta-analysis papers published in the British Journal of Nutrition that looked at plant-based foods, milk and dairy, and meat products. The major findings of each study are described below, along with links to press coverage and other content relevant to these important analyses. For details about each study, click on the FAQ document below, and for information on these studies from Newcastle University click here.
I: Plant-Based Foods – Click Here for journal article
This study has three main findings: (1) organic crops have, on average, higher levels of antioxidants than conventional crops; (2) organic crops have lower cadmium levels than conventional crops; and (3) pesticide residues are present much more frequently in conventional crops than in organic ones. Particularly compelling is the fact that organic crops have, on average, significantly higher levels of antioxidants than conventional crops. Average total antioxidant activity was 17% higher in organic versus conventional crops. For some individual antioxidants, the differences were much greater: 69% higher levels of flavanones, 28% higher levels of stilbenes, 50% higher levels of flavonols, and 51% higher levels of anthocyanins.
- Frequently Asked Questions
- Presentation of Key Figures (pdf)
- Press Release
- Organic Crops Booklet
- Supplemental Data
- "Clear differences between organic and non-organic food, study finds," July 2014, The Guardian
- "Study of Organic Crops Finds Fewer Pesticides and More Antioxidants," July 2014, The New York Times
- "Organic foods are more nutritious, according to review of 343 studies," July 2014, Los Angeles Times
- "Are Organic Vegetables More Nutritious After All?," July 2014, NPR radio interview
- "Is Organic Food Really Healthier?," July 2014, Time
- "Parsing of Data Led to Mixed Messages on Organic Food's Value," October 2012, The New York Times
II: Milk and Dairy Products – Click Here for journal article
This study found that organic milk and dairy products are healthier because they contain higher levels of health-promoting omega-3 fatty acids, including more very long-chain omega-3s. Overall, organic milk and dairy products also have a far healthier mix of fatty acids.
The concentration of health-promoting total polyunsaturated fatty acids (PUFAs) in organic milk and dairy products was, on average, 7% higher than in corresponding milk and dairy products from conventionally managed livestock farms. Levels of health-promoting omega-3 fatty acids in organic dairy products were also, on average, 46% higher than in conventional milk and dairy products. Importantly, the overall concentration of the critical, very long-chain omega-3 fatty acids was 58% higher in organic versus conventional dairy products. In addition, the concentration of the heart-healthy fatty acid CLA (conjugated linoleic acid) was 34% higher in organic milk and dairy products. The ratio of omega-6 to omega-3 fatty acids was also 79% lower in organic milk and dairy products, an important advantage for individuals dealing with or working to avoid heart disease, diabetes, chronic inflammation, and overweight. The typical balance of omega-6 to omega-3 fatty acid intakes in the U.S. is 10:1 to 15:1, a range that is clearly a risk factor for a number of chronic health problems. Plus, there are nutritional advantages linked to organic milk consumption beyond fat levels and quality. The Newcastle meta-analysis found that organic milk contains, on average, significantly higher levels of α-tocopherol (vitamin E) and iron (Fe), but lower concentrations of iodine (I) and selenium (Se), compared to conventional milk. The meta-analysis also identified trends towards higher vitamin A, ß-carotene, lutein and zeaxanthin, and potassium (K) levels, and lower copper (Cu) levels, in organic milk, but further studies are required to confirm these results.
III: Meat Products – Click Here for journal article
This study documented that organic meat is healthier because it has a markedly more desirable composition of fatty acids. The concentration of health-promoting polyunsaturated fatty acids (PUFAs) in organic meats was, on average, 23% higher than in corresponding meat products from conventionally managed livestock farms. Plus, the levels of health-promoting omega-3 fatty acids were, on average, 47% higher than in conventional meat, and the overall concentration of the critical long-chain PUFAs was also markedly higher in organic versus conventional meat. In addition to higher concentrations of health-promoting fatty acids, the team found that the levels of some pro-inflammatory fats were lower in organic meat products: the level of the saturated fatty acid myristic acid was, on average, 18% lower, while the level of palmitic acid was 11% lower.
2013, The PLOS ONE Milk Study
On December 9, 2013, Dr. Benbrook and his team published an 18-month, nationwide study (full study and summary available) that confirmed that there are large and consistent differences in the fatty acid profiles of organic and conventional milk and dairy products in the U.S. The research was carried out by a team led by scientists at Washington State University and appeared in the prestigious peer-reviewed journal PLOS ONE. In typical American diets, people consume around 15 grams of omega-6 (ω-6) fatty acids for each gram of omega-3 (ω-3) fatty acids, resulting in an ω-6/ω-3 ratio of 15, a level far above the heart-healthy goal of around 2:1 or lower. For more information about fatty acids to help interpret these results, see Good Fat, Bad Fat or this detailed primer. The PLOS ONE study found that both organic and conventional milk and dairy products help lower this key ratio.
The average ω-6/ω-3 ratio found in this study for the Organic Valley brand of organic milk is 2.3, while in conventional milk the average ratio is 5.7, still substantially better than most sources of fat in American diets. The ω-6/ω-3 ratio in organic milk is much lower than in conventional milk because pasture and forage-based feeds make up a much greater share of daily "dry matter intake" on organic dairy farms than on conventional dairy farms. The team also reports that simple steps by consumers can markedly lower an individual's overall dietary ω-6/ω-3 intake ratio. Overall, they report that organic whole milk had 62% more total ω-3 fatty acids than conventionally produced milk. Among individual ω-3 fatty acids, organic milk concentrations were 60% higher for ALA (alpha-linolenic acid), 33% higher for EPA (eicosapentaenoic acid), and 18% higher for DPA (docosapentaenoic acid). Organic whole milk also had, on average year-round, 18% higher levels of CLA (conjugated linoleic acid) than conventional milk, 25% lower total ω-6 fatty acids, and a 25% lower concentration of LA, the major ω-6 fatty acid.
2012, Smith-Spangler et al.
In September of 2012, a team led by Crystal Smith-Spangler, MD published the study "Are Organic Foods Safer or Healthier Than Conventional Alternatives?" in Annals of Internal Medicine. They concluded that "the published literature lacks strong evidence that organic foods are significantly more nutritious than conventional foods. Consumption of organic foods may reduce exposure to pesticide residues and antibiotic-resistant bacteria." Dr. Benbrook and other experts have critiqued this study widely, as described in the related content below.
- Letters to the Annals of Internal Medicine Editor re the Smith-Spangler et al. paper
- The Organic Center Response to Smith-Spangler et al. by Charles Benbrook
- WSU Blog on Smith-Spangler et al.
- "Parsing of Data Led to Mixed Messages on Organic Food's Value," October 2012, The New York Times
2009, Dangour et al.
In 2009 a team of British scientists published a detailed meta-analysis in the American Journal of Clinical Nutrition, funded by the U.K. Food Standards Agency. This team concluded that organic food offered no significant nutritional benefits. The results of this study by Dangour et al. are presented, and criticized, in the related content links below.
- Letters to the Editor of the AJCN on Dangour et al. and Dangour et al. Responses
- TOC Response to Dangour et al. by Dr. Charles Benbrook
2008, The Organic Center Report
In March 2008 The Organic Center (TOC) released a thorough review and meta-analysis of the differences in nutrient concentrations across 236 matched pairs of organically versus conventionally grown foods, concluding that organic foods were nutritionally superior to conventionally grown ones. These results were compiled from 97 high-quality studies published since 1980. Click here for the full report or see the executive summary in English and Spanish.
This tragedy, which occurred in Kibeho camp in southwest Rwanda, raises several key issues regarding internal displacement, particularly the protection of IDPs within camps and against forcible return, the screening of criminal elements and persons guilty of war crimes, and the coordination of international efforts to meet the humanitarian and protection needs of IDPs.
A. Development of the Kibeho crisis
The origin of IDP camps in post-genocide Rwanda
In the wake of the April 1994 Rwandan genocide, the Security Council's decision to reduce the UNAMIR peacekeeping force to 270 persons left the Rwandan Patriotic Front (RPF) as the only significant force capable of stopping the massacres, which in the span of three months had claimed the lives of 500,000 to 1 million persons. Massive displacement was caused by the widespread killings and by fear of RPF reprisals. By 4 July 1994, the French Opération Turquoise had created in the country's southwest corner a 'safe humanitarian zone' equivalent to roughly one-fifth of the national territory. By the time of the RPF's proclamation of a new government on 19 July, roughly 1.2 to 1.5 million internally displaced persons had already fled to this zone, most of them having escaped the advance of the Rwandan Patriotic Army (RPA) in June and July. As the deadline for French withdrawal drew near, a collaborative effort between political, military and international humanitarian organisations successfully encouraged many of the displaced persons in the southwest to remain in Rwanda rather than continue their flight abroad. When Opération Turquoise ended on 21 August, some 390,000 internally displaced persons remained in thirty-three camps.(1)
Events leading to the massacre
The new Rwandan government suspected that the IDP camps were providing sanctuary to persons implicated in the genocide and were being used for the formation of an anti-government militia. As neither the UN mandate for Opération Turquoise nor the objectives of the French government included disarming or arresting soldiers, criminal elements were able to consolidate in the camps. In addition, refugee populations surrounding Rwanda, which comprised both those responsible for the genocide and innocents under their authority, were re-arming and launching cross-border incursions, in spite of a UN arms embargo. Unable to defuse this growing threat, the Government viewed the IDPs as compromising its territorial integrity. The Department of Humanitarian Affairs' field presence in Rwanda, UNREO, was charged with the inter-agency coordination of actions on behalf of IDPs, centralised through the Integrated Operations Centre (IOC), consisting of representatives of UN agencies, NGOs, major donors and the Rwandan government. At the end of 1994, the IOC launched Opération Retour to facilitate voluntary return. During the first six weeks an estimated 40,000 IDPs returned to their home communes, but the number fell drastically by the end of February 1995. Meanwhile, camp populations increased due to reports of returnee arrests, overcrowded prisons and the illegal occupation of homes, as well as a lack of confidence in local judicial procedures. In Kibeho camp the population grew from 70,000 to 115,000 in a fortnight.(2) By late March, some 220,000 IDPs still remained in the camps. The international community and the Rwandan government disagreed on the conditions under which IDPs should return.
While international agencies believed that IDPs should not return until a certain level of security existed in the country, the Rwandan government believed that security could only be established once concentrations of displaced populations had dispersed. The IOC developed a strategy to reconcile the Government's preoccupation with national security with the international community's concern for 'voluntary return in safety and dignity' [see C.i. below for discussion of the use of this term]. Although the use of force was to be avoided, the strict meaning of 'voluntary return' was compromised: the camps were to be closed by ending food and relief distribution and transferring IDPs to home communes.
Massacre at Kibeho
However, even before the implementation of this strategy, on 18 April the RPA moved to close the camp at Kibeho by surrounding it and cutting off its food and water supply. For the next three days, the concentration of 80,000 persons on one hill and the rapid deterioration of humanitarian conditions resulted in panic and casualties as soldiers met stone-throwing with machine-gun fire. On the fourth day, a large group of IDPs tried to break the cordon. The RPA opened fire on the crowd, killing several hundred persons and causing a stampede which claimed more lives. The government put the death toll at 338, while the UN put the figure at 2,000. UNAMIR troops were present during the massacre but were ordered not to intervene, despite their mandate to "contribute to the security and protection of displaced persons..." (Security Council Resolution 918 of 17 May 1994). Over the next three weeks, the IDP camps in southwest Rwanda were evacuated. Thousands of IDPs returned to their home communes, but several thousand others crossed into Zaire. Many returning IDPs refused to register with local authorities or to proceed to their communes of origin, and instead hid in rural areas. Some IDPs eventually mingled with Burundian refugees in camps in Rwanda.
The International Commission of Inquiry
In an effort to restore its reputation, the Government of Rwanda established an Independent International Commission of Inquiry. The Commission's report, issued on 17 May 1995, indicated that the government could have taken steps to prevent the massacre. The Commission correctly faulted the RPA for its lack of communication, its inexperience and its inappropriate training for what was basically a police operation. Fear and panic on the part of IDPs, compounded by prolonged exposure to the elements and the denial of food, water and sanitation, created a powder keg which needed only a small spark for its ignition.
B. Analysis of the crisis
The Kibeho tragedy was avoidable. Signs of an impending disaster existed. The first involved the divergent priorities and perspectives of the Rwandan government and international agencies regarding IDPs. The IOC failed to appreciate the urgent concerns of the Rwandan government, thus heightening its suspicions about the international community's intentions. The IOC also lacked the flexibility and resources to implement projects to encourage voluntary IDP return, or to devise an effective camp closure strategy in a time-frame which could have responded to the government's security concerns. Furthermore, the integrated concept of the IOC did not reflect the current reality.
Not only did UN agencies fail to ensure consistent representation at a high enough level within the IOC, but the Rwandan government's participation was sporadic and did not include the key ministries of Defence and Interior. Monitoring of the camps fell within UNAMIR's mandate, but the force did not ensure a sufficient presence in the camps prior to or during the crisis: only a single contingent of fewer than 100 soldiers (of a full strength of 5,529) remained in the camp throughout the events. UNAMIR officers and Human Rights Field Officers could have played a more substantial monitoring role in the camps. A strategy for an increased UN presence in the camps, including Human Rights Field Officers, should have been included directly in the provisions of Opération Retour. The divergence between the international community and the Rwandan government concerning internal displacement reflected a lack of political will on the part of the international community to develop a coherent approach to the post-genocide situation in Rwanda and in the larger Great Lakes region. The Rwandan Government pledged to respect human rights and refrain from reprisal killings, but lacked the resources to rebuild its devastated infrastructure, in particular its judicial system. At the same time, donors provided substantial resources for humanitarian assistance to refugee camps in neighbouring countries harbouring forces of the former regime, without supporting efforts to separate out those who should have been excluded from refugee status. The inability of the IOC to reconcile humanitarian with political and strategic interests, and its reluctance to recognise the fragility of the consensus between all parties, allowed the Kibeho tragedy to develop. The Rwanda experience indicates that solutions to the problems of internal displacement cannot ignore regional dynamics, nor allow humanitarian action to substitute for military, political or diplomatic solutions.
C. Lessons learned for the future protection of IDPs
i. Legal issues
The Kibeho tragedy underlined the necessity for agencies and governments to be able to refer to a body of guiding principles on internal displacement. While it may be far-fetched to assume that the existence of a legal concept would have altered the situation in Kibeho, a set of minimum international guidelines applying to situations of internal displacement would have facilitated the channelling of political pressure on the government, encouraging it to develop more appropriate ways to deal with the IDP issue. The IOC had to elaborate its own guidelines, which were more easily compromised because they were self-created. The Guiding Principles on Internal Displacement, submitted by the Representative of the Secretary-General on IDPs to the 54th session of the Commission on Human Rights and endorsed by the Inter-Agency Standing Committee (IASC) on 26 March 1998, should help to facilitate the work of organisations acting on behalf of IDPs, as well as provide a basis for the development of more effective responses to internal displacement in the wake of complex humanitarian emergencies. Section V of the Guiding Principles, concerning return, resettlement and reintegration, could have been of particular use in the Rwandan context, given the lack of clarity and consensus on IDP issues. Such principles might also have encouraged a more serious investment of resources and energy in the first phase of the plan espousing voluntary return.
In addition, these principles could have helped foster an international consensus after they were violated in Kibeho, by providing the Independent International Commission of Inquiry with objective principles upon which to base its evaluation and conclusions. The Rwanda example shows that the Guiding Principles are most useful where a general legal norm exists but a more specific right that would ensure implementation of the norm in the case of IDPs has not been articulated. The term 'voluntary return' was borrowed from refugee law. Since no international legal norm exists explicitly protecting people against individual or mass transfer from one region to another within their own country, the norm must be inferred from the right to freedom of residence and movement. However, the Rwandan government did not consider itself bound, through inference, by the right of its citizens not to be forcibly relocated. Rwandan authorities repeatedly invoked their sovereign right to address the security threat presented by the camps. Having no clear or specific basis upon which to insist upon the concept of "voluntary return in safety and with dignity" for IDPs (Principle 28 of the Guiding Principles), the international community could only negotiate with and exert pressure on the Government to resolve the problem through means consistent with a peaceful solution. A humanitarian disaster ultimately precipitated the Kibeho massacres. Attempts by UN agencies and the Special Representative of the Secretary-General to exercise their good offices over the denial of camp access to humanitarian agencies during the RPA cordon proved ineffective against the extreme food and water deprivation that escalated the crisis. In this regard, Section IV of the Guiding Principles, relating to humanitarian assistance, could provide a future basis for coordinated UN intercession with governments, especially in humanitarian crisis situations involving IDPs.
ii. Institutional issues
Implementation of the Guiding Principles will depend on the existing institutional arrangements and political will in any given country. The case of Rwanda demonstrated that where the authorities' will to protect IDPs is weak, only strong institutional arrangements with substantial political weight and expertise can make a difference in IDP protection. One means for improvement in the international institutional protection of IDPs thus lies in better coordinating and supporting the efforts of institutions currently undertaking activities on behalf of IDPs. The UN Secretary-General's 1997 Programme for Reform reaffirms that the Emergency Relief Coordinator's (ERC) role is to ensure that issues of protection and assistance for internally displaced persons are addressed. The IASC, comprised of the heads of the major UN humanitarian agencies, recommended that the ERC should help mobilise resources and identify gaps; assign responsibilities, including camp management; develop information systems; and provide support to the field. The ERC and its Working Group, which has recently been designated as the main inter-agency forum on IDPs, should be able to play a mobilising role with regard to the internally displaced by initiating a division of labour among agencies, by developing agreed strategies where necessary, and by helping to ensure that humanitarian assistance is not substituted for political action.
The participation of the High Commissioner for Human Rights and the Representative of the Secretary-General on IDPs in the IASC and its working group should help ensure the integration of a protection perspective in decisions involving IDPs. In appropriate contexts, one agency can assume primary responsibility for ensuring that protection and assistance are provided to IDPs, by increasing awareness of their plight and mobilising support on their behalf. This lead-agency model has been found to meet the needs of IDPs better than arrangements in which no single agency is designated as such.(3) Agreements between agencies are also a welcome form of coordination. For example, UNHCR and HRFOR (UN Human Rights Field Operation in Rwanda) signed an agreement in Rwanda in September 1995 which outlines the responsibilities of the two agencies regarding the protection of the security and physical integrity of returning refugees and IDPs, and allows for joint intervention in specific cases. The 52nd, 53rd and 54th sessions of the Commission on Human Rights called upon the Office of the High Commissioner for Human Rights to develop technical cooperation projects specifically targeted at promoting the human rights of IDPs. Such projects may contribute to alleviating the causes of internal displacement and encourage voluntary IDP return by heightening respect for legal procedures, harmonising national law with international human rights standards, providing support to independent national human rights institutions, and strengthening civil society and NGOs. Human rights field officers play an integral role in establishing the confidence necessary for the voluntary return of displaced populations, and act as a deterrent to human rights abuses. They should be deployed in sufficient numbers in areas with large concentrations of IDPs, and should make information on the situation of IDPs, and analyses of trends, available to, inter alia, host governments and the Representative of the Secretary-General on IDPs. Future human rights operations could include in their mission agreements specific provisions allowing access of human rights personnel to internally displaced populations, and should make reference to the Guiding Principles. In line with the Secretary-General's 'Programme for Reform', which identified human rights as an issue that cuts across all areas of United Nations activities and set as a major task for the Organisation the full integration of human rights into its broad range of activities, UN staff must be better trained in human rights norms and IDP concerns. This would allow them to raise protection issues on behalf of IDPs and to better integrate protection concerns with the provision of relief. Such training would also facilitate the development of common UN approaches in response to serious violations of human rights and humanitarian law that could lead to internal displacement. Stephanie Kleine-Ahlbrandt has worked in the field in Bosnia and Herzegovina, Rwanda and Albania, and currently works for the Office of the UN High Commissioner for Human Rights. The views expressed in this article are purely personal. For a more comprehensive analysis of the Kibeho crisis, see Kleine-Ahlbrandt S, The Protection Gap in the International Protection of Internally Displaced Persons: the case of Rwanda, Geneva, Institut Universitaire de Hautes Etudes Internationales, 1996, 172pp.
- Adelman H and Suhrke A, Early Warning and Conflict Management, Study II of the DANIDA Joint Evaluation of Emergency Assistance to Rwanda, The International Response to Conflict and Genocide: Lessons from the Rwanda Experience, March 1996, p 94. - In-Country Report, United Nations Rwanda Emergency Office, 9 February 1995. - See Cohen R and Deng F The Forsaken People: Case Studies of the Internally Displaced ISBN 0-8157-1513-7 and Masses in Flight ISBN 0-8157-1511-0.
<urn:uuid:ab89bad8-3995-45fb-805a-6384604e814d>
CC-MAIN-2021-43
https://www.fmreview.org/camps/kleineahlbrandt
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585246.50/warc/CC-MAIN-20211019074128-20211019104128-00630.warc.gz
en
0.945547
3,349
3.03125
3
If you have ever sucked on a piece of orange candy and looked at your tongue afterward, you already know that sometimes food can turn the tongue orange. But what about when it happens and there is no orange candy – or any orange foods – in sight? This can feel worrisome, to say the least. Why is it happening, what is causing it and, most importantly, what can you do to make it stop? Interestingly, as Wales Online reports, the human tongue can be a remarkably accurate messenger that tells you when there is something amiss in your body.(1) It may be something minor or something major, but if there is an imbalance or a medical issue brewing, in many cases watching your tongue can act like an early warning system to alert you. In this article, learn the different possible causes of an orange tongue, reasons why an orange discoloration can occur, how an orange tongue is diagnosed and treated, and whether you can prevent orange tongue discoloration from recurring.
Causes of an Orange Tongue
When the word "orange" is used for medical purposes, it can actually mean any color in the orange spectrum, which can range from a very light orange or even "burnt yellow" all the way to a brownish or maroon-orange shade. In fact, there are a number of possible causes that could explain why your tongue has suddenly started to look orange in color. According to Heal Cure, here are some of the best-known possible causes:(2)
Acid reflux
According to Healthline, as many as 60 percent of all people will struggle with some level of acid reflux at some point in their lives.(3) The full name for acid reflux is GERD (gastroesophageal reflux disease), and sometimes it is also called simply "heartburn." According to Med Health Daily, with acid reflux, acids from the stomach and the digestive tract come back up into the oral cavity.(4) This is what can cause the tongue to look orange.
Overgrowth of yeast
Yeast overgrowth is best known as Candida, but there are actually several forms of yeast that can cause your tongue to turn orange. According to a U.S. News Wellness report, up to three-quarters of all women will contract a vaginal yeast infection during their lifetime.(5) Yeast overgrowth can also affect men and children. Babies, in particular, tend to suffer orally from yeast overgrowth (it is called thrush).
Use of antibiotics
Some antibiotics are known to cause imbalances with yeast, especially in the mouth tissues. When you are taking antibiotics that have this side effect, you may see your tongue turning orange.
Overgrowth of bacteria or fungi
Certain strains of bacteria and fungi can turn the tongue and/or mouth tissues orange when they begin to grow inside the mouth. One example is Ramichloridium schulzeri ("golden tongue" syndrome). If you are struggling with a fungal infection that has not yet been diagnosed, you may see orange spots appear on your tongue as well.
Mold or mildew exposure
While it may be more difficult to diagnose, if you are working or living in a space that has been infested with mold or mildew spores, this can also affect your body. One of the first signs that mold is affecting you can be when the tongue turns orange.
Allergies
For some people, allergic reactions to pollen, molds or chemicals can cause an orange tinge to the tongue. For others, food allergies can cause the tongue to turn orange.
Vitamin deficiencies
According to WebMD, if your body becomes deficient in certain vitamins (the B vitamins and folate are good examples), it can turn the tongue orange.(6)
Eating too much of certain foods
The University of Arkansas for Medical Sciences reports that eating too much of certain foods can cause your skin and also your mouth or tongue to turn orange.(7) One of the main culprits, of course, is carrots; this is so common that there is a name for it – carotenemia. Another, slightly less well-known, food that can turn your skin and tongue yellowish-orange is the tomato; this is called lycopenemia. In fact, any food that is very high in the antioxidant pigment beta-carotene can cause yellowing or an orange tinge in the skin and mouth tissues.

Early onset of black hairy tongue

eMedicineHealth states that a condition called black hairy tongue, which is often caused by tobacco use over an extended period of time, can begin with the tongue changing to a different hue depending on the person's individual habits.(8)

Underlying health conditions

Finally, while an orange tongue is rarely considered a direct symptom of more serious health conditions such as STDs (sexually transmitted diseases), HIV/AIDS, diabetes or cancer, these types of serious illnesses can reduce the immune system's ability to fight off infection. According to Health Know Facts, when the immune system is functioning poorly because of a serious illness, the tongue can sometimes turn orange as bacteria, allergens, mold or mildew, fungi or viral agents invade the body.(9)

When your tongue turns orange, this is rarely going to be your only symptom. The exception may be if you have eaten too much of a certain type of food (such as carrots or tomatoes) or have consumed something colored orange. Otherwise, you will likely have additional symptoms, and these can be very valuable in helping you and your doctor figure out what is causing the discoloration.

Here, it can be helpful to keep a symptoms log for a week or so. This can help you begin to notice patterns, such as how you feel after eating certain foods, going to certain places or doing certain things. For example, if you notice that your tongue always looks more orange after you come home from work and you also feel like you have allergies, this may be a signal that there is mold in your workplace that you are breathing in. By keeping a symptoms log for several days, you can note how you feel and any changes in your tongue color during the day. By bringing this log to your doctor, you can begin to match up your symptoms with possible causes for your orange tongue, along with any exams and testing your doctor may order to get the most accurate diagnosis.

It can also be helpful to learn about the symptoms that are commonly reported to co-occur with an orange tongue. This way, you won't be tempted to rule out symptoms that you assume are not associated with it. Common co-occurring symptoms include:

» Orange saliva.
» Spots on the tongue.
» Coating on the tongue.
» Tongue feels dry.
» Mouth itching, burning or sore.
» Unpleasant taste in the mouth (bitter, smoky, ashy, metallic).
» Bad breath.
» Allergy or cold symptoms such as a sore throat, runny nose, stuffy nose, coughing/sneezing, headache, fever or chills.
» Stomach upset.
» Fatigue or weakness.
» Vaginal itching, burning, pain on urination, discharge.
» Mouth sores.

There can be additional symptoms as well, especially if a more serious underlying condition is present.

How are tongue problems diagnosed?
The first step in diagnosing why your tongue has suddenly turned orange is to make an appointment with your doctor. Here is where your symptoms log can come in handy: your doctor can review your recent history and may spot patterns or symptoms that are important for diagnosis.

The appointment will begin with your doctor taking a personal and then a family medical history. Here, the goal is to identify any underlying health conditions – yours or your close family's – that may contribute to your tongue discoloration. The doctor will also want a list of the supplements, vitamins, herbs and/or medications you are taking or have recently taken, to see if any of these cause tongue discoloration. It can be helpful to prepare this list in advance.

Your doctor will also do a complete physical exam, paying special attention to listening to your breathing, examining your lymph nodes for signs of swelling, and looking in your eyes and ears as well as your nose and throat for signs of respiratory distress.

The most common initial test that doctors order is a blood test:

» CBC (complete blood count) and blood chemistry profile. This complete blood test can look for vitamin and mineral imbalances, disease markers and other indicators that something is amiss in your body.

From here, your doctor will be able to order further tests that relate directly to your specific symptoms (aside from the orange tongue itself). For example, if you have co-occurring gastrointestinal symptoms, you may be tested for acid reflux. According to WebMD, there are three main tests used to diagnose GERD: manometry, esophageal pH monitoring and endoscopy.(10) Conversely, if your co-occurring symptoms are respiratory and mold exposure is suspected, the Mayo Clinic states that a skin prick test and a radioallergosorbent blood test may be done.(11) If your symptoms point to a viral or bacterial infection or a yeast overgrowth, there are swab and blood tests that can check for a range of different illnesses. And if your doctor suspects that your orange tongue may be linked to the recent use of medications or antibiotics, or to a dietary imbalance, obtaining a correct diagnosis may simply involve waiting until the course of medication has been completed, or changing your diet, to see whether that resolves the orange color on your tongue.

How is an orange tongue treated?

Treating an orange tongue will depend on the diagnosis your doctor gives you. Here is a list of possible treatment options that may be prescribed based on your symptoms and individual diagnosis:

» A change in dietary or lifestyle habits, such as adding or removing certain foods, limiting alcohol or tobacco use, or drinking more water.
» Adding a vitamin or mineral supplement to your daily health routine.
» Taking medications to control allergy symptoms, such as nasal sprays, allergy pills or shots.
» Taking medications to clear up an underlying overgrowth of yeast, bacteria or fungi.
» Taking medications to prevent future occurrences of acid reflux or heartburn.
» Taking probiotics or eating yogurt (or both) to restore a healthy balance of intestinal flora following antibiotic use.
» Receiving appropriate treatment for a more serious underlying disease or illness (e.g. HIV/AIDS, STDs, cancer, diabetes).

Can Orange Tongue Be Prevented?

Because of the variety of reasons why the human tongue can turn orange, it is not currently possible to completely prevent this from occurring.
However, the use of probiotics – whether by eating yogurt, drinking kefir or taking probiotic pills – can reduce the likelihood that a yeast overgrowth or exposure to bacteria, fungi or mold will turn your tongue orange.

If your tongue has turned orange and you are struggling to figure out why it is happening and how to stop it, help is available. The most important thing is not to wait to seek help. Pay careful attention to your symptoms and seek the advice of a medical professional right away to ease your mind and help your body heal. Here is what to do to resolve your orange tongue issues:

» Keep a symptoms log for several days in a row.
» Make an appointment with your medical doctor.
» Bring in your symptoms log and a list of current medications, supplements and vitamins.
» Have your physical exam.
» Have any tests done that your doctor orders.
» Get your diagnosis and follow your doctor's treatment instructions until your orange tongue issues resolve.
» Continue with any long-term maintenance recommendations, such as dietary or lifestyle changes or the use of probiotics, to prevent your orange tongue from recurring.
A car accident, a rough tackle, an unexpected tumble. The ways to bang up the brain are almost as numerous as the people who sustain these injuries. And only recently has it become clear just how damaging a seemingly minor knock can be. Traumatic brain injury (TBI) is no longer just a condition acknowledged in military personnel or football players and other professional athletes. Each year some 1.7 million civilians will suffer an injury that disrupts the function of their brains, qualifying it as a TBI. About 8.5 percent of U.S. non-incarcerated adults have a history of TBI, and about 2 percent of the greater population is currently suffering from some sort of disability because of their injury. In prisons, however, approximately 60 percent of adults have had at least one TBI—and even higher prevalence has been reported in some systems. These injuries, which can alter behavior, emotion and impulse control, can keep prisoners behind bars longer and increase the odds they will end up there again.

Although the majority of people who suffer a TBI will not end up in the criminal justice system, each one who does costs states an average of $29,000 a year. With more than two million people in the U.S. currently locked up—and millions more lingering in the justice system on probation or supervision—the widespread issue of TBI in prison populations is starting to gain wider attention. A few pioneering programs offering rehabilitation to prisoners—and education to families and correctional staff about TBI—are underway around the country. And several studies aim to ascertain the best ways to handle this huge population. "It's not as cut-and-dry as a lot of people think," says Elisabeth Pickelsimer, an associate professor at the Medical University of South Carolina. Some of the best options so far include cognitive therapy for prisoners and education for the people around them. The kicker seems clear to many researchers: "If we don't help individuals specifically who have significant brain injuries that have impacted their criminal behavior, then we're missing an opportunity to short-circuit a cycle," says Peter Klinkhammer, associate director of services at the Brain Injury Association of Minnesota.

One hard knock

Concussions are the most common type of brain injury, and about 85 percent of people who suffer one will more or less fully recover within a year. But for those who do not, lingering symptoms, such as headaches or increased irritability, can get in the way of everyday functioning. Many of the behavioral issues that result from a TBI are due to the nature of the impact itself. In an accident or altercation, the brunt of the blow is often borne by the front or top of the head—right around the frontal lobes, where behavior is regulated. This sort of injury can be loosely compared with a computer glitch: "If something went wrong with the central processing unit, it might be slower—you couldn't save documents as easily—but it might chug along," says Wayne Gordon, a professor of rehabilitation medicine at Mount Sinai School of Medicine. Traumatic brain injury can lead to attentional and memory deficits as well as increased anger, impulsivity and irritability—which make for a poor match with the corrections world. One of the big challenges in addressing TBI in prison populations, and beyond, is that it is not as easy to diagnose as a broken bone or a blood-borne illness.
Symptoms are by no means unique to the injury and can co-occur with other mental health conditions. To make things even tougher for those hoping to track the disability, no two brain injuries are alike. "Two people can have the same injury and have a totally different set of impairments," Gordon says. "One can be fine, and one can be not so fine—but we don't know why that is yet." He suggests that differential responses could be due to a combination of physical, genetic, contextual and social factors, such as skull thickness, the magnitude of g-forces involved in the impact or a past history of more minor, sub-concussive injuries.

Due in part to these variables, not all TBIs result in a medical paper trail. Doctors treating people with serious wounds might miss diagnosing a brain injury, and hospitals do not always code for every presenting condition. Also, many people who suffer a head injury, especially a milder one such as a concussion, might not seek medical attention at all. Researchers have started using detailed interviews with prisoners to get a better sense of how many have suffered from a brain injury. In a recent South Carolina survey of 636 prisoners, some 65 percent of males and 73 percent of females reported having sustained TBIs at some point in their lives.

Injury counts are likely underestimated. Many people, for example, are unaware of injuries that they might have sustained when they were babies or young children. And even adulthood injuries were not entirely clear to prisoners. "They were told they had their bell rung—they got knocked out," says Rebecca Desrocher, assistant program director at the U.S. Department of Health and Human Services's Federal Traumatic Brain Injury Program. The very nature of brain injuries can also make tracking them—and figuring out how many an individual might have suffered—especially difficult. As Pickelsimer points out, "after you've had some, you don't remember them as clearly." These injuries are additive, with each assault to the brain compounding damage from the previous ones. The average reported number of TBIs for an individual prisoner was about four, Pickelsimer says. And some reported up to a dozen.

Through these interviews, Pickelsimer says, another thing became clear: prisoners were often not aware that a single event—or a series of them—could be making it harder for them to earn a ticket out of jail, or to avoid being sent back in the future. As much as TBI seems to increase the likelihood that a person will wind up in prison, it also seems to make the corrections environment that much more difficult to navigate. In prison, "there's so much that goes on a day-to-day basis: 'Line up over here; do this; do that,'" says David Maltman, a policy analyst at the Washington State Developmental Disabilities Council. When a prisoner with TBI misremembers rules or is slow to respond to instruction, prison staff are likely to see the prisoner as noncompliant or intentionally defiant, provoking situations that can lead to further injury—or at least poorer chances at an early release.

Brain injury also increases the likelihood that people will have other mental health troubles, including substance abuse, and can make it more difficult to overcome additional conditions. In a survey of adults enrolled in a New York State substance abuse program, about half had a record of TBI, Gordon says.
The screening that Pickelsimer and her colleagues have done in South Carolina found that for both men and women, alcohol and crack cocaine were among the most common substances to which prisoners with TBI were addicted. And these habits can cloud a person's memory of brain injuries they might have suffered in accidents, altercations or other incidents, which makes accurate diagnosis even more challenging. For those getting substance abuse treatment, a TBI can also make traditional rehab programs less effective. Given the "reduced processing speed and their memory challenges," Gordon says, lessons might need to be altered or even repeated for enrolled prisoners with a history of TBI. The behavioral and other cognitive changes that TBI can bring, "if left unaddressed, are apt to provide challenges to the offender post-release as they attempt to reintegrate into their respective communities," notes Adam Piccolino, a neuropsychologist for the Minnesota Department of Corrections.

Bridge to the outside

Treating TBI in the broad adult population is not a perfect science. The goal is to "supply them with skills they need to better regulate their behavior and process information," Gordon explains. It often involves cognitive retraining and rehabilitation—and has imperfect results. And as he points out, these therapies have yet to be thoroughly tested on incarcerated populations. Others argue that tools that seem to work in the broader population should be used in prisons as well. Cognitive rehabilitation therapy is one such tool that seems to be gaining traction in the TBI field. It aims to help TBI sufferers make better-informed choices and to improve their memory. And with such minimal knowledge about TBI and its symptoms, simply educating inmates about their—and others'—condition might go a long way in helping them cope with related challenges, Desrocher says.

Even with proper education and therapy, though, people with TBI will often experience behavioral issues. So many groups have put an emphasis on training staff—and even arresting officers—to handle these sorts of prisoners better, in the hope that they "can recognize a behavior for what it is—and not defiance or an infraction of the rules," Maltman says. Resulting altercations can put law enforcement and corrections staff—and fellow prisoners—at risk of injury. But knowing which prisoners might benefit from alternative approaches requires thorough screening processes that are either highly variable across institutions or entirely absent. "Additionally," Piccolino notes, "once an offender is identified with having incurred a TBI, the process of knowing whether they also experience ongoing complications related to their TBI is challenging."

Some organizations, such as the Brain Injury Association of Minnesota, have gone a step further and are also working with prisoners' family members, probation officers and outside support services to ready ex-convicts for release. Klinkhammer notes that for prisoners with TBI, returning to the outside world can be an extremely difficult transition. Once predictable prison routines disappear, he explains, it is almost like Dorothy going from her black-and-white reality in Kansas to the colorized world of Oz. Although that shift might sound like a blessing, for those with a brain injury who have difficulty managing their reactions or processing a lot of incoming information quickly, the new environment can be too much.
"It can be very overwhelming, and it could result in one or more reason for a person to 'recidivize'"— do something that will land them back in jail, even if they had no intention of breaking the law— Klinkhammer says. Much of his group's efforts come down to education and helping family and other community members learn how to support a prisoner with TBI returning to the outside world. And oftentimes just explaining to them that an old injury might be contributing to unpredictable behavior is a big help. "People know that their loved one's been knocked out" or were in a car accident years before, Klinkhammer says. "But the thought that the outcome of that may result in disinhibition or that it could be an aggravating factor to a person's criminal behavior gets lost." The group does not yet have formal data on the success of the program, but from his observations, Klinkhammer says, "individuals are doing better when they are able to dovetail back into society in a way that they're supported." The key is "making sure that when people step out into the community they're not falling into an abyss," he says. And "in doing that, we're also helping society at large stay safer." Once a person with TBI is behind bars, arguing for a chunk of shrinking budgets to help them out is not always an easy sell. In South Carolina, for example, once a person is identified as having TBI, the department of corrections is obligated to provide extra resources for them. "It's cheaper for them to just lock them up," Pickelsimer says. In her estimation, "the intervention has to be when they are much younger"—before they commit a crime, by encouraging teenagers to stay in school and not have children until they are prepared to provide and care for them. By doing that, she says, the next generation will be less likely to fall into a cycle of injury and crime. Gordon would extend this early intervention to screening, too. In his research on TBI in substance abusers, participants who had multiple brain injuries tended to be in their 30s. But, he says, "the average age when they had their first injury was 14." If their injury had been identified—and they had received any necessary assistance—earlier, future substance abuse and behavioral issues might have been avoided altogether. This, he says, is an example of "using screening and identification as prevention—and what you're preventing is social failure." That social failure due to TBI is not limited to the corrections world, he notes: "In any group of folks who are failing—substance abuse, the hardcore unemployed—I would say, the prevalence of TBI is very high." Early diagnosis does not necessarily require expensive intervention, he says. Treatment for those already in trouble can also start younger. An experimental program in El Paso, Texas, adapted a TBI cognitive treatment program for juvenile offenders. The goal was "to try to teach them how to be in touch with their own sensations and activities so they can learn to stop and think before they act—and then consciously choose a choice and evaluate whether that was the right choice," Gordon explains. When administered to kids—both those who had a history of TBI and those who did not—there was a fivefold reduction in recidivism, he reports. The Traumatic Brain Injury Act of 1996 carried provisions to help reduce the incidence of TBI and improve psychological treatment, and in 2000 it was expanded to include education about prevention—especially to parents. 
A 2008 reauthorization of the act added a mandate to study TBI prevalence among institutionalized populations, which include prisons but also nursing homes and other institutions where people reside. But studies have been slow to materialize. Minnesota is currently assessing data from its prison population to determine how much TBI affects substance abuse treatment completion, use of medical and mental health resources, and rates of recidivism.

One of the first steps to better understanding TBI in these populations, however, is to boost screening—as well as to ensure that such monitoring is scientifically sound and widespread. And just demonstrating the value of screening might take years, Desrocher says. Her hope is that down the road, the data show that it is "not only [of] clinical value for the individual—but also a value for society."
Educational software

The use of computer hardware and software in education and training dates to the early 1940s, when American researchers developed flight simulators which used analog computers to generate simulated onboard instrument data. One such system was the Type 19 synthetic radar trainer, built in 1943. From these early attempts in the WWII era through the mid-1970s, educational software was directly tied to the hardware, usually mainframe computers, on which it ran. Pioneering educational computer systems in this era included the PLATO system (1960), developed at the University of Illinois, and TICCIT (1969). In 1963, IBM established a partnership with Stanford University's Institute for Mathematical Studies in the Social Sciences (IMSSS), directed by Patrick Suppes, to develop the first comprehensive CAI (computer-assisted instruction) elementary school curriculum, which was implemented on a large scale in schools in both California and Mississippi. In 1967 Computer Curriculum Corporation (CCC, now Pearson Education Technologies) was formed to market the materials developed through the IBM partnership to schools. Early terminals that ran educational systems cost over $10,000, putting them out of reach of most institutions. Some programming languages from this period, particularly BASIC (1963) and LOGO (1967), can also be considered educational, as they were specifically targeted at students and novice computer users.

The PLATO IV system, released in 1972, supported many features which later became standard in educational software running on home computers. Its features included bitmap graphics, primitive sound generation, and support for non-keyboard input devices, including the touchscreen. The arrival of the personal computer, with the Altair 8800 in 1975, changed the field of software in general, with specific implications for educational software. Whereas users prior to 1975 were dependent upon university- or government-owned mainframe computers with timesharing, users after this shift could create and use software for computers in homes and schools, computers available for less than $2,000. By the early 1980s, the availability of personal computers including the Apple II (1977), Commodore PET (1977), Commodore VIC-20 (1980), and Commodore 64 (1982) allowed for the creation of companies and nonprofits which specialized in educational software. Brøderbund and The Learning Company are key companies from this period, and MECC, the Minnesota Educational Computing Consortium, a key non-profit software developer. These and other companies designed a range of titles for personal computers, with the bulk of the software initially developed for the Apple II.

Major developments in educational software in the early and mid-1990s were made possible by advances in computer hardware. Multimedia graphics and sound were increasingly used in educational programs, and CD-ROMs became the preferred method for content delivery. With the spread of the internet in the second half of the 1990s, new methods of educational software delivery appeared. In the history of virtual learning environments, the 1990s were a time of growth for educational software systems, primarily due to the advent of the affordable computer and of the Internet. Today, higher education institutions use virtual learning environments like Blackboard Inc. to provide greater accessibility to learners.

Courseware is a term that combines the words 'course' with 'software'.
Its meaning originally described additional educational material intended as kits for teachers or trainers, or as tutorials for students, usually packaged for use with a computer. The term's meaning and usage have expanded, and it can refer to the entire course and any additional material when used in reference to an online or 'computer formatted' classroom. Many companies use the term to describe the entire "package" consisting of one 'class' or 'course' bundled together with the various lessons, tests, and other material needed. The courseware itself can be in different formats: some are only available online, such as HTML pages, while others can be downloaded as PDF files or other types of document files. Many forms of e-learning are now being blended with the term courseware. Most leading educational companies solicit or include courseware with their training packages. In 1992 a company called SCORE! Educational Centers was formed to deliver to individual consumers courseware based on personalization technology that was previously only available to select schools and the Education Program for Gifted Youth.

Some educational software is designed for use in school classrooms. Typically such software may be projected onto a large whiteboard at the front of the class and/or run simultaneously on a network of desktop computers in a classroom. This type of software is often called classroom management software. While teachers often choose to use educational software from other categories in their IT suites (e.g. reference works, children's software), a whole category of educational software has grown up specifically intended to assist classroom teaching. Branding has been less strong in this category than in those oriented towards home users. Software titles are often very specialised and produced by various manufacturers, including many established educational book publishers.

With growing concern about environmental damage and the need for institutions to become "paperless", more educational institutions are seeking alternative ways of assessment and testing, which has traditionally used up vast amounts of paper. Assessment software refers to software with a primary purpose of assessing and testing students in a virtual environment. Assessment software allows students to complete tests and examinations using a computer, usually networked. The software then scores each test transcript and outputs results for each student. Assessment software is available in various delivery methods, the most popular being self-hosted software, online software and hand-held voting systems. Proprietary software and open-source software systems are available. While technically falling into the courseware category (see above), Skill Evaluation Lab is an example of computer-based assessment software, with a PPA-2 (Plan, Prove, Assess) methodology for creating and conducting computer-based online examinations. Moodle is an example of open-source software with an assessment component that is gaining popularity. Other popular international assessment systems include Questionmark and EvaluNet XT.

Many publishers of print dictionaries and encyclopedias have been involved in the production of educational reference software since the mid-1990s. They were joined in the reference software market by both startup companies and established software publishers, most notably Microsoft.
The first commercial reference software products were reformulations of existing content into CD-ROM editions, often supplemented with new multimedia content, including compressed video and sound. More recent products made use of internet technologies, at first to supplement CD-ROM products, then, more recently, to replace them entirely. Wikipedia and its offshoots (such as Wiktionary) marked a new departure in educational reference software. Previously, encyclopedias and dictionaries had compiled their contents on the basis of invited, closed teams of specialists. The wiki concept has allowed for the development of collaborative reference works through open cooperation incorporating experts and non-experts.

Some manufacturers regarded normal personal computers as an inappropriate platform for learning software for younger children and produced custom child-friendly hardware instead. The hardware and software are generally combined into a single product, such as a child laptop-lookalike. The laptop keyboard for younger children follows an alphabetic order, and the QWERTY order for the older ones. The best-known examples are LeapFrog products. These include imaginatively designed hand-held consoles with a variety of pluggable educational game cartridges, and book-like electronic devices into which a variety of electronic books can be loaded. These products are more portable than general laptop computers, but have a much more limited range of purposes, concentrating on literacy.

Earlier educational software for the important corporate and tertiary education markets was designed to run on a single desktop computer (or an equivalent user device). The history of such software is usefully summarized in the SCORM 2004 2nd edition Overview (section 1.3), unfortunately without precise dates. In the years immediately following 2000, planners decided to switch to server-based applications with a high degree of standardization. This means that educational software now runs primarily on servers which may be hundreds or thousands of miles from the actual user. The user receives only tiny pieces of a learning module or test, fed over the internet one by one. The server software decides on what learning material to distribute, collects results and displays progress to teaching staff. Another way of expressing this change is to say that educational software morphed into an online educational service. US governmental endorsements and approval systems ensured the rapid switch to the new way of managing and distributing learning material. There are also highly specific niche markets for educational software.

While mainstream operating systems are designed for general usage, and are more or less customized for education only by the application sets added to them, a variety of software manufacturers, especially Linux distributions, have sought to provide integrated platforms specifically for education. Among the most popular are Sugar, aimed primarily at preschool and elementary grades; DoudouLinux (www.doudoulinux.org), a system targeting young children; Edubuntu, foremost targeted at middle and secondary grades; and UberStudent, designed for the academic success of higher education and college-bound secondary students. In addition, Portos, designed by Cornell University, is a complete educational operating system designed to teach programming.
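The teaching-language tradition that this history traces back to LOGO is still easy to demonstrate today: Python's standard turtle module is a direct descendant of LOGO's turtle graphics. As a minimal, illustrative sketch (ours, not part of the original entry), here is the classic first exercise of walking a "turtle" around a square:

import turtle  # standard-library turtle graphics, modeled on LOGO

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)  # move 100 units in the current heading
    t.left(90)      # turn 90 degrees counter-clockwise

turtle.done()  # keep the drawing window open until it is closed

The appeal for novices is much the same as it was in 1967: each command produces an immediate, visible result, so learners can reason about programs geometrically before they ever meet abstract syntax.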
Understanding Spatial Dimensions

Much misunderstanding exists today as to the nature of spatial dimensions. The following is intended to resolve some of these issues.

Although our consciousness is primarily limited to 3 spatial dimensions, we actually exist in a universe composed of many more spatial dimensions, possibly an infinite number of dimensions. Without realizing it, we actually use 4 Dimensional awareness all the time. We have to in order to perceive 3 Dimensional objects. This is explained as follows.

The 1st Dimension is Length, creating a line. The 2nd Dimension is Width, which combined with the 1st Dimension creates a plane or sheet. The 3rd Dimension is Height, which combined with the 1st and 2nd Dimensions creates a cube. The 4th Dimension is an added depth in hyperspace (outside of 3 Dimensional Space). When the 4th Dimension is combined with the 1st, 2nd and 3rd Dimensions, a hypercube is created.

When a 1 Dimensional object (a line) is viewed with 1 Dimensional awareness (1 direction only), a point is perceived. When viewed with 2 Dimensional awareness, a line is perceived. (For example: look at the end of a pencil, i.e. in 1 direction, and you will see a point. Look at the entire pencil, i.e. in 2 directions, and you will see a line.)

When a 2 Dimensional object, such as a square, is viewed with 2 Dimensional awareness, a line is perceived. When viewed with 3 Dimensional awareness, a square is perceived. (For example: look at the edge of a piece of paper, i.e. in 2 directions, and you will see a line. Look at the entire piece of paper, i.e. in 3 directions, and you will see a square.)

When a 3 Dimensional object, such as a cube, is viewed with 3 Dimensional awareness, a flat object is perceived. When viewed with 4 Dimensional awareness, a cube is perceived. (For example: look at the image of a cube, i.e. in 3 directions, and you see a flat 2 dimensional object. Look at a cube, wrapping your awareness around it, and you see a 3 dimensional cube.)

THUS WE USE 4 DIMENSIONAL AWARENESS TO PERCEIVE 3 DIMENSIONAL OBJECTS, ALL THE TIME. To perceive a 4 Dimensional object, such as a hypercube, we must use 5 Dimensional awareness.

The Physical-Etheric Universe is structured in a total of 3 Spatial Dimensions. The Astral (Emotional) Universe is structured in a total of 4 Spatial Dimensions. The Manasic Universe (including Causal/Soul and Mental/Intellectual) is structured in a total of 5 Spatial Dimensions. The Buddhic/Christic Universe in a total of 6 Spatial Dimensions. The Atmic Universe in a total of 7 Spatial Dimensions. The Monadic Universe in a total of 8 Spatial Dimensions. The Logoic Universe in a total of 9 Spatial Dimensions.

Time is not a spatial dimension but rather a measure of events in space. (See Understanding Time, below.) To call the Astral Universe the 4th Dimension is a mistake, because the Astral Universe also contains Dimensions 1, 2 and 3. Also, the Manasic and higher Universes all contain the 4th Dimension. In the same way, it is a mistake to call the Manasic Universe the 5th Dimension. Full and complete awareness in a higher Universe includes awareness of additional spatial dimensions. (More on the 7 Universes.)
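The dimensional stacking described above can be made concrete with a short illustrative sketch (ours, not the original author's, written in Python for convenience): each n-cube is built from the (n-1)-cube by copying it, offsetting the copy along a new axis, and joining corresponding corners – the same Length, Width, Height, hyperspace-depth progression.

def extrude(vertices, edges):
    """Lift an (n-1)-dimensional cube into n dimensions."""
    low = [v + (0,) for v in vertices]    # the original copy
    high = [v + (1,) for v in vertices]   # the copy shifted along the new axis
    n = len(vertices)
    new_edges = (edges +                                  # edges within the low copy
                 [(a + n, b + n) for (a, b) in edges] +   # edges within the high copy
                 [(i, i + n) for i in range(n)])          # joins between the copies
    return low + high, new_edges

# Start from a 0-dimensional point and extrude four times.
verts, edges = [()], []
for dim in range(1, 5):
    verts, edges = extrude(verts, edges)
    print(f"{dim}-cube: {len(verts)} vertices, {len(edges)} edges")

The counts double at each step: a line has 2 corners, a square 4, a cube 8, and a hypercube (tesseract) 16, with 32 edges – each added dimension giving awareness considerably more to take in.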
Understanding Time

Much misunderstanding exists today as to the nature of time in higher levels. The following is intended to resolve some of these issues. Time is not a dimension in space. Time does exist in higher levels; it is just different. Time is both objective and subjective. Objectively, it is a linear progression of universal change. Subjectively, it is the processing rate and a holistically inclusive consciousness of change.

The higher your consciousness, the faster time goes by; the lower your consciousness, the slower time goes by. When you are happy, time flies. When you are depressed, time stretches on for a seeming forever. Why? Because higher states of consciousness use subtler (lighter and quicker) energy and lower states of consciousness use denser (heavier and slower) energy to process your experiences. Thus, when you are in a higher state you process experiences faster, and when you are in a lower state you process experiences slower. Using physical-level change (time) as a baseline reference – which is very dense and slow compared to higher levels – higher states process physical change much quicker than lower states. Thus higher states "eat" physical time at a faster rate, giving the experience of physical time flying by, and lower states "eat" physical time at a slower rate, giving the experience of physical time passing slowly. Also, acceptance of experience speeds the processing rate and resistance to experience slows it: acceptance speeds up the experience of time, and resistance slows it.

Additionally, the higher your consciousness, the more expansive and inclusive it is of the past and the future. The higher the state, the greater the ability to include more of the past, from which learning is continually occurring, and more of the potential future, which is being created in a continually more optimal way. The past is set and does not change. What is changing is how we view the past, how we frame it. There is nothing wrong with the past that needs changing. No event is inherently bad; everything is subject to interpretation as bad or good or neutral. What needs changing is how we view the past, how we interpret it. When the past is resisted, it is interpreted as a detriment and acts upon the present as a detriment. When the past is increasingly accepted, valued, and learnt from, it is interpreted as a resource that empowers the present and the future. As we grow in consciousness, we are able to harvest increasing quantities of valuable lessons from the past. Events are temporary, but the valuable consciousness gained from them lasts forever. The potential future is continually being improved as we harvest lessons from the past – we continually see better ways of doing things as time passes.

Time generally speeds up for an older person (life goes by faster) because they are generally processing at a higher rate. Thus the higher your consciousness, the faster time goes by. Time is so sped up for the Causal Body (the Soul) that a Personality lifetime for it is like a day. A good life is like a good day; a bad life is like a bad day – nothing more, very temporary. For the Monad, time is much faster – a Personality lifetime is like a minute. Thus, hundreds or thousands of human lifetimes (incarnations) are experienced very quickly by the Monad. However, while the Monad is focused upon the current moment along with the Personality and the Soul, it is also so expanded in its consciousness that it is simultaneously processing billions of years into the past and billions of years into the future, in the now – it is experiencing what could be called the "Eternal Now".

The Mechanism of Pleasure

Although it may appear that we get pleasure from external objects or events, upon examination, pleasure is derived entirely from within.
If you examine a hot fudge sundae under a microscope, you will not find any pleasure inside it. The same goes for any object and any event that gives us pleasure. What, then, is pleasure, and where does it come from?

Pleasure or bliss is our natural, innate state of being in the core of our individuality. Varying degrees of this innate bliss flow into our consciousness depending upon our state of mind. During a windy day the sun cannot clearly reflect upon the water. Similarly, when our mind is agitated by aversion or strong desire, our innate bliss is thrown out of our consciousness. When our aversion or desire is satisfied, we are at peace and our innate bliss is allowed in. When we engage with an object or event that we have a strong desire for, our consciousness becomes concentrated upon it, which further stills our mind and allows more bliss in. The more concentrated our mind, the more bliss is allowed in and the more bliss we feel. Sex strongly concentrates the mind by the action of sexual energy and thereby allows a large amount of bliss in for most people. Even the anticipation of the satisfaction of desire has a concentrating effect upon the mind and thereby allows some bliss in. Any activity or perception that results in peace or concentrates the mind allows our innate natural bliss in, while such activities and perceptions are not themselves the source of the bliss.

The source of your bliss is YOUR INNER SPIRITUAL BEING. You are not your physical body, you are not your emotions, you are not your mind, you are not your personality. All of these are temporary and will be discarded upon death. Your Causal/Soul Body is that nearly immortal part of you that is your source of bliss. It is always in bliss. When you feel pleasure, happiness, joy or ecstasy, you are feeling this part of yourself. And your Causal/Soul Body is just the beginning of bliss. As your consciousness ascends into higher states, the bliss becomes increasingly fulfilling and satisfying. This is our Source's homing signal, our Source's way of leading you back home. A true test of progress in the growth of consciousness is the degree of natural happiness or bliss in which a person resides.

Therefore, the external universe of phenomena is composed of objects and events that are painful or not painful; that is, they inhibit our natural innate bliss or they allow it in. We have more choice than we often realize as to what we allow to be painful for us. For those who have reached enlightened liberation, all is equal, nothing is painful, and bliss is consistently, naturally present. Correct meditation is the way to consistent bliss.

Striving for an integration of ultimate satisfaction that survives the date of death is essential. At the same time, denying ourselves the relative satisfactions of this world is unnecessary. The balanced, joyful path includes the appreciation of the relative joys of the world, while not getting lost in their overindulgence, and daily working on integrating the source of ultimate satisfaction. The joyful path is composed of enjoying life, while understanding the real source of our joy, and becoming ever more full of the source of joy.
Aboriginal astronomy and appropriate relationships

Aboriginal and Torres Strait Islander laws, often conveyed in lore, contain strict rules about social behaviours, warning and punishing those who act in a disrespectful and harassing manner. This sort of negative behaviour is taboo, and the rules help the community maintain healthy relationships and a communal sense of wellbeing.

Harassment, bullying, and the danger of stereotypes

Aboriginal and Torres Strait Islander peoples have strict laws that dictate behaviour, social structure, and protocols for interacting with family, community, and partners. They are part of a complex kinship system that ensures individuals and the community live in a healthy and collaborative environment. Even so, laws can be broken, and that is why the lore warns people about the dangers of behaving badly.

Today, many racist stereotypes exist in mainstream Australian society that serve to disenfranchise and degrade Aboriginal and Torres Strait Islander people. Some of these stereotypes suggest that Aboriginal and Torres Strait Islander men are prone to domestic violence and abuse, with some falsely claiming that this is part of "traditional culture". The truth is that these sorts of behaviours are taboo in traditional Law, which is reflected in a number of Aboriginal star stories. These stories are narratives that feature a moral charter. They tell of beings who commit acts of violence and face strict repercussions and punishment. These narratives are recorded in the stars as a guide to following sacred Laws and a warning to anyone who might try to break them. This is a sensitive topic in Australia, as many people have tried to use the damaging effects of colonisation to highlight a few cases of violence and blanket-stereotype all Aboriginal people. Indigenous scholars have written about the issue from a number of different angles.[1]

The #MeToo movement emphasises issues that are critical for young people to understand, particularly about acceptable behaviour and the harm and damage that result from bullying, harassment, assault, and inappropriate relationships. Aboriginal and Torres Strait Islander traditions discuss these issues in detail, and students can learn much from them. In this lesson plan, we will explore traditional star stories that contain moral charters about the importance of respectful behaviour towards women and strict punishments for doing otherwise. This will show students that this sort of behaviour is NOT a part of Indigenous cultures, and that traditional Law dictates respectful and healthy relationships for personal, family, and community health and wellbeing.

The Seven Sisters

A well-known and widespread tradition that crosses the country is that of the Seven Sisters.[2] The sacred narratives of this continent-wide religious complex, which travels a sacred route from the coast of South Australia to the coast of Arnhem Land, focus on a man chasing a group of young girls. The man is composed of the stars in the Western constellation of Orion. And just like his Greek counterpart, the man is often seen as a skilled but vain hunter and someone who does not respect women. He falls in love with the young girls of the Pleiades and wishes to "make them his wives", chasing them across the sky. But he is prevented from doing so by others (often other women in the stars), and faces punishment and humiliation for his crimes.[3] The Kokatha and Ngalea peoples of the Great Victoria Desert call this man Nyeeruna.[4]
In addition to being a vain man and a skilled hunter, he is also a shapeshifting trickster. Nyeeruna is made up of the stars in Orion and stands upside down in the sky. He pursues the seven Yugarilya sisters of the Pleiades (Fig. 1). The sisters are timid and shy, and try to avoid Nyeeruna's advances. They are protected by their eldest sister, Kambugudha (who is made up of the stars in the cluster called the Hyades). She knows that Nyeeruna is really a coward; she is not afraid of him, nor does she tolerate his inappropriate behaviour.

Kambugudha blocks Nyeeruna from her younger sisters. He is angry that she is standing in his way, so he produces fire magic in his club, which he holds in his right hand (the red star Betelgeuse), and tries to cast it at Kambugudha. Kambugudha is ready for him: she retaliates by collecting fire magic in her left foot (the red star Aldebaran) and kicks sand into Nyeeruna's face, humiliating him and putting out the fire magic in his club. She then places a row of dingo puppies between Nyeeruna and herself, to keep him away. (The puppies are the stars of Orion's shield in Greek traditions.) She also calls on Babba, the father dingo, to help. Babba attacks Nyeeruna, and the Moon and surrounding stars support Kambugudha, putting the harassing Nyeeruna in his place. A primary social lesson is that Nyeeruna's behaviour towards the women is unacceptable and breaks traditional Law. Kambugudha protects her sisters, and the community supports her in the fight against the bully Nyeeruna.

Fig. 1: Nyeeruna, Kambugudha, and the Yugarilya sisters. From Leaman & Hamacher (2014).

The Sun-Daughters

A number of traditional star stories from across Australia describe Laws stating that abuse or harassment towards women is not acceptable and will result in consequences. Sometimes the punishment is physical and personal, while in other cases it can affect the entire community. In Yolngu traditions of Elcho Island in Arnhem Land (NT), the Sun-woman and Moon-man have several daughters, who are smaller Suns. The traditions state that if the daughters are hit or disturbed by a man, the rain will not come and the wells will dry up.[5] This demonstrates that breaking this taboo can negatively affect the health, safety, and wellbeing of the entire community.

Classroom activity – Health and Physical Education Years 7 and 8

This classroom activity will involve sensitive topics: racism, harassment, stereotypes, consent, and appropriate relationships – all with a focus on improving personal, family, and community health and wellbeing. It will centre on in-class discussions about appropriate relationships, sexual harassment, bullying, racial stereotypes, and how these negatively affect health and wellbeing. Framed in star stories, the discussion will focus on traditional Aboriginal and Torres Strait Islander Law that forbids negative and anti-social acts. The discussion will aim to break down racial stereotypes, contextualise appropriate behaviour and the dangers of harassment and bullying, and promote respectful communication.
This resource addresses the following content descriptions from the Australian Curriculum:

- Investigate the benefits of relationships and examine their impact on their own and others' health and wellbeing (ACPPS074)
- Investigate the benefits to individuals and communities of valuing diversity and promoting inclusivity (ACPPS079)

This resource addresses the following excerpts from the achievement standard for Years 7 and 8 in Health and Physical Education:

- evaluate the impact on wellbeing of relationships and valuing diversity
- analyse factors that influence emotional responses
- investigate strategies and practices that enhance their own, others' and community health, safety and wellbeing

Inquiry-based learning questions

- What sorts of behaviours between boys/men/girls/women are unacceptable?
- What are appropriate ways of engaging in relationships with a romantic partner, whether real or desired?
- How do traditional Aboriginal and Torres Strait Islander Laws address issues around harassment and abuse?
- How do negative and racist stereotypes about Aboriginal and Torres Strait Islander men cause long-lasting and damaging harm to them, their families, and their communities?
- How can we better learn from Aboriginal and Torres Strait Islander Laws about appropriate interactions in relationships?

Activity – Open discussion

Suggested timing for activity: 30-45 minutes of class discussion
Required resources: n/a

The teacher should explain the basic narrative of the stories noted above, and discuss the actions and consequences involved. A number of videos relating to various Seven Sisters traditions can be found online, and most deal with a similar theme. The astronomical component gives context to the deeper discussion that teachers need to lead with the class.

An open discussion with students should address acceptable behaviours (ACPPS074), problems with stereotypes (ACPPS079), how bullying and harassment cause harm (ACPPS079), and proper relationships between children, teens, and adults (ACPPS074). It may be worth doing this in separate groups for boys and girls (and those who choose to join either side). The astronomically themed narratives about Laws dictating interactions between men and women and appropriate behaviours towards others can be highlighted as examples. The students can dig deeper into the meanings of the narrative and the ramifications of Nyeeruna's actions. Such a discussion should include topics of respect, consent, and the repercussions of bullying and harassment. Students should also discuss the ways racial stereotypes cause harm to people and communities (ACPPS074, ACPPS079).

References

[2] Riem, N.A. (2012). The Pleiades and the Dreamtime: an Aboriginal Women's Story and Other Ancient World Traditions. Coolabah, 9, 113-127.
[3] Tindale, N.B. (1959). Totemic beliefs in the Western Desert of Australia, Part 1: Women who became the Pleiades. Records of the South Australian Museum, 13(3), 305-332.
[4] Leaman, T.M. and Hamacher, D.W. (2014). Aboriginal Astronomical traditions from Ooldea, South Australia, Part 1: Nyeeruna and 'The Orion Story'. Journal of Astronomical History and Heritage, 17(2), 180-191.
[5] Warner, W.L. (1937). A Black Civilization: a Social Study of an Australian Tribe. Harper & Bros, pp. 537-538.
Today is the birthday (1897) of Enid Mary Blyton, a prolific English children's writer whose books have been among the world's best-sellers since the 1930s, selling more than 600 million copies. Her first book, Child Whispers, a 24-page collection of poems, was published in 1922. She wrote on a wide range of topics including education, natural history, fantasy, mystery, and biblical narratives, and is best remembered today for her Noddy, Famous Five, Secret Seven, and Adventure series.

Blyton's work became increasingly controversial among literary critics, teachers and parents from the 1950s onwards, because of the alleged unchallenging nature of her writing and the themes of her books, particularly the Noddy series. Some libraries and schools banned her works, which the BBC had refused to broadcast from the 1930s until the 1950s because they were perceived to lack literary merit. Her books have been criticized as elitist, sexist, racist, xenophobic and at odds with the more open environment that eventually emerged in post-war Britain, but they have continued to be best-sellers since her death in 1968.

I'll address those criticisms in a bit. They are perfectly justified. But I was raised on a diet of Noddy and Secret Seven in the 1950s, in the days before television in South Australia, let alone the internet. Reading was my constant pleasure, and Blyton was one of my favorites before I graduated to more mature books. I lived in a world dominated by elitism, sexism, and racism and took them, more or less, as normative, even though I did not accept them or agree with them. I am older, and things are different nowadays, but at 8 years old I wasn't going to take on the world of prejudice that Blyton extolled. Besides, her books have redeeming qualities, and beneath it all she was deeply compassionate. Even back in the 1950s I cringed at her portrayal of boys and girls, but I liked her stories nonetheless.

Blyton worked in a wide range of fictional genres, from fairy tales to animal, nature, detective, mystery, and circus stories. In a 1958 article published in The Author, she wrote that there were a "dozen or more different types of stories for children", and she had tried them all, but her favorites were those with a family at their centre. In a letter to the psychologist Peter McKellar, Blyton describes her writing technique:

I shut my eyes for a few minutes, with my portable typewriter on my knee – I make my mind a blank and wait – and then, as clearly as I would see real children, my characters stand before me in my mind's eye … The first sentence comes straight into my mind, I don't have to think of it – I don't have to think of anything.

In another letter to McKellar she describes how in just five days she wrote the 60,000-word book The River of Adventure, the eighth in her Adventure Series, by listening to what she referred to as her "under-mind," which she contrasted with her "upper conscious mind." This tactic inevitably presented the danger that she might unconsciously plagiarize the books she had read, including her own, and she clearly did.

Blyton's daily routine varied little over the years. She usually began writing soon after breakfast, with her portable typewriter on her knee and her favorite red Moroccan shawl nearby; she believed that the color red acted as a "mental stimulus" for her. Stopping only for a short lunch break, she continued writing until five o'clock, by which time she would usually have produced 6,000–10,000 words.
Blyton’s writing exemplifies a strong mistrust of adults and figures of authority, creating a world in which children govern. Her daughter notes that in her mother’s adventure, detective and school stories for older children, “the hook is the strong storyline with plenty of cliffhangers, a trick she acquired from her years of writing serialised stories for children’s magazines. There is always a strong moral framework in which bravery and loyalty are (eventually) rewarded.” Blyton herself wrote that “my love of children is the whole foundation of all my work.” It’s not too much of a leap of faith to believe that Blyton idealized her own childhood and lamented the changes in her world over her lifetime (wrought by adults). Blyton felt a responsibility to provide her readers with a positive moral framework, and she encouraged them to support worthy causes. Her view, expressed in a 1957 article, was that children should help animals and other children rather than adults: [children] are not interested in helping adults; indeed, they think that adults themselves should tackle adult needs. But they are intensely interested in animals and other children and feel compassion for the blind boys and girls, and for the spastics who are unable to walk or talk. Blyton and the members of the children’s clubs she promoted via her magazines raised a great deal of money for various charities. The largest of the clubs she was involved with was the Busy Bees, the junior section of the People’s Dispensary for Sick Animals, which Blyton had actively supported since 1933. The club had been set up by Maria Dickin in 1934, and after Blyton publicized its existence in the Enid Blyton Magazine it attracted 100,000 members in three years. Such was Blyton’s popularity among children that after she became Queen Bee in 1952 more than 20,000 additional members were recruited in her first year in office. The Enid Blyton Magazine Club was formed in 1953. Its primary object was to raise funds to help children with cerebral palsy who attended a center in Cheyne Walk, in Chelsea, London, by furnishing an on-site hostel among other things. The Famous Five series gathered such a following that readers asked Blyton if they might form a fan club. She agreed, on condition that it serve a useful purpose, and suggested that it could raise funds for the Shaftesbury Society Babies’ Home in Beaconsfield, on whose committee she had served since 1948. The club was established in 1952, and provided funds for equipping a Famous Five Ward at the home, a paddling pool, sun room, summer house, playground, birthday and Christmas celebrations, and visits to the pantomime. By the late 1950s Blyton’s clubs had a membership of 500,000, and raised £35,000 over the six years of the Enid Blyton Magazine’s run. By 1974 the Famous Five Club had a membership of 220,000, and was growing at the rate of 6,000 new members a year. The Beaconsfield home it was set up to support closed in 1967, but the club continued to raise funds for other pediatric charities, including an Enid Blyton bed at Great Ormond Street Hospital and a mini-bus for disabled children at Stoke Mandeville Hospital. To address criticisms leveled at Blyton’s work, some later editions have been altered to reflect more contemporary attitudes towards issues such as race, gender and the treatment of children. Modern reprints of the Noddy series substitute teddy bears or goblins for golliwogs, for instance.
The golliwogs who steal Noddy’s car and dump him naked in the Dark Wood in Here Comes Noddy Again are replaced by goblins in the 1986 revision, who strip Noddy only of his shoes and hat and return at the end of the story to apologize. The Faraway Tree‘s Dame Slap, who made regular use of corporal punishment, was changed to Dame Snap, who no longer did so, and the names of Dick and Fanny in the same series were changed to Rick and Frannie. Characters in the Malory Towers and St. Clare‘s series are no longer spanked or threatened with a spanking, but are instead scolded. References to George’s short hair making her look like a boy were removed in revisions to Five on a Hike Together, reflecting the idea that girls need not have long hair to be considered feminine or normal. In 2010 Hodder, the publisher of the Famous Five series, announced its intention to update the language used in the books, of which it sold more than half a million copies a year. The changes, which Hodder described as “subtle,” mainly affect the dialogue rather than the narrative. For instance, “school tunic” becomes “uniform,” “mother and father” becomes “mum and dad,” “bathing” is replaced by “swimming,” and “jersey” by “jumper.” Times change; so does language. Blyton’s books are not great literature: no one suggests that they are. I think of them as period pieces reflective of my own boyhood, and not something I could recommend for my son when he was growing up. Japanese comics and video games were much more appealing to him in those days. I have to go with Mrs Beeton’s nursery recipes for jam roly-poly and rolled treacle pudding to honor Blyton; they seem so terribly apt. I give the suet crust recipe at the end for completeness, and because I still use it. Outside of the UK you generally have to buy suet from a proper butcher, and it comes in big lumps. Typically I freeze it and then hand-grate it. ROLY-POLY JAM PUDDING. - INGREDIENTS.—3/4 lb. of suet-crust No. 1215, 3/4 lb. of any kind of jam. Mode.—Make a nice light suet-crust by recipe No. 1215, and roll it out to the thickness of about 1/2 inch. Spread the jam equally over it, leaving a small margin of paste without any, where the pudding joins. Roll it up, fasten the ends securely, and tie it in a floured cloth; put the pudding into boiling water, and boil for 2 hours. Mincemeat or marmalade may be substituted for the jam, and makes excellent puddings. Average cost, 9d. Sufficient for 5 or 6 persons. Seasonable.—Suitable for winter puddings, when fresh fruit is not obtainable. ROLLED TREACLE PUDDING. - INGREDIENTS.—1 lb. of suet crust No. 1215, 1 lb. of treacle, 1/2 teaspoonful of grated ginger. Mode.—Make, with 1 lb. of flour, a suet crust by recipe No. 1215; roll it out to the thickness of 1/2 inch, and spread the treacle equally over it, leaving a small margin where the paste joins; close the ends securely, tie the pudding in a floured cloth, plunge it into boiling water, and boil for 2 hours. We have inserted this pudding, being economical, and a favourite one with children; it is, of course, only suitable for a nursery, or very plain family dinner. Made with a lard instead of a suet crust, it would be very nice baked, and would be sufficiently done in from 1-1/2 to 2 hours. Time.—Boiled pudding, 2 hours; baked pudding, 1-1/2 to 2 hours. Average cost, 7d. Sufficient for 5 or 6 persons. Seasonable at any time. SUET CRUST, for Pies or Puddings. - INGREDIENTS.—To every lb. of flour allow 5 or 6 oz. of beef suet, 1/2 pint of water.
Mode.—Free the suet from skin and shreds; chop it extremely fine, and rub it well into the flour; work the whole to a smooth paste with the above proportion of water; roll it out, and it is ready for use. This crust is quite rich enough for ordinary purposes, but when a better one is desired, use from 1/2 to 3/4 lb. of suet to every lb. of flour. Some cooks, for rich crusts, pound the suet in a mortar, with a small quantity of butter. It should then be laid on the paste in small pieces, the same as for puff-crust, and will be found exceedingly nice for hot tarts. 5 oz. of suet to every lb. of flour will make a very good crust; and even 1/4 lb. will answer very well for children, or where the crust is wanted very plain. Average cost, 5d. per lb.
<urn:uuid:fcc990e4-d3fe-4f36-97ef-911f8dba3e67>
CC-MAIN-2021-43
https://www.bookofdaystales.com/tag/treacle-pudding/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00191.warc.gz
en
0.972182
2,718
2.625
3
The latest paper is mainly concerned with the technical features of modern cloud offerings for a small school. The basic description of the available clouds explains several cloud types, including public, private and hybrid clouds, as well as clouds based on utility pricing models. Hybrid clouds are based on revenue-sharing models, with centralized data and processing power supplied by multiple hardware and service providers. Public clouds normally enable better connectivity than the other models. In this paper, we briefly review some essential characteristics of two significant modern cloud technologies: hybrid clouds and public clouds. Our main focus is on how to evaluate and choose between them to enhance the information and computing resources of small schools. We discuss key areas such as security (https://datos-de-la-nube.com/los-tipos-de-servicios-en-la-nube-son-el-uso-de-la-computacion/), scalability, centralizing resources, providing real-time control, reducing cost, educating users about new computing technology, and developing a teaching and learning strategy around these new solutions. Our next few blog posts will also address these issues. Hybrid clouds are quite similar to traditional clouds in that both offer infrastructure services such as software, hardware and network applications. However, a hybrid cloud provider uses computing power and virtualization techniques to provide better service quality than its traditional counterpart. For instance, a traditional cloud provider would deploy one hardware server and one software server, while a modern cloud provider deploys multiple virtual hosts on each physical server. Thus, modern cloud services generally offer improved centralization of resources and better service quality, especially in education, where many users may access the same computing resources.

Transport companies are a type of business that carries passengers or goods from one place to another. They may provide transport services to other businesses, individual passengers or international trade partners. Generally, a transport business is classified according to the kind of service it offers or the kind of customer base it serves. In addition, this classification also depends on the nature of its mode of operation, such as trucking, maritime, air freight or rail freight. Some entrepreneurs looking to establish their own transport business fail because they don't factor in the startup costs when determining the total capital required for the business. Because of this, they mistakenly believe that a transport business is very easy to set up. However, in order to run this kind of business successfully, it is important to factor in startup costs as well as operating costs. To determine the startup costs of your transport company, you should include fixed assets such as the vehicles and furniture, along with fixed costs such as fuel, maintenance, insurance and labor. The operational costs of your transport business would include the wages paid to the staff working in your company plus other expenses, such as vehicle equipment, utility bills and other costs incurred in the day-to-day operations of the company.
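The startup-versus-operating split above lends itself to simple arithmetic. Below is a minimal illustrative sketch of such a cost model; it is not from the original post, and every figure and field name is a made-up assumption.

```python
# Hypothetical first-year cost model for a small transport startup.
# All names and figures are illustrative assumptions, not real data.

def first_year_capital(fixed_assets: dict, monthly_operating: dict) -> int:
    """One-off startup outlay plus twelve months of recurring costs."""
    return sum(fixed_assets.values()) + 12 * sum(monthly_operating.values())

fixed_assets = {"van": 28_000, "office_furniture": 2_000}  # one-off purchases
monthly_operating = {
    "fuel": 900,
    "maintenance": 150,
    "insurance": 250,
    "wages": 3_200,
}  # recurring costs

if __name__ == "__main__":
    total = first_year_capital(fixed_assets, monthly_operating)
    print(f"Capital required in year one: ${total:,}")
```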
However, when starting a transport business it is best to keep expenses to a minimum, especially during the initial stages (transitbusiness.com). It is also advisable to use vans that are affordable and practical. If you own a transport business, it may be more economical to purchase a single van rather than renting several, which can be very costly. That way, you will be able to save a great deal of money that you can use for advertising, marketing and other essential areas of your business.

A data room is a space designed for the exclusive use of one or more computers. It is often a secured area meant exclusively for the use of high-security data. Data rooms are important spaces for housing sensitive or confidential information, usually in some sort of protected, highly secure environment. They are used for various purposes, such as data storage, secure document exchange, electronic file sharing, document transfer, financial transactions, and much more. There are many places where a data room may be located, including banks, businesses, institutions, hospitals, law firms, governments, press organizations, and more. Most of these places house sensitive information, which would be at risk without a secure physical data room. This information is normally kept on networked computers, with servers being accessible from any location. This arrangement allows files to be accessed even when a school or site is offline. The advantage of using online data rooms is that documents are kept online while the physical copies remain at the place of storage. Documents may be copied from the online data room and sent to a physical data room over the web, or printed out and mailed. Data may be replicated over the network using a specialized appliance, while other data may be copied physically using tape, disk drive, or laser printer. Data rooms also provide a location in which to perform highly confidential transactions. Some transactions may require the document or data to be viewed on a screen, copied, or scanned before it can be transmitted or stored. Security features may include firewalls, scanners, dedicated IP addresses, and encryption. In order to protect highly confidential transactions from others viewing the documents (https://searchstreams.info/room-data-sheets-example-for-management/), it is important to ensure that the online data room is password protected.
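"Password protected" in practice usually means the server stores a salted, slow hash of each password rather than the password itself. The sketch below illustrates that idea using only Python's standard library; the iteration count, salt length and sample passphrase are illustrative assumptions, not any particular data-room product's design.

```python
# Minimal sketch of salted password hashing and verification.
import hashlib
import hmac
import os
from typing import Optional

def hash_password(password: str, salt: Optional[bytes] = None) -> tuple[bytes, bytes]:
    """Derive a key from the password with PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)  # 16 fresh random bytes per account
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, key

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```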
Online games are one of the most exciting entertainment options for people who want to stay engaged in a fun activity for a long time. You can play free games while you wait for the inevitable evening meal, while studying for an exam, or even before sleeping at night. There is a wide range of free games available, and it is up to you to find out which ones you enjoy the most. For example, if you are fond of shooting games and have a side career as a hunter, there are games built just for this purpose. An online game is also often a multiplayer game, one that is mostly either partially or fully played through the Internet or some other computer network available worldwide. The basic idea behind a multiplayer game is that two or more players each take up a role and can engage in a battle against each other by communicating through different means, including voice chat, text chat, picture chat and others. The action can also take place in single-player mode, where one player acts as a protagonist and the other side as an antagonist. There are many examples of multiplayer games, including Counter-Strike, Daydreamer, Minecraft, FIFA, Skateboard Golf, and others. The best online games are the kinds that allow you to interact with other players. You should check whether all the players are connected to the Internet and see if you can communicate with them as a team. In addition, you should also try multiplayer games that allow you to create and customize your own character; they are actually better to play compared with single-player games. For the best results, you may also consider co-op and multiplayer titles that give you tips on improving your aim, tactics for earning more points, and useful information on the various elements that lead to winning.

Antivirus software, also called malware protection, is a special computer program used to prevent, detect, and eliminate malicious software. It stops malicious programs (viruses, spam, spyware, adware, etc.) from entering a system and/or using the system's resources. This prevents system crashes, loss of data, and overall instability of the computer. The antivirus software detects, prior to execution, the presence of malicious programs on the system and provides a listing of these malicious programs with their descriptions, so that the user may decide whether to allow them to run. Most antivirus software utilizes block lists, which are created by developers in collaboration with antivirus program specialists. These block lists are designed to scan all incoming data, both from the internet and from other applications, for identified malware programs. Upon detection, the block list is used to determine the malware code type, and the malware code is then removed. There are different types of antivirus software, including anti-spam, firewall, and privacy protection tools, among others; some of these are standalone applications and some are installed within other programs (www.appsguide.org/avast-free-antivirus-avast-pro-antivirus-avast-internet-security-avast-premier). Many free antivirus software providers offer protection features that can help prevent hackers from accessing your personal information. Common security features include identity theft protection, free antivirus software downloads, anti-spyware applications, and parental controls. These safety features work to protect private information such as financial records, credit card numbers, email addresses, family details, business information, and other pertinent data. While these free antivirus software download programs cannot guarantee the removal of viruses and other malware, they are a cheap way to try to protect yourself.
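The block-list mechanism described above boils down to comparing files against a database of known-bad signatures. Here is a minimal sketch of that idea using SHA-256 digests; it is an illustration only, and real antivirus products use far richer signatures plus heuristic and behavioral checks.

```python
# Minimal signature (block-list) scanner sketch: hash each file and
# flag any whose digest appears in a set of known-bad signatures.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    # Sample entry: the widely published SHA-256 of the EICAR test file.
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> list[Path]:
    """Return all files under `directory` that match the block list."""
    return [p for p in Path(directory).rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]

if __name__ == "__main__":
    for hit in scan("."):
        print(f"flagged: {hit}")
```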
The online data room for IPOs can be a very important asset for many different types of companies. Sometimes the single most critical piece of a business's upcoming stock offering is the data room for the IPO. With that said, it is also vital for an investor to have as much information as possible about what is happening in the company. If you are going to purchase this type of investment, you want to make sure you know what you are buying. That means going to a number of web sites and really doing your homework on the company you are thinking of investing in. This means knowing who the principals involved in the company are and what their track record is. One of the things that must be done when you buy access to an online data room for IPOs from an online data service provider is to conduct what is called "due diligence." What is "due diligence"? Basically, it is a way for an investor to see what kind of information the business has on hand with regard to what its customer base looks like. This is very important because in many cases an IPO will include details such as how many clients are currently using the service, the user activity of those subscribers, and what the overall earnings figures are in a given quarter. From all this information it is possible to determine how many users are paying for an annual subscription. This user activity, together with the overall number of people who have contacted the site, gives a true picture of the health of the stock. The last piece of information that should be available to anyone looking at buying digital data rooms for IPOs from an online data room provider is what kind of restrictions the company has put in place on who can access the data rooms. Generally this type of information is made available to institutional investors and brokers, but not to casual Internet viewers or ordinary customers. These restrictions are put in place to limit access to those who are actually intended to have it (https://vdroom.net/data-room-for-life-sciences) and to keep the company's proprietary information safe. This is the only way that the company or brokerage can ensure that they aren't letting in the wrong people.

You have certainly played the original Book of Ra game – it was one of the most addictive games around. But did you know that you can use the Book of Ra demo to get an idea of how the game plays? It's easy. "Book of Ra demo play – the great chance to improve your slot game" (book of ra magic online echtgeld).

How does Touch VPN protect your internet connection from hackers? If you are using a wireless connection at home or in the office, you are exposed to several security threats. When you are not at home, you may check your emails or social networking sites; if you have a secure VPN server at home or at work, you can safely access the web. How does Touch VPN help protect your privacy and identity from hackers? Through a secure VPN server, you can protect all of your devices from threats, including people who may try to use your personal information and bank accounts for their own gain. Although some apps do not allow ads, there are several that do. Apps like Facebook and Twitter use code that collects users' personal data, such as name, address, email address and more, and sends it to third parties.
You can get apps that block ads, but they cannot block the apps that are already collecting your personal data. How do I get ad-blocking apps onto my Android device? Many VPN apps are available on the Android Market that offer the same security you get through a VPN at home or at work. You can download any of these popular apps (digitalbloginfo.com/the-best-antivirus-for-android-and-why-you-need-it/) and install them on your Samsung tablet, including the Samsung Galaxy Tab. You can use these apps to surf the web while keeping up to date with the latest news and updates, chat with friends, and stream videos from Vimeo on your Samsung tablet.

In my personal opinion the AVG Ultimate Guide is the best anti-virus program on the market today. It has a very sturdy infection-removal program which scans your entire computer and removes the largest number of known viruses. I have tested the program several times on a variety of different computers to determine exactly how well it works. Here are the reasons why this program is so good. AVG Ultimate Guide – it actually stops problems from even being created in the first place by using malware-detection technology which other anti-virus software applications do not utilize. I had used all the antivirus programs that other software makers had produced, and none of them helped my system at all. After looking for a good anti-malware program to remove spyware from my system that would help stop problems, I came across the AVG Ultimate Guide. The guide walked me through every single step of what to do and how to remove malware from my computer, and it prevented most of the problems that were on my computer. Another great thing about this software is that it doesn't require you to have any knowledge of hacking techniques to remove malware. You will need to know how to search for threats on the Internet and then what to do with them once they have been found, and this information is provided in this excellent AVG Ultimate Guide. The software didn't take me long to figure out how to use, and I was able to use it on a brand-new computer that was clean. I recommend this product highly to anyone looking to prevent malware problems and to have a good experience with their computer without having to know everything about computers or any kind of hacking.

An Avast review will help you see whether the antivirus software designed by AVG Technologies has enough security for your computers. This antivirus software will not only protect you from viruses and other malware attacks but will also keep your computer protected from spamming activity. It protects you from spyware as well as any other type of spyware or adware attack. The internet security suite made by AVG Technologies is called AVG Internet Security. There is no other product that gives you a higher level of protection than this. It will keep your computer protected from all kinds of threats, including viruses, spyware, adware and trojan attacks. You can find out more about the various versions of the software on the internet.
All of the Avast reviews are available on the internet, and you can read about the detailed features of the program there. When using this antivirus software you have to be careful, because there are many threats on the internet that can harm your computer easily. You also have to remember that Avast reviews should be read carefully, because many of the people who write them claim that their pick is the best of the lot when, in terms of protection, it actually is not. Many people claim to have the best free version, but in fact they have been sharing the same viruses and spyware with millions of others (mybagsroom.info/triad-of-the-best-bitdefender-vs-avast-vs-norton/). This is the main reason why you should always use the Avast free version to scan and clean your computer regularly. Another great feature of the free version of this antivirus software is that it is completely free of charge, which means you do not have to spend even a single cent to try Avast. You can download it from the website and get complete protection from any kind of threat on your computer.
<urn:uuid:86774bd6-8937-4355-bcb5-c80a3850d49a>
CC-MAIN-2021-43
https://lspkits.com/category/uncategorised/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585171.16/warc/CC-MAIN-20211017082600-20211017112600-00431.warc.gz
en
0.946519
3,677
2.65625
3
The Census return for 1851 gives a fascinating picture of Dulwich before the impact of the railways and the Crystal Palace, when it was still part of the administrative county of Surrey and a village in every sense. The census enumerators took their own idiosyncratic route in carrying out their duties, so that it can be difficult to link households with particular buildings. Occasionally houses are named, and it would be possible (although this has not been done for the purposes of this survey) to arrive at definitive answers to such problems of identification by consulting College leases. Allowing for a few houses which may have been included incorrectly, this appraisal is concerned with the Village, the Common, Half Moon Lane, Dulwich (now Red Post) Hill, Herne Hill, the west side of Lordship Lane, and the Penge (now College) Road, in other words the College Estate excluding Sydenham Hill. It covers 278 households, about two-thirds of which had 'heads of households' born in the south-east of England. 122 had come from what is now Greater London or Middlesex, and of this number only 38 had their origins in Dulwich, showing (perhaps surprisingly to most people) that although the population may not have been as mobile as it is today, Dulwich was by no means an inbred country village. Of the remaining one-third of 'heads', a substantial group came from the south and west of England, 7 from Scotland, Ireland or Wales, 7 from Europe, and 2 from Asia. Although those from outside England were almost entirely middle-class, a large proportion of the manual workers came from beyond the South East. Certain groups of cottages or smaller houses, i.e. Wellington Place in north Dulwich, Garden Row and Lloyds Yard behind the west side of the village, and Ridley's and Herring's Cottages at the west end of the Common, have now disappeared, and Boxall Row has changed its name (to Boxall Road) and its form. In such dwellings, by and large, were housed the skilled tradesmen and working people who serviced the more wealthy denizens of the larger houses. On either side of the Village street (later the High Street, now called Dulwich Village) were the shops, mostly on the east side (as today), but there was of course no Commercial Place, nor shops at the north end of the village. The Census is unclear as to which houses were also shops, or whether heads of households were self-employed (although some are described as employers), but the picture for some types of shop does not look very different from today. Of the heads, four were butchers, three bakers, three grocers (two of them also cheesemongers), besides a fruiterer, chemist, fishmonger, bookseller, and a stationer who was also a harness maker. Shoes and clothing are a very different story, with five cordwainers (or shoemakers), two linen drapers, three tailors, and five dress makers, suggesting that clothes were made rather than bought off the peg. Many daughters of shopkeepers and manual workers were also dress makers or seamstresses - a further 14 women were thus employed - and while the daughters used a needle, their mothers often took in washing; 17 women were laundresses or manglers of linen. Four of these were heads of households, as were two charwomen, a butler, and one male house servant. However, in the category of domestic servant, a staggering 259 lived in.
Eighty-nine households had at least one servant, including 7 of the 16 (potential or actual) shopkeepers, the wheelwright, the local builder (Thomas Bartlett of north Dulwich, who employed six men), one of the carpenters, and one of the gardeners. Only ten of the 88 other middle-class households had no servants. If there was only one servant it was usually a young housemaid; if two, there would be a cook and housemaid; with three, a second housemaid or a nurse was added. A groom or footman is often listed where there are four or more servants, and further diversification (lady's maid, under nurse, kitchen maid) occurs as the numbers increase. Forty-eight households had four or more servants living in, and the banker Mr Matthias Attwood had a housekeeper, coachman, groom, cook, two footmen and three housemaids to look after himself, his brother and his son (both merchants) in his house at the top of Red Post Hill. His neighbour William Stone, at Casina (or Cassino) House, a silk broker (amongst other things), had eight. Gardeners did not live in, but the middle class must have taken a pride in their gardens, as there were 54 gardeners - the largest single occupational group - living in the area, usually away from the centre of the village, of whom 42 were heads of households. Two farms are noticed: Colonel Constable's 200 acres in (or near) Court Lane, and Mr Bew's dairy farm of 10 acres on the Common. Apart from Mr Constable's sons, 11 heads of households were described as agricultural workers, including two cowkeepers and a haybinder. Five unnamed tramps slept in Constable's barn on Census night. Mr Bartlett the builder has already been mentioned. The building trade was also represented by nine carpenters, eight bricklayers, six painters and three plumbers, 16 of whom were heads of households, several of them employing others. There was another builder, John Willson, but he described himself as a 'Builder in London employing 89 men and six boys'. His household had four servants. Apart from private houses, the biggest employers of staff were the inns; Mr Middlecott at the Greyhound, and Mr Webb at the Half Moon, each had five. The Crown was a smaller establishment with three. Mr Bryant, beershop keeper, was at Herring's Cottages in west Dulwich, with all four local police constables as his fairly close neighbours. Transport was represented by 17 coachmen, grooms or stablemen, not including those amongst the establishments of the larger houses, and of these 14 were heads of households. There were two omnibus proprietors and four omnibus conductors. In related professions, equivalent perhaps to today's garages, were five blacksmiths and Mr Dale, the vet, who also styled himself smith. He lived at the south end of what is now Commercial Place. Isolated trades were a caneworker in Boxall Row, and a mahogany picture frame maker near Bell House, opposite the Picture Gallery. It would be interesting to know whether the balance of occupations amongst the middle and upper classes has altered much since then. Almost certainly there were fewer teachers. At the College there was the Master (George John Allen) and the four Fellows, the Warden being absent when the Census was taken, and besides the 12 Poor Scholars there were the six Poor Brothers and six Poor Sisters (one of whom was, under the College Statutes, Matron for the boys, and is so described in the return), all but two of whom (giving the lie to their alleged poverty) had servants, often relatives.
The Master had his own coachman, and for the College as a whole there was a cook, footman, kitchen maid and two housemaids. Living opposite, on the east side of the village, was the Headmaster of the Grammar School, the Rev. Bennett George Johns, and on the west side, between the wheelwright and one of the butchers, lived assistant master William Joseph Harris, aged 25, his 38-year-old schoolmistress wife and 17-year-old 'daughter-in-law', also a teacher. In the same category, one schoolmistress lived alone in Boxall Row, one with her sister in Herring's Cottages, and one (the daughter of a bricklayer) taught English. A governess lived in a merchant's household on the Common and another was the wife of the bookseller. There are three Misses Berry recorded (Tom Morris, writing half a century later, says that the two Misses Berry kept a young ladies' school at Blew House), the eldest of whom, and the head of the household, is described as "annuitant". At the time of the Census they had four visitors (two iron merchants and two gentlemen) and three servants. Of other professions, there were two general practitioners (the celebrated Dr Webster, and the surgeon Edward Ray at 97 Dulwich Village) and eleven lawyers, nine of them heads of households. Charles Rankin (actually Ranken), the solicitor at Belair, had seven servants, and although one of the other solicitors had no servants, a solicitor's managing clerk who was head of the household at Elm Lodge in Half Moon Lane had three. Merchants and manufacturers make up by far the largest group of the middle class. If Attwood the banker and two stockbrokers are included, there were 44 (50% of the middle class excluding shopkeepers). Some are just described as merchants, but there is a fascinating variety: Stone the silk broker, Courage the brewer, a drug merchant, Manchester merchant, West Indian merchant, corn distiller, wood broker, straw hat maker, fancy soap maker, rice dresser, copper smelter, carpet warehouseman, and many more. The next largest group were ladies described as annuitants or fundholders - 15 in all. Four men were retired or living on unearned income. Other interesting individuals were Stephen Poyntz Denning, portrait painter (and keeper of the Picture Gallery), Bonham the deputy keeper, and an assistant Keeper of Public Records. Nicholas Francatelle, clerk, of Half Moon Lane, was evidently an early feminist; although he is listed as head of the household, his wife Mary is described as "wife in charge of the house". This article was written by Dr Tony Cox and first published in the Dulwich Society Newsletter in October 1983.
<urn:uuid:7f04921a-942c-4ff2-9b79-62084fd21165>
CC-MAIN-2021-43
https://dulwichsociety.com/local-history/380-the-1851-census
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00190.warc.gz
en
0.987779
2,067
2.671875
3
A dental abscess is a dental infection that, while comparatively simple to diagnose and access, is often difficult to manage acutely. Dental abscesses, or periapical infections, generally arise secondary to caries (tooth decay associated with poor dental hygiene), trauma, or failed root canal treatment. Left untreated, these infections are not only extraordinarily painful but also carry a significant risk of spreading into the deep neck spaces or ascending to the intracranial sinuses. A dental abscess, or tooth abscess, is a buildup of pus that forms inside the teeth or gums. The abscess typically comes from a bacterial infection, often one that has accumulated in the soft pulp of the tooth. A tooth-related abscess (also referred to as a periapical abscess) occurs within the tooth. This happens when the tooth's nerve is dead or dying. This type of abscess shows up at the tip of the tooth's root and then spreads to the surrounding bone. Most abscesses are painful, so people usually seek treatment quickly. Sometimes, however, the infection causes little or no pain. If an abscess isn't treated, the infection can last for months or even years. It will not go away on its own, so it is important not to ignore the symptoms.

Fast facts on dental abscesses
Here are some key points regarding dental abscesses; more detail and supporting material appear throughout this article.
- There are 3 types of dental abscess: gingival, periodontal and periapical.
- Symptoms of dental abscesses include pain, a bad taste in the mouth and fever.
- Dental abscesses are caused by a bacterial infection.
- Treatment for an abscess may involve root canal surgery.
- To minimize pain, it is best to avoid cold drinks and food and use a softer toothbrush.

Tooth Abscess Stages
White Spots. The initial stage of caries begins when chalky white areas appear on the surface of the tooth owing to the loss of calcium and the build-up of plaque. Bacteria in the plaque then begin to metabolize sugars from the food consumed. The buildup of these acids causes the enamel to deteriorate, a process referred to as demineralization of the tooth surface. At this phase, caries may still be reversible with the right treatment.

Enamel Decay. In stage two of decay, the enamel starts breaking down beneath the tooth's surface. At this stage, the natural remineralization process is unable to restore the proper enamel and minerals, causing a lesion to form within the tooth. As the decay persists, the surface of the tooth risks breaking, which is irreversible. If a tooth breaks, one should seek dental attention immediately.

Dentin Decay. Stage three of tooth decay is also referred to as dentin decay. If left untreated, bacteria and acids continue to dissolve the enamel, and the lesion risks reaching the dentin, the part of the tooth that lies between the enamel and the pulp. Once decay reaches the dentin, the level of pain begins to intensify, and a sharp pain may be experienced in the infected tooth. Once enough of the sub-surface enamel is weakened by the loss of calcium and phosphate minerals, the enamel collapses and a dental cavity is formed. At this point, a dental filling will most likely be needed to restore the tooth.

Involvement of the Pulp. The pulp is considered the tooth's center. It is made of living tissue and cells referred to as odontoblasts. Cells of the pulp produce dentin, which acts as the connective tissue between the enamel and the pulp.
If the pulp of a tooth becomes infected with bacteria, pus forms, killing the blood vessels and nerves within the tooth. This is commonly known as a toothache and can cause constant pain. At this stage, the most common course of treatment is root canal therapy.

Abscess Formation. Abscess formation is the final stage of decay and, by far, the most painful. Once the infection reaches the root tip of the tooth, the adjoining bone risks infection as well. The gums and tongue often swell, which can affect speech and puts you at risk for other diseases. At this stage, further oral surgery may need to be performed.

Tooth Loss. If left untreated through every stage of decay, the tooth will be lost and will need to be extracted.

Tooth decay is easy to prevent. Establishing an oral care program that involves these preventive measures can help avoid tooth decay: adhere to a strong oral hygiene regimen, use toothpastes and mouthwashes with fluoride, and brush like a professional with an electric toothbrush. A strong oral care program will maintain your oral health and is regarded as the best preventive measure for avoiding tooth decay. Avoid a diet high in sugar, as well as eating between meals. Drinking water also helps: staying hydrated helps produce saliva, which nourishes the tooth enamel and rinses the mouth. Of course, visiting your dental professional for regular checkups will both help prevent tooth decay and maintain healthy oral care.

Tooth Abscess Home Remedies
The following five home remedies can be applied together with prescribed treatments.

Salt water rinse. Rinsing your mouth with salt water is a straightforward and affordable option for temporary relief of your abscess. It can also promote wound healing and healthy gums. To use this remedy:
- Mix 1/2 teaspoon of ordinary table salt with 1/2 cup of warm water.
- Rinse your mouth with the salt water, trying to swish it around inside your mouth for at least 2 minutes.
- Spit the water out.
- Repeat up to three times per day.

Baking soda. Baking soda is another cheap option for treating an abscess; you may already have some in your kitchen cabinet. Baking soda is excellent for removing plaque in the mouth, and it also has antibacterial properties. To use this remedy:
- Mix 1/2 tablespoon of baking soda with 1/2 cup of water and a pinch of salt.
- Swish the mixture in your mouth for up to 5 minutes.
- Spit out, and repeat until you've finished the mixture.
- You can repeat this up to 2 times per day.

Oregano essential oil. Oregano oil is an essential oil that can be purchased at a health food store or drugstore; you will also find it online. Oregano oil is an antibacterial and antioxidant, and it may help reduce the swelling and pain of an abscess. Be sure to dilute any essential oil with a carrier oil to prevent further irritation. To use this remedy:
- Mix a few drops of oregano essential oil into one ounce of a carrier oil.
- Apply a few drops of this mixture to a cotton ball or swab.
- Hold the cotton ball on the infected area for two to three minutes.
- Remove the cotton ball or swab. Leave the mixture on for at least ten minutes, then rinse.
- Repeat up to three times per day.

Cold compress. A cold compress is good for reducing pain and swelling. To use this remedy:
- Place ice cubes in a dry towel.
- Hold the compress against your skin near the affected area.
- The compress can be used in 15-minute intervals.
- This may be repeated multiple times per day.

Fenugreek. Fenugreek has antibacterial properties and a long history of use as a home remedy for healing wounds and reducing inflammation. It may be available in the spice aisle of your supermarket or online. To use this remedy:
- Make a fenugreek tea by heating one cup of water in a saucepan and stirring in 1 teaspoon of ground fenugreek.
- Allow the mixture to cool.
- Apply a small quantity to the affected area using a cotton ball.
- Repeat up to three times per day.

Tooth Abscess Symptoms
Signs and symptoms of a tooth abscess include:
- Severe, persistent, throbbing toothache that can radiate to the jawbone, neck or ear
- Sensitivity to hot and cold temperatures
- Sensitivity to the pressure of chewing or biting
- Swelling in your face or cheek
- Tender, swollen lymph nodes under your jaw or in your neck
- A sudden rush of foul-smelling and foul-tasting, salty fluid in your mouth, with pain relief, if the abscess ruptures
- Difficulty breathing or swallowing

You might also notice:
- Gum redness
- A bad taste
- Pain when you chew
- Jaw pain
- Swollen lymph nodes
- Trouble breathing or swallowing
Sometimes an abscess causes a pimple-like bump on your gum. If you press it and liquid oozes out, that is a sure sign you have an abscess. That liquid is pus.

How do health care professionals diagnose a dental abscess?
A primary care provider or medical specialist can assess the signs and symptoms during a check-up to determine whether a dental abscess is present, and may then refer the patient to a dental provider for diagnosis and treatment. Diagnosis of a tooth abscess is jointly determined by:
- the signs and symptoms reported by the patient
- the examination and tests performed by the dentist
- what is pictured on dental radiographs (X-rays)

Tooth Abscess Treatment
In adult teeth, the usual treatment for an abscess begins with properly clearing the infection. Treatment depends on how far the tooth infection has spread. The course of action usually involves oral antibiotics. The tooth is opened to remove the infected contents of the pulp chamber. If needed, incision and drainage is performed on the soft tissue to provide a further exit for pus and to relieve the pressure of a growing infection. In some situations, the infection can spread quickly and requires immediate attention. If a dentist is unavailable and there is fever, swelling in the face, or swelling in the jaw, a visit to the hospital emergency room is advised. An emergency room visit is imperative if there is difficulty breathing or swallowing. Once the infection is cleared and the tooth can be restored, a root canal procedure is performed. The "root canal treatment" cleans out the entire inner space of the tooth (the pulp chamber and the associated canals) and seals the space with an inert rubber material called gutta-percha. Cleaning and sealing the inner space protects the tooth from further invasive infections. The tooth may need to be extracted if too much of the tooth structure, or of the bone that surrounds the tooth, has been lost to decay and infection. For children's primary teeth (baby teeth), if a tooth has become abscessed, there is very little that can be done to save the tooth. The infection has advanced, and there is no way to completely remove all of it.
The appropriate treatment to eliminate the infection is extraction of the abscessed tooth. Complete removal of the infection is also important to avoid a persistent infection that could risk harming the permanent tooth developing beneath. Oral antibiotics may or may not be required, depending on the extent of the infection. During pregnancy, a dental abscess requires immediate attention in order to minimize further spread of the infection. Any risk of infection while pregnant is a concern, as the infection may be more severe in pregnant women or may harm the fetus.

The clinical spectrum of dental abscess ranges from a minor, well-localized infection to severe, life-threatening complications involving multiple fascial spaces. The overwhelming majority of otherwise healthy patients presenting with a dental infection can be managed on an outpatient basis. Common presenting symptoms include dental pain or toothache; intraoral and/or extraoral swelling, erythema, or discharge; and thermal hypersensitivity. A major consideration is the potential for airway obstruction as a consequence of extension of the infection into the fascial spaces surrounding the oral cavity. A panoramic dental X-ray reveals the source of infection in most cases; however, a periapical X-ray can be helpful. A CT scan is recommended if there is suspicion of a fascial space infection or if panoramic or periapical X-rays are not available. Prompt operative intervention to identify and eliminate the source of infection and to provide a path for drainage, in conjunction with antibiotic therapy and supportive care, is required. Operative treatment is considered the cornerstone of successful management. Immunocompromised patients must be treated in a timely fashion, as tooth-related infections may spread quickly.
<urn:uuid:183c4e01-90dd-4dfb-a1a3-ee5939597fa0>
CC-MAIN-2021-43
https://www.healthymagazine.net/abscess-tooth/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00390.warc.gz
en
0.91917
2,698
3.515625
4
Their fur has been considered a luxury item since the Middle Ages.

Sable Scientific Classification
- Scientific Name: Martes zibellina

Sable Conservation Status
- Least Concern

Sable Facts
- Prey: Molluscs, hares, rodents, musk deer
- Name of Young: Cub
- Group Behavior: Solitary
- Fun Fact: Their fur has been considered a luxury item since the Middle Ages
- Estimated Population Size: 2.3 million
- Biggest Threat: Poaching
- Most Distinctive Feature: Luxurious fur
- Gestation Period: 245 to 298 days
- Litter Size: 1 to 7
- Habitat: Dense forest
- Predators: Wolves, foxes, wolverines, tigers, lynxes, eagles, owls
- Common Name: Sable
- Location: Russia, Mongolia, parts of China and the Korean Peninsula

Sable Physical Characteristics
- Skin Type: Fur
- Lifespan: 18 to 22 years
- Weight: 2 to 4 pounds
- Length: 35 to 36 centimeters
- Age of Sexual Maturity: 2 years
- Age of Weaning: 7 weeks

Sable Fur Has Long Been Cherished for Its Texture and Color
With their smooth, finely tinted coats, sables have long been objects of desire, or at least their fur has. The sable is a forest-dwelling animal that sports a lush, silky coat and spends most of its time alone. Today, sables are commercially farmed, but large populations still exist in Russian and Mongolian forests. Smaller communities can also be found in other pockets throughout Asia.

5 Incredible Sable Facts!
- Sable fur has been a luxury item since medieval times.
- In heraldry, sable is the word for black.
- Henry VIII, England's famous Tudor king who married six times, declared that this fur could only be worn by nobles with the rank of viscount or higher.
- Russia's Siberian conquest was in large part fueled by the sable fur trade.
- Sable hunting was a job done by Russian convicts exiled to Siberia.

Sable Scientific Name
The scientific name for the sable is Martes zibellina. It's a faux-Latin combination derived from the Old French word "martre," meaning "sable marten," and zibellina, which comes from the Italian name for the animal, zibellino. Colloquially, the word "sable" has Slavic roots and entered Western European vocabularies during the medieval fur trade. Germans adopted the term "zobel," the Dutch used "sabel," and the Spanish "cibelina." Medieval Latin, the language used by the Catholic Church in the Middle Ages, created the word "sabellum" to describe the animal.

Sable Appearance and Behavior
These animals are between 13 and 22 inches long from head to backside. Their tails tack on an additional 5.1 to 7.1 inches. They usually tip the scales at between two and four pounds, and typically, males are larger than females. Sable fur is unique in that it's smooth in all directions: when you brush it against and with the grain, it feels the same. However, the texture changes slightly with the seasons; winter pelage is longer and more lush than summer fur. Coloring is geographically dependent, but all populations are some shade of brown or black. Several populations also sport lighter patches around their throats. Genetic cousins to pine martens, these animals are similar looking, except that their hair is silkier, their heads and ears are shorter, and their tails are proportionally shorter. Depending on food availability, their home territories cover between 1.5 and 11.6 square miles. Most of the year, they're crepuscular hunters, meaning they're most active at dawn and dusk. However, during mating season, sables are out and about during the day as well. For the most part, sables are solitary animals and only convene for breeding and child-rearing.
As great climbers, they prefer habitats filled with spruce, pine, larch, cedar, and birch. Typically they burrow near riverbanks and deep in thick woods. Sables build lodges around tree roots, which serve as structural reinforcement. Inside, they carpet their dens with grass and shed fur.

Sable Predators and Threats
These animals are omnivores whose diets change seasonally. Because of their relatively small size, they are prey for larger carnivores.

What Eats Sables?
Sables are preyed on by larger carnivores such as wolves, foxes, wolverines, tigers, lynxes, eagles, and owls.

What Do Sables Eat?
In the summer, sables primarily feast on hares, eggs, and other small mammals. During winters, they incorporate wild berries and rodents into their nutrition rotation. Sometimes, they stalk wolf and bear tracks in search of leftovers. And occasionally, they'll eat fish caught with their front paws.

Currently, the International Union for Conservation of Nature categorizes sables under Least Concern. The populations remain stable, and some are even growing.

Reproduction, Babies, and Lifespan
June through mid-August is the sable breeding season. To win over mates, males rumble like cats, often violently. When pairs form, they couple for eight straight hours! However, fertilized eggs do not implant right away; it takes eight months for implantation to occur. As such, sables' gestation periods are 245 to 298 days, but embryonic development only lasts 25 to 30 days. These animals give birth in hollowed trees. To prepare for the event and ensure newborns' comfort, they build nests of moss, leaves, and dry grass. Litters can range in size from one to seven cubs, but two or three is the norm. Babies are born with closed eyes and weigh between 0.88 and 1.23 ounces. Typically, they're about 3.9 to 4.7 inches long. After about a month, pups' eyes open, and they leave the nest shortly after that. At two years old, they reach reproductive maturity and start having cubs of their own. During the babies' early days, mothers nurture and suckle the young, while fathers defend the nest and forage for food. How long do these animals live? In the wild, the average individual makes it to 18. In captivity, sables' lifespans are about 22 years.

Pine martens and sables can and do interbreed in the wild. Their offspring are called "kidus." Smaller than full sables, the hybrids also have coarser hair, and almost all are sterile. However, there is one known instance of a female kidu successfully mating with a pine marten.

The number of sable subspecies is a hotly debated topic. One school of thought insists only seven exist; others believe there could be 17 or as many as 30.

By the 20th century, these animals were nearly extinct from excessive hunting and poaching. However, commercial farming supplanted wild hunting, and sables experienced a resurgence. Their growth was aided by a Russian reintroduction initiative that lasted from 1940 to 1965. In terms of population numbers, researchers estimate that over 2 million individuals are thriving in the wild. According to some accounts, their numbers are rising, not declining.

Sable FAQs (Frequently Asked Questions)
Are Sables Carnivores, Herbivores, or Omnivores?
Sables are omnivores, meaning they eat both meat and plants.
What Is a Sable?
A sable is a medium-sized animal in the Mustelidae weasel family. Sables are most frequently associated with Russian wildlife, and the country has a long tradition of valuing the animal.
What Color Is a Sable?
Depending on location, sables' colors range from light brown to black.
Some populations have lighter patches around their throats. In the heraldic tradition, sable is the word used for black.
Can Sables Be Pets?
Sables do not make good pets. They have sharp teeth and bite, and attempts at training them have proved futile. However, sables are commercially farmed for their fur.
Is a Sable a Mink?
No, sables and minks are not the same animals. Their furs are different colors, textures, and weights. Plus, minks are slightly smaller than sables.
Where Do Sables Live?
Sables mainly live in Russia and Mongolia. Smaller populations survive in parts of northern China and the Korean Peninsula.
What Kingdom do Sables belong to?
Sables belong to the Kingdom Animalia.
What phylum do Sables belong to?
Sables belong to the phylum Chordata.
What class do Sables belong to?
Sables belong to the class Mammalia.
What family do Sables belong to?
Sables belong to the family Mustelidae.
What order do Sables belong to?
Sables belong to the order Carnivora.
What type of covering do Sables have?
Sables are covered in fur.
In what type of habitat do Sables live?
Sables live in dense forests.
What are some predators of Sables?
Predators of Sables include wolves, foxes, wolverines, tigers, lynxes, eagles, and owls.
What is the scientific name for the Sable?
The scientific name for the Sable is Martes zibellina.
What is the lifespan of a Sable?
Sables live for 18 to 22 years.
What is a baby Sable called?
A baby Sable is called a cub.
What is the biggest threat to the Sable?
The biggest threat to Sables is poaching.
How many Sables are left in the world?
There are 2.3 million Sables left in the world.
What is an interesting fact about the Sable?
Sable fur has been considered a luxury item since the Middle Ages.
How many babies do Sables have?
The average litter size for a Sable is 1 to 7.
Reading in English is one of the best ways to build your child's vocabulary. It can also be a good family activity to do together. The websites below will give you free resources to help make reading with your child fun, as well as educational. Let's take a look!

Why should you read in English with your child?

Childhood is the best time to start learning a second language. A positive early experience with English often leads to a better connection with the language and improved fluency in later life.

For kids, speaking and listening are easier skills to develop than reading and writing. However, learning to read in English from a young age will help your child to develop a wider vocabulary and a better understanding of structure (grammar). Children's books are written and illustrated to be fun, engaging and easy to understand. Connecting a new piece of vocabulary or grammar to a story or poem can make it easier for a child to remember. Reading interesting stories doesn't feel like hard work or "study", so this is good preparation for more formal English lessons later.

Reading together with your child can help them take more of an interest in reading. If they struggle to read by themselves, then listening to you read a story in English will still help them to practise their listening skills and build their vocabulary. You might even find a new favourite story together!

The recommended websites below are a mix of British and American resources. Most include videos or audiobooks, which you can use with your child to practise listening and matching the sounds of words to how they are written. All of these kids' reading websites are free to access and many of them offer downloadable ebooks!

This is a superb website for reading online because instead of giving you an ebook, its stories are in the form of interactive games! The story appears onscreen at the same time as it is read aloud, and you can interact with the story while it is being told. There are also bonus games to play afterwards, song videos, and printable activities. The site is built for young children, including pre-schoolers who are just learning their letters for the first time. New stories are added to this website regularly. Each interactive story comes with at least one game, two songs, and a printable activity sheet to help your child practise new vocabulary that they have learnt during the story.

Try the Preschool Activity Library to find stories and activities based around your child's interests – for example: colours, trains or their favourite animal! You can also look into the Literacy page for top tips on "raising a reader"!

We Enjoyed: I Will Not Take A Bath is a very funny little story about a baby who doesn't want a bath until he has his favourite toys – why not practise telling the story with your child's favourite bath-time toys?

Suggested Age Range: 3-6 years

Starfall is an online resource built to help children learn to read. It offers interactive books for preschool children and young schoolchildren; each book is built around practising particular letters and sounds. There are also seasonal stories – such as Pumpkin for Halloween or Snowman for Christmas – and simple maths exercises in English! There are also lots of free resources for teaching your child at home, including printable worksheets, flashcards and posters. Some resources are only available to teachers in the USA and Canada, but there is more than enough material for you to use when teaching your kids English at home!
Try the Talking Library to practise reading and listening to famous stories – or even learn a bit of Shakespeare! Make sure you check out the Parent-Teacher Center for printables, advice, and even lesson plans!

We Enjoyed: The It's Fun to Read section has some fun tongue-twisters to try! How fast can you say them all?

Suggested Age Range: 3-9 years

Oxford Owl is a British website providing free ebooks, learning activities, and workbooks for children. You have to sign up to the site to gain full access, but creating an account is completely free and gives you access to a library of hundreds of books and structured learning resources! Sounds good, right? All resources are categorised by age, making it easy to find stories that should interest your child. Activities and workbooks are also divided in this way, making it very easy to navigate what's best for your son or daughter!

Try the phonics guide, which is written to help parents understand and teach English phonics to their children! If you want to practise writing as well as reading with your child, the Activity Books section includes a workbook for proper handwriting. This is especially useful if your native language doesn't use the Roman alphabet!

We Enjoyed: The Winnie the Witch books are a fun series of stories about an inept witch, and have a lot of lively illustrations that are great for younger children to explore and talk about!

Suggested Age Range: 4-11 years

Storyline Online is a great site for finding free videos of writers, teachers and sometimes celebrities reading children's storybooks aloud. The videos come with fun visuals from the story as well! Each video also has free activity guides for both teachers and parents. These will give you plenty of ideas on how to discuss the story with your child, and games you can play afterwards. The website's blog also contains links to many other useful resources. Storyline is run by the award-winning SAG-AFTRA Foundation, and they are linked to a huge network of excellent literacy resources for children!

Try listening to a story one day, then watching it again the next day with the sound off and the captions on. See if your child can imitate how the reader of the story talks! Don't forget to look at the Activity Guides below each video to find ways to get your child thinking about the story they have just heard.

We Enjoyed: Library Lion is a cute, funny story that teaches children about how to behave politely in a library and the importance of helping other people.

Suggested Age Range: 3-12 years

This one is, quite simply, exactly what it says on the tin – a website dedicated to sharing children's books for free! Books are sorted for toddlers, young children, older children and young adults/teenagers, and can all be either read online or downloaded as a PDF for offline reading. As well as storybooks, this site has hundreds of non-fiction books, allowing your child to use English as a tool to learn about different countries, famous people, history, science, or whatever takes their fancy!

Try the School Textbooks category to find free textbooks and workbooks aimed both at American schoolchildren and kids studying English as a second language. The Learn to Read category is also an excellent stop if your child is just beginning to learn to read in English – it's full of books that are simple, but engaging!

We Enjoyed: Sticky Brains by Nicole Libin is a great book for learning how to talk about emotions, as well as good and bad events.
It is an easy way for kids to learn about managing stress in life.

Suggested Age Range: 3-15 years

The ICDL (International Children's Digital Library) is an incredible resource for finding children's books from all around the world. Most of the books are in English, but there are dozens of languages available. There are many books that are in more than one language, like the award-winning The Blue Sky by Andrea Petrlik Huseinović, which you can read in English, German, Italian, Spanish, Russian, Slovak, Romanian or Farsi! The website is easy to explore and gives you access to hundreds of books that you can read online for free! Books are sorted by language, but also by country. You can decide whether you'd like to read books in English from America, Australia or Great Britain – just as we have different accents, you might notice some differences in how we write and tell stories!

Try finding some books that are available in both your native language and English. Read them together in your native language first, then read them together in English! Look for the White Ravens tag as well – these are top-quality books that have been approved by a panel of language specialists!

We Enjoyed: The Hare of Inaba, a translation of a famous Japanese fairy tale in English, German, Italian and Spanish. The story is short and easy to read and comes with many beautiful Japanese paintings!

Suggested Age Range: 5-12 years

Project Gutenberg is the world's best resource for finding books for free online. Most of the books are older books, but there are still plenty of children's books that can be read online or downloaded to read on your phone, tablet or Kindle. Make sure to check the publication date of a book before using it, because some of them are hundreds of years old and the English in them doesn't look anything like modern English! Project Gutenberg adds books to its library when they come into the public domain, so hundreds of new books arrive every January 1st. It's a particularly useful site for teenagers who are interested in history!

Try reading different versions of a famous story, like Cinderella or Goldilocks and the Three Bears, and see what new words you can learn from different books! Make sure to use the Bookshelves feature to make it easy to find the kind of books you're looking for, such as "Children's Picture Books"!

We Enjoyed: The Blue Fairy Book is a very pretty ebook with nearly forty famous stories that are easy to read quickly!

Suggested Age Range: 5+ years

Breaking News English is a website that updates every day with news articles written in English at different fluency levels, ranked from 0 to 6. The articles written at levels 0-3 should be interesting to children, but the higher levels are excellent practice for teenagers and adults, too! Each article comes with several activities attached, with different activities for each level. The goal of these tasks is to encourage children to think and talk about the news that they've just read and to help your child take an interest in what's happening in the world!

When looking for articles, try picking a Theme that interests your child – e.g. the environment or technology. Older learners can look at articles focused around education and even business!

We Enjoyed: Using the listening section to hear articles read aloud. You can listen to them at different speeds to make it easier or more challenging. You can also listen to them in either American or British English so you can get used to different accents!
Suggested Age Range: 6+ years

Storynory is an excellent site with stories aimed at older children, teenagers, and even adults. Above the text of each story is an audio recording, so you can listen to it as you read. Hundreds of stories, poems and non-fiction books are accessible completely free! Storynory also contains a sizeable library of poems, rhymes and music. Children have an easier time learning rhymes and songs because they have a rhythm. Practising some of the classics available on the site can really help your child remember new vocabulary too!

Try reading the comments that people have left under each story and decide if you agree or disagree. Once you've read a story, leave your own comment saying what you think of it! The Junior Stories section contains short stories for younger children, and while it's not very big right now, the collection is growing fast!

We Enjoyed: The Histories of Herodotus have been rewritten in easy English, so your child can learn some history in a fun and engaging way!

Suggested Age Range: 8+ years

Wikipedia is one of the most useful sites on the internet for learning new things, but you can also use it to help your kids learn English! Simple English Wikipedia rewrites Wikipedia articles to make them easier to read, which can be very useful when you're learning English as a second language. This website is not the best resource for absolute beginners. However, it can be very helpful for anybody with a couple of years' experience reading in English who wants to expand their general knowledge and their reading ability.

Try reading about your own country or your child's favourite movie and see if you can learn something new together! Alternatively, set your child a project to do on a particular subject and show them how to use Wikipedia to research their topic online. This is a useful skill to learn.

We Enjoyed: The page on Basic English includes a picture wordlist, which is a great resource for learning core English vocabulary and making your own flashcards at home!

Suggested Age Range: 10+ years

Want to improve your child's English fast? We can help! Our British English teachers offer specialised 1-to-1 classes for kids that are educational and fun. We believe that a positive experience at an early age leads to a love of English and better fluency for life. Contact us today to book your free consultation and see how our online lessons can help your child succeed!
Misunderstanding our system of government will be the first step in the direction of future abuses of power and unconstitutional action.

Sri Lanka will elect a new president on 16 November 2019. Unusually, the winner will be neither the current President nor the current Prime Minister; what is less well understood is that the office itself to which the new president is elected will be a new office in terms of its powers and functions. That is because certain provisions of the 19th Amendment to the Constitution, enacted in 2015, cease to operate with the election of Maithripala Sirisena's successor, with the net effect of further reducing the powers of the presidency.

That will not render the office a titular or ceremonial presidency as under the 1972 Constitution, as some experts have tried to argue, because the president continues to be directly elected by the whole country and in possession of key constitutional functions within the Executive. But it does mean that some important powers and functions so far associated with the office will no longer be available to the new president, together with limitations that have already come into effect with the 19th Amendment.

That the 19th Amendment made fundamental changes to the scope of presidential powers is generally well known. But there is still a lot of confusion about what precisely those changes are, and what consequences they have. Many politicians especially do not seem to appreciate that the presidency is no longer the over-mighty institution it once was. At least part of the explanation for the constitutional crisis of October-December 2018 was that even those in the highest positions of authority were ignorant of how the 19th Amendment had changed the 1978 Constitution. This lack of knowledge cannot once again become a reason for constitutional crises after the next election. Economic analysts have pointed out persuasively that we can scarcely afford new crises.

What follows is an attempt to explain the powers and functions that will – and critically, will not – be available to the president who will be elected to office in November. So as not to vex the general reader, I will avoid making copious references to legal provisions and authorities, as well as too much detail on the minutiae of presidential functions. I will instead provide a descriptive account of the main features and principles of the framework of executive power after the 19th Amendment, without at the same time oversimplifying the complexities of the system.

Main institutional characteristics of the executive presidency after 16 November

There will be a directly elected president, with a fixed term of five years, although a president in his first term has the discretion of offering himself for re-election after four and a half years of that term. There is a two-term limit: no one person may hold the office more than twice. The president is the only elected official who enjoys a personal mandate from the whole country, and this will continue to be a source of substantial power and legitimacy for the way in which a president performs his constitutional functions.

The president will be the head of state, head of government, and the commander-in-chief. These are not merely formal roles, because as head of the Executive and of the government, the president is also a member and the head of the cabinet of ministers.
That the defence of Sri Lanka is specifically mentioned as being a part of the sovereign Executive power of the people that the president exercises could give him considerable authority in relation to defence, military, intelligence, and even law enforcement matters. The president also continues to appoint the secretary to the cabinet, the secretary to the prime minister, and all ministry secretaries. Although all such administrative heads, except the cabinet secretary, are subject to the direction of the prime minister or the respective minister once appointed, the power of appointment, which remains solely in the president's hands, is not insignificant. Finally, the president remains in sole charge of the vast resources and apparatus of the Presidential Secretariat. Unless strongly resisted by the cabinet and Parliament, the Secretariat is an enormous source and instrument for the informal projection of presidential power.

Against these sources of formal and informal power must be weighed the 19th Amendment limitations.

The president and the Constitutional Council

This is perhaps the best-known aspect of the 19th Amendment. Good governance is intended to be achieved through depoliticising the process of appointments to key constitutional offices and independent commissions. In other words, through introducing the Constitutional Council into the process of public appointments, the president's powers have been significantly curtailed. This happens in two ways: either the president has to appoint on the recommendation of the Council, or appoint only subject to the approval of the Council.

Independent commissions are established to oversee elections, the public service, the police, the public audit, human rights, bribery and corruption, devolved finance, delimitation of electoral boundaries, and public procurement. Appointments to these independent bodies in turn can be made only on the recommendation of the Council. If the president fails to heed recommendations within 14 days, the appointments are deemed made by operation of law.

Appointments (and even acting appointments) to other senior constitutional offices of the state can only be made by the president if approved by the Constitutional Council. These include the chief justice and judges of the Supreme Court, the president and judges of the Court of Appeal, the members of the Judicial Services Commission, the Attorney General, the Auditor General, the Inspector General of Police, the ombudsman, and the Secretary General of Parliament. In relation to the appointment of judges, the Council also needs to obtain the views of the Chief Justice. No longer, therefore, can the president act on his own discretion in these broad areas.

The Constitutional Council is chaired by the speaker and comprises the prime minister, the leader of the opposition, other MPs, and three distinguished non-politicians approved by Parliament.

The president and the prime minister and cabinet

The most significant way in which the 19th Amendment both changed the Constitution and curtailed the powers of the president was by strengthening the position of the prime minister within the Executive. Two key changes abolished or reduced what were previously virtually unlimited presidential powers. Firstly, the amendment strengthened the position of the prime minister by removing the president's unilateral power to appoint and dismiss the prime minister.
That is, the principle that only the Member of Parliament commanding the confidence of Parliament can be appointed prime minister has been strengthened. More explicitly, the prime minister can now only be removed on the loss of the confidence of Parliament in the government as a whole, on his death or resignation, or on his ceasing to be a Member of Parliament.

Secondly, the president is now required to act on the advice of the prime minister when appointing and dismissing cabinet and other ministers. However, the president need only consult the prime minister when determining the number of cabinet ministries and the assignment and reassignment of subjects to ministers, and in this way continues to play a significant role within the Executive. What this means is that Executive power is no longer something exclusively held and unilaterally exercised by the president (with the prime minister and cabinet as his subordinates); Executive power is now indisputably shared between the president and the prime minister.

Another significant limitation comes into force when the special provisions applicable only to President Sirisena lapse on his relinquishing office. It was provided that President Sirisena could hold the ministries of Defence, Mahaweli Development, and Environment so long as he remained President. Once he demits office, only Members of Parliament can be appointed to cabinet and other ministerial office. Therefore, the new president assuming office after 16 November is no longer permitted to assign ministries to himself.

The president and Parliament

Aside from the requirement of parliamentary confidence in the prime minister, which now determines the latter's life in office rather than the president's wishes, and the role of the Constitutional Council (which is essentially a parliamentary body), the most significant way in which the 19th Amendment has curtailed presidential powers in relation to Parliament is by the removal of the power of dissolution. Parliament can now only be dissolved at the sole discretion of the president in the last six months of its five-year term. During the first four and a half years of Parliament's term, it can only be dissolved if Parliament itself requests an early dissolution by a resolution passed by a two-thirds majority. And of course, Parliament retains the power to dismiss a government by defeating the government's statement of policy or the budget, but in these respects it is the prime minister and cabinet that are affected rather than the president himself.

The president remains responsible, but not answerable, to Parliament, except in the exceptional situations that might trigger impeachment proceedings. The president's power to prorogue Parliament also formally remains without any limitation, although it is restricted in indirect ways (e.g., the requirement to summon Parliament in order to pass appropriations). The president also retains the right to attend, address, and send messages to Parliament.

The president and the courts

The president continues to enjoy general legal immunity while in office for both official and private acts and omissions. However, the Supreme Court can now entertain fundamental rights applications instituted against the Attorney General challenging official acts or omissions of the president (except declarations of war and peace).

This outline of the post-16 November presidency tells us a number of important things.
First, formalistic legal arguments based on the constitutional text alone, to the effect that the presidency becomes a merely ceremonial institution after the next presidential election, are wide of the mark.

Second, and conversely, expectations that the newly elected president can behave in the fashion of the pre-19th Amendment presidency are ignorant of the law. In particular, the presidential election in no way legally affects the continuation in office of the sitting prime minister. The latter can of course be removed from office in various ways, but not through a simple presidential dismissal.

Third, the new president will not be able to dissolve Parliament until February 2020 (which is when the current Parliament, elected in August 2015, enters the last six months of its life), unless Parliament itself resolves on an earlier dissolution by a two-thirds majority.

Fourth, a new institutional framework is now in place governing relations within the Executive as well as between Executive and Legislature, and while this does not prevent expansive political action by a new president, it is crucial for the avoidance of confusion – and even crisis and chaos – that the proper legal parameters of politics are well understood by those who will be in power after 16 November.

Finally, and more generally, it will be seen that while the 19th Amendment made certain fundamental changes to the structure of government under the 1978 Constitution, it did not abolish the semi-presidential model of government underpinning that Constitution. Accordingly, the president will be directly elected to perform certain functions, Parliament will sanction the performance of certain functions by the prime minister and the cabinet, and all three institutions perform distinctive roles under the Constitution in order to ensure both effective government and effective accountability.

Misunderstanding our system of government will be the first step in the direction of future abuses of power and unconstitutional action, borne of incomprehension and frustration, if the new president and those around him come to the conclusion that he cannot do what he was elected to do.

(Dr. Asanga Welikala is a Lecturer in Public Law at the School of Law, University of Edinburgh, and the Director of the Edinburgh Centre for Constitutional Law.)

(Courtesy of the Daily FT)
By Angela Daly – Chinese University of Hong Kong

Additive manufacturing, better known as '3D printing', is an innovative manufacturing technique which involves the creation of three-dimensional objects using a variety of methods, based on a digital design file containing a blueprint for the final object. In recent years, inexpensive 3D printers using plastics as raw materials have appeared on the market, with some costing as little as US$300, cheaper than many smartphones. While still rudimentary, these machines allow their owners to make objects that previously would have required much more expensive equipment in a factory setting. Furthermore, digital design files for 3D printing are available freely on Internet sites such as Thingiverse and GitHub. The decentralised and digitised aspects of 3D printing have raised similar issues for intellectual property infringement and intermediary liability as we have seen with the Internet itself.

This author has been working on legal and regulatory aspects of 3D printing for more than 5 years and wrote the first research monograph on this topic, Socio-Legal Aspects of the 3D Printing Revolution, in 2016, although that book focussed on developments in Northern/Western jurisdictions such as the United States, European Union and Australia. This reflected much of the existing research on 3D printing being located in the Global North/West, despite the fact that it has been posited by various commentators, including WIPO, that 3D printing may have different, and more impactful, trajectories in the Global South and also in remote areas.

It was this gap which inspired the proposal to the United Kingdom Intellectual Property Office of a research project examining 3D printing's current and potential future relationship with intellectual property (IP) in some developed and emerging economies. This international, interdisciplinary project, 3D Printing and Intellectual Property Futures, ran from 2016 and concluded in late 2018 with the publication of a Final Report.

In the 3DPIP Futures project, we investigated the actual and potential future relationship between 3D printing and IP in various countries in Europe and Asia, namely China, France, India, Russia, Singapore and the UK. We conducted fieldwork in the form of horizon-scanning workshops with 3D printing ecosystem stakeholders in all these locations between September 2017 and April 2018. A general overview of the project and its IP-related findings is available elsewhere. This brief analysis will focus on our research on the three BRICS countries we included in our project: China, India and Russia. Project constraints meant that we could not conduct research in all BRICS countries (as much as we would have liked to!), and our existing knowledge, experience and contacts led us to choose China, India and Russia over Brazil and South Africa. The following paragraphs offer an overview of the 3DPIP situation in China, India and Russia, before moving to some relevant project findings.

China presents a very interesting example of the development of 3D printing and IP. China has its own culture of making: there are many makerspaces throughout the country, following the opening of Shanghai's XinCheJian (新车间) in 2010, and Chinese maker culture is characterised by the idea of 'shanzhai' (山寨), which represents an alternative, self-reliant, open innovation vision for novel and remixed products 'created in China'.
Similarly to Chinese Internet services, there is an emerging localised ecosystem of 3D printing service providers, and Western entities such as Thingiverse are less prevalent there. There are also various government initiatives to develop Chinese industry, technology and the economy which affect or involve 3D printing, the most (in)famous being the Made in China 2025 policy.

There is a complex relationship between 3D printing and IP in China. Open innovation approaches, and the expiry of patents on 3D printing, have been viewed as key developments for the Chinese 3D printing industry. Yet attitudes towards IP have evolved in China: previously the country was viewed as a 'user' of the IP of others, especially from the West, but the Chinese Government has since undertaken significant reform of IP law (although enforcement remains a challenge). 3D printing is also firmly established in China itself: most of the 3D printing machines used globally are made there, and 3D printing is being put to creative and innovative uses within China, which may stymie attempts by Western and developed economies to 'onshore' manufacturing back from China through 3D printing and other forms of 'making'. We viewed China as moving towards a capitalist innovation paradigm and in many cases instituting policies not so dissimilar from the West. This includes a complex relationship with IP, both currently and in the projected futures for 3D printing.

3D printing is less prevalent in India compared to China, but India shows significant potential for future development. Similarly to China (and in fact most of the other countries we investigated), the Indian Government has implemented policies to stimulate digitisation and advanced manufacturing in the country. Notable among these policies are Digital India and Make in India. The latter policy draws on ideas of 'swadeshi', promoting Indian business and products through protectionist strategies and the formation of innovative ICT, service and manufacturing clusters. These clusters are now beginning to attract skilled graduates, especially from IITs, who might otherwise have left the country, which seems to be having a knock-on effect on the number of patents now being filed in India.

In India, the role of open innovation and commons approaches was viewed as significant in the past and present, and going into the future, particularly alongside local technology diffusion to facilitate India becoming a fully digital society. Yet other trends in India for 3D printing, innovation and IP may tend towards more conventional capitalist characteristics. Thus, the picture again for India is complex. India's very large young population, emerging middle class and pre-existing manufacturing and logistics conditions make it a possible site for a large manufacturing paradigm change involving 3D printing. Although India may not replace China as the 'World's Factory', it could be the site of pioneering localised and distributed manufacturing, a model which may be adopted elsewhere, especially in other parts of the Global South.

3D printing has been present in Russia since at least 2011. Prior to this, especially in the 1990s and early 2000s, Russia had a reputation for widespread music, video and software piracy. However, this is no longer the case, with the current trend being for Russians to buy original, official products, to the extent that they will take out loans to buy smartphones.
The Russian government has its own innovation policies, which involve improving legal protection for Russian innovations and, by 2035, creating a network of Factories of the Future in Russia. However, at present there is much less patenting activity around 3D printing in Russia compared to China and India. At the consumer end, while 3D printers have come down in price, they are still quite expensive for the average Russian consumer, with even a cheap 3D printer costing more than half the average monthly salary. Yet Russians can access 3D printers in many schools, universities and FabLabs. A more localised future, with individuals engaging in co-creation (although not necessarily owning their own 3D printers at home), was viewed as a likely scenario in Russia, and one which may involve capitalist IP production and usage.

Overall, we found a number of similarities across the countries we investigated, both BRICS and non-BRICS. These similarities included government policies to stimulate the creation and take-up of new technologies, including 3D printing. Another similarity is that 3D printing does not currently appear to be posing fundamental threats to IP in any of the countries we looked at. But IP is also not unimportant for 3D printing in these countries, as we saw from the emphasis on patenting activity, the expiry of patents leading to greater technology dissemination, and the possibility of more IP litigation. A point of difference among the countries was the future outlook for 3D printing and IP, which varied significantly. The BRICS countries we examined, and Singapore, were broadly aligned to a capitalist future outlook which would likely implement or preserve 'conventional' IP laws and practices. The future outlook for the UK and France diverged from this picture by opening more possibilities for commons-based scenarios which may challenge conventional IP.

The complexity of the 3D printing and IP relationship, both now and in the future, in the countries we looked at seems to evidence the need for research on digital technologies to adopt more global approaches and scopes, and in particular to take account of developments in BRICS countries. A traditional approach in Western jurisdictions has been to examine US and European Union developments, especially regarding Internet governance and regulation. This approach is too limited given global digitisation, technology use and technology development. The emerging strength of BRICS countries in technology governance, and the complex picture this comprises in each country, must be taken into account in any internationalised study of emerging digital technology and its interaction with laws, regulation and other forms of governance – something we aimed to do in the 3DPIP Futures project.

Our research offers an insight into how some BRICS countries are developing, implementing and governing 3D printing. However, it is only one insight – continuing research is necessary to examine and document how digital technologies, including 3D printing, are being used, applied and regulated around the world, especially in the many Asian, African and Latin American countries we were unable to consider.

Angela Daly is a European socio-legal scholar currently based at the Chinese University of Hong Kong Faculty of Law. Her research is on the regulation of new digital technologies, and how the 'large jurisdictions' of the EU, US and BRICS approach this topic. She is a CyberBRICS associated scholar.
The author would like to acknowledge the UK Intellectual Property Office which funded this research, and her project collaborators Thomas Birtchnell, Thierry Rayna, Ludmila Striukova, Luke Heemsbergen and Jiajie Lu.
"In Yosemite Valley, one morning about two o'clock I was aroused by an earthquake; and though I had never before enjoyed a storm of this sort, the strange, wild thrilling motion and rumbling could not be mistaken, and I ran out of my cabin, near the Sentinel Rock, both glad and frightened, shouting, 'A noble earthquake!' feeling sure I was going to learn something" – John Muir, great American naturalist, writing about feeling the March 26, 1872, Owens Valley earthquake.

There is nothing more exciting to a seismologist than to feel the ground shaking during an earthquake. The sense that the Earth is alive, that geology is dynamic, and that for a brief moment in time it is possible to actually "see" tall mountains rise and deep valleys sink is palpable. Alas, even seismologists rarely experience a large earthquake first hand – although there are 10-20 magnitude 7+ earthquakes annually, only a very few are located near population centers. Seismologists mostly reside in the dingy halls of academic institutions, or worse yet, within the sterile offices of government agencies (first-hand experience). It is only with great serendipity that seismologists have the happy happenstance to be standing on the ground above a suddenly slipping fault. That "slip" is the breaking of rock caused by the accumulation of strain driven by the ceaseless movement of the Earth's plates. A small amount of the energy released by the rock breaking is converted to seismic waves that travel through the Earth. The quote at the top of the article is from John Muir, and was his emotional response to feeling a large earthquake in Owens Valley 100 km from his cabin. Muir's words capture the pure joy seismologists feel when they recognize the vibrations from an earthquake.

My wife and I planned a trip to visit southernmost Chile to celebrate our anniversary over Christmas break. The highlight of the trip was a visit to Patagonia (which is the subject of the article "Collisions at the Bottom of the World II"), and trekking within Torres del Paine national park. I worked on various seismic experiments within Chile in the 1990s, but I never had the opportunity to visit Patagonia; I love the high Andes of central and northern Chile (along with Bolivia and Argentina), but pined for the "Blue Towers" at the very end of South America.

The long-planned anniversary trip did not start auspiciously: plane mechanical issues and gross incompetence by American Airlines meant we missed our plane to Santiago not once, but TWICE; we arrived in Santiago on the evening of the 24th instead of the planned morning of December 22. Finally, on Christmas Eve we made it to Chiloe, a beautiful island at the northern end of the Chiloe Archipelago. We were to stay a few days at an absolutely spectacular hotel, Tierra Chiloe (http://www.tierrahotels.com/tierra-chilo-hotel-boutique/). We planned for some trekking on the island, mostly to see something unique culturally. Earthquakes never crossed my mind, although that probably is a remarkable confession! Early Christmas morning we arranged to trek on the Pacific Coast – and at the very beginning of our trek we got, oh so, oh so very close to the John Muir feeling of the "noble earthquake".

Chile: Home of the Monster Earthquake

The entire coastline of Chile, all 2,500+ miles of it from the border with Peru to the overlook into the Drake Passage, is a convergent boundary.
Mostly this convergent boundary is between the oceanic Nazca plate and the continental landmass of South America on the South American Plate. The Nazca plate and South America are converging at a rate on the order of 10 cm/yr, and the Nazca plate disappears beneath Chile in a subduction zone. This subduction gives rise to volcanoes and the uplift of the Andes; it also makes Chile one of the most seismically active regions in the world. In fact, Chile has seen more magnitude 8 earthquakes in the last 150 years than all other countries combined.

However, the subduction along the length of Chile is complicated by the oblique angle between the South American coastline and the Nazca-Pacific spreading direction. In the north, the coastline is thousands of km from the spreading center, but near Chiloe the spreading center is only a few hundred km from the coast. The ocean crust of the Nazca plate is very young when it descends beneath Chiloe, and very old when it subducts beneath Iquique near Peru. The young crust is very warm and therefore buoyant, and thus it resists descending through the mantle. This buoyancy translates to a very "stiff" subduction zone, and very large earthquakes. In fact, the largest earthquake known occurred along the southern section of the Chilean subduction zone on May 22, 1960. The figure above shows the area that slipped in that earthquake (the pink color). The earthquake ruptured a fault that started in the north (the epicenter of the earthquake) and moved to the south almost 1000 km. The fault had a maximum slip of about 25 m – an extraordinary number! A single earthquake moved one side of the fault almost 100 feet relative to the opposite side. This earthquake created a huge tsunami that traveled across the Pacific Ocean and caused fatalities in Hawaii and Japan.

Seismologists measure the size of earthquakes with seismic moment, which is defined as Mo = μ D A. This simple formula states that moment (Mo) is the product of the fault slip (D), the fault area (A) and the rigidity (μ) of the fault (think of this as the strength of the rock that slips along the fault surface during an earthquake). The long age of the Nazca plate translates to a large value for rigidity. It is possible to convert seismic moment to a value of magnitude – which is not particularly useful to seismologists, but is very important to the public because of their familiarity with Richter's magnitude. For the 1960 earthquake the magnitude is calculated to be 9.6, by far the largest earthquake ever.

A careful examination of the map of the Chilean earthquake fault zone above will show that the very center of the fault is… Chiloe! The large size of the 1960 earthquake obviously causes every resident of Chiloe to treat terremotos (earthquakes) with concern. However, it is possible to calculate the average "return time" for the 1960 event by comparing the convergence rate and the slip in the event. This return time is about 300 years. This means it is unlikely that another monster earthquake (M > 9.0) will strike near Chiloe in the next few decades; but it also means that great earthquakes (M > 8.0) are going to happen every 50 years or so, and large earthquakes (M > 7.0) every few decades. In other words, when I made our plans for visiting Chiloe I should have AT LEAST THOUGHT about earthquakes!

Missed It by That Much!

We started our Christmas day trek with a drive to the west coast of Chiloe Island about 8:45 am. Around 11:15 we made a stop near Cucao on our way to Muelle de las Almas.
The stop was at a remarkable pebble bar that broke the surf. The bar is about 8 m high and 30 m wide, and with every surge of the surf the pebbles are pulled seaward, causing a loud clacking. The bar was once the site of a placer operation that recovered meager amounts of gold. We were on a tight schedule or I would have explored the bar for much longer. However, we got back into our 4WD vehicle and headed for the trailhead.

Within minutes of getting into the car we noticed that the power poles were swaying; the wires between poles looked to be moving 3 or more meters. My first thought was: where the heck did those hurricane-force winds come from? Within another few minutes we had started our trek and my phone went crazy with emergency notifications. At first I thought they were from New Mexico, but on closer examination it became obvious that they were Chilean, and warned that a large earthquake had just occurred and a tsunami warning had been issued. Soon our guide was being called on his radio and told to evacuate immediately. A quick search showed that the USGS had reported a magnitude 7.7 (later downgraded to 7.6) earthquake under the southern tip of Chiloe – only 45 km south of us!

My strong inclination was to continue the trek and wait on a high ridge to see a tsunami come ashore. However, I was overruled by the guide (for the record, Michelle was voting with me – wait for the frick'in waves!). There were numerous reports of landslides, and within 30 minutes there were reports of 20 homes destroyed at Puerto Quellon. Discretion once again trumped valor; we abandoned the trek and headed back to the hotel. Along the road we encountered numerous landslides, and cracked roads and bridges. The earthquake knocked out power to the entire island, and broke water pipes up to 80 km from the epicenter. When we got back to Castro there were lines of cars trying to fill up with gasoline – scores of cars at each station. The lack of power and the concern about future earthquakes caused a mini-panic. I don't mean to give the impression of chaos, just concern driven by the haunting memory of 1960.

In the end, we ended up with a cancelled trek, a quiet afternoon looking for birds instead of interesting rocks, and thoughts about how, if we had only waited 10 minutes on the gravel bar, we would have experienced shaking with an intensity of 6 or 7. Instead, we had the soft rubber of tires and the suspension system of a truck to damp out the shaking… missed it by oh so little.

There remains a remote chance that this earthquake is a foreshock to a larger earthquake, but it seems unlikely. However, it is still a great anniversary present to an old seismologist on vacation.
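For readers who want to check the arithmetic behind the moment and magnitude quoted above, here is a minimal Python sketch. The rupture length and maximum slip come from this post; the rupture width, average slip, and rigidity are illustrative assumptions on my part, and published estimates of the 1960 moment are somewhat larger.

import math

# Back-of-the-envelope seismic moment and moment magnitude for the 1960 event.
rigidity = 33e9    # mu, rock rigidity in pascals (assumed typical crustal value)
avg_slip = 20.0    # D, average slip in meters (assumed; maximum slip was ~25 m)
length = 1000e3    # rupture length in meters (from the text)
width = 150e3      # rupture width in meters (assumed)

# Mo = mu * D * A, with fault area A = length * width, in newton-meters
moment = rigidity * avg_slip * (length * width)

# Standard moment-magnitude conversion (Hanks and Kanamori), Mo in N*m
magnitude = (2.0 / 3.0) * (math.log10(moment) - 9.1)

print(f"Mo = {moment:.2e} N*m, Mw = {magnitude:.1f}")
# Prints Mo ~ 9.9e22 N*m and Mw ~ 9.3 with these assumed inputs; published
# estimates of Mo near 2-3e23 N*m yield the Mw 9.5-9.6 quoted above.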
This type of testing utilizes high-frequency sound waves that are transmitted through the material being inspected.

Non-Destructive Testing with Ultrasonic Technology

Step 1: The UT probe is placed on the root of the blades to be inspected with the help of a special borescope tool (video probe).
Step 2: Instrument settings are input.
Step 3: The probe is scanned over the blade root. In this case, an indication (peak in the data) through the red line (or gate) indicates a good blade; an indication to the left of that range indicates a crack.

Ultrasonic testing (UT) is a family of non-destructive testing techniques based on the propagation of ultrasonic waves in the object or material tested. In most common UT applications, very short ultrasonic pulse-waves with center frequencies ranging from 0.1-15 MHz, and occasionally up to 50 MHz, are transmitted into materials to detect internal flaws or to characterize materials. A common example is ultrasonic thickness measurement, which tests the thickness of the test object, for example, to monitor pipework corrosion. Ultrasonic testing is often performed on steel and other metals and alloys, though it can also be used on concrete, wood and composites, albeit with less resolution. It is used in many industries including steel and aluminium construction, metallurgy, manufacturing, aerospace, automotive and other transportation sectors.

On May 27, 1940, U.S. researcher Dr. Floyd Firestone of the University of Michigan applied for a U.S. invention patent for the first practical ultrasonic testing method. The patent was granted on April 21, 1942 as U.S. Patent No. 2,280,226, titled "Flaw Detecting Device and Measuring Instrument". Extracts from the first two paragraphs of the patent for this entirely new nondestructive testing method succinctly describe the basics of such ultrasonic testing: "My invention pertains to a device for detecting the presence of inhomogeneities of density or elasticity in materials. For instance if a casting has a hole or a crack within it, my device allows the presence of the flaw to be detected and its position located, even though the flaw lies entirely within the casting and no portion of it extends out to the surface. .. The general principle of my device consists of sending high frequency vibrations into the part to be inspected, and the determination of the time intervals of arrival of the direct and reflected vibrations at one or more stations on the surface of the part."

James F. McNulty, a U.S. radio engineer at Automation Industries, Inc., then of El Segundo, California, who remedied many of the early shortcomings of this and other nondestructive testing methods, describes ultrasonic testing in further detail in his U.S. Patent 3,260,105 (application filed December 21, 1962, granted July 12, 1966, titled "Ultrasonic Testing Apparatus and Method"): "Basically ultrasonic testing is performed by applying to a piezoelectric crystal transducer periodic electrical pulses of ultrasonic frequency. The crystal vibrates at the ultrasonic frequency and is mechanically coupled to the surface of the specimen to be tested. This coupling may be effected by immersion of both the transducer and the specimen in a body of liquid or by actual contact through a thin film of liquid such as oil. The ultrasonic vibrations pass through the specimen and are reflected by any discontinuities which may be encountered.
The echo pulses that are reflected are received by the same or by a different transducer and are converted into electrical signals which indicate the presence of the defect."

To characterize microstructural features in the early stages of fatigue or creep damage, more advanced nonlinear ultrasonic tests should be employed. These nonlinear methods are based on the fact that an intense ultrasonic wave becomes distorted as it encounters micro-damage in the material. The intensity of distortion is correlated with the level of damage. This intensity can be quantified by the acoustic nonlinearity parameter (β). β is related to the first- and second-harmonic amplitudes; in standard formulations it is proportional to the second-harmonic amplitude divided by the square of the first-harmonic amplitude. These amplitudes can be measured by harmonic decomposition of the ultrasonic signal through fast Fourier transformation or wavelet transformation.

How it works

In ultrasonic testing, an ultrasound transducer connected to a diagnostic machine is passed over the object being inspected. The transducer is typically separated from the test object by a couplant (such as oil) or by water, as in immersion testing. However, when ultrasonic testing is conducted with an Electromagnetic Acoustic Transducer (EMAT), the use of couplant is not required.

There are two methods of receiving the ultrasound waveform: reflection and attenuation. In reflection (or pulse-echo) mode, the transducer performs both the sending and the receiving of the pulsed waves as the "sound" is reflected back to the device. Reflected ultrasound comes from an interface, such as the back wall of the object, or from an imperfection within the object. The diagnostic machine displays these results in the form of a signal with an amplitude representing the intensity of the reflection and the distance representing the arrival time of the reflection. In attenuation (or through-transmission) mode, a transmitter sends ultrasound through one surface, and a separate receiver detects the amount that has reached it on another surface after traveling through the medium. Imperfections or other conditions in the space between the transmitter and receiver reduce the amount of sound transmitted, thus revealing their presence. Using a couplant increases the efficiency of the process by reducing the losses in the ultrasonic wave energy due to separation between the surfaces.

Advantages
- High penetrating power, which allows the detection of flaws deep in the part.
- High sensitivity, permitting the detection of extremely small flaws.
- In many cases only one surface needs to be accessible.
- Greater accuracy than other nondestructive methods in determining the depth of internal flaws and the thickness of parts with parallel surfaces.
- Some capability of estimating the size, orientation, shape and nature of defects.
- Some capability of estimating the structure of alloys of components with different acoustic properties.
- Non-hazardous to operations or to nearby personnel, with no effect on equipment and materials in the vicinity.
- Capable of portable or highly automated operation.
- Results are immediate, so on-the-spot decisions can be made.

Disadvantages
- Manual operation requires careful attention by experienced technicians. The transducers alert both to the normal structure of some materials and to tolerable anomalies of other specimens (both termed "noise"), as well as to faults severe enough to compromise specimen integrity. These signals must be distinguished by a skilled technician, possibly requiring follow-up with other nondestructive testing methods.
- Extensive technical knowledge is required for the development of inspection procedures. - Parts that are rough, irregular in shape, very small or thin, or not homogeneous are difficult to inspect. - Surface must be prepared by cleaning and removing loose scale, paint, etc., although paint that is properly bonded to a surface need not be removed. - Couplants are needed to provide effective transfer of ultrasonic wave energy between transducers and parts being inspected unless a non-contact technique is used. Non-contact techniques include Laser and Electro Magnetic Acoustic Transducers (EMAT). - International Organization for Standardization (ISO) - ISO 2400: Non-destructive testing - Ultrasonic testing - Specification for calibration block No. 1 (2012) - ISO 7963: Non-destructive testing — Ultrasonic testing — Specification for calibration block No. 2 (2006) - ISO 10863: Non-destructive testing of welds -- Ultrasonic testing -- Use of time-of-flight diffraction technique (TOFD) (2011) - ISO 11666: Non-destructive testing of welds — Ultrasonic testing — Acceptance levels (2010) - ISO 16809: Non-destructive testing -- Ultrasonic thickness measurement (2012) - ISO 16831: Non-destructive testing -- Ultrasonic testing -- Characterization and verification of ultrasonic thickness measuring equipment (2012) - ISO 17640: Non-destructive testing of welds - Ultrasonic testing - Techniques, testing levels, and assessment (2010) - ISO 22825, Non-destructive testing of welds - Ultrasonic testing - Testing of welds in austenitic steels and nickel-based alloys (2012) - ISO 5577: Non-destructive testing -- Ultrasonic inspection -- Vocabulary (2000) - European Committee for Standardization (CEN) - EN 583, Non-destructive testing - Ultrasonic examination - EN 1330-4, Non destructive testing - Terminology - Part 4: Terms used in ultrasonic testing - EN 12668-1, Non-destructive testing - Characterization and verification of ultrasonic examination equipment - Part 1: Instruments - EN 12668-2, Non-destructive testing - Characterization and verification of ultrasonic examination equipment - Part 2: Probes - EN 12668-3, Non-destructive testing - Characterization and verification of ultrasonic examination equipment - Part 3: Combined equipment - EN 12680, Founding - Ultrasonic examination - EN 14127, Non-destructive testing - Ultrasonic thickness measurement (Note: Part of CEN standards in Germany accepted as DIN EN, in Czech Republic as CSN EN.) - Time-of-flight diffraction ultrasonics (TOFD) - Time-of-flight ultrasonic determination of 3D elastic constants (TOF) - Internal rotary inspection system (IRIS) ultrasonics for tubes - ^Matlack, K. H.; Kim, J.-Y.; Jacobs, L. J.; Qu, J. (2015-03-01). 'Review of Second Harmonic Generation Measurement Techniques for Material State Determination in Metals'. Journal of Nondestructive Evaluation. 34 (1): 273. doi:10.1007/s10921-014-0273-5. ISSN0195-9298. - ^Mostavi, Amir; Kamali, Negar; Tehrani, Niloofar; Chi, Sheng-Wei; Ozevin, Didem; Indacochea, J. Ernesto (2017). 'Wavelet Based Harmonics Decomposition of Ultrasonic Signal in Assessment of Plastic Strain in Aluminum'. Measurement. 106: 66–78. doi:10.1016/j.measurement.2017.04.013. - ^U.S. Patent 3,260,105 for Ultrasonic Testing Apparatus and Method to James F. McNulty at lines 37-48 and 60-72 of Column 1 and lines 1-4 of Column 2. |Wikimedia Commons has media related to Ultrasonic flaw detection.| - Albert S. Birks, Robert E. Green, Jr., technical editors ; Paul McIntire, editor. Ultrasonic testing, 2nd ed. 
Columbus, OH: American Society for Nondestructive Testing, 1991. ISBN 0-931403-04-9. - Josef Krautkrämer, Herbert Krautkrämer. Ultrasonic testing of materials, 4th fully rev. ed. Berlin; New York: Springer-Verlag, 1990. ISBN 3-540-51231-4. - J.C. Drury. Ultrasonic Flaw Detection for Technicians, 3rd ed. UK: Silverwing Ltd., 2004. (See Chapter 1 online (PDF, 61 kB).) - Nondestructive Testing Handbook, Third ed.: Volume 7, Ultrasonic Testing. Columbus, OH: American Society for Nondestructive Testing. - L. Angrisani, L. Bechou, D. Dallet, P. Daponte, Y. Ousten. Detection and location of defects in electronic devices by means of scanning ultrasonic microscopy and the wavelet transform. Measurement, Volume 31, Issue 2, March 2002, Pages 77-91. - Charles Hellier (2003). "Chapter 7 - Ultrasonic Testing". Handbook of Nondestructive Evaluation. McGraw-Hill. ISBN 978-0-07-028121-9.
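As promised in the harmonic-decomposition discussion above, here is a minimal numerical sketch of estimating a relative acoustic nonlinearity parameter from a digitized waveform. It is an illustration under stated assumptions, not a calibrated NDE procedure: the function names, the Hann window, and the synthetic test signal are all assumptions, and the ratio A2/A1^2 is only proportional to β up to setup-dependent constants (wavenumber, propagation distance, transduction factors).

```python
import numpy as np

def relative_beta(signal, fs, f0):
    """Estimate a relative acoustic nonlinearity parameter from a
    sampled ultrasonic waveform.

    signal -- 1-D array of sampled amplitudes
    fs     -- sampling rate (Hz)
    f0     -- fundamental (drive) frequency (Hz)

    Returns A2 / A1**2, proportional to beta up to setup-dependent
    constants, so it is only useful for comparing nominally
    identical measurements (e.g., tracking fatigue damage over time).
    """
    n = len(signal)
    windowed = signal * np.hanning(n)          # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def amplitude_near(f):
        # amplitude of the spectral bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    a1 = amplitude_near(f0)        # first-harmonic amplitude
    a2 = amplitude_near(2 * f0)    # second-harmonic amplitude
    return a2 / a1**2

# Synthetic check: a 5 MHz tone with a weak second harmonic, sampled
# at 100 MHz, as might come from a digitizer in a pulse-echo setup.
fs, f0 = 100e6, 5e6
t = np.arange(4096) / fs
signal = np.sin(2 * np.pi * f0 * t) + 0.02 * np.sin(2 * np.pi * 2 * f0 * t)
print(relative_beta(signal, fs, f0))
```

In a real measurement the same ratio would be tracked across specimens or loading cycles, since the absolute value depends on the instrumentation.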
<urn:uuid:5a070671-d691-4f36-91cc-4364badb1301>
CC-MAIN-2021-43
https://travel.faotas.info/ultrasonic-crack-testing-machine.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585183.47/warc/CC-MAIN-20211017210244-20211018000244-00550.warc.gz
en
0.868018
2,582
3.140625
3
Health Indicator Report of Cardiovascular Disease - Heart Disease Deaths. In 2017, heart disease was the leading cause of death in New Mexico and accounted for over 20% of all deaths. Notes: Heart disease mortality is defined as circulatory disease, heart disease (ICD-10: I00-I09, I11, I13, I20-I51). Some rows in data tables may include a note of Unstable or Very Unstable. Those rates labeled Unstable were statistically unstable (RSE >0.30 and <0.50), and may fluctuate widely across time periods due to random variation (chance). Those rates labeled Very Unstable were extremely unstable (RSE >0.50). These values should not be used to infer population risk. (A short sketch of this classification rule appears at the end of this page.) Data have been directly age-adjusted to the U.S. 2000 standard population. Data for the United States were obtained from the CDC/National Center for Health Statistics mortality data reports, available online at www.cdc.gov/nchs/deaths.htm. - New Mexico Death Data: Bureau of Vital Records and Health Statistics (BVRHS), New Mexico Department of Health. - Population Estimates: University of New Mexico, Geospatial and Population Studies (GPS) Program, http://gps.unm.edu/. - U.S. Data Source: Centers for Disease Control and Prevention, National Center for Health Statistics, http://www.cdc.gov/nchs/. Definition: Diseases of the heart include a variety of conditions that may affect different parts of the heart, including the blood supply, the heart muscle, the internal lining and valves, the conduction system, and the membrane that surrounds the heart. Common causes of death from diseases of the heart include myocardial infarction (heart attack), heart failure, and cardiac arrest. Numerator: Number of heart disease deaths. Denominator: New Mexico population. How Are We Doing? Generally, overall heart disease death rates have been decreasing for decades. However, heart disease and cancer deaths remain the top two leading causes of death in NM and the US. Age and Sex: As is common with chronic diseases, death rates increased as age increased, with a steep increase in the oldest age group (85+ years). In 2017, the heart disease death rate of males, 190.8 per 100,000, was statistically significantly higher than that of females, 116.1 per 100,000. This relationship to sex was seen across all but the youngest age group, 0 to 14 years of age. Race/Ethnicity: Heart disease mortality varied greatly by race and ethnicity. During the 3-year period 2015-2017, in descending order from highest rate to lowest rate, each rate statistically significantly higher than all lower rates, the rates were: Black or African Americans, 213.3 per 100,000; whites, 154.2 per 100,000; Hispanics, 137.1 per 100,000; American Indians or Alaska Natives, 121.5 per 100,000; and Asians or Pacific Islanders (the lowest rate), 84.8 per 100,000. County: During the period 2015-2017, the heart disease mortality rate varied by county. The six counties with the highest rates, five above 200 per 100,000, were Sierra, Chaves, Lea, Curry, Luna, and Eddy counties. The six counties with the lowest rates, all below 120 per 100,000, were Catron, Los Alamos, Santa Fe, Taos, Mora, and Harding. Urban and Rural: NM counties were designated into four groups of urbanicity and rurality, using the National Center for Health Statistics classification scheme. For 2015-2017, heart disease mortality rates were highest and similar in Mixed Urban/Rural and Rural counties, and lowest in Metro and Small Metro counties.
The heart disease mortality rate for Small Metro counties was statistically significantly lower than for all other Urban/Rural categories. How Do We Compare With the U.S.? US and NM: NM rates were consistently lower than US rates. Rates continue to decrease in the US as a whole. NM rates have remained essentially flat since 2009, though slightly lower than the rates of earlier years. Over the past ten years, an average of 3,406 heart disease deaths occurred annually in NM. Rates for the nation have been decreasing since the 1950s. Decreases in mean blood pressure levels, mean blood cholesterol levels and smoking, as well as improvements in medical care, have contributed to this decline in death rates. However, heart disease and stroke remain leading causes of disability and death. (Achievements in Public Health, 1900-1999: Decline in Deaths from Heart Disease and Stroke - United States, 1900-1999. Centers for Disease Control and Prevention.) What Is Being Done? The NM Department of Health Heart Disease and Stroke Prevention (HDSP) Program within the Population and Community Health Bureau uses a comprehensive, evidence-based approach to promote healthy lifestyles focused on preventing, identifying and controlling high blood pressure and high cholesterol levels among New Mexican adults. Our mission is to improve the health of New Mexicans by implementing and evaluating effective strategies for cardiovascular disease prevention and management. The HDSP program and its partners work with communities, health systems, health care providers and other organizations across the state to implement activities that improve quality of care as it relates to blood pressure and cholesterol control. This will reduce CVD-related illness, save lives and be a valuable investment in population health. Program strategies include: * Assist health systems in tracking and monitoring clinical measures to improve health care quality and identify patients with high blood pressure * Encourage team-based care practices within health systems * Promote sustainability of community health workers/community health representatives/promotoras * Increase the use of self-measured blood pressure monitoring with clinical support * Facilitate referral of adults with high blood pressure or high blood cholesterol to community programs/resources * Advance health equity to improve health outcomes and quality of life * Increase the HDSP's capacity to achieve and sustain program goals and strategies. The HDSP program consults with populations that are disproportionately affected by cardiovascular disease and stroke, and/or those that serve them, to develop education and services that are culturally appropriate to these populations. Evidence-based Practices: Evidence-based community health improvement ideas and interventions may be found at the following sites: - The Guide to Community Preventive Services - Health Indicators Warehouse - County Health Rankings - Healthy People 2020 Website. Heart disease and its complications can be prevented and managed through these strategies: * Clinical decision-support systems designed to assist healthcare providers in implementing clinical guidelines at the point of care. * Reducing out-of-pocket costs (ROPC) for patients with high blood pressure and high cholesterol. * Team-Based Care to Improve Blood Pressure Control.
* Interventions engaging community health workers/community health representatives/promotoras * Implementing self-measured blood pressure monitoring interventions * Interactive digital interventions for blood pressure self-management * Mobile Health (mHealth) interventions for treatment adherence among newly diagnosed patients. CDC recommends specific major activities to implement these seven effective strategies: 1) Clinical decision-support systems (CDSS) designed to assist healthcare providers in implementing clinical guidelines at the point of care. * Implementation of CDSS at clinics and sites that provide healthcare services, along with technical assistance on proper use of these systems. * CDSS for cardiovascular disease (CVD) prevention include one or more of the following: * Reminders for overdue CVD preventive services, including screening for risk factors such as high blood pressure, diabetes, and high cholesterol * Assessments of patients' risk for developing CVD based on their medical history, symptoms, and clinical test results * Recommendations for evidence-based treatments to prevent CVD, including intensification of treatment * Recommendations for health behavior changes to discuss with patients, such as quitting smoking, increasing physical activity, and reducing excessive salt intake * Alerts when indicators for CVD risk factors are not at goal. 2) Reducing out-of-pocket costs (ROPC) for patients with high blood pressure and high cholesterol: * Reducing out-of-pocket costs involves program and policy changes that make cardiovascular disease preventive services more affordable. These services include: * Medications * Behavioral counseling (e.g. nutrition counseling) * Behavioral support (e.g. community-based weight management programs, gym membership) * Encouraging the delivery of preventive services in clinical and non-clinical settings (e.g. worksite, community). * Promoting interventions that enhance patient-provider interaction, such as team-based care, medication counseling, and patient education. * Increasing awareness of covered services among providers and patients with high blood pressure and high cholesterol, using targeted messages. * Working with diabetes management and tobacco cessation programs to coordinate coverage for blood pressure and cholesterol management. 3) Team-Based Care to Improve Blood Pressure Control: * Team-based care to improve blood pressure control is a health systems-level, organizational intervention that incorporates a multidisciplinary team to improve the quality of hypertension care for patients. * Provide technical assistance to facilitate communication and coordination of care support among various team members, including the patient, the patient's primary care provider, nurses, pharmacists, dietitians, social workers, and community health workers. * Enhance the use of evidence-based guidelines by team members. * Actively engage patients and populations at risk in their own care by providing educational materials, medication adherence support, and tools and resources for self-management (including health behavior change). 4) Interventions engaging community health workers/community health representatives/promotoras: * Screening and health education.
CHWs screen for high blood pressure, cholesterol, and behavioral risk factors recommended by the United States Preventive Services Task Force (USPSTF); deliver individual or group education on CVD risk factors; provide adherence support for medications; and offer self-management support for health behavior changes, such as increasing physical activity and smoking cessation. * Outreach, enrollment, and information. CHWs reach out to individuals and families who are eligible for medical services, help them apply for these services, and provide proactive client follow-up and monitoring, such as appointment reminders and home visits. * Team-based care. As care team members, CHWs partner with clients and licensed providers, such as physicians and nurses, to improve coordination of care and support for clients. * Patient navigation. CHWs help individuals and families navigate complex medical service systems and processes to increase their access to care. * Community organizing. CHWs facilitate self-directed change and community development by serving as liaisons between the community and healthcare systems. 5) Implementing self-measured blood pressure monitoring interventions: * One-on-one patient counseling on medications and health behavior changes (e.g., diet and exercise) * Educational sessions on high blood pressure and blood pressure self-management * Access to electronic or web-based tools (e.g., electronic requests for medication refills, text or email reminders to measure blood pressure or attend appointments, direct communications with healthcare providers via secure messaging) 6) Interactive digital interventions for blood pressure self-management: In these interventions, patients who have high blood pressure use digital devices to receive personalized, automated guidance on blood pressure self-management. Devices include mobile phones, web-based programs, or telephones. Interactive content does not require direct input from a health professional. 7) Mobile Health (mHealth) interventions for treatment adherence among newly diagnosed patients: mHealth interventions for treatment adherence use mobile devices to deliver self-management guidance to patients who have been recently diagnosed with cardiovascular disease. Content must be accessible through mobile phones, smartphones, or other hand-held devices. Interventions must include one or more of the following: * Text messages that provide information or encouragement for treatment adherence * Text-message reminders for medications, appointments, or treatment goals * Web-based content that can be viewed on mobile devices * Applications (apps) developed or selected for the intervention with goal-setting, reminder functions, or both * An interactive component (i.e., patients enter personal data or make choices) that gives patients personally relevant, tailored information and feedback * Mobile communication or direct contact with a healthcare provider * Web-based content to supplement text-message interventions. Page Content Updated On 10/31/2018, Published on 01/09/2019
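As a concrete companion to the RSE thresholds quoted in the Notes section above, here is a minimal sketch of the stability classification. The function name and the Python setting are illustrative assumptions; the cutoffs come directly from the report.

```python
def stability_label(rate, standard_error):
    """Apply the report's stability thresholds to a rate.

    RSE (relative standard error) = standard error / rate.
    Rates with 0.30 < RSE < 0.50 are labeled Unstable, and rates
    with RSE > 0.50 are labeled Very Unstable; per the Notes above,
    such values should not be used to infer population risk.
    """
    rse = standard_error / rate
    if rse > 0.50:
        return "Very Unstable"
    if rse > 0.30:
        return "Unstable"
    return "Stable"

# Example: a small county with a rate of 120 deaths per 100,000
# and a standard error of 45 (RSE = 0.375).
print(stability_label(120.0, 45.0))  # -> "Unstable"
```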
<urn:uuid:881cbcab-c358-48bb-b551-00a1559f6bc9>
CC-MAIN-2021-43
https://nmtracking.org/epht-view/dataportal/indicator/view/CardioVasDiseaseHeartDeath.Year.NM_US.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00190.warc.gz
en
0.920892
2,537
2.734375
3
Types and classification of coal: as geological processes apply pressure to dead plant material, over time it is transformed into the following. Peat: this is not yet coal. Lignite, or brown coal: the lowest rank of coal, used almost exclusively as fuel for electric power generation. Jet is ... Coal mining's advent in South Africa can best be traced to the start of gold mining in the late 19th century, particularly on the Witwatersrand, with the first coal in appreciable tonnages extracted on the Highveld coalfield close to the nascent Witwatersrand gold mines. The geology of the South African coalfields: coal is found in South Africa in 19 coalfields, located mainly in KwaZulu-Natal, Mpumalanga, Limpopo and the Free State, with lesser amounts in Gauteng, the North West Province and the Eastern Cape. Coal beneficiation in South Africa: coal preparation research examined the washing characteristics of South African coal. In the end a compromise was adopted, with the coal being washed to between 12 and 15% ash. The South African iron and steel industry had to adapt to these levels of ash in the coke feedstock, and the cokes produced in South Africa therefore contained up to 20% ash. Some 47% of South Africa's coal is produced from bord-and-pillar mining, stoping and longwall mining, types of underground extraction which access coal up to 300 metres under the ground. Miners travel by lifts down a mine shaft to reach the depths of the mine, or enter by means of decline shafts or adits for shallower mines. What are the types of coal? There are four major types, or ranks, of coal. Rank refers to steps in a slow, natural process called coalification, during which buried plant matter changes into an ever denser, drier, more carbon-rich, and harder material. The four ranks are anthracite (the highest rank of coal), bituminous, sub-bituminous, and lignite. South Africa consumed 202,298,474 short tons of coal in 2016, ranking 7th in the world for coal consumption and accounting for about 17.8% of the world's total consumption of 1,139,471,430 tons. South Africa consumes 3,599,127 cubic feet of coal per capita every year (based on the 2016 population of 56,207,646 people), or 9,861 cubic feet per capita per day. The remainder of the coal we produce is used in domestic power generation in Australia and South Africa. We complement our own production with agency third-party and traded volumes to help ensure our customers get the quantities and qualities of coal they require. Electricity from coal: in most modern power stations in South Africa, coal is burned to heat water and convert it into steam. The steam is directed onto the blades of a turbine to make it rotate. This in turn rotates the magnetic rotor inside the coil to generate electricity. Once the steam has passed through the turbines, it must be cooled and ... Types of surface and underground mining methods: the context of the South African coal sector is then discussed, focusing on the history and the current status of the coal sector in South Africa. The role of the South African coal market within the global coal market is also discussed. In Mpumalanga, South Africa, Kendal Power Station, the third biggest station in South Africa, is a coal-fired power station.
It is situated in a coal-mining region. AEMFC's coal mine at Lafontaine, close to Ogies, is among the sources for Kendal Power Station; its main fuel source is coal. Overview of trade: in May 2021 South Africa exported ZAR156B and imported ZAR109B, resulting in a positive trade balance of ZAR47B. Between May 2020 and May 2021 the exports of South Africa increased by ZAR53.3B (52%) from ZAR103B to ZAR156B, while imports increased by ZAR22.9B (26.7%) from ZAR85.9B to ZAR109B. In terms of South Africa's previous Integrated Resource Plan for Electricity (IRP 2010-2030), a further 6250 MW of new coal-fired power plant was ... Most of South Africa's coal production is bituminous steam coal, with only 1.2% anthracite. Some 0.8% of the bituminous coal is converted, through beneficiation, to a coking coal product, some semi-soft, some straight. Only a few small and uneconomic deposits of lignite exist. Currently coal is by far the major energy source for South Africa, comprising around 80 percent of the country's energy mix. However, according to the 2019 Integrated Resource Plan (IRP), 24,100 MW of conventional thermal power sources, specifically coal, are likely to ... The lithologies that contain the coal deposits are Permian in age and assigned to the Karoo Supergroup (SACS, 1980; Johnson et al., 1996), and Karoo-type deposits are known from all of the countries described in this paper. The Karoo basin in South Africa is regarded as the type locality for the southern African coals (Cadle et al., 1993). This is somewhat misleading because, although the ... Coal is another very important mineral produced in South Africa. Very large deposits of coal are found beneath the Mpumalanga and northern Free State Highveld. Coal is used locally to generate electricity, but the majority of the mined coal is exported to East Asia and Europe. In South Africa, Anglo owns and operates nine mines, six of which produce 23 million tonnes per year of thermal coal for the local market and for ... The role of coal in South African industry dwindles in the AC (Africa Case) as gas and bioenergy are increasingly used, especially in steel production and in light industries. (South Africa electricity access solutions by type in the Africa Case.) ... 6.4 percent since 2004. African coal production rose 1.9 percent in 2005, compared with 2004 levels, accounting for about 5 percent of total world anthracite and bituminous coal production. Most of the increase in African coal production was attributable to South Africa, which alone accounted for 98 percent of the regional ... In response to the oil embargo against South Africa during the 1970s, the apartheid regime created Sasol to produce CTL fuels domestically and thereby increase the fuel security of South Africa (Mondliwa & Roberts, 2019). Since then, government policies and regulatory mechanisms have cultivated Sasol and CTL fuels in South Africa. In October 2019, South Africa's government announced that the country would increase its use of coal-fired energy, provoking outrage from climate groups. South Africa does not have 250 years of coal reserves, but still has a good amount of coal to explore, both for export and for local use. Consequently, 95% of the country's electricity supply comes from non-renewable energy sources, of which coal ...
Consequently, 95 of the countrys electricity supply comes from non-renewable energy sources, which coal Durban, KwaZulu-Natal, South Africa A coal mine in Mpumalanga is looking for an experienced Production Manager 2.6.1 to join their team. In 2019, production of bituminous coal for South Africa was 247,973 thousand short tons. Though South Africa production of bituminous coal fluctuated substantially in recent years, it tended to increase through 2000 - 2019 period ending at 247,973 thousand short tons in 2019. A dense coal, usually black, sometimes dark brown, often with well-defined bands of bright and dull material, used ... Black coal resources occur in New South Wales, Queensland, South Australia, Tasmania and Western Australia Figure 3.4 but New South Wales 23 and Queensland 63 have the largest share of Australias total identified in situ resources Figure 3.5. These two states are also the largest coal South Africas coal is obtained from collieries that range from among the largest in the world to small-scale producers. As a result of new entrants, operating collieries increased to 64 during 2004. Of these, a relatively small number of large-scale producers supply coal The remainder of South Africas coal production feeds the various local industries, with 53 used for electricity generation. The key role played by our coal reserves in the economy is illustrated by the fact that Eskom is the 7th largest electricity generator in the world, and Sasol the largest coal Jan 12, 2021 Unfortunately, analysts have it that coal will probably serve as the driver of energy in South Africa for the time being. In a study conducted by Statistics South Africa in 2017, 67 percent, 15 percent, 14 percent, 2 percent of the primary sources of South Africas energy supply in 2010 comes from coal, crude oil, petroleum, and nuclear power ... Oct 25, 2019 The South African coal mining sector produced 252.3 million tons of coal and contributed 2, to the countrys gross domestic product in 2017. The combined value of local sales and exports reached ... Mar 26, 2019 Trade-offs associated with a low-carbon transition are particularly acute in South Africa, a country with high levels of unemployment and inequality and an ambitious development agenda. South Africas exposure to coal mining as a source of export revenues, as a fuel for domestic power generation and as a key employer in certain provinces ... Lignite, or brown coal, the lowest rank of coal, used almost exclusively as fuel for electric power generation. Jet is a compact form of lignite that is sometimes polished and has long been used as an ornamental stone. Sub-bituminous coal is used as fuel for steam-electric power generation. Bituminous coal is a dense sedimentary rock, usually black, but sometimes dark brown. South Africas economically recoverable coal reserves are estimated at between 15 and 55 billion tonnes. 96 of reserves are bituminous coal metallurgical coal accounts for approximately 2 and anthracite another 2.2 Production is mainly steam coal of bituminous quality. The majority of South Africas reserves and mines are in the Central
<urn:uuid:5d2afc5a-a6ca-4535-9a9a-e944c3c5bea1>
CC-MAIN-2021-43
https://ma-foto.pl/news/0726_18084.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585242.44/warc/CC-MAIN-20211019043325-20211019073325-00350.warc.gz
en
0.942555
2,190
3
3
29 February 2020
In the footsteps of migrants
How a unique museum of personal migration stories from the past and the present is helping a city of 170 nationalities build a sense of belonging
Four months after she arrived in Antwerp, Colombian-born Laura Vargas was given a map of the city and asked to mark the five places most important to her. “I drew all the places that were helping to make me feel at home, starting with the museum, which I absolutely love because it's so incredibly welcoming,” she says. “As a migrant it's very nice to go somewhere that gives some importance to newcomers.” Laura's favourite museum, and the source of the maps used to tell the stories of people new to the city, is the Red Star Line Museum. And her beautifully illustrated map - she's an artist by profession - is now on display as part of its Safe harbour exhibition.
Setting out for a better life
The museum is housed in the former harbour warehouses of the Red Star Line, whose ocean steamers carried two million European emigrants from Antwerp to the United States between 1873 and 1934. But, as the Safe harbour project suggests, this is no ordinary museum of random objects, historical facts and generalised descriptions. When it was first decided to regenerate the port area in the 1990s, the initial idea was to turn the line's warehouses into a maritime museum. But then research started to unearth fascinating details about the passengers who had arrived at the port, their whole lives packed into a few suitcases, hoping for a better life across the ocean. The result? A vision for a museum of migration that takes visitors on a journey back in time and enables encounters with today.
How to make a museum relevant
Migration has been a fundamental feature of man's history since ancient times, but the past few decades have seen unprecedented human movement. The number of displaced people has risen by 65% in the past decade. And in recent years, a record 71 million people have been forcibly displaced by war and violence, according to the United Nations. As cities search for ways to integrate rising numbers of newcomers, cultural heritage is increasingly being seen as a way of encouraging and enabling participative initiatives and mutual understanding of each other's past and present. As they adjust to these changing demographics and needs, many museums have started to reassess their role as guardians of national or local culture. What they need to become, says Ekaterina Travkina, coordinator of culture, creative industries and local development at the OECD, are ‘facilitators of knowledge and hubs of living archives of local knowledge’. In his study of museums in the age of migration, Chris Whitehead, professor of museology at Newcastle University, brings out the value of this new role. ‘When museums become places where people can explore the realities of migration, transnational connections and human rights, they become even more relevant as cultural institutions and can help drive positive social change, encouraging solidarity and sustainable development.’ Antwerp has good reason to be part of this cultural shift - and not just because of its heritage as a departure port for emigrants. It is also an increasingly international city, passing the threshold of 50% of citizens with a migrant background in 2019.
The story of one Ukrainian girl
Following extensive research at many institutions, including the Ellis Island museum in New York, where the liners docked, to learn more about the personal histories of Red Star Line passengers, the museum opened in 2013. “We really believe in the power of personal stories and testimonies to provoke empathy and a sense of the universal experience of migration,” says the museum's director Karen Moeskops. She cites one particular story told in the museum of a mother who travelled over three years with her four children from Ukraine to Antwerp and then to the United States, where her husband was working. When they arrived at Ellis Island, the obligatory medical examination revealed that her nine-year-old daughter Ita had trachoma, a contagious eye disorder that meant she was turned away. “The mother had the heartbreaking choice to rejoin her husband with her three sons and send the girl back home alone for treatment, or to take all the children back to Europe and not see her husband for many years. “She chose to send her daughter back, and it would take her many attempts to be reunited with her family in the US. That choice is the tipping point in the experience for visitors, when they start to feel empathy, to imagine themselves in the story and to ask ‘what would I have done?’” This empathy sits at the heart of the museum's impact. For research demonstrates that empathy leads to a change in attitudes and actions. One 2019 study by psychologists at the universities of Belfast and Dublin reinforced this finding. It showed that children who listened to a storybook about the experience of a refugee soon to join their class subsequently showed more empathy and intention to help than those who were just told of the child's arrival.
Immersion in the migrant experience
Inside the doors of the historic warehouses, visitors step into the footsteps of emigrants fleeing poverty or persecution or looking for adventure, from the stops they made along the way from their homelands to their arrival in America. Reconstructions of a Warsaw travel agency, a train compartment, the deck of an ocean steamer and the interior of a ship provide the backdrop to families' stories. Videos, interactive computer games, documents, personal belongings and even smells help make their life-changing journeys, with their high expectations and deep disappointments, real. The final section of the exhibition focuses on emigrants' arrival at Ellis Island, their onward journeys to settlements across North America and the songs and newspapers typical of their new communities. These permanent exhibitions are only part of the story told within the museum's walls. Participatory projects have been part of its DNA from the start. The museum's mini van is a familiar sight in the city, collecting stories from current newcomers for its constantly evolving collections. Today's migrants and refugees are also invited to tell their stories their way through a programme of temporary displays and performances encompassing art, films, monologues and music.
Making the unimportant important
Newcomers are also drawn into the museum's orbit by specific projects.
Of one recent project, Moeskops says: “Our outreach team won the trust of refugees, people still in search of their place within our society, and recorded 40 interviews from all around the world about fear, suffering, courage, resilience and the power of imagination. With projects like this you see how empowering it is for people to share their stories - and the timelessness of the human aspect of migration.” For newcomer Vargas, this is the museum's strength. “It makes the people who are usually invisible visible and gives them a voice. It's about little Ita, who had to come back to Antwerp on her own, rather than famous passengers like Albert Einstein and Irving Berlin. The museum is also not looking just to stay in the old history but to bring that into the now and make a connection with the present migration situation.” In 2018, the museum brought history into the now in a novel, and neatly circular, way with its Rootseekers project. Working with senior school students originally from the United States, the museum's researchers uncovered stories about the lives their ancestors had built after disembarking from Red Star liners in New York. The students, and many of their American relatives, came together at the opening of the resulting Rootseekers exhibition in what Moeskops describes as “a beautiful bridging of past and present.”
Two groups of women - one vision
“Emigrants' stories are important because the Red Star Line has had a huge impact on the city of Antwerp,” says Moeskops. “But it's also a Belgian story, a European story, and a transatlantic story, and all these layers count. But for me, the universal story it tells makes this a relevant place for the future.” It is this common thread running through everything the museum does that convinces Moeskops the museum can be “an antidote to prejudice and discrimination just by providing the context and the stories for a very broad audience.” Reaching this broad audience is key for Moeskops. The museum attracts 100,000 visitors a year - 3,000 of whom are newcomers to the city - the highest figure of all the city's museums. But it is a moment's glimpse of two very different groups of visitors that delights her most. “I looked outside one morning and saw a group of classically dressed white women standing alongside a group of women from Somalia arriving for their Dutch language class wearing beautiful bright headscarves. I saw them look at each other as if to say, ‘are we both visiting the same museum’? That's what I find amazing, that we can attract both these groups.”
Falling in love
The Red Star Line Museum is a fine example of a cultural institution continually finding ways to bring its vision to life. In its case this means making meaningful connections between the city's history and today's citizens, newcomers and vulnerable groups - and staying connected. Laura Vargas is just one of the visitors who can vouch for this. Already a veteran of one project, she now has even closer ties to the museum. She's recently been asked to illustrate the story of her journey to Belgium, her husband's homeland, as part of a new exhibition. Through artwork, letters and personal belongings, Destination Sweetheart will tell the story of what it's like to leave home for love. There's one further reason why the museum helped her fall in love with her new city.
“They saw past the fact that my Dutch isn't perfect and encouraged me to participate as more than a spectator. Then they offered me training, and later a job, as a museum guide - that doesn't happen much to a migrant!”
Photo captions: One family's US migration story; The museum is housed in the old harbour warehouses; Verhalenbus @ Red Star Line Museum; Helping Belgians discover their ancestors' stories (© Victoriano Moreno).
<urn:uuid:c6268ff6-38db-475a-8437-57ad712ded15>
CC-MAIN-2021-43
https://www.100days.eurocities.eu/article/In-the-footsteps-of-migrants
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587908.20/warc/CC-MAIN-20211026134839-20211026164839-00030.warc.gz
en
0.964057
2,185
2.6875
3
This course assumes CS170, or equivalent, as a prerequisite. We will assume that the reader is familiar with the notions of algorithm and running time, as well as with basic notions of algebra (for example arithmetic in finite fields), discrete math and probability. General information about the class, including prerequisites, grading, and recommended references, is available on the class home page. Cryptography is the mathematical foundation on which one builds secure systems. It studies ways of securely storing, transmitting, and processing information. Understanding what cryptographic primitives can do, and how they can be composed together, is necessary to build secure systems, but not sufficient. Several additional considerations go into the design of secure systems, and they are covered in various Berkeley graduate courses on security. In this course we will see a number of rigorous definitions of security, some of them requiring seemingly outlandish safety, even against entirely implausible attacks, and we shall see how if any cryptography at all is possible, then it is also possible to satisfy such extremely strong notions of security. For example, we shall look at a notion of security for encryption in which an adversary should not be able to learn any information about a message given the ciphertext, even if the adversary is allowed to get encodings of any messages of his choice, and decodings of any ciphertexts of his choice, with the only exception of the one he is trying to decode. We shall also see extremely powerful (but also surprisingly simple and elegant) ways to define security for protocols involving several untrusted participants. Learning to think rigorously about security, and seeing what kind of strength is possible, at least in principle, is one of the main goals of this course. We will also see a number of constructions, some interesting for the general point they make (that certain weak primitives are sufficient to make very strong constructions), some efficient enough to have made their way into commercial products.
1. Alice, Bob, Eve, and the others
Most of this class will be devoted to the following simplified setting: Alice and Bob communicate over an insecure channel, such as the internet or a cell phone. An eavesdropper, Eve, is able to see the whole communication and to inject her own messages in the channel. Alice and Bob hence want to find a way to encode their communication so as to achieve: - Privacy: Eve should have no information about the content of the messages exchanged between Alice and Bob; - Authentication: Eve should not be able to impersonate Alice, and every time that Bob receives a message from Alice, he should be sure of the identity of the sender. (Same for messages in the other direction.) For example, if Alice is your laptop and Bob is your wireless router, you might want to make sure that your neighbor Eve cannot see what you are doing on the internet, and cannot connect using your router. For this to be possible, Alice and Bob must have some secret information that Eve ignores, otherwise Eve could simply run the same algorithms that Alice does, and thus be able to read the messages received by Alice and to communicate with Bob impersonating Alice. In the classical symmetric-key cryptography setting, Alice and Bob have met before and agreed on a secret key, which they use to encode and decode messages, to produce authentication information and to verify the validity of the authentication information.
In the public-key setting, Alice has a private key known only to her, and a public key known to everybody, including Eve; Bob too has his own private key and a public key known to everybody. In this setting, private and authenticated communication is possible without Alice and Bob having to meet to agree on a shared secret key. This gives rise to four possible problems (symmetric-key encryption, symmetric-key authentication, public-key encryption, and public-key authentication, or signatures), and we shall spend time on each of them. This will account for more than half of the course. The last part of the course will deal with a fully general set-up in which any number of parties, including any number of (possibly colluding) bad guys, execute a distributed protocol over a communication network. In between, we shall consider some important protocol design problems, which will play a role in the fully general constructions. These will be commitment schemes, zero-knowledge proofs and oblivious transfer.
2. The Pre-history of Encryption
The task of encoding a message to preserve privacy is called encryption (the decoding of the message is called decryption), and methods for symmetric-key encryption have been studied for literally thousands of years. Various substitution ciphers were invented in cultures having an alphabetical writing system. The secret key is a permutation of the set of letters of the alphabet, encryption is done by applying the permutation to each letter of the message, and decryption is done by applying the inverse permutation. Examples are:
- the Atbash cipher used for Hebrew, in which the first letter of the alphabet is replaced with the last, the second letter with the second-to-last, and so on. It is used in the book of Jeremiah.
- the cipher used by Julius Caesar, in which each letter is shifted by three positions in the alphabet. There are reports of similar methods used in Greece.
If we identify the alphabet with the integers {0, ..., k-1}, where k is the size of the alphabet, then the Atbash code is the mapping x -> k-1-x and Caesar's code is x -> x+3 (mod k). In general, a substitution code of the form x -> x+c (mod k) is trivially breakable because of the very small number of possible keys that one has to try. Reportedly, former Mafia boss Bernardo Provenzano used Caesar's code to communicate with associates while he was a fugitive. (It didn't work too well for him.) The obvious flaw of such substitution ciphers is the very small number of possible keys, so that an adversary can simply try all of them. Substitution codes in which the permutation is allowed to be arbitrary were used through the middle ages and modern times. In a 26-letter alphabet, the number of keys is 26!, which is too large for a brute-force attack. Such systems, however, suffer from easy total breaks because of the fact that, in any given language, different letters appear with different frequencies, so that Eve can immediately make good guesses for what are the encryptions of the most common letters, and work out the whole code with some trial and error. This was noticed already in the 9th century A.D. by the Arab scholar al-Kindi. Sherlock Holmes breaks a substitution cipher in The Adventure of the Dancing Men. For fun, try decoding the following message. (A permutation over the English alphabet has been applied; spaces have been removed before encoding.) Other substitution ciphers were studied, in which the code is based on a permutation over Σ^k, where Σ is the alphabet and k a small integer.
(For example, with k = 5 the code would specify a permutation over 5-tuples of characters.) Even such systems suffer from (more sophisticated) frequency analysis. Various tricks have been conceived to prevent frequency analysis, such as changing the permutation at each step, for example by combining it with a cyclic shift permutation. (The German Enigma machines used during WWII used multiple permutations, and applied a different shift on each application.) More generally, however, most classic methods suffer from the problem of being deterministic encryption schemes: if the same message is sent twice, the encryptions will be the same. This can be disastrous when the code is used with a (known) small set of possible messages. This xkcd cartoon makes this point very aptly. (The context of the cartoon is that, reportedly, during WWII, some messages were encrypted by translating them into the Navajo language, the idea being that there was no Navajo speaker outside of North America. As the comic shows, even though this could be a very hard permutation to invert without the right secret information, this is useless if the set of encrypted messages is very small.) Look also at the pictures of the two encodings of the Linux penguin on the Wikipedia page on block ciphers. Here is an approach that has a large key space, which prevents single-character frequency analysis, and which is probabilistic. Alice and Bob have agreed on a permutation P of the English alphabet Σ, and they think of it as a group, for example by identifying Σ with Z_26, the integers mod 26. When Alice has a message m to send, she first picks a random letter r, and then she produces an encryption (c1, c2) by setting c1 := P(r) and c2 := P(r + m). Then Bob will decode by setting m := P^{-1}(c2) - P^{-1}(c1). Unfortunately, this method suffers from two-character frequency analysis. You might try to amuse yourselves by decoding the following ciphertext (encoded with the above described method): As we shall see later, this idea has merit if used with an exponentially big permutation, and this fact will be useful in the design of actual secure encryption schemes.
3. Perfect Security and One-Time Pad
Note that if Alice only ever sends one one-letter message m, then just sending P(m) is completely secure: regardless of what the message is, Eve will just see a random letter. That is, the distribution (over the choice of the secret key P) of encodings of a message m is the same for all messages m, and thus, from the point of view of Eve, the encryption is statistically independent of the message. This is an ideal notion of security: basically Eve might as well not be listening to the communication, because the communication gives no information about the message. The same security can be obtained using a key of about log 26 ≈ 4.7 bits (instead of the roughly log 26! ≈ 88 bits necessary to store a random permutation) by Alice and Bob sharing a random letter r, and having Alice send m + r. In general, if Alice wants to send a message m ∈ {0,1}^n, and Alice and Bob share a random secret K ∈ {0,1}^n, then it is perfectly secure as above to send m ⊕ K. This encoding, however, can be used only once (think of what happens when several messages are encoded using this process with the same secret key) and it is called the one-time pad. It has, reportedly, been used in several military and diplomatic applications. The inconvenience of the one-time pad is that Alice and Bob need to agree in advance on a key as large as the total length of all messages they are ever going to exchange. Obviously, your laptop cannot use one-time pad to communicate with your base station.
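To make the m ⊕ K operation concrete, here is a minimal one-time pad sketch over byte strings; the helper names are illustrative assumptions, not part of the notes' notation. The guarantees discussed above hold only if the key is uniformly random, exactly as long as the message, and never reused.

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # Perfect secrecy needs a uniformly random key exactly as long
    # as the message, and the key must never be reused.
    if len(key) != len(message):
        raise ValueError("key must be as long as the message")
    return bytes(m ^ k for m, k in zip(message, key))

# Decryption is the same XOR, since (m XOR K) XOR K = m.
otp_decrypt = otp_encrypt

plaintext = b"attack at 9"
key = secrets.token_bytes(len(plaintext))   # fresh random pad
ciphertext = otp_encrypt(plaintext, key)
assert otp_decrypt(ciphertext, key) == plaintext
```

Encrypting two messages with the same key leaks their XOR (c1 ⊕ c2 = m1 ⊕ m2), which is exactly the "used only once" caveat above.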
Shannon demonstrated that perfect security requires this enormous key length. Without getting into the precise result, the point is that if you have an n-bit message and you use a k-bit key, k < n, then Eve, after seeing the ciphertext, knows that the original message is one of at most 2^k possible messages, whereas without seeing the ciphertext she only knew that it was one of 2^n possible messages. When the original message is written, say, in English, the consequence of a short key length can be more striking. English has, more or less, one bit of entropy per letter, which means (very roughly speaking) that there are only about 2^n meaningful n-letter English sentences, or only a 2^n/26^n fraction of all possible n-letter strings. Given a ciphertext encoded with a k-bit key, Eve knows that the original message is one of at most 2^k possible messages. Chances are, however, that only about 2^k · 2^n/26^n of such messages are meaningful English sentences. If k is small enough compared to n, Eve can uniquely reconstruct the original message. (This is why, in the two examples given above, you have enough information to actually reconstruct the entire original message.) When k ≪ n, for example if we use a 128-bit key to encrypt a 4GB movie, virtually all the information of the original message is available in the encryption. A brute-force way to use that information, however, would require trying all possible keys, which would be infeasible even with moderate key lengths. Above, we have seen two examples of encryption in which the key space is fairly large, but efficient algorithms can reconstruct the plaintext. Are there always methods to efficiently break any cryptosystem? We don't know. This is equivalent to the question of whether one-way functions exist, which is probably an extremely hard question to settle. (If, as believed, one-way functions do exist, proving their existence would imply a proof that P ≠ NP.) We shall be able, however, to prove the following dichotomy: either one-way functions do not exist, in which case any approach to essentially any cryptographic problem is breakable (with exceptions related to the one-time pad), or one-way functions exist, and then all symmetric-key cryptographic problems have solutions with extravagantly strong security guarantees. Next, we'll see how to formally define security for symmetric-key encryption, and how to achieve it using various primitives.
<urn:uuid:c7541c41-46ec-4721-a08f-656a14226dba>
CC-MAIN-2021-43
https://lucatrevisan.wordpress.com/2009/01/20/cs276-lecture-1-introduction/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00151.warc.gz
en
0.927903
2,639
3.5625
4
Some of the common words we use are frozen mistakes. The term influenza comes from the Italian word meaning “influence”—an allusion to the influence the stars were once believed to have on our health. European explorers searching for an alternate route to India ended up in the New World and uncomprehendingly dubbed its inhabitants indios, or Indians. Neuroscientists have a frozen mistake of their own, and it is a spectacular blunder. In the mid-1800s researchers discovered cells in the brain that are not like neurons (the presumed active players of the brain) and called them glia, the Greek word for “glue.” Even though the brain contains about a trillion glia—10 times as many as there are neurons—the assumption was that those cells were nothing more than a passive support system. Today we know the name could not be more wrong. Glia, in fact, are busy multitaskers, guiding the brain’s development and sustaining it throughout our lives. Glia also listen carefully to their neighbors, and they speak in a chemical language of their own. Scientists do not yet understand that language, but experiments suggest that it is part of the neurological conversation that takes place as we learn and form new memories. If you had to blame one thing for the mistaken impression about glia, it would have to be electricity. The 18th-century physiologist Luigi Galvani discovered that if he touched a piece of electrified metal to an exposed nerve in a frog’s leg, the leg twitched. He and others went on to show that a slight pulse of electricity moving through the metal to the nerve was responsible. For two millennia physicians and philosophers had tried to find the “animal spirits” that moved the body, and Galvani discovered that impetus: It was the stuff of lightning. Over the next two centuries scientists got a clearer understanding of how those signals work. When a branch at one end of a nerve cell, or neuron, is stimulated, an electric pulse races toward the main body of the cell. Other branches might send separate pulses at the same time. The main body of the neuron conveys those pulses to an outgoing arm, or axon, which splits into numerous branches, each of which nearly touches other neurons. The slight gap between two nerve cells is called a synaptic cleft. The signal-sending neuron pumps chemicals into the space, and the signal-receiving neuron takes up some of them, triggering a new electric pulse. All neurons have certain characteristic attributes: axons, synapses, and the ability to produce electric signals. As scientists peered at bits of brain under their microscopes, though, they encountered other cells that did not fit the profile. When impaled with electrodes, these cells did not produce a crackle of electric pulses. If electricity was the language of thought, then these cells were mute. German pathologist Rudolf Virchow coined the name glia in 1856, and for well over a century the cells were treated as passive inhabitants of the brain. At least a few scientists realized that this might be a hasty assumption. The pioneering neuroscientist Santiago Ramón y Cajal earned a Nobel Prize in 1906 for what came to be known as the neuron doctrine—the theory that neurons are the fundamental units of the brain. Ramón y Cajal didn’t think glia were necessarily just glue, however. Instead, he thought they were a mystery—a mystery, he wrote, that “may remain unsolved for many years to come until physiologists find direct methods to attack it.” Today the mystery of glia is partially solved. 
Biologists know they come in several forms. One kind, called radial glia, serves as a scaffolding in the embryonic brain. Neurons climb along these polelike cells to reach their final location. Another kind of glia, called microglia, is the brain’s immune system. They clamber through the neurological forest in search of debris from dead or injured cells. A third class of glia, known as Schwann cells and oligodendrocytes, forms insulating sleeves around neurons to keep their electric signals from diffusing. But the more neuroscientists examine glia, the more versatile these cells turn out to be. Microglia do not just keep the brain clean; they also prune away extra branches on neurons to help fine-tune their developing connections. Oligodendrocytes and Schwann cells don’t just insulate cells; they also foster new synapses between neurons. And once radial glia are finished helping neurons move around the developing brain, they don’t die. They turn into another kind of glia, called astrocytes. Astrocytes—named for their starlike rays, which reach out in all directions—are the most abundant of all glial cells and therefore the most abundant of all the cells in the brain. They are also the most mysterious. A single astrocyte can wrap its rays around more than a million synapses. Astrocytes also fuse to each other, building channels through which molecules can shuttle from cell to cell. All those connections put astrocytes in a great position to influence the goings-on in the brain. They also have receptors that can snag a variety of neurotransmitters, which means that they may be able to eavesdrop on the biochemical chatter going on around them. Yet for a long time, neuroscientists could not find any sign that astrocytes actually responded to signals from the outside. Finally, in 1990, neuroscientist Ann Cornell-Bell at Yale discovered what seemed to be a solution to the mystery. It turned out that astrocytes, like neurons, can react to neurotransmitters—but instead of electricity, the cells produce waves of charged calcium atoms. The calcium comes from sealed packets scattered through the astrocytes. When stimulated, the cells rip open the calcium packets in the ray that first senses the neurotransmitters, triggering the opening of other packets elsewhere in the cell. The astrocytes then stash the calcium atoms back in their packets, only to unleash them again when next stimulated. Cornell-Bell noticed that a wave of such activity that started in one astrocyte could spread to other astrocytes. Several research teams also discovered that astrocytes themselves release powerful neurotransmitters. They can produce glutamate (which excites neurons so that they are more likely to respond to a signal from another neuron) and adenosine (which can blunt a neuron’s sensitivity). For some brain scientists, these discoveries are puzzle pieces that are slowly fitting together into an exciting new picture of the brain. Piece one: Astrocytes can sense incoming signals. Piece two: They can respond with calcium waves. Piece three: They can produce outputs—neurotransmitters and perhaps even calcium waves that spread to other astrocytes. In other words, they have at least some of the requirements for processing information the way neurons do. Alfonso Araque, a neuroscientist at the Cajal Institute in Spain, and his colleagues make a case for a fourth piece. They find that two different stimulus signals can produce two different patterns of calcium waves (that is, two different responses) in an astrocyte.
When they gave astrocytes both signals at once, the waves they produced in the cells were not just the sum of the two patterns. Instead, the astrocytes produced an entirely new pattern in response. That’s what neurons—and computers, for that matter—do. If astrocytes really do process information, that would be a major addition to the brain’s computing power. After all, there are many more astrocytes in the brain than there are neurons. Perhaps, some scientists have speculated, astrocytes carry out their own computing. Instead of the digital code of voltage spikes that neurons use, astrocytes may act more like an analog network, encoding information in slowly rising and falling waves of calcium. In his new book, The Root of Thought, neuroscientist Andrew Koob suggests that conversations among astrocytes may be responsible for “our creative and imaginative existence as human beings.” Until recently, studies of astrocytes examined only a few cells sitting in a petri dish. Now scientists are figuring out how to observe astrocytes in living animals and learning even more about the cells’ abilities. Axel Nimmerjahn of Stanford University and his colleagues, for instance, developed a way to mount microscopes on the skulls of mice. To watch the astrocytes, they inject molecules into the mice that glow when they bind to free calcium. Whenever a mouse moves one of its legs, Nimmerjahn and his colleagues can see a little burst of calcium waves. In some cases, hundreds of astrocytes may flare up at once, and the flares can last as long as several seconds. Astrocytes are also vital for synapses. Stanford University neuroscientist Ben Barres and his colleagues found that neurons that grew with astrocytes formed nearly 10 times as many synapses as neurons growing without them, and the activity in those synapses was nearly 100 times greater. Since synapses change as we learn and form new memories, Marie E. Gibbs of Monash University in Australia suspected that astrocytes might be important to our ability to learn. To test that idea, she and her colleagues gave chicks colored beads to peck at. The red beads were coated in a bitter chemical; usually a single peck was enough to make the chicks learn never to peck a red bead again. But when they were injected with a drug that prevented astrocytes from synthesizing glutamate, the birds were unable to remember the bad taste and would peck at the beads again. But these sorts of experiments have not swayed some skeptics. If the calcium waves really are so important, for instance, you would expect that a genetically engineered mouse that couldn’t make calcium waves would be one sorry rodent. Ken McCarthy, a neuroscientist at the University of North Carolina at Chapel Hill, and his colleagues engineered mice to grow astrocytes that lack a key protein required to pry open their calcium packets. These mice grew up to be indistinguishable from ordinary ones, for reasons still unclear. There is something marvelous in the fact that we barely understand what most of the cells in our brains are doing. Beginning in the 1930s, astronomers realized that all the things they could see through their telescopes—the stars, the galaxies, the nebulas—make up just a small fraction of the total mass of the universe. The rest, known as dark matter, still defies their best attempts at explanation. Between our ears, it turns out, each of us carries a personal supply of dark matter as well.
Current public discourse teems with allegations of mistreatment. Two aspects of these proliferating claims are striking: their source — that they emerge from above rather than below — and the absence of the term “victim” itself. It’s not the poor, marginalized, or dispossessed, but those who occupy positions of privilege and power who seem most eager to assume the victim mantle today, those Lawrence Glickman classifies as “elites.”

On the one hand, this is not new: American history is littered with examples of dominant groups deploying iconic imagery of suffering and appropriating paradigms of others’ oppression to vividly express their grievances. Our nation’s revolution was founded on rich and powerful slave owners bemoaning new taxation as their own ultimate form of enslavement, after all.

On the other hand, the recent outpouring and logic of such declarations warrant further reflection, especially when those who merit victim status by the most stringent criteria disavow the designation, preferring “survivor” instead. Even President Trump, whose tweets are regularly filled with complaints about witch-hunts, lynching, and other forms of supposed presidential persecution, never uses “victim” to decry his treatment. In order to grasp this odd state of affairs, we need to understand how victimhood became tainted. To do so, I will trace the evolution of “victimology.”

What is victimology?

The term “victimology” has two antithetical meanings. The first names an area of research, often considered a branch of criminology, that is focused on crime victims. This victimology emerged in the 1930s, when the French barrister Benjamin Mendelsohn sought to establish a new, scientific discipline to uncover and aggregate the “whole of the socio-bio-psychological traits common to all victims.” As a contribution to this etiological enterprise, he constructed a scale of victims’ responsibility, ranging from “complete innocence” to “ignorant guilt” and “false victimization.”

The field was formally established in the aftermath of the Second World War with the emergence of new categories of crime. “Crimes against humanity” was one. In 1948, the German researcher Hans von Hentig proposed what became the foundational paradigm for this first kind of victimology: “victim precipitation.” Criminals, von Hentig reasoned, select their victims based on certain dispositional factors, in addition to physical or social characteristics. In this view, the paths of the criminal and his prey were on an almost inevitable collision course: the victim was always already a victim, vulnerable and “perceived by the offender to be performing the role of victim, and…therefore an appropriate target.”

In its second usage, “victimology” denotes individuals and social groups invested in portraying themselves as victims, a status they claim for themselves, not one that is assigned to them by way of official criteria. The lineage of this colloquial meaning can be traced to the 1980s, when a new and cynical conception of “victim” was used to dismantle the welfare state and challenge multiculturalism, identity politics, and progressive policies such as affirmative action. Other disdainful victim idioms surfaced in tandem (e.g., “victimist,” “victimism,” “victicrat”), and “victim” itself became a term of derision deployed to condemn the character of sufferers irrespective of their condition and to chastise them for enfeebling and effeminizing the nation.
This alternative use of “victimology” was integral to a campaign I call “anti-victimism.” Anti-victimists consider most assertions of victimization to be fraudulent, generated either by imposters (who are neither harmed nor deprived by any sensible standard) or by swindlers (who exploit their disadvantages to achieve gains incommensurate with their actual circumstances). Here the victim’s status as a victim is not accepted but contested, and “victimology” is used to criticize groups and individuals as fake adopters of the victim label who seek to manipulate others and extract undeserved rewards. Instead of guiding us in how we might better evaluate victim claims, anti-victimists concentrate on claimants’ psychological state, their “victim mentality”: the charge that they blame others rather than accept personal responsibility for their condition.

Distinctions between deserving and undeserving sufferers are hardly new, and disentangling misfortune from injustice can be politically necessary. But individualizing systemic power and oppression as matters of choice and personal responsibility is a more recent and problematic development. Indeed, “victim mentality” entered our lexicon only three decades ago. It functions as a synonym for “victimhood,” a word coined much earlier but not previously in everyday use. A pernicious discursive tool of neoliberalism, anti-victimism depoliticizes injustice by casting it as a matter of personal attitudes or feelings. It then becomes extremely difficult to address institutional hierarchy or privilege, systemic domination, and the pervasive social injustice that elevates some by subordinating others. After all, no one needs to be a victim because each of us could be self-determining if only we possessed the right character.

Now, decades after the cultural skirmishes of the 1990s that were crucial to disseminating anti-victimism, it has returned with a vengeance. The current targets have different names — #BlackLivesMatter, #MeToo, trigger warnings, micro-aggressions — but anti-victim discourse has not substantially changed. Once again, we hear that America is imperiled by a new moral culture that, as sociologists Bradley Campbell and Jason Manning put it, displaces “standards of honor and dignity.” Simultaneously, new and dubious claims to victim status have emerged from dominant groups. Condemnations of “victimhood culture” are widespread, especially on the right, and yet in the White House our commander-in-chief professes to be a victim of a nefarious conspiracy (subjected to “presidential harassment”) and to represent “the forgotten men and women” of America who without him would be, as they were before, victimized by China, the EU, and cultural elites.

Ironies (or, more precisely, hypocrisies) aside, what is perhaps more remarkable is how the anti-victimist use of the term “victim” has penetrated mainstream social science scholarship in recent years.
In a glossary of core concepts featured in Routledge’s Introduction to Political Theory (2015), for instance, “victimhood” is defined as “a belief — usually from victims — that their plight is caused by…others who must be blamed and punished, as a substitute for actively seeking the roots of their problem.” Likewise, the entry for “victim” in the University of Pittsburgh’s “Keywords Project” explains: “The identity of a victim has been transformed, from being inflicted to one voluntarily adopted.” The authors clarify, “[t]his is almost certainly because individuals (and groups) have come to be identified as victims, not because…of what has happened to them, but rather because of who they are” [emphasis added]. Even more astonishing is that anti-victimism now appears in the dictionary. Merriam-Webster lists both definitions of “victimology,” offering as an example of proper usage: “Yes, victimology has actually become something of a competition, particularly on college campuses.”

Although the two uses of “victimology” emerged from different contexts, both ultimately serve to curb the population of victims. And both do so primarily by finding fault within victims themselves. This has become our common sense about victimhood. The customary injunction — “Don’t be a victim” — conveys this message concisely. Unlike other warnings (e.g., “Mind the gap,” “Steep hill ahead”), the imperative simultaneously instructs us to avoid the possibility of being victimized and to reject the status of victimhood itself. Replacing the verb (“victimize”) with the noun (“victim”) syntactically reconfigures victimization as a function of risk and choice, depicting the victim’s behavior as the causal factor rather than the macrostructures of violence that render some more vulnerable than others. That the injunction frequently appears with the in place of a (“Don’t be the victim”) underscores the centrality of the subject position over the fact of injury. Another common construction, “Don’t play the victim,” further obscures the division between subjection to harm and performativity.

I’m not a victim; I’m a survivor

Much like “victim,” “survivor” has become a keyword of our era. Until the second half of the 20th century, the term “survivor” referred simply to individuals who outlived others in the aftermath of a disaster, or who stood to inherit the remains of an estate. With the notable exception of the 19th-century Social Darwinist expression “survival of the fittest,” a survivor was not seen as possessing any exceptional or laudatory qualities. But today the designation “survivor” has been eagerly adopted by those who have endured a variety of injuries, ailments, or hardships — from rape, sexual harassment, domestic violence, and child abuse, to cancer, AIDS, gun violence, drug addiction, and even divorce. In these broader applications “survivor” connotes agency (braving a traumatic event) and/or an accomplishment (overcoming the physical and emotional consequences of such an event). Survivorship now abounds with positive attributes, signifying personal fortitude and courage, and insinuating a heroic moral stature. Since the 1970s it has even attained a ritualistic quality, expressed in speak-outs, marches, and fundraisers that celebrate resilience. Detached from the material and structural causes of victimization, the current use of “survivor” has a different temporality — sequential rather than coterminous.
RAINN, the largest anti-sexual violence organization in the United States, advises that the term “victim” should be used when referring to someone recently affected by sexual violence, whereas “survivor” should be used for someone who has successfully completed the recovery process. Predictably, the internet has become a resource for those who might be teetering on the brink of embracing victimhood. Helloflo (a popular app offering “fem-spiration”) puts it more plainly: “There’s a sense of mobility with the word ‘survivor.’ [It] implies progression over stagnancy, and serves as a term of empowerment.”

Like victimhood, survivorship has become a subject position that can, and should, be chosen. Being a victim or a survivor has little to do with vulnerabilities, injuries, or injustices themselves. Rather it is an expression of “who you are.” Consider the comments of the Austrian Natascha Kampusch. Abducted at the age of 10 and imprisoned in a basement for eight years, she told the press upon her release: “I am not a victim simply because other people say I am. Other people cannot make you a victim; you can only do that to yourself.” The UN Goodwill Ambassador responsible for addressing human trafficking expresses a similar sentiment: “[T]he use of the terminology ‘victim,’ is synonymous with weakness[,]…with shame. The people that I have met…are survivors, they are resourceful, alive, and productive.”

At this political juncture, a therapeutic rationale that may have helped victims of harrowing experiences (from rape to genocide) “work through” and recover emotionally from trauma intersects with the anti-victimist discourse that attributes personal failure to individuals and groups who politicize their suffering. It is hardly surprising, therefore, that genuine victims now choose to rebuff the “victim” designation, embracing “survivor” instead. Feminists have enthusiastically endorsed this lexical change as less passive, negative, and disempowering. And victim advocates (such as those working in battered women’s shelters) followed suit by renaming their organizations “survivors’ agencies.”

This preference for “survivor” not only ignores the role of anti-victimism in distorting our understanding of victimization. It also disregards how hard-fought the effort was, in the case of sexual violation for instance, to establish that naturalized heterosexist behavior can be a form of violence, and that the rape victim is, in fact, a “victim” of both a particular individual and of a larger patriarchal system. Before the mid-1970s, the term used in courts was “prosecutrix,” and defense attorneys still maintain that referring to a rape victim as a “victim” is prejudicial. Furthermore, today, while victims of sexual assault and their supporters elevate survivorship as empowering, the Trump Justice Department has raised the bar on the criteria required to establish harm, making these crimes more difficult to prosecute. Changes enacted without much attention restrict the DOJ’s definition of domestic violence to physical abuse, and also roll back Title IX’s standard of “affirmative consent.”

Why focus on a word?

Does it matter if victims call themselves “survivors” rather than “victims”? As we can see, the term performs important political work by turning attention from the sources of injustice and injury to how the individual sufferer grapples with her suffering. We need, therefore, to wrest “victimology” from its current uses — as a name for a subfield of criminology, or as a bludgeon to shame victims.
And we should not cede to dominant groups the political potency of “victim” as a way to call foul. Following recent theorizing of epistemic injustice, I want to suggest that victimology 3.0 might instead designate victims’ distinctive perspective, which we reflexively mistrust because it inevitably disrupts and contests the status quo. Victims are never only victims. But in order for those who have been victimized to share their knowledge, they must be able to speak as victims.

I am not suggesting that “victim” has some inherent valence that is absent in other terms that constitute the vocabulary of injustice. At the same time, reclaiming “victim” — as a term of political engagement — would constitute a critical step in dismantling anti-victimism and destigmatizing victimization, thereby opening the possibility of actually addressing injustice. Tackling injustice requires more than comprehending what is wrong. We also need to grasp how it operates: injustice often works through indirect means (such as ignorance and apathy) rather than deliberate violations. The victim’s point of view is essential to uncovering these subtle routes that injustice can take. It is not that the victim’s perspective is necessarily more accurate or more comprehensive, but that we are inclined to look away, to recast injustice as mere misfortune, and thus to dismiss victimization and silence its victims. By reflecting on the changes in how we talk about victimization and survivorship, we can see how language regulates our understandings of suffering and injustice, rendering some matters unspeakable.

E.g., Shelby Steele, The Content of Our Character (1990); Dinesh D’Souza, Illiberal Education (1991); Charles Sykes, A Nation of Victims (1992); Robert Hughes, Culture of Complaint (1993); Naomi Wolf, Fire with Fire (1993); Katie Roiphe, The Morning After (1993); Alan Dershowitz, The Abuse Excuse (1994); Christina Hoff Sommers, Who Stole Feminism? (1994); Rene Denfeld, The New Victorians (1995).

E.g., Bruce Bawer, The Victims’ Revolution (2012); Diane Enns, The Violence of Victimhood (2012); Robert Juliano, Cry Bullies (2017); Joseph Epstein, Victimhood: The New Virtue (2017); Bradley Campbell & Jason Manning, The Rise of Victimhood Culture (2018); Greg Lukianoff & Jonathan Haidt, The Coddling of the American Mind (2018).
Tarasov, BG 2017, 'Shear ruptures of extreme dynamics in laboratory and natural conditions', in J Wesseloo (ed.), Deep Mining 2017: Proceedings of the Eighth International Conference on Deep and High Stress Mining, Australian Centre for Geomechanics, Perth, pp. 3-50, https://doi.org/10.36487/ACG_rep/1704_0.1_Tarasov

In the Earth’s crust, shear ruptures are responsible for the macroscopic dynamic failures that cause earthquakes. Shear ruptures induced or triggered by mining-induced stress changes sometimes result in damaging rockbursts. The fundamental mechanism of the shear rupture is critically linked to the magnitude of ground motion and, hence, to any resulting damage. For the effective management of seismic hazard from both natural and mining-related causes, a comprehensive understanding of the fundamental mechanism of the shear rupture is crucial. In recent years it has been observed that shear ruptures can propagate with extreme velocities exceeding the shear wave speed. Experiments show that a remarkable feature of extreme ruptures is that friction reduces toward zero in the rupture head. Coseismic reduction in friction is critical to the acceleration of fault slip and to the magnitude of ground shaking, which governs the amount of potential earthquake and rockburst damage. Despite this critical importance, the physical processes that determine the dramatic weakening of friction are still unclear and continue to be vigorously debated. The second unresolved question concerns the source of energy that provides extreme rupture dynamics.

This paper shows that the nature of extreme ruptures in intact rocks and in pre-existing faults with frictional and coherent interfaces is the same. It demonstrates that in all types of extreme ruptures, the fault weakening can be explained by a recently proposed shear rupture mechanism associated with the intensive tensile-cracking process in the rupture tip observed for all extreme ruptures. The tensile-cracking process creates, in certain conditions, a fan-like fault structure whose shear resistance is extremely low. The fan structure represents the basis of a self-sustaining natural mechanism of stress intensification in the rupture head, providing the driving power for rupture propagation at extreme velocities. The fan-mechanism causes dramatic embrittlement of intact hard rocks under high stress and makes the transient strength of intact hard rocks during rupture propagation significantly less than the frictional strength. This paper introduces features of the fan-mechanism operation in primary ruptures and in natural complex faults, and proposes an alternative view on the nature of earthquakes and shear rupture rockbursts generated by extreme ruptures.

Keywords: supershear, extreme rupture, fan-mechanism, Ortlepp shears, rockburst, earthquake
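To put the phrase 'extreme velocities exceeding the shear wave speed' on a scale, the shear wave speed of a rock follows from its shear modulus G and density rho as v_s = sqrt(G/rho). The following is a minimal sketch, not drawn from the paper: the granite-like material constants and the sample rupture velocities are assumed values for illustration only.

```python
import math

# Illustrative supershear check. The material constants below are
# generic granite-like values assumed for this sketch; the paper does
# not supply them.
G = 24e9       # shear modulus, Pa (assumed)
rho = 2700.0   # density, kg/m^3 (assumed)

v_s = math.sqrt(G / rho)   # shear wave speed, v_s = sqrt(G / rho)
print(f"shear wave speed: about {v_s:.0f} m/s")

# Hypothetical rupture velocities, classified against v_s.
for v_rupture in (1500.0, 3000.0, 4500.0):
    regime = "supershear (extreme)" if v_rupture > v_s else "sub-shear"
    print(f"rupture at {v_rupture:.0f} m/s -> {regime}")
```

With these assumed values the shear wave speed comes out near 3 km/s, so only the fastest of the three hypothetical ruptures would qualify as extreme in the sense used above.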
Tuyển Sinh Số has posted the 2020 THPT National Exam practice test for English, test code 414, together with the answer key, below.

Mark the letter A, B, C, or D on your answer sheet to indicate the word whose underlined part differs from the other three in pronunciation in each of the following questions.

Mark the letter A, B, C, or D on your answer sheet to indicate the word that differs from the other three in the position of primary stress in each of the following questions.

Mark the letter A, B, C, or D on your answer sheet to indicate the correct answer to each of the following questions.

Question 5. Universities send letters of ______ to successful candidates by post.

Question 6. I kept them in the ___________. A. black metal small box B. small black metal box C. small metal black box D. metal black small box

Question 7. A university is an institution of higher education and research, which grants _______ degrees at all levels in a variety of subjects.

Question 8. Dad is always willing to __________ a hand with cleaning a house.

Question 9. _______ is a sport in which people or teams race against each other in boats with oars. D. Water polo

Question 10. He is the man _______ car was stolen last week.

Question 11. Clearing forests for timber has resulted _______ the loss of biodiversity.

Question 12. Lots of houses _______ by the earthquake last week. A. are destroyed B. have been destroyed C. had been destroyed D. were destroyed

Question 13. The boy ___________ next to me is my son. A. Who sit D. is sitting

Question 14. _______ you study for these exams, _______ you will do. A. The harder / the better B. The more / the much C. The hardest / the best D. The more hard / the more good

Question 15. Two tablets ________ twice a day to have you recover from the illness quickly. A. must take B. must be taken C. must have taken D. must be taking

Question 16. I don't think Peter will come with us, _______? A. do I B. will he C. don't I D. won't he

Mark the letter A, B, C or D on your answer sheet to indicate the most suitable response to complete each of the following exchanges.

Question 17. Two friends, Laura and Maria, are talking about Maria's house. Laura: "What a lovely house you have!" Maria: "___________". A. Thank you. Hope you will drop in B. Of course not, it's not costly C. I think so D. No problem

Question 18. "Would you like to have coffee, lemonade, or something different?" Cathy: "_______________". A. I'm afraid not. B. Yes, please. C. Anything will do. D. Never mind.

Mark the letter A, B, C, or D on your answer sheet to indicate the word(s) OPPOSITE in meaning to the underlined word(s) in each of the following questions.

Question 19. Advanced students need to be aware of the importance of collocation. A. of great importance B. of high level C. of low level

Question 20. School uniform is compulsory in most of Vietnamese schools.

Mark the letter A, B, C, or D on your answer sheet to indicate the word(s) CLOSEST in meaning to the underlined word(s) in each of the following questions.

Question 21. Whenever problems come up, we discuss them frankly and find solutions quickly.

Question 22. Bone and ivory are light, strong and accessible materials for Inuit artists.

Mark the letter A, B, C, or D on your answer sheet to indicate the underlined part that needs correction in each of the following questions.

Question 23. Both Tom and Mary, as well as John is ready for the exam.

Question 24.
A certificate is an official document starting that you have passed an examination, completing a course achieve some necessary qualifications.

Question 25. After driving for twenty miles, he suddenly realized that he has been driving in the wrong direction. A. After driving B. suddenly realized C. has been driving D. in the wrong direction

Read the following passage and mark the letter A, B, C, or D on your answer sheet to indicate the correct word or phrase that best fits each of the numbered blanks.

WHY DO ANIMALS GO EXTINCT?

Different kinds of animals have appeared and disappeared throughout Earth's history. Some animals go extinct because the climate (26)_______ they live changes. The climate may become wetter or drier. It may become warmer or cooler. If the animals cannot change, or adapt, to the new climate, they die. Some animals go extinct because they cannot (27)_______ with other animals for food. Some animals go extinct because they are killed by enemies. New kinds of animals are always evolving. Evolving means that the animals are changing (28)_______ from generation to generation. Small differences between parents, children, and grandchildren slowly add up over many, many generations. Eventually, a different kind of animal evolves. Sometimes many of the animals on Earth go extinct at the (29)_______ time. Scientists call this a mass extinction. Scientists think there (30)_______ at least five mass extinctions in Earth's history. The last mass extinction happened about 65 million years ago. This mass extinction killed off the dinosaurs.

Question 30. A. has been B. have been C. will be

Read the following passage and mark the letter A, B, C, or D on your answer sheet to indicate the correct answer to each of the questions.

After twenty years of growing student enrollments and economic prosperity, business schools in the United States have started to face harder times. Only Harvard's MBA School has shown a substantial increase in enrollment in recent years. Both Princeton and Stanford have seen decreases in their enrollments. Since 1990, the number of people receiving Masters in Business Administration (MBA) degrees has dropped about 3 percent to 75,000, and the trend of lower enrollment rates is expected to continue. There are two factors causing this decrease in students seeking an MBA degree. The first one is that many graduates of four-year colleges are finding that an MBA degree does not guarantee a plush job on Wall Street, or in other financial districts of major American cities. Many of the entry-level management jobs are going to students graduating with Master of Arts degrees in English and the humanities as well as those holding MBA degrees. Students have asked the question, "Is an MBA degree really what I need to be best prepared for getting a good job?" The second major factor has been the cutting of American payrolls and the lower number of entry-level jobs being offered. Business needs are changing, and MBA schools are struggling to meet the new demands.

Question 31. What is the main focus of this passage? A. Jobs on Wall Street B. Types of graduate degrees C. Changes in enrollment for MBA schools D. How schools are changing to reflect the economy

Question 32. The word "prosperity" in the first paragraph could be best replaced by which of the following?

Question 33. Which of the following business schools has shown an increase in enrollment?

Question 34. Which of the following descriptions most likely applies to Wall Street? A. a center for international affairs
B. a major financial center C. a shopping district D. a neighborhood in New York

Question 35. According to the passage, what are two causes of declining business school enrollments? A. lack of necessity for an MBA and an economic recession B. low salary and foreign competition C. fewer MBA schools and fewer entry-level jobs D. declining population and economic prosperity

Question 36. As used in the second paragraph, the word "struggling" is closest in meaning to _________.

Question 37. Which of the following might be the topic of the next paragraph? A. MBA schools' efforts to change B. Future economic predictions C. A history of the recent economic changes D. Descriptions of non-MBA graduate programs

Mark the letter A, B, C, or D on your answer sheet to indicate the sentence that best combines each pair of sentences in the following questions.

Question 38. The gift is very expensive. He gave it to me on my 18th birthday. A. The gift which he gave to me on my 18th birthday is very expensive. B. The gift to which he gave me on my 18th birthday is very expensive. C. The gift that he gave it to me on my 18th birthday is very expensive. D. The gift is very expensive, which he gave to me on my 18th birthday.

Question 39. He is very intelligent. He can solve all the problems in no time. A. So intelligent is he that he can solve all the problems in no time. B. He is very intelligent that he can solve all the problems in no time. C. An intelligent student is he that he can solve all the problems in no time. D. So intelligent a student is he that he can solve all the problems in no time.

Mark the letter A, B, C, or D on your answer sheet to indicate the sentence that is closest in meaning to each of the following questions.

Question 40. Much to my surprise, I found her lecture on wild animals extremely interesting. A. Contrary to my expectations, her lecture on wild animals was the most fascinating of all. B. I was fascinated by what she said in her lecture on wild animals though I hadn't expected to be. C. I hadn't expected her to lecture on wild animals, but she spoke well. D. It was at her lecture on wild animals that I realized I needed to study it.

Question 41. I had no sooner got to know my neighbors than they moved away. A. Soon after I got to know my new neighbors, I stopped having contact with them. B. If my new neighbors had stayed longer, I would have got to know them better. C. Once I had got used to my new neighbors, they moved somewhere else. D. Hardly had I become acquainted with my new neighbors when they went somewhere else to live.

Question 42. No one has ever seen the old man again since then. A. The old man has not been seen again by anyone since then. B. The old man has never seen anyone since then. C. The old man was not seen by anyone since then. D. The old man has never been seen again since then.

Read the following passage and mark the letter A, B, C, or D on your answer sheet to indicate the correct answer to each of the questions.

While watching sports on TV, the chances are children will see professional players cheating, having tantrums, fighting, or abusing officials. In addition, it's highly likely that children will be aware of well-known cases of sportspeople being caught using drugs to improve their performance. The danger of all this is that it could give children the idea that winning is all that counts and you should win at all costs. Good behavior and fair play aren't the message that comes across. Instead, it looks as if cheating and bad behavior are reasonable ways of getting what you want.
This message is further bolstered by the fact that some of these sportspeople acquire enormous fame and wealth, making it seem they are being handsomely rewarded either despite or because of their bad behavior.

What can parents do about this? They can regard sport on television as an opportunity to discuss attitudes and behavior with their children. When watching sports together, if parents see a player swearing at the referee, they can get the child's opinion on that behavior and discuss whether a player's skill is more important than their behavior. Ask what the child thinks the player's contribution to the team is. Point out that no player can win a team game on their own, so it's important for members to work well together.

Another thing to focus on is what the commentators say. Do they frown on bad behavior from players, think it's amusing or even consider it a good thing? What about the officials? If they let players get away with a clear foul, parents can discuss with children whether this is right and what effect it has on the game. Look too at the reactions of coaches and managers. Do they accept losing with good grace or scowl and show a bad attitude? Parents can use this to talk about attitudes to winning and losing and to remind children that both are part of sport.

However, what children learn from watching sports is by no means all negative, and parents should make sure they accentuate the positives too. They should emphasise to children the high reputation that well-behaved players have, not just with their teammates but also with spectators and the media. They can focus on the contribution made by such players during a game, discussing how valuable they are in the team. In the interviews after a game, point out to a child that the well-behaved sportspeople don't gloat when they win or sulk when they lose. And parents can stress how well these people conduct themselves in their personal lives and the good work they do for others when not playing. In other words, parents should get their children to focus on the positive role models, rather than the antics of the badly behaved but often more publicized players.

(Adapted from "New English File – Advanced" by Will Maddox)

Question 43. Which of the following does the passage mainly discuss? A. The importance of team spirit in sport B. The influence of model sportspeople on children C. Moral lessons for children from watching sports D. Different attitudes toward bad behavior in sport

Question 44. The word "bolstered" in paragraph 1 is closest in meaning to _______.

Question 45. According to paragraph 1, misconduct exhibited by players may lead children to think that _______. A. it is an acceptable way to win the game. B. it is necessary in almost any game. C. it brings about undesirable results. D. it is disadvantageous to all concerned.

Question 46. According to paragraph 2, what should parents teach their children through watching sports? A. Cheating is frowned upon by the majority of players. B. A team with badly-behaved players will not win a game. C. A player's performance is of greater value than his behavior. D. Collaboration is fundamental to any team's success.

Question 47. The word "accentuate" in paragraph 4 can be best replaced by _______.

Question 48. The word "They" in paragraph 4 refers to _______.

Question 49. Which of the following about sport is NOT mentioned in the passage? A. Misconduct from sportspeople may go unpunished despite the presence of officials.
B. A well-behaved player enjoys a good reputation among his teammates, spectators and the media. C. Reactions of coaches and managers when their teams lose a game may be of educational value. D. Many sportspeople help others so as to project good images of themselves.

Question 50. Which of the following can be inferred from the passage? A. The media tend to turn the spotlight more on sportspeople's wrongdoings than on their good deeds. B. The well-behaved players in a game invariably display desirable conducts when not playing. C. Players with good attitudes make a greater contribution to their teams' budgets than others. D. Well-mannered players sometimes display strong emotions after winning or losing a game.
A urinary tract infection, or UTI, is a bacterial infection usually caused by gastrointestinal bacteria that have travelled from the anal region to the urinary tract. The condition is uncomfortable and painful and, if left untreated, can lead to kidney damage. Many women experience a urinary tract infection at least once in their lives. It’s a common bacterial infection that can be easily treated with antibiotics and natural remedies. However, it should be taken seriously, as the possible complications can be severe.

Causes and classification

Urinary tract infection (UTI) is a widespread condition that can affect every part of the urinary tract (kidneys, bladder, ureters, and urethra). Anyone can get a UTI, but women are more susceptible simply because of their anatomy. Women have a 30% higher risk than men of developing a UTI because their urethra is shorter, making it easier for bacteria to reach the bladder and travel on toward the kidneys. Urinary tract infection is usually caused by E. coli bacteria that commonly live in the large intestine. Other culprits include Proteus mirabilis and Klebsiella pneumoniae. If unwelcome bacteria reach the urethra, an infection can travel up the urinary tract. Many people experience a urinary tract infection at least once in their lives. The severity of the condition depends on how far the bacteria have travelled, and UTIs are classified into different types based on how far up the urinary system the infection has spread.

Types of UTIs:

Cystitis, or a bacterial infection in the bladder. This type of UTI causes discomfort and pain when urinating, and you might feel the need to urinate frequently. You may also experience a change in the colour and transparency of your urine, find blood in your urine, and feel pain in the lower abdomen.

Prostatitis, or a bacterial infection of the prostate. Common symptoms are pain in the groin, painful and difficult urination, blood in urine, and a frequent, urgent need to urinate. It mostly affects men under 50.

Urethritis, or a bacterial infection in the urethra. Urethritis causes a burning sensation while urinating and discharge from the urethra. Some people also report the feeling of not being able to urinate fully at once.

Pyelonephritis, or a bacterial infection of the kidneys. This is the most dangerous type of UTI and typically doesn’t happen unless the person has left an infection untreated or is immunocompromised. It can lead to kidney damage (including acute renal failure) and cause chronic infection. If bacteria enter the kidneys, the infected person is likely to experience fever, chills, nausea and vomiting, as well as pain in the upper back.

Any infection can become dangerous if left untreated. Pay attention to early symptoms and seek medical attention as soon as you notice that something is wrong.

Prevalence and risks of a urinary tract infection

Urinary tract infections are more common than you think. Researchers in the United Kingdom studied almost one million people over 10 years and found that 21% of respondents had had at least one UTI during this time. The most commonly affected were women and older people. Many factors can increase the risk of a urinary tract infection, such as being female (and therefore having a shorter urethra) and wiping ‘back to front’ after using the toilet, which can bring bacteria from the anus to the urethra.
Other factors that increase your risk of getting a UTI:

A weakened immune system—if your body has been under stress for whatever reason, it has fewer resources to fight off harmful bacteria. This can increase your chance of getting a urinary tract infection. Some of the most common reasons for a weakened immune system are diabetes, obesity, immunosuppressant drugs, sexually transmitted diseases, and other viral or bacterial infections.

Hormonal changes—a sudden drop in estrogen has been linked to an increased risk of infection. Menopausal and pregnant women have a higher risk of getting a UTI due to hormonal fluctuations.

Sexual activity—a urinary tract infection can occur during sex when bacteria travel from the anus to the urethra by means of close contact with the genitalia. Engaging in unprotected sex can also increase the chances of contracting a UTI due to disturbances in pH levels and bacterial changes in the vagina.

Certain birth control—using a diaphragm as a method of birth control increases a woman’s risk of contracting a UTI, as does the use of condoms with spermicidal foam.

Wearing a urinary catheter—this is a flexible tube that is inserted into the urethra to collect urine into a bag in cases when one cannot urinate normally. A urinary catheter greatly increases your chances of contracting a UTI if used during pregnancy. Adhering to hygiene standards for catheter use is also very important in order to avoid infections.

Does cold weather cause UTIs?

You may have been told not to sit on the cold ground and to keep your lower back warm in winter. Although cold weather doesn’t cause bacterial infections, it can be a facilitator. When our bodies must endure cold temperatures, they do their best to provide blood and oxygen to the vital organs, so blood circulation in the organs increases. Consequently, your kidneys need to work harder to filter blood and produce more urine. If you don’t support your body with proper hydration on cold days, it won’t work as effectively, and you increase the risk of bacteria that have escaped filtration entering your urinary tract.

Does peeing after sex prevent UTIs?

Many believe that urinating after sex can help avoid urinary tract infections. While there aren’t a lot of studies that support this claim, urine does flush bacteria from the urethra. Sexual intercourse increases your risk of getting a UTI because intimate contact means an increased presence of bacteria that may travel to the urinary tract. However, peeing after sex to flush bacteria is only effective if you do it within 30 minutes or so afterwards. Although women have a higher chance of contracting an infection, physicians recommend that men also urinate after sex. Peeing after sex is not a magical cure; it will not prevent pregnancy or stop the spread of STDs. Always practice safe sex!

A urinary tract infection can often be treated using natural remedies, but you should still see your doctor to make sure the bacteria have not spread and that you are not at risk of developing a chronic infection. Your doctor can perform urine tests to determine the seriousness of the infection. The most common medical treatment is a course of antibiotics and a reminder to drink fluids to help flush the bacteria from your system. If you are experiencing pain, your doctor may recommend painkillers. Some specialists also recommend drinking cranberry juice or taking capsules containing tannin.
Tannin is a natural polyphenol (micronutrient) present in cranberries that prevents E. coli bacteria from sticking to the walls of the bladder and urethra.

Although a UTI is usually quite easy to treat, ‘an ounce of prevention is worth a pound of cure’. A urinary tract infection can cause complications such as narrowing of the urethra, kidney damage, chronic infection, pregnancy risks, and even sepsis. So, what can you do to prevent a UTI?

Stay hydrated—urine itself has some antibacterial properties and effectively prevents bacteria from sticking to the walls of the urinary tract. Also, don’t hold your pee; regular urination helps flush bad bacteria out.

Wipe ‘front to back’—after relieving yourself, always wipe from the genitals to the anus to prevent gastrointestinal tract bacteria from entering the urethra.

Strengthen your immune system—your body is built to fight off bacteria and viruses. When your immune system is weakened, it becomes more difficult for the body to maintain its natural protective ‘shield’. You can strengthen your immune system by getting enough sleep, eating a balanced diet, and engaging in regular physical exercise.

Practice safe sex—when having sex with a new partner, always use a condom, and remember to pee and wash your genitals after sex.

Avoid excessively washing your genital area—harsh detergents and scented products can destroy good bacteria and increase the overgrowth of harmful bacteria. A healthy personal hygiene routine is important, but don’t overdo it—the self-cleaning function of your vagina relies on the health of the microflora within.

Limit your chances of getting a UTI by practising safe sex, staying hydrated, and maintaining proper intimate hygiene.
3D Printing in Medicine – Implications for Insurance
In 2012, a medical team at the University of Michigan in Ann Arbor faced a difficult problem: a baby had been born with the rare condition tracheobronchomalacia, in which one portion of his airway was so weak that it persistently collapsed. Breathing was difficult, even on a ventilator. How could the area of weak tissue be repaired, if not by a dangerous operation? The ideal tool for this delicate task turned out to be a 3D printer.
3D printing (3DP, also known as additive manufacturing) was developed in the 1980s by the American engineer Charles Hull. Carmakers use it to design complicated parts on a computer and print prototypes, and three-dimensional printers are now inexpensive. Today's printers print not only in plastics but also in metals, ceramics, wax, and even food.
The procedure on the child with tracheobronchomalacia profited from experience with 3DP at the Michigan College of Engineering and in the automotive industry, which is eager to transform production. CT scan data of the baby's chest was converted into a three-dimensional virtual map of the narrowed airways. This map led to the design and printing of a splint, a small tube made of biocompatible material that would fit over the weakened airway, hold it open, and expand as the baby grew. The splint was biodegradable and would last for three years, long enough for the cells to grow over it. Approval for use had been obtained through the FDA's emergency-use authorization program. Three weeks after the splint was implanted, the baby was home. In a 2013 issue of The New England Journal of Medicine, the Michigan team reported that the baby was thriving without any "unforeseen problems related to the splint"1. A wider medical community took notice of this 3DP innovation. In the meantime, two additional children have benefitted from 3DP splints for similar airway problems at the University of Michigan.
Spectrum of 3D printing in medicine - From education to regeneration
After Charles Hull pioneered 3D printing, few medical examples surfaced for over a decade. Over the last couple of years, however, interest in medical applications has been increasing. Actual and potential medical uses for 3D printing can be classified into broad categories: 1) creation of anatomical models, customised prosthetics and implants; 2) pharmaceutical printing affecting drug dosage and delivery; and 3) tissue and organ printing.
Benefits of additive manufacturing in medicine include the customisation and personalisation of medical products, drugs, and equipment. 3DP could be cost-effective and lead to increased productivity, and the individual production steps could be split among several parties, effectively enhancing collaboration.
3DP technology is already up and running in many medical areas, as can be seen from a sample of 3D-printed examples:
- iLab/Haiti prints umbilical cord clamps for local hospitals
- Printed models of complex congenital heart disease prepare doctors and save surgery time
- Cheap and easily customisable prosthetics, such as hands or limbs, for Uganda
- Tailor-made printed titanium parts replace injured or missing skull portions
- 3DP of intervertebral discs or human ear cartilages, including built-in electronics
- Biocompatible, biodegradable devices deliver chemotherapy to treat bone cancer
- 3D skin printing directly onto the wounds of burn victims
- Dual-syringe 3DP of alginate, smooth muscle cells and interstitial cells produces heart valves that are tested in animals
- 3DP binds chemicals to ceramic powder for scaffolds that promote regenerative bone growth
- Bioprinting of blood vessels using temperature-dependent dissolving ink
Expectations of the new 3DP technology are often exaggerated in the media, and the public has unrealistic expectations about how soon some of the exciting possibilities will become reality. Even so, what has already been achieved is spectacular. Many 3DP medical products are only adaptations of existing products; more transformative applications of the technology will need more than a decade to evolve, and significant scientific challenges remain. The costs of bringing 3D printed medical devices to clinical use are high, and regulatory hurdles are considerable. But if successful, 3D bioprinting could challenge traditional paradigms of medical device manufacturing and health care. Last but not least, crowdsourced 3D printed prosthetics for disabled children in both the developing and Western world are an impressive example of how 3DP processes can lead to the democratisation of design, manufacturing and distribution.
We can conclude that 3D bioprinting has already been used for the generation and transplantation of various tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. 3D printed tissues with cells are mainly sheet-like structures, with cells laid down within a scaffold structure.
Future bioprinting of tissue and organs - Major roadblocks and potential solutions
Even advanced additive manufacturing is limited by the palette of standard polymers and metal alloys. A wider assortment of novel materials, from living cells to semiconductors, is under development, and the precise combination and localisation of "inks" could lead to spectacular results2. Bioprinting can be seen as the precise spatial patterning of living cells and biologics: computer-aided, layer-by-layer deposition could produce living tissue and organ analogs for regenerative medicine or biological studies. By arranging multiple cell types, 3DP can recapitulate tissue biology, and bioprinting is seen as a game-changer in the development of tissue constructs3. However, the printing of blood vessels is not yet resolved, and every viable tissue needs a network of vasculature. The materials scientist Jennifer Lewis and her team at Harvard University have succeeded with a promising experiment4. Crucial to Lewis's success is what she calls fugitive ink, which liquefies when cooled, leaving behind a hollow canal that can be filled with cells. 3D-printed cellular tissues will eventually serve as the building blocks of whole organs.
There has been significant interest in whether 3D organ printing is possible, and the general consensus is that we are a number of years away from it. Indeed, it may not be possible to replicate the complicated structure of a 3D organ, but it may be possible to develop a structure that simulates or amplifies the activities of that organ.
The key conference for 3D printing is "Inside 3D Printing Conference and Expo", which started in 2013 and is currently on a world tour with events in New York, Seoul and Tokyo. This conference covers a number of different vertical streams, including medicine, technology, automotive and software. Key organisations include Organovo (developing tissues to test toxicity), Oxford Performance Materials (developing bone implants for facial reconstruction and replacing bones in feet and hands), e-NABLE (producing prosthetics), the University of Michigan Ann Arbor (producing tracheal splints), Wake Forest Baptist Medical Center (early stage development of functional kidneys that may instead constitute layers of kidney cells), and Harvard University (printing of blood vessels through the use of fugitive ink).
CellLink unveiled the first universal bioink, which is aimed at 3D printing living and fully functional 3D tissue models. 3D bioprinters typically have two separate extruders, one for laying down a "bio-ink" and another for laying down cells onto that ink. CellLink mixes the cells and bioink, allowing a single nozzle on the bioprinter to lay down both at the same time. This allows for greater detail and precision as well as a faster printing process.
Regulation - FDA approval of 3D printed medical applications
In October 2014, the FDA held a workshop for medical device makers entitled "Additive Manufacturing of Medical Devices: An Interactive Discussion on the Technical Considerations of 3D Printing". This was interpreted as an encouragement to develop and register 3D printed products for medical applications.
3D printing is an opportunity for innovation in the health care industry. However, medical devices and drugs are tightly regulated: hurdles to clearance are high, development and registration costs are high, and innovation must occur within the current FDA regulatory framework for medical devices. Experience with the differences between traditional medical devices and additively manufactured products is limited. The FDA's Additive Manufacturing Working Group is operational, but little specific guidance has been released.
3DP can be viewed as a component of precision medicine (formerly personalised medicine), complementing genomic medicine and the use of stem cells. The 3D printed tracheal splint used at the University of Michigan to treat critically ill newborns illustrates the potential contribution of additive manufacturing to precision medicine. Medical device manufacturers might hesitate to move forward with 3DP in view of regulatory uncertainty and the high costs of obtaining FDA approval. However, the FDA does not seem to be as perplexed by additive manufacturing as was feared. After all, 3D printing is just a manufacturing technology, an enabling technology, not something completely unprecedented. Questions arise such as: 1) Who is the manufacturer? 2) Where does manufacturing occur when 3DP is used? 3) How are products cleaned? 4) How were processing agents removed from the final product? And 5) How is biocompatibility guaranteed?
Table 1. FDA approved 3D printed medical products: a sample of 6 examples from a list of 85 approved medical products (OPM: Oxford Performance Materials, Inc.). (Table not reproduced.)
The FDA has already approved 85 3D printed medical devices, most of them handled via the 510(k) or emergency use pathways. Typical examples include spinal cages, dental devices, and hearing aids with 3D printed components. Most approved devices are personalised, but not completely novel. Truly novel 3D printed medical devices will probably require premarket approval (PMA). Holding back on novel 3D printed medical devices might reflect the attitude of many companies of waiting for someone else to test the regulatory waters.
In 2015, the FDA approved an epilepsy medicine called Spritam that is made by 3D printers. It could be the first in a line of 3DP central nervous system drugs. The pill's unique structure allows it to dissolve considerably faster than the average pill, which is appreciated by seizure sufferers who were previously prescribed large, hard-to-swallow pills. 3D printing guarantees that the medicine will be delivered in the exact dose intended, as each pill is completely uniform. While the quick-dissolving Spritam tablet is a world away from 3D-printed organs and body parts, its approval shows that the FDA considers certain 3D-printed materials safe for human consumption.
Implications for Insurance
In 2014, biomedical 3DP applications represented approximately 14 percent of the USD 4.1 billion in revenues generated by companies providing 3DP equipment, materials and services5. With the prospect of an ever-increasing number of applications and the improvement of existing technology and processes, 3DP industry revenues are expected to reach USD 21 billion by 2020, with a 10-20 percent contribution from the biomedical sector5. This fast-evolving, highly technological industry creates novel landscapes that require novel insurance considerations.
3D printed products, such as hand prostheses or hip implants, are personalised but otherwise not significantly different from what we are used to insuring. However, the various steps of commercial 3D printing add complexity to issues such as intellectual property, data protection and product liability. Taking safety and labeling as an example, 3DP products are subject to the same regulations as conventional products, but the current global and regional regulatory environments are not prepared for the ambiguity of a 3D printing process. The prevailing concept is that 3D printed medical devices are to be manufactured at approved, fit-for-purpose facilities; in this context, 3D printing is no different from manufacturing such devices in a well-controlled good manufacturing practice (GMP) environment. This is how the FDA has regulated them so far. However, as technology enables new applications, it is conceivable that 3D manufacturing will have to take place closer to the patient, which would complicate the chain of parties involved in the process and lead to potentially overlapping liability responsibilities and associated regulatory challenges. Similarly, in the case of 3D printed pharmaceuticals, who will be held liable in the case of adverse reactions?
Additional risks related to 3D printed products include the acquisition and transfer of personal data, as well as the liability of designers and software engineers. For example, online platforms allow the sharing of computer aided design (CAD) files that users can edit and print.
How will customisation and product quality control be tracked? How will data privacy be secured? In this context, changes in product liability laws may be needed to secure adequate consumer protection.
Other important questions relate to the printing materials themselves and the actual printing process. The use of novel polymers, sometimes mixed with nanoparticles, poses long-term risks for implants and calls for post-marketing surveillance and registries. The long-term risks of 3DP products depend on body location, duration and function. Could the printable ingredients, or the printer itself, be regulated as medical devices? Similarly, a variety of questions arise in relation to new and evolving 3D printing processes, such as fused deposition modeling, selective laser sintering, stereolithography, and 3D plotting/direct-write/bioprinting. As 3D printing blurs the boundaries between the steps in traditional manufacturing and commercialisation chains, new business models will arise to accommodate these needs, which will require innovative insurance approaches.
The impact of 3DP on the economy, and on medicine in particular, is likely to become significant within the coming years, and we can expect 3D-printed biomedical elements to become increasingly commonplace. One could imagine the delivery of a customised sterile prosthesis and instruments for joint replacement directly to the operating room. Bioprinting also promises precision medical solutions through the exact placement of cells, proteins, drugs and even genes to guide tissue generation. Another exciting development is 4D printing, that is, 3D printed objects that can adjust their shape or properties in response to stimuli from the environment. For now, technical limitations and the high costs associated with developing 3D printed medical devices allow progress only in incremental steps. Nevertheless, insurers should take this opportunity to gather experience in the field that can help them anticipate risks as 3D printed medical devices move into more tightly regulated, but equally relevant, areas. While this technology has the potential to disrupt the current health care landscape, it will certainly create challenges and opportunities for the insurance industry.
1. Zopf DA, Hollister SJ, Nelson ME, Ohye RG, Green GE. Bioresorbable airway splint created with a three-dimensional printer. N Engl J Med. 2013 May 23; 368(21): 2043-5.
2. Ledford H. The printed organs coming to a body near you. Nature. 2015 Apr 16; 520(7547): 273.
3. Ozbolat IT. Bioprinting scale-up tissue and organ constructs for transplantation. Trends Biotechnol. 2015 Jul; 33(7): 395-400.
4. Compton BG, Lewis JA. 3D-printing of lightweight cellular composites. Adv Mater. 2014 Sep 10; 26(34): 5930-5.
5. Wohlers Report 2015. 3D Printing and Additive Manufacturing State of the Industry. Annual Worldwide Progress Report. ISBN 978-0-9913332-1-9.
Urs Widmer, Life Guide Medical Officer, Swiss Re
Urs Widmer graduated from Zurich University Medical School in 1979. After postgraduate research work in a metabolic unit and a specialty degree in internal medicine (1988), he did research at the Rockefeller University, New York, on the cloning of novel chemokines. After 13 years as an attending physician in internal medicine and consultant for clinical immunology at Zurich University Hospital, he joined Swiss Re in 2005 as Senior Medical Officer.
Ramiro Dip, Senior Risk Engineer, Swiss Re
Ramiro Dip is a Senior Casualty Risk Engineer at Swiss Re, responsible for the life sciences segment.
He graduated from Rosario University Veterinary School, Argentina, and after some years in private practice, he obtained a doctoral degree in toxicology and a PhD in molecular biology from the Universities of Bern and Zurich, Switzerland. He then established and led an independent research group at the University of Zurich. Between 2011 and 2015, Ramiro Dip held different R&D roles at Novartis; after completing an MBA degree, he joined Swiss Re in 2015 and, following an international assignment in Sydney, Australia, relocated to Zurich in 2017. Besides his role at Swiss Re, Ramiro is a regular lecturer at the University of Zurich.
In September, 2007, Murray Pittock and I organized a weekend conference at the University of California, Berkeley, ‘Scottish Romanticism in World Literatures,’ which was attended by scholars from the British Isles, Spain, Italy, France, Germany, Japan and Canada, as well as all over the U.S. One year later, I’d like to take this opportunity to reflect on the terms that made up the title of the conference: Scottish Romanticism, World Literature. What happens to Romanticism when Scotland is part of the picture? English-language literary histories still identify Romanticism with “English,” if no longer so exclusively with lyric poetry or an aesthetic philosophy derived from Coleridge or Shelley. Scotland remains associated with another pseudo-historical category – that is, an ideological category disguised as a historical one – the Enlightenment. The antithesis between Scottish / Enlightenment, on the one hand, and English / Romanticism, on the other, was fixed early in the nineteenth century: Francis Jeffrey’s attack on Wordsworth in the Edinburgh Review accompanied that periodical’s retooling of Scottish political economy into an ideological program, and provoked Wordsworth’s reciprocal scorn for “Scotch philosophers” – even as his own poetry developed Enlightenment themes. There is no exact Scottish equivalent to the watershed publication of Wordsworth’s and Coleridge’s Lyrical Ballads in 1798. James Macpherson’s “Ossian” poems were arguably the founding texts of a North Atlantic Romanticism in the early 1760s; Robert Burns’s Poems Chiefly in the Scottish Dialect adapted the “language really spoken by men” to a sophisticated poetics in 1786; and Walter Scott’s Waverley changed the shape and weight of the novel after 1814. None of these, however, constituted the experimental break with eighteenth-century norms, flagged with a manifesto (Wordsworth’s 1800 preface), that would make Lyrical Ballads not just a Romantic but a proto-Modernist event – at least, in the retrospect of literary history. The “Ossian” poems appeared at the same time as the quintessentially enlightened projects of the Scottish human sciences – indeed, Edinburgh philosophers subsidized Macpherson’s mission to bring back an ancient Highland epic. As I’ve argued elsewhere, Scottish and English periodizations simply do not match; in the Scottish case it makes more sense to think of a Romantic-and-Enlightenment century, from David Hume’s Treatise of Human Nature to Thomas Carlyle’s French Revolution, than a distinct phase of Romanticism (divided between generations of major poets) opening around 1790. What happens to Romanticism when Enlightenment is part of the picture? Hume’s Treatise provided a theory of the imagination, and of fiction, as producing and produced by – rather than intrinsically alien to – “common life,” the social, the everyday. Fully actualized in Scott’s historical novels, Hume’s theory would be passed on through them to the realist fiction that dominated nineteenth-century European literature. European realism, in other words, was a Scottish philosophical invention, given fictional form and substance in the Waverley novels. Yet Scotland remains peripheral to the mainstream institutions of scholarship in the period: this summer’s (2007) joint conference of the British Association for Romantic Studies and the North American Society for Studies in Romanticism featured just ten papers on Scottish topics out of a total of nearly 250. 
The Scottish century of innovations in poetry, philosophy, periodicals and fiction had a massive impact outside the British Isles. Hugh Blair's Rhetoric and Archibald Alison's Aesthetics trained the academies of the New World. Adam Smith's Wealth of Nations supplied decisive philosophical arguments against slavery as well as against protectionism. Napoleon and Thomas Jefferson were among the devotees of Ossian, while American poets from Freneau and Whittier to Whitman and Frost took their cue from Burns, the "people's poet," as Robert Crawford argued in his lecture at last year's Scottish Romanticism conference. Scott's novels, as Franco Moretti put it at a roundtable on "The Novel in World History," were the most influential body of work in the history of the genre. Their planetary diffusion coincided with an imperial expansion of British military, administrative and commercial networks that were in large part managed by Scots. What kind of a "world" was it that Scottish Romanticism helped shape? A continental-European, North Atlantic, settler-colonial world, with Scotland at its center: you could map this world – one where the "tidal wave of modernization" forced a look back at the pre-modern past, materialized in "primitive" regional societies in the process of being overwhelmed – much as Eric Hobsbawm (in The Age of Capital) mapped the nineteenth-century global diffusion of opera-houses. Nor did it all flow one way. The Ossian epics set the pattern of an indigenous high culture for a counter-imperialist national imaginary, while Scott's historical fiction spawned anti-colonial as well as colonial mutations: Ivanhoe was Ho Chi Minh's favorite novel as well as Tony Blair's. Pascale Casanova's recent book The World Republic of Letters (La république mondiale des lettres, 1999; English translation, 2004) may help us to think about Scottish literature in contexts besides these British and imperial ones. Casanova shows that, from the seventeenth century onwards, Paris and the French language emerged as (respectively) the capital city and universal medium of what would eventually become "world literary space." At the core of world literary space, or what intellectuals at the time called "the republic of letters," was the idea of literature as an autonomous domain, one that could set its own terms and values, relatively free from dictation by church or state. The republic of letters was cosmopolitan, internationalist, or rather pre-nationalist, both in its own ideology and in its reception beyond France. Indeed, the second great stage of the historical development of world literary space, the proliferation (from the late eighteenth century) of distinctively national literatures that would later be called Romanticism, defined itself in opposition to the cosmopolitan, Enlightenment hegemony of French. Casanova rather sketchily acknowledges Great Britain as an emergent rival centre in the course of the eighteenth century, and she pays next to no attention to Scotland. Nevertheless her analysis frames Scotland as a highly interesting case, complementary to Tom Nairn's influential account of Scotland's anomalous relation to the historical pathways of modernization and nationalism in The Break-Up of Britain.
Scotland’s project of cultural modernization in the eighteenth century, the Scottish Enlightenment, depended on the establishment of an autonomous public sphere of letters and science, based in the secular institutions of the Lowland burghs, far enough removed from the seat of government in London. While the Scots literati harnessed English as the linguistic vehicle of Enlightenment, they sought to integrate their philosophical projects with the European republic of letters of which Paris was capital city. Thus, if the Scots invented British literature, as Robert Crawford and others have claimed, it was to annex it to that grand horizon of “world literary space” – over and against a relatively provincial London, even though London may have been the imperial centre of commerce and patronage. Current disciplinary history holds that the modern, restricted category of literature in English – meaning the fictional genres of drama, prose and poetry, or writing loosened from factual or instrumental reference – emerged conceptually in the Romantic period, as it disaggregated from the larger field of the Enlightenment republic of letters, comprising all kinds of written discourse. The republic of letters had been, at least notionally, a cosmopolitan domain (although restricted to gentlemen), and the disaggregation of literature brought a compensatory investment with nationalist associations and ideologies as well as with a vertical appeal to “the people,” including (sometimes) female people. Scotland provides an exceptionally clear view of this general transformation, in part because of the infrastructural shift that took place from the university curriculum, matrix of the projects of Enlightenment, to an industrializing literary marketplace, in the Edinburgh publishing boom of fiction and periodicals after 1800. Scott’s novels forged a synthesis that would not outlast his generation: managing a late-Enlightenment, cosmopolitan integration of Scottish literature with European (not only British) traditions, at the same time refracting newer nationalist energies. We might go back a generation, to Adam Smith, to identify an exemplary case for reading the prehistory of “literature” among the pre-disciplinary welter of subjects and discourses that comprised, in the eighteenth-century Scottish curriculum, the grand project David Hume had called “the science of man.” Smith, best known today (if not always accurately) as the prophet of free-market capitalism in his great prose georgic The Wealth of Nations, was also the author of The Theory of Moral Sentiments, a groundbreaking treatise on the ethical psychology of modern civil society. Smith developed these books from his lecture courses on jurisprudence and moral philosophy, since the Scottish universities primarily trained lawyers and ministers of the Church of Scotland. Smith also lectured on rhetoric and belles lettres, and in his own view – at a period when the modern fields of humanist and social-scientific inquiry were emerging in the Scottish universities, yet before their hard-and-fast separation into distinct disciplines – all these philosophical inquiries were interconnected, all spoke to and inflected each other.
After the chilling of the Enlightenment project in the academy (due to counter-revolutionary pressure through the patronage system that controlled appointments and careers), it was resumed in the marketplace and in the booksellers’ genres of periodicals and fiction: so that Smith’s and Hume’s great theme, the science of man, would be carried forward in the Edinburgh Review and the novels of Scott and John Galt. Scottish Romanticism (to abide with the title for now) discloses a place and time, and a changing institutional terrain, when the humanities were still a human science. It helps us imagine for ourselves an intellectual matrix in which conversations among disciplines and subdisciplines might not be constrained by a distinction between “the humanities” and those (presumptively inhuman) other fields. I don’t wish to suggest we should return to that late-eighteenth-century moment of disciplinary emergence, even if we could, still less that we should make up some contemporary simulacrum or equivalent. We are better off where we are, even as that moment helped bring us here. Still, and apart from its rich resources of intrinsic interest, the case of Scottish literature circa 1740-1840 may open up an awareness of alternative ways of imagining literary history and literary genres in their relations to other fields of discourse, other ways of knowledge. References & Further Information Professor Duncan’s Scott’s Shadow: The Novel in Romantic Edinburgh will be published at the end of 2007.
Pipes are said to be in series if they are connected end to end in continuation with each other so that the fluid flows in a continuous line without any branching. The volume rate of flow through pipes in series is the same throughout.
Suppose a pipe line consists of a number of pipes of different sizes and lengths (see the figure, not reproduced here). Let d1, d2, d3 be the diameters of the component pipes, l1, l2, l3 the lengths of these component pipes, and v1, v2, v3 the velocities in these pipes. Pipes connected in continuation in this way are said to be connected in series. In this arrangement the rate of discharge Q is the same in all the pipes and, ignoring secondary losses, the total loss of head is equal to the sum of the friction losses in the individual pipes.
Let d1, d2, d3 be the diameters and l1, l2, l3 the lengths of the various pipes in a series connection, let Q be the discharge, and let hf be the total loss of head. Let d be the diameter of an equivalent pipe of length l that replaces the compound pipe and passes the same discharge at the same loss of head.
Equivalent Length of a Pipe with Intermediate Fittings
Pipes are said to be in parallel when they are so connected that the flow from a pipe branches or divides into two or more separate pipes and then reunites into a single pipe. Suppose a main pipe branches at a section into two pipes of lengths l1 and l2 and diameters d1 and d2, which unite again at a downstream section to form a single pipe. In this arrangement the total discharge Q divides into components Q1 and Q2 along the branch pipes such that Q = Q1 + Q2. The loss of head between the branch section and the reunion section is equal to the loss of head in either one of the branch pipes, so the discharge divides into components Q1 and Q2 satisfying both conditions. Similarly, when a number of pipes are connected in parallel, the total loss of head in the system is equal to the loss of head in any one of the pipes.
Energy is defined as the ability to do work. Both energy and work are measured in newton-metres (or pound-feet in English units). Kinetic energy and potential energy are the two commonly recognized forms of energy. In a flowing fluid, potential energy may in turn be subdivided into energy due to position or elevation above a given datum, and energy due to pressure in the fluid. Head is the amount of energy per newton (or per pound) of fluid.
Kinetic Energy and Velocity Head
Kinetic energy is the ability of a mass to do work by virtue of its velocity.
Velocity Head of Circular Pipes
The velocity head of a circular pipe of diameter D flowing full is v²/2g; with v = Q/A and A = πD²/4, this becomes 8Q²/(π²gD⁴).
Elevation Energy and Elevation Head
In connection with the action of gravity, elevation energy is manifested in a fluid by virtue of its position or elevation with respect to a horizontal datum plane.
Pressure Energy and Pressure Head
A mass of fluid acquires pressure energy when it is in contact with other masses having some form of energy. Pressure energy is therefore energy transmitted to the fluid by another mass that possesses some energy.
The total energy or head in a fluid is the sum of the kinetic and potential energies; recall that the potential energies are pressure energy and elevation energy. Power is the rate of doing work per unit of time. Neglecting head loss, the total amount of energy per unit weight is constant at any point in the path of flow.
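To make the series-pipe relations concrete, here is a minimal Python sketch, not taken from the text. It computes the total friction head loss of a compound pipe with the Darcy-Weisbach formula hf = f(l/d)v²/2g, and the diameter of the equivalent pipe from the standard relation l/d⁵ = Σ li/di⁵ (the Dupuit equation), which assumes the same friction factor f for every pipe and ignores minor losses. All numerical values are invented for illustration.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def series_head_loss(Q, lengths, diameters, f=0.02):
    """Total friction head loss (m) for pipes in series carrying discharge Q (m^3/s).

    Assumes a single Darcy friction factor f for every pipe and ignores
    minor (fitting) losses, as in the text above.
    """
    h_f = 0.0
    for l, d in zip(lengths, diameters):
        area = math.pi * d**2 / 4.0
        v = Q / area                        # continuity: same Q in every pipe
        h_f += f * (l / d) * v**2 / (2 * G)  # Darcy-Weisbach for each pipe
    return h_f

def equivalent_diameter(lengths, diameters, L_eq):
    """Diameter of a single pipe of length L_eq passing the same Q at the same h_f.

    Dupuit relation (same f in all pipes): L_eq / d^5 = sum(l_i / d_i^5).
    """
    s = sum(l / d**5 for l, d in zip(lengths, diameters))
    return (L_eq / s) ** 0.2  # fifth root

# Illustrative compound pipe: three pipes in series (values assumed)
lengths = [300.0, 200.0, 100.0]   # m
diameters = [0.30, 0.25, 0.20]    # m
Q = 0.05                          # m^3/s

print(series_head_loss(Q, lengths, diameters))                # total h_f, m
print(equivalent_diameter(lengths, diameters, sum(lengths)))  # equivalent d, m
```

The equivalent-diameter relation follows directly from writing the Darcy-Weisbach loss in terms of discharge, hf = 8 f l Q² / (π² g d⁵), and summing over the pipes.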
Energy Equation Neglecting Head Loss
Without head losses, the total energy at point 1 is equal to the total energy at point 2. No head loss is an ideal condition leading to theoretical values in the results.
Energy Equation Considering Head Loss
The actual values can be found by considering head losses in the computation of flow energy.
Energy Equation with Pump
In most cases, a pump is used to raise water from a lower elevation to a higher elevation. In more technical terms, the pump increases the energy of the flow. The pump consumes electrical energy (P input) and delivers flow energy (P output).
Energy Equation with Turbine
Turbines extract flow energy and convert it into mechanical energy, which in turn is converted into electrical energy.
Hydraulic Grade Line
It is the line to which liquid rises in successive piezometer tubes.
Fluid Flow in Pipes
We will be looking here at the flow of real fluid in pipes; "real" meaning a fluid that possesses viscosity and hence loses energy due to friction as fluid particles interact with one another and the pipe wall. Recall also that flow can be classified into one of two types, laminar or turbulent, with a small transitional region between the two. The shear stress will vary with the velocity of flow and hence with Re; it governs the pressure loss due to friction and hence how much energy must be used to move the fluid. Many experiments have been done with various fluids, measuring the pressure loss at various Reynolds numbers. For laminar flow it is possible to calculate a theoretical value for a given velocity, fluid and pipe dimension; as this was covered in the Level 1 module, only the result is presented here. For turbulent flow, however, analytical expressions are not available, so empirical relationships (those derived from experimental measurements) are required.
Consider the element of fluid, shown in figure 3 (not reproduced here), flowing in a channel; it has length L and wetted perimeter P. The flow is steady and uniform, so that acceleration is zero and the flow area at sections 1 and 2 is equal to A. To make use of the resulting equation, an empirical factor must be introduced: assessment of the physics governing the value of friction in a fluid has led to a number of relationships, and an expression that gives f based on fluid properties and the flow conditions is required. Equating the two equations for head loss allows us to derive an expression for f that allows the Darcy equation to be applied to laminar flow.
A rough pipe is one where the mean height of roughness is greater than the thickness of the laminar sub-layer. Nikuradse artificially roughened pipes by coating them with sand. Several regions can be identified on the resulting plot of friction factor against Reynolds number, including a transitional region, outside which pipe flow normally lies, and the smooth turbulent line, the limiting line of turbulent flow.
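The laminar "result presented here" that the excerpt alludes to is, in the Darcy convention used above, f = 64/Re (texts using the Fanning factor quote 16/Re instead), while the standard empirical relationship for turbulent flow in rough pipes is the implicit Colebrook-White equation. Here is a minimal sketch of both, assuming the conventional laminar threshold near Re = 2000; the example numbers at the end are invented.

```python
import math

def friction_factor(Re, rel_roughness=0.0):
    """Darcy friction factor for full pipe flow.

    Laminar (Re < 2000): the theoretical result f = 64/Re.
    Turbulent: Colebrook-White, solved by fixed-point iteration on x = 1/sqrt(f):
        1/sqrt(f) = -2 log10( k/(3.7 D) + 2.51/(Re sqrt(f)) )
    rel_roughness is k/D, the mean roughness height over the diameter.
    """
    if Re < 2000.0:
        return 64.0 / Re
    x = 8.0  # initial guess for 1/sqrt(f), i.e. f ~ 0.016
    for _ in range(50):
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < 1e-12:
            break
        x = x_new
    return 1.0 / x**2

# Illustrative examples (values assumed, not from the text):
print(friction_factor(1500))                      # laminar: 64/Re ~ 0.0427
print(friction_factor(1e5, rel_roughness=1e-4))   # turbulent, Colebrook-White
```

Fixed-point iteration converges quickly here because the right-hand side varies only logarithmically with x; a handful of iterations is usually enough for engineering accuracy.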
Video transcript: Bernoulli's example problem
Let's say I have a horizontal pipe, and at the left end of the pipe the cross-sectional area, area 1, is equal to 2 meters squared. Let's say it tapers off so that the cross-sectional area at this end of the pipe, area 2, is equal to half a square meter. We have some velocity at this point in the pipe, which is v1, and the velocity exiting the pipe is v2. The external pressure at this point is essentially being applied rightwards into the pipe. Let's say that pressure 1 is 10,000 pascals. The pressure at this end, the external pressure at that point in the pipe, is equal to 6,000 pascals. Given this information, let's say we have water in this pipe. We're assuming that it's laminar flow, so there's no friction within the pipe, and there's no turbulence. Using that, what I want to do is figure out the flow or the flux of the water in this pipe: how much volume goes either into the pipe per second, or out of the pipe per second? We know that those are going to be the same numbers, because of the equation of continuity. We know that the flow, which is R, which is volume per amount of time, is the same thing as the input velocity times the input area.
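The transcript stops at the setup, so here is a sketch of the remaining arithmetic, combining the equation of continuity (A1 v1 = A2 v2) with Bernoulli's equation for a horizontal pipe (P1 + ½ρv1² = P2 + ½ρv2²). The variable names are mine; the inputs are the areas and pressures quoted in the transcript above.

```python
# Worked version of the transcript's example: horizontal pipe, ideal
# (inviscid, incompressible, steady) flow of water.
RHO = 1000.0                # density of water, kg/m^3

A1, A2 = 2.0, 0.5           # cross-sectional areas, m^2
P1, P2 = 10_000.0, 6_000.0  # pressures at the two ends, Pa

# Continuity: A1*v1 = A2*v2  =>  v2 = (A1/A2)*v1 = 4*v1 here.
ratio = A1 / A2

# Horizontal Bernoulli: P1 + 0.5*rho*v1^2 = P2 + 0.5*rho*v2^2
# Substituting v2 = ratio*v1 gives P1 - P2 = 0.5*rho*v1^2*(ratio^2 - 1).
v1 = ((P1 - P2) / (0.5 * RHO * (ratio**2 - 1))) ** 0.5
v2 = ratio * v1
R = A1 * v1                 # volume flow rate ("flux"), m^3/s

print(f"v1 = {v1:.3f} m/s, v2 = {v2:.3f} m/s, R = {R:.3f} m^3/s")
```

With these numbers the pressure difference of 4,000 Pa drives an inlet velocity of roughly 0.73 m/s and a flow rate of about 1.46 cubic meters per second.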
The standard pipe-flow relations are covered by a family of online calculators:
- Acoustic Flow Meter Design Calculator: solve problems related to flow meters, average axial velocity of water flow, sensors, acoustic signal upstream and downstream travel time, acoustic path length between transducer faces, and the angle between the acoustic path and the pipe's longitudinal axis.
- Bernoulli Theorem Calculator: an online script for solving any variable in the Bernoulli theorem equation. Solve for head loss, static head, elevation, pressure energy, velocity energy, density and acceleration of gravity. Assists in the computations for leak discharge, pipe networks, tanks, sluice gates, weirs, pitot tubes, nozzles and open channel flow. The flow is assumed to be streamline, steady state, inviscid and incompressible.
- Cauchy Number Calculator.
- Cavitation Number Calculator: solve problems related to cavitation number, local pressure, fluid vapor pressure, fluid density and characteristic flow velocity.
- Chezy Equation Calculator: solve problems related to the Chezy equation, flow velocity, Chezy coefficient, roughness coefficient, hydraulic radius and conduit slope.
- Colebrook Equation Calculator: solve problems related to the Colebrook equation, turbulent flow, Darcy friction factor, absolute roughness and Reynolds number.
- Continuity Calculator.
- Darcy Weisbach Calculator: an online solver for any variable in the Darcy-Weisbach equation. Solve for head loss, friction factor, pipe diameter, pipe length, flow velocity and acceleration of gravity.
- Darcy's Law Equation Calculator: solve problems related to flow rate, hydraulic conductivity, hydraulic gradient, solids volume, saturated soil phase diagram, flow cross-sectional area, Darcy velocity or flux, seepage velocity, voids effective cross-sectional area, flow gross cross-sectional area, pressure head, solids, porosity, void ratio and length of column.
- Density Equations Calculator.
- Euler Number Calculator: solve problems related to the Euler number (a dimensionless value), fluid dynamics, pressure change, density and characteristic flow velocity.
- Fluid Pressure Calculator: solve for different variables related to force, area, bulk modulus, compressibility, change in volume, fluid column top and bottom pressure, density, acceleration of gravity, depth, height, and absolute, atmospheric and gauge pressure.
- Gravity Equations Calculator: solves problems related to Newton's law of gravity, the universal gravitational constant, mass, force, satellite orbit period, planet mass, satellite mean orbital radius, acceleration, critical speed, escape speed, radius from planet center and Kepler's third law.
- Hazen Williams Calculator.
The accompanying figure (not reproduced here) shows a simple Flowmaster network of a foul water pumping station, pumping to a treatment plant 10 km away. The usual starting point for this and most hydraulic studies is to carry out a series of steady state analyses modelling the various flow scenarios. Flow systems are not always steady state: for instance, when a pump starts or a valve opens, the flow will change and the system has a transient response. For many systems, however, the transient response is not significant and so can be ignored.
The theory of flow in pipes and open channels is well documented. For a simple pipe system the analysis is relatively straightforward and the equations can be easily solved using a spreadsheet. For more complex systems such as networks, a number of simultaneous equations need to be solved, making a solution more difficult to find (a small illustrative sketch follows this section). Today there are a number of software tools designed to solve these flow problems, such as Flowmaster and Wanda.
A flow analysis can only be as accurate as the model that is used. Errors usually occur in modelling fittings such as bends, tees and non-standard components, and in determining appropriate roughness factors. Getting the model right and knowing that the results are correct comes with experience. Analysing the behaviour of pre-designed systems may only confirm what you already know. At Fluid Mechanics we have the necessary experience to offer a lot more. For new systems we can assist in the design process, providing recommendations on system design and control, optimising pipe and fitting sizing, providing performance specifications for pumps and valves, and assisting in supplier selection.
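As a small illustration of the "simultaneous equations" point, here is a sketch that splits a known total discharge between two parallel branches so that both see the same head loss, the parallel-pipe condition described earlier. This is only a toy calculation, not how Flowmaster or Wanda work internally, and the pipe data and shared friction factor are invented.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def K(f, l, d):
    """Resistance coefficient so that h_f = K * Q**2 (Darcy-Weisbach in Q form)."""
    return 8.0 * f * l / (math.pi**2 * G * d**5)

def split_parallel(Q, K1, K2, iters=60):
    """Split total discharge Q between two parallel pipes.

    Solves the pair of simultaneous conditions
        Q1 + Q2 = Q   and   K1*Q1^2 = K2*Q2^2  (equal head loss)
    by bisection on Q1; the head-loss residual is monotone in Q1.
    """
    lo, hi = 0.0, Q
    for _ in range(iters):
        q1 = 0.5 * (lo + hi)
        residual = K1 * q1**2 - K2 * (Q - q1)**2
        if residual > 0.0:
            hi = q1   # branch 1 losing too much head: shift flow to branch 2
        else:
            lo = q1
    q1 = 0.5 * (lo + hi)
    return q1, Q - q1

# Illustrative data (assumed): two branches sharing one friction factor
K1 = K(f=0.02, l=500.0, d=0.30)
K2 = K(f=0.02, l=400.0, d=0.25)
Q1, Q2 = split_parallel(0.10, K1, K2)   # total Q = 0.10 m^3/s
print(Q1, Q2, K1 * Q1**2, K2 * Q2**2)   # the two head losses come out equal
```

Real network solvers generalise exactly this idea: continuity at every junction plus equal head loss around every loop, solved simultaneously for all branch flows.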
Before you're able to do a project risk analysis, you have to acknowledge that risk is going to happen in your project, and you'll need to be prepared with a risk management plan. By planning for risks, you begin the process of knowing how to identify, monitor and close out risks when they show up in your project.
Part of that risk management process is risk analysis. It's a project planning technique that helps you to mitigate risk. There are also tools that can assist. You should, at the very least, have a risk tracking software tool to identify and list those risks. ProjectManager, for instance, lets you build project plans on Gantt charts, task lists, kanban boards and more. Then, you can use our real-time tracking tools to ensure your risks stay in check and don't turn into major issues. Try it free today.
Definition of Risk
Hopefully, you're familiar with the basics of project risk management. (If not, more on that in a bit.) Risks are anything that can potentially disrupt any component of your project plan, such as your scope, schedule, costs or your team. Since every project is unique, no two projects are likely to have the same risks. It's up to managers and their teams to identify risks, prioritize their impact, and create risk management plans where appropriate in case those risks become real issues. But it's also important to understand what is meant by "risk analysis" in reference to project risk management.
Risk Analysis vs. Risk Identification vs. Risk Management
People frequently confuse risk analysis with risk identification and risk management. Let's clear these project management concepts up before we continue.
What Is Risk Analysis?
Risk analysis is the process that figures out how likely it is that a risk will arise in a project. It studies the uncertainty of potential risks and how they would impact the project in terms of schedule, quality and costs if they were in fact to show up. Two ways to analyze risk are quantitative and qualitative. But it's important to know that risk analysis is not an exact science, so you should track risks throughout the project life cycle.
What Is Risk Identification?
Risk identification is also a risk management process, but in this case it lists all the potential project risks and what their characteristics would be. If this sounds like a risk register, that's because your findings are collected there. This information will then be used for your risk analysis. Though this process starts at the beginning of the project planning phase, it's an iterative process and continues throughout the project life cycle.
What Is Risk Management?
Finally, risk management is the overall process that project managers use to minimize and manage risk. It includes risk identification, risk assessment, risk response development and risk response control.
Benefits of Risk Analysis
To understand risk analysis, note the importance of examining risk in methodical detail. Why? There are several reasons:
- Avoid potential litigation
- Address regulatory issues
- Comply with new legislation
- Reduce exposure
- Minimize impact
- Risk analysis is an important input for decision making during all stages of the project management cycle
Project managers who have some experience with risk management in projects are a great resource.
We culled some advice from them, such as:
- There's no lack of information on risk
- Much of that information is complex
- Most industries have best practices
- Many companies have a risk management framework
- Risk analysis is often done in extremes
Risk Analysis Process
As we've mentioned before, the risk analysis process is a part of the broader risk management plan that project managers must oversee through every stage of the project life cycle. The risk analysis process has three main steps:
- Identify Risks
- Qualitative Analysis
- Quantitative Analysis
Once you're done with these steps, you'll be ready to assign risks to your team members, plan risk responses and monitor risks until your project is complete. Let's dig deeper and examine both qualitative and quantitative risk analysis.
Qualitative Risk Analysis
Qualitative risk analysis is a risk assessment done by experts on the project teams, who use data from past projects and their expertise to estimate an impact and probability value for each risk on a scale or a risk matrix. The probability scale commonly runs from zero to one; that is, if the likelihood of the risk happening in your project is .5, then there is a 50 percent chance it will occur. There is also an impact scale, which is measured from one to five, with five having the greatest impact on the project. The risk will then be categorized as either source-based or effect-based. Once risks are identified and analyzed, a project team member is designated as the risk owner for each risk. They're responsible for planning a risk response and implementing it.
Qualitative risk analysis is the base for quantitative risk analysis, and it's beneficial because not only do you reduce uncertainty in the project, but you also focus mostly on high-impact risks, for which you can assign a risk owner and plan an appropriate risk response. Get started with qualitative risk analysis with our free risk assessment template.
Quantitative Risk Analysis
By contrast, quantitative risk analysis is a statistical analysis of the effect of those identified risks on the overall project. This helps project managers and team leaders make decisions with reduced uncertainty, and it supports the process of controlling risks. Quantitative risk analysis counts the possible outcomes for the project and figures out the probability of still meeting project objectives. This helps with decision-making, especially when there is uncertainty during the project planning phase, and it helps project managers create realistic cost, schedule or scope targets.
The Monte Carlo simulation is an example of a quantitative risk analysis tool. It's a probability technique that uses a computerized method to estimate the likelihood of a risk, and it's used as an input for project management decision making. (A minimal simulation sketch appears at the end of this article.)
Through qualitative and quantitative risk analysis, you can define the potential risks by determining impacts to the following aspects of your project:
- Activity resource estimates
- Activity duration estimates
- Project schedule
- Cost estimates
- Project budget
- Quality requirements
ProjectManager.com Helps Your Risk Analysis
ProjectManager.com is a cloud-based project management software that gives you real-time data to track your project and whatever risks arise during its execution. Our online Gantt chart is a great tool to schedule projects, assign tasks and link dependencies, and it can also be used as a risk management tool. Collect all the data associated with a task's risk at the task level, which has unlimited file storage.
Whoever on your team is the risk owner for a task can comment at the task level and @ other team members, who are then notified immediately by email. You have more control over the management of project risk.
Learn More About Risk Analysis
If we've only whetted your whistle when it comes to discussing risk analysis on a project, don't worry. Watch project management guru Jennifer Bridges, PMP, as she helps you visualize how to analyze risks on your project. Here's a shot of the whiteboard for your reference!
Thanks for watching!
Transcript: Risk Analysis Explained by a PMP
Today, we're talking about risk analysis, "How to Analyze Risk on Your Projects." But before we start, I wanna stop and take a look at the word "analyze," because so many times I hear people interchanging different words, like risk identification, risk management, risk analysis. They're three different words, three different things. So in the whiteboard session today, we're gonna talk about the analysis.
When we analyze the risks, we're examining them methodically, in detail. And why would we wanna do this? Well, there are several really big reasons why. First of all, we're trying to avoid any potential litigation, address any regulatory issues, or comply with new legislation. Ultimately, we're trying to reduce our exposure and minimize the impact of any risk.
So what are some insights that we've had in working with so many projects? Well, first of all, we found that there's no lack of information out there about risk. But sometimes much of the information is very complex and can be quite intimidating. Most industries have their own best practices, and many companies have their own framework. We found that risk analysis can be done to extremes. On some projects, it's not done at all because they feel like they don't have any risk. Then on some projects, it's done to the nth degree. I mean, think about it: if you're sending a rocket to the moon with astronauts, we want to protect those people.
Risk Analysis Example
So let's look at where and when the risk analysis is done. Well, if we look at the project management process groups, the planning process is where we start looking at the risk, and it's done throughout the entire project. So we develop our risk management plan and identify the risks, and those are captured in our risk register. As a reminder, the risk register identifies all the risks, the impacts, the risk responses, and the risk levels. We're ultimately looking at the potential impacts to the activity resource estimates, the activity duration estimates, possibly the schedule, the cost estimates, budgets, quality, and even the procurements.
So when we take the risk register, we take those items and that's where we do the detailed analysis. We do that in two parts. In the first part, we perform a qualitative risk analysis: a process of prioritizing the risks for further analysis or action, depending upon the probability and the impact of those risks. The benefit is that it helps to reduce the level of uncertainty of those risks on the project and allows us to focus on the high-priority risks. The second piece is performing the quantitative risk analysis, which is a process for numerically analyzing the effect of those risks on the project. The benefit is that it helps support decision-making to reduce project uncertainty. Again, that can help us, number one, plan the risk responses and control those risks.
So those are some great reasons why and a few tips on “How to Analyze the Risk on Your Projects.” So if you need a tool that can help you analyze the risk on your project, then sign up for our software now at ProjectManager.com.
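As a concrete footnote to the qualitative and quantitative steps described above, here is a minimal Python sketch. It scores risks qualitatively (probability on a zero-to-one scale times impact on a one-to-five scale), then runs a simple Monte Carlo simulation of their cost impact. Every risk name, number and threshold in it is invented for illustration, and this is not how ProjectManager.com's tools work internally.

```python
import random

# Qualitative step: score each risk as probability (0-1) x impact (1-5).
# All risks, values and thresholds below are illustrative only.
risks = [
    # (name, probability, impact 1-5, (min cost, max cost) if it occurs)
    ("Key supplier delivers late",    0.5, 4, (8_000, 20_000)),
    ("Scope creep from stakeholders", 0.7, 3, (3_000, 12_000)),
    ("Test environment outage",       0.2, 5, (10_000, 40_000)),
    ("Minor UI rework",               0.6, 1, (500, 2_000)),
]

HIGH_PRIORITY = 1.5  # assumed cut-off for "high priority" scores

print("Qualitative ranking:")
for name, prob, impact, _ in sorted(risks, key=lambda r: r[1] * r[2],
                                    reverse=True):
    score = prob * impact
    tag = "HIGH" if score >= HIGH_PRIORITY else "low"
    print(f"  {tag:<4} score={score:.2f}  {name}")

# Quantitative step: Monte Carlo simulation of total project cost.
BASE_COST = 100_000   # assumed baseline project cost
BUDGET = 130_000      # assumed approved budget
TRIALS = 100_000

within_budget = 0
for _ in range(TRIALS):
    cost = BASE_COST
    for _, prob, _, (low, high) in risks:
        if random.random() < prob:      # does this risk occur in this trial?
            cost += random.uniform(low, high)
    within_budget += cost <= BUDGET

print(f"Estimated chance of finishing within budget: "
      f"{within_budget / TRIALS:.1%}")
```

The point of the sketch is the division of labour the article describes: the qualitative pass ranks risks so owners can be assigned to the high scorers, while the quantitative pass turns the same inputs into a probability of meeting an objective.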
Could volcanoes be causing Antarctic ice loss?

by Staff Writers, Paris (AFP), Nov 17, 2013

Accelerating ice loss from the Antarctic ice sheet could be due in part to active volcanoes under the frozen continent's western part, a study said on Sunday. From 2002 to 2011, the average annual rate of Antarctic ice sheet loss increased from about 30 billion tonnes to about 147 billion tonnes, the UN's panel of climate scientists reported in September.

An ice sheet is a mass of glacial land ice -- one such sheet covers most of Greenland and another Antarctica, and together they contain most of the freshwater on Earth. The sheets are constantly moving, slowly flowing downhill and seawards under their own weight. Portions that extend out over the water are called ice shelves.

Previous research has blamed warmer seas swirling in a circular fashion around Antarctica for the quicker pace of ice sheet loss from the southernmost continent. These waters erode the ice shelves, went the theory, and as more of the shelves disappeared, the quicker the sheet would flow and lose ice to the sea. But in a new paper in the journal Nature Geoscience, geologists led by Amanda Lough at Washington University in St. Louis, Missouri, suggested that, in West Antarctica, the faster flow may also be due to volcanoes. These heat the underside of the ice, causing melting that lubricates the flow, they suggested.

Evidence for this comes from recently deployed sensors that recorded two "swarms" of seismic activity under Marie Byrd Land, a highland region of West Antarctica, in 2010 and 2011. Using ice-penetrating radar, the team also found an intriguing elliptical deposit measuring about 1,000 square kilometres (386 square miles) in the area, at a depth of 1,400 metres (about 4,600 feet). The deposit is believed to be volcanic ash, spewed out by an enormous eruption some 8,000 years ago -- an estimate reached on the assumption that it has since been covered by ice accumulating at the rate of 12.5 centimetres (five inches) a year.

"Together, these observations provide strong evidence for ongoing magmatic activity and demonstrate that volcanism continues to migrate southwards," the study said. Several volcanoes were known to exist in West Antarctica, but none were thought to be active. "Eruptions at this site are unlikely to penetrate the 1.2 to two-km (0.75 to 1.2-mile) thick overlying ice, but would generate large volumes of melt water that could significantly affect ice stream flow," said the study.

from Science Daily:

Volcano Discovered Smoldering Under a Kilometer of Ice in West Antarctica: Heat May Increase Rate of Ice Loss

Nov. 17, 2013 -- It wasn't what they were looking for, but that only made the discovery all the more exciting. In January 2010 a team of scientists had set up two crossing lines of seismographs across Marie Byrd Land in West Antarctica. It was the first time the scientists had deployed many instruments in the interior of the continent that could operate year-round even in the coldest parts of Antarctica. Like a giant CT machine, the seismograph array used disturbances created by distant earthquakes to make images of the ice and rock deep within West Antarctica.

There were big questions to be asked and answered. The goal, says Doug Wiens, professor of earth and planetary science at Washington University in St. Louis and one of the project's principal investigators, was essentially to weigh the ice sheet to help reconstruct Antarctica's climate history.
But to do this accurately the scientists had to know how Earth's mantle would respond to an ice burden, and that depended on whether it was hot and fluid or cool and viscous. The seismic data would allow them to map the mantle's properties.

In the meantime, automated event-detection software was put to work to comb the data for anything unusual. When it found two bursts of seismic events between January 2010 and March 2011, Wiens' PhD student Amanda Lough looked more closely to see what was rattling the continent's bones. Was it rock grinding on rock, ice groaning over ice, or, perhaps, hot gases and liquid rock forcing their way through cracks in a volcanic complex? Uncertain at first, the more Lough and her colleagues looked, the more convinced they became that a new volcano was forming a kilometer beneath the ice. The discovery of the new, as yet unnamed volcano is announced in the Nov. 17 advance online issue of Nature Geoscience.

Following the trail of clues

The teams that install seismographs in Antarctica are given first crack at the data. Lough had done her bit as part of the WUSTL team, traveling to East Antarctica three times to install or remove stations. In 2010 many of the instruments were moved to West Antarctica, and Wiens asked Lough to look at the seismic data coming in, the first large-scale dataset from this part of the continent.

"I started seeing events that kept occurring at the same location, which was odd," Lough said. "Then I realized they were close to some mountains, but not right on top of them."

"My first thought was, 'Okay, maybe it's just coincidence.' But then I looked more closely and realized that the mountains were actually volcanoes and there was an age progression to the range. The volcanoes closest to the seismic events were the youngest ones."

The events were weak and very low frequency, which strongly suggested they weren't tectonic in origin. While low-magnitude seismic events of tectonic origin typically have frequencies of 10 to 20 cycles per second, this shaking was dominated by frequencies of 2 to 4 cycles per second.

Ruling out ice

But glacial processes can generate low-frequency events. If the events weren't tectonic, could they be glacial? To probe further, Lough used a global computer model of seismic velocities to "relocate" the hypocenters of the events, accounting for the known seismic velocities along different paths through the Earth. This procedure collapsed the swarm clusters to a third of their original size. It also showed that almost all of the events had occurred at depths of 25 to 40 kilometers (15 to 25 miles) below the surface. This is extraordinarily deep: deep enough to be near the boundary between the Earth's crust and mantle, called the Moho, and it more or less rules out a glacial origin. It also casts doubt on a tectonic one.

"A tectonic event might have a hypocenter 10 to 15 kilometers (6 to 9 miles) deep, but at 25 to 40 kilometers, these were way too deep," Lough says.

A colleague suggested that the event waveforms looked like deep long-period earthquakes, or DLPs, which occur in volcanic areas, have the same frequency characteristics and are just as deep. "Everything matches up," Lough says.

An ash layer encased in ice

The seismologists also talked to Duncan Young and Don Blankenship of the University of Texas, who fly airborne radar over Antarctica to produce topographic maps of the bedrock.
"In these maps, you can see that there's elevation in the bed topography at the same location as the seismic events," Lough says. The radar images also showed a layer of ash buried under the ice. "They see this layer all around our group of earthquakes and only in this area," Lough says. "Their best guess is that it came from Mount Waesche, an existing volcano near Mt Sidley. But that is also interesting because scientists had no idea when Mount Waesche was last active, and the ash layer is sets the age of the eruption at 8,000 years ago. " What's up down there? The case for volcanic origin has been made. But what exactly is causing the seismic activity? "Most mountains in Antarctica are not volcanic," Wiens says, "but most in this area are. Is it because East and West Antarctica are slowly rifting apart? We don't know exactly. But we think there is probably a hot spot in the mantle here producing magma far beneath the surface." "People aren't really sure what causes DPLs," Lough says. "It seems to vary by volcanic complex, but most people think it's the movement of magma and other fluids that leads to pressure-induced vibrations in cracks within volcanic and hydrothermal systems." Will the new volcano erupt? "Definitely," Lough says. "In fact because of the radar shows a mountain beneath the ice I think it has erupted in the past, before the rumblings we recorded. Will the eruptions punch through a kilometer or more of ice above it? The scientists calculated that an enormous eruption, one that released a thousand times more energy than the typical eruption, would be necessary to breach the ice above the volcano. On the other hand a subglacial eruption and the accompanying heat flow will melt a lot of ice. "The volcano will create millions of gallons of water beneath the ice -- many lakes full," says Wiens. This water will rush beneath the ice towards the sea and feed into the hydrological catchment of the MacAyeal Ice Stream, one of several major ice streams draining ice from Marie Byrd Land into the Ross Ice Shelf. By lubricating the bedrock, it will speed the flow of the overlying ice, perhaps increasing the rate of ice-mass loss in West Antarctica. Washington University in St. Louis (2013, November 17). Volcano discovered smoldering under a kilometer of ice in West Antarctica: Heat may increase rate of ice loss. ScienceDaily. Retrieved November 17, 2013, from http://www.sciencedaily.com/releases/2013/11/131117155609.htm from Nature Geoscience: VOLCANOLOGY Mobile magma under Antarctic ice Volcanoes have been active under the West Antarctic Ice Sheet for millions of years, and there is evidence for recent activity. Now swarms of tiny earthquakes detected in 2010 and 2011 hint at current magma movement in the crust beneath the ice. By John C. Behrendt The West Antarctic Ice Sheet is losing mass as the climate warms and the surrounding floating ice shelves that buttress the land-based ice are eaten away at their base by warmer ocean waters1. However, the ice can also melt from below on land, where subglacial volcanic activity causes a high flow of heat through the crust2–5. Writing in Nature Geoscience, Lough et al.6 use observations of hundreds of small seismic events in the crust beneath the West Antarctic Ice Sheet to infer current magma movement in a volcanic system beneath the ice, which may bring heat up to the rock–ice interface and thus affect ice flow. 
Late Cenozoic volcanic activity associated with the West Antarctic Rift System [7] extended over a wide area of West Antarctica, including beneath the West Antarctic Ice Sheet (WAIS) that flows through it. In general, the volcanic activity seems to have migrated southwards, along north-south-oriented fractures, away from the Marie Byrd Land dome [8]. Active volcanism has also been reported in a few other places in the West Antarctic Rift System [4,5], and aeromagnetic surveys provide evidence for a number of volcanic centres beneath the WAIS (ref. 2), but it has been unclear whether magmatic activity is ongoing.

Lough et al. [6] analysed seismic data recorded by a deployment of 37 seismic stations in Marie Byrd Land, a highland region of West Antarctica (Fig. 1). They identified two swarms of earthquakes about one year apart, in 2010 and 2011. The swarms comprised hundreds of small seismic events, with magnitudes between 0.8 and 2.1. The quakes occurred at depths of about 25 to 40 km, close to the boundary between the crust and mantle beneath Marie Byrd Land, much deeper than normal crustal earthquakes. These characteristics, as well as the observed wave frequencies, are typical of deep long-period earthquakes that have been associated with active volcanoes worldwide [9-12]. Lough and colleagues therefore interpret the observed seismic activity as a sign of magma movements within an active subglacial magmatic system, though it is unclear whether the observed swarm activity presages an eruption.

The earthquake swarms originated beneath a subglacial mountain complex with an elevation of about 1,000 m above the surrounding low-lying areas. Aeromagnetic data show a 400 nT magnetic anomaly at the high point, suggesting that rocks in this region are highly magnetized. Shallow-source magnetic anomalies from rocks are often a sign of a volcanic origin, so Lough et al. interpret the subglacial mountain complex as a volcanic edifice. They also identify a prominent 20 × 50 km elliptical layer of ash in the ice above the subglacial peak, about 400 to 1,400 m below the ice surface. Given modern ice accumulation rates of about 12.5 cm per year, the authors estimate that the ash layer formed about 8,000 years ago and was probably sourced from nearby Mount Waesche.

Radar ice-sounding data provide measurements of ice thickness in West Antarctica, but gaps in the data (the aerogeophysical data lines are spaced about 15 km apart) lead to uncertainties. Deep long-period earthquakes can occur up to 5 km away from active volcanic vents [9-12], and at this distance away from the source of the earthquakes the ice is about 1,100 m thick. Lough and colleagues show that only an exceptionally large eruption could breach the ice sheet in Marie Byrd Land and vent to the surface. The earthquake swarms, magnetic anomaly and ash layer are all located about 55 km south of Mount Sidley in the Executive Committee mountain range, south of the area of Holocene volcanic ...
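As a quick numerical footnote to the dating logic cited in both pieces above (an ash layer buried by ice accumulating at about 12.5 cm per year): dividing burial depth by accumulation rate gives an age. The depth below is an assumed representative value, since the layer actually lies at roughly 400 to 1,400 m, and a real estimate would also account for compaction and ice flow.

```python
# Back-of-the-envelope dating of the buried ash layer.
# depth_m is an assumed representative burial depth; the articles quote
# depths from roughly 400 m to 1,400 m depending on location.
depth_m = 1_000
accumulation_m_per_year = 0.125  # 12.5 cm of ice per year, per the study

age_years = depth_m / accumulation_m_per_year
print(f"Estimated age of eruption: {age_years:,.0f} years")  # ~8,000 years
```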
All the activities in this section are based on the word lists used in our A1 Movers test, which is at A1 level. Language level A1 is the lowest level of proficiency in the Common European Framework of Reference for Languages (CEFR), the system that defines and explains the different levels of language competence; its six reference levels (A1 to C2) are widely accepted as the global standard for grading an individual's proficiency, and modern English language books and schools use them. A1 corresponds to basic users of the language: learners who can understand and use everyday expressions and very simple phrases. If you're not sure what level your child is at, ask them to try an A1 level activity; if it's too hard, they can try a pre-A1 level activity first. These interactive activities are designed to be used on a computer or a tablet, and they let children continue practising their English in a fun way. You can also find your own level by taking a short placement test, such as a 56-question elementary A1 level test.

Reading is one of the best ways to learn. Like reading in your own language, reading in English expands your vocabulary, improves your grasp of sentence structure and exposes you to progressively more complex language. It won't teach you the rules of grammar explicitly, but it does build fluency, partly because the style of the sentences becomes fixed in the reader's mind. Practical reading texts for beginners include posters, emails, menus, forms and timetables: look at a poster for a lost dog, a swimming pool notice or a restaurant menu, then do the exercises to practise and improve your reading skills. Websites like LearnEnglish Teens offer this kind of reading practice, to help you understand simple information, words and sentences about familiar topics, and you can use them as much as you like.

Graded readers are books rewritten for specific levels of English, and the key to success is choosing the right book for you. Pearson English Readers, for example, offer a large selection of genres graded across levels such as Starter (low A1), Elementary (A1), Pre-intermediate (A2) and Intermediate (B1). Usborne English Readers are designed for young learners and can be read either in class or independently in students' own time. Penguin Young Readers Level 1 titles, such as Ali and His Camera (around 300 headwords, about a man who wants to take pictures in Istanbul but has a problem), suit complete beginners. Short stories are also very good for beginners, and there are hundreds of them on the web; vocabulary charts or grammar worksheets are often provided to facilitate reading, and each story is typically built around a specific vocabulary or grammar topic. Level 1 news services write in very basic English with a vocabulary of up to 1,500 words: the vocabulary is a bit wider and the sentences a bit longer than at starter level, but still built on basic verbs, nouns and cases.

Typical A1-A2 readers include simplified versions of classics such as Jane Eyre, The Adventures of Tom Sawyer, The Phantom of the Opera, The Man in the Iron Mask and The Mill on the Floss. Books aimed at young native English speakers, such as Mieko and the Fifth Treasure, can also work well: the level is manageable and, crucially, the book is short. If you have a basic level of understanding and comprehension, these novels will not be a problem.

Structured courses complement reading. The ABA English course, for instance, consists of 144 units organised into six levels in accordance with the CEFR. The My English Skills series, fully translated for German and Spanish speakers, covers grammar and skills from A1 upwards. How quickly you progress depends on many factors, but very roughly speaking it takes around 200 hours of study to go up one level: about 200 hours to go from A1 to A2, then another 200 from A2 to B1. For now, do as much reading and listening as you can; regular, intensive practice is what makes the difference.
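The "about 200 hours per level" figure quoted above is only a rule of thumb, but it makes the scale of the journey easy to tabulate. A small sketch follows; the per-level constant is the article's rough estimate, not an official CEFR number.

```python
# Rough cumulative study time from a beginner start, using the article's
# "about 200 hours per level" rule of thumb (not an official CEFR figure).
LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
HOURS_PER_LEVEL = 200  # rough estimate quoted in the text

total = 0
for level in LEVELS[1:]:  # hours needed to reach each level after A1
    total += HOURS_PER_LEVEL
    print(f"Roughly {total:>5} hours of study to reach {level}")
```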
A complete beginner's guide to borrowing money

May 04, 2021

Whether it's for a big goal like buying your first home, getting a new car or starting a business, at some point in our lives most of us will look to borrow money. But for those of us who have never done so before, the world of credit can be very confusing. That's why we created this complete beginner's guide to borrowing money: to make it easier to understand. Let's dig in.

Why borrow money

When an agreement is reached between a lender (like a bank) and a borrower (like me or you) over an amount of money the borrower can borrow from the lender, in what way (i.e. lump sum, instalments, etc.) and on what repayment terms (i.e. interest rates, frequency of repayments, etc.), this is referred to as a credit agreement. In fewer words, a credit agreement is the mechanism through which people borrow money from financial institutions.

Rather than waiting a long time to make a purchase by saving up the money you need to buy it upfront, borrowing money (or 'arranging credit') gives you a way to buy it sooner by paying for it with someone else's money. Of course, borrowing money comes with a cost. This is usually charged through interest, though there can also be arrangement fees and/or other charges. What this means is that borrowing money is a more expensive way to pay for something in the long run than paying for it out of savings would be. So should you do it?

Should you borrow money?

Debt is neither good nor bad. And while not having debts to pay off affords you a certain level of freedom and flexibility in managing your income, there are big purchases in life for which saving up enough to buy the thing outright would be very hard, sometimes impossible.

One example is buying a house. Average house prices in the UK are now £267,000 in England, £179,000 in Wales, £164,000 in Scotland and £148,000 in Northern Ireland. So for most people, saving up enough cash to buy a house outright might take most of their lives. Mortgages are loans specifically for buying property. Compared to other types of loans and credit agreements, mortgages tend to let you take on debt at more affordable rates. This debt makes sense for many people because otherwise owning a home might not be possible, at least not for a really, really long time. Just remember that with any financial decision or arrangement, it's important to know what you're doing with your money and to stay on top of managing it.

Ways to borrow money

There are lots of ways to borrow money. Here's a quick summary of some of the main ones.

- Loans are the best option for borrowing when you're looking to fund a bigger one-off purchase, like a car.
- Credit cards can be used in lots of ways: for regular everyday spending, bigger one-off purchases like a holiday, or balance and money transfers. Some credit cards also offer incentives like cashback or other rewards. However, credit cards are better thought of as spending tools rather than borrowing tools. Borrowing on credit cards (when you spend and don't pay off the balance in the same month) is usually a very expensive way of borrowing.
- A mortgage is a loan specifically for purchasing property or land.
- An overdraft is an option often used for short-term borrowing, particularly when money is needed quickly for things like unexpected bills and everyday spending.

How much can I borrow?
How much money you're able to borrow will depend on your financial situation and the lender's risk appetite.

When reviewing your financial situation, lenders may look at how much you earn, your lifestyle and expenses, how much debt you have, your credit history and a number of other things. They will use this information to check that you meet their eligibility criteria and to calculate what repayments you can afford, otherwise known as your affordability.

A lender's appetite for risk depends on what's currently going on in the world and the financial markets. During times of economic uncertainty, lenders worry about whether the people they're lending money to will become less able to pay them back (whether because they've lost their job or because of rising prices). That's why mortgage approvals fell during the pandemic, as uncertainty reduced lenders' appetite for risk. Lenders tightened their lending criteria too during this time, with many no longer offering products like 95% mortgages. (However, 95% mortgages are now being reintroduced following the launch of a new government scheme aimed at helping first-time buyers.)

How to borrow money

Borrowing money to finance a purchase isn't a decision that should be made lightly. Paying it back over time is a big commitment, after all. But if, after thinking about it long and hard, you've decided this is the best option for you, here's what to do next:

1. Check your credit reports

When gearing up to arrange credit, an important starting place is checking your credit reports. Your credit reports contain many things, but lenders are particularly interested in the overview they give of your historic experience of borrowing money and your track record in paying it back (i.e. did you make payments in full and on time).

If you've never borrowed money before, this can cause a problem, as you may be regarded as 'thin file'. 'Thin file' literally means that there is little data within your credit report for the lender to base their decision upon. This creates an issue when you apply for credit: lenders reviewing your credit history are unable to judge how reliable you'll be in paying it back. Essentially, without a track record, how you handle credit is an unknown. In the eyes of lenders, this means that you're higher risk. Consequently, you may find it challenging to get approved for credit, or, if you are approved, you may be offered high interest rates reflecting this perceived risk.

Another important thing to consider is that when you submit an application to borrow money and the lender checks your credit report, this leaves a mark. It's referred to as a 'hard check', and too many of these in a short space of time is considered a bad thing by lenders, so it causes your credit score to go down. This is why it's wise to look at your credit reports ahead of applying for credit, to consider how likely you are to be approved. If your credit score suggests approval is unlikely, you might think about improving your chances first, then applying. LOQBOX is the free and easy way to build your credit score while you save. Find out more here.

Checking your credit reports also gives you the opportunity to check that all the information on them is accurate and up to date. If not, be aware that incorrect information may damage your chances of getting approved for credit, so it's best to take steps to correct any mistakes you find (here's how).
There are three main credit reference agencies in the UK: Experian, Equifax and TransUnion. Each holds a separate credit report on you, so it's important to check all three. It's free to check your credit reports online.*

* Just to be super transparent: if you sign up and follow this link, although it's free for you, LOQBOX may get a small referral fee from ClearScore. This helps us to continue to improve our service for our customers and keep LOQBOX free.

2. Shop around to find the best deal for you

Use comparison sites like MoneySupermarket to get quotes and compare offers tailored to your needs. However, bear in mind that it can sometimes be cheaper to go to lenders directly.

Annual percentage rate (APR) represents the cost of borrowing money to you. Different products, and different lenders, offer different APRs, so you want to find the deal with the lowest APR. Also note that people with good credit scores will generally be offered better deals. Find out more about APR in our video here:

If you're buying a house, you may prefer to arrange your mortgage through a mortgage advisor. Mortgage advisors have a good understanding of the market and the financial products currently available, so they will be able to advise you on the best deals for you. They also help with making the application, which can be particularly reassuring for first-time buyers. Mortgage advisors may be free, or they may charge a fee of somewhere around the £500 mark.

3. Make the application

When you've chosen the product you want to apply for, it's time to make the application. You can usually do this online via a form. Application processes can vary, but to fill in the application form you'll likely need:

- Personal details such as your name and address
- Your ID (like a passport or driving licence)
- Your bank details
- Your current address and address history covering the last three years
- Details about your incomings and outgoings (including debts)
- Information about your employment (including payslips)
- Your National Insurance number

4. Make your repayments on time and in full

Once your application has been approved and your credit has been arranged, be sure to make your repayments on time and in full, as per the terms of your agreement. Not doing so can severely harm your credit score, among other consequences your lender will have made you aware of.

How long does it take to arrange credit?

The time it takes to have your application processed and receive funds will vary, but here are some general timings. You may be notified instantly after submitting your application as to whether you have been successful, but it can take seven to 10 working days for your new card to come through the post.

Increase your chances of getting approved for credit

Improving your credit score helps to increase your likelihood of getting approved for credit, and of being offered the best rates. LOQBOX is the free and easy way to build your credit score while you save. Find out more and get started now. And for more tips on how to improve your credit score, check out our article here.

Build your credit score by saving as little as £20 per month.
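To make the APR point in step 2 concrete, here is a rough Python sketch of how the advertised rate changes what you repay on a simple amortised loan. The figures are invented for illustration, the calculation uses a nominal monthly rate rather than a full APR (which also folds in fees), and this is not how any specific lender prices credit.

```python
# Rough sketch: monthly repayment and total interest on an amortised loan.
# Standard annuity formula; real APR calculations also include fees.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12          # nominal monthly rate (simplification)
    n = years * 12                # number of monthly repayments
    if r == 0:
        return principal / n
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

loan = 10_000.0                   # illustrative car loan
for rate in (0.03, 0.06, 0.12):   # illustrative annual rates
    pay = monthly_payment(loan, rate, years=5)
    total_interest = pay * 5 * 12 - loan
    print(f"{rate:.0%}: £{pay:,.2f}/month, £{total_interest:,.2f} total interest")
```

Even on these made-up numbers, the gap between the lowest and highest rate is thousands of pounds over the life of the loan, which is why shopping around for the lowest APR matters.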
What does Mathematics look like at Easington C of E?

Subject Leader – Mr Churchill

At Easington C of E Primary School, we aim for all of our wonderful children to develop the roots to grow and the wings to fly. Through the explicit and accurate teaching of mathematics, children will have the roots to grow by developing the understanding, confidence and independence to gain a deep and meaningful knowledge and understanding of all areas of mathematics (e.g. place value, number, geometry, calculation). They will then be able to 'fly' within the subject by applying their knowledge and skills, from all areas of mathematics, in real-life situations, problem-solving contexts and across the other National Curriculum subjects being taught.

A key question to be considered at Easington C of E Primary School was: what is 'mastery'? At Easington C of E Primary School, the term 'mastery' simply means that a child has understood a mathematical idea or concept and, in practical terms, is able to apply that mathematical knowledge in different ways (fluency, problem solving and reasoning). This would mean that the child has confidently 'mastered' the concept.

Mathematics is taught on a daily basis, with each lesson an hour in duration.

- 38 school weeks of 5 x 60-minute maths lessons, including cross-curricular links when appropriate.
- Long Term Planning: The National Curriculum.
- Medium Term Planning: White Rose Hub yearly overviews. These are specific to each year group. An example is provided below.
- Short Term Planning: There is a daily mathematics lesson of 60 minutes. Essential components of each lesson include a clear, year-group-specific Learning Objective and a clear and concise Success Criteria. This will be evident in books using the 'Success Criteria' label, which will also be used by the children and teachers to show assessment within each lesson. Short term planning is supported by numerous materials from a range of sources and incorporates the Teaching Cycle (Teach, Practice, Apply and Review/Assess).
- Daily Maths Meeting: In Key Stage 2, children complete a Daily Maths Meeting (DMM). The DMM will focus on recapping and consolidating key basic skills (times tables and procedural elements) and concepts, as well as basics such as counting, as identified and highlighted by ongoing Assessment for Learning (AfL). This will take place EVERY DAY from 12.00 to 12.30, and will be evident in classes on Learning Walks etc. It will be monitored closely by SLT. The content for this should come from AfL in lessons as well as the achievement of the children. The children will have a 'Daily Maths Meeting' exercise book for this work; the DMM should be a practical, collaborative and enjoyable session. The exercise book should be used for Guided Practice, working out, jottings etc. that will allow children to understand concepts. Where procedural/fluency work has been completed, this should be marked and assessed by the teacher at the point of learning (see the Marking and Feedback policy for further guidance). Peer assessment should also be utilised during this session. 38 school weeks of 5 x 25-minute daily maths meetings.

Teaching and Learning

At Easington C of E, a 'blocked' approach is used. However, there is flexibility concerning the length of the blocks: these can be made longer or shorter (in consideration of time) depending on continuing Assessment for Learning and Teacher Assessment.
The blocks may also be taught in a different order; it is the teacher's professional responsibility to ensure that the curriculum objectives are covered by the end of the academic year. Teachers use their professional judgement to decide what is most beneficial for their own class. The mathematics subject lead is informed of the coverage for each class on a weekly basis. This occurs before the content is taught, to enable the subject lead to have an up-to-date and clear picture of mathematics across the school at any point in time. A record of this coverage will be filed by the subject lead in the relevant subject leader file.

Mixed year groups (EYFS/1, Y1/2, Y3/4) will utilise the WRH yearly coverage document to ensure accurate curriculum coverage, but the input and modelling for each lesson should address the higher-level concept (the higher year group). The task should then be differentiated so that it meets the requirement of each year group (e.g. place value in Y6 is to ten million, whereas in Year 5 it is to one million). In instances where the National Curriculum requirements for each year group do NOT align, a split teaching approach must be employed to ensure that each year group receives their curriculum entitlement. Teaching Assistants should be utilised when appropriate to facilitate this process, e.g. when the teacher is providing quality first teaching input to one year group, the teaching assistant may provide immediate intervention on a concept for the other year group; then the 'teacher group' works independently with the support of a TA whilst the other year group receives their quality first teaching input. Teachers will use their professional judgement to decide how best to utilise this support.

Teachers design lessons with an emphasis on the representation of content (the Concrete Pictorial Abstract, or CPA, approach) and work designed to meet the needs of the objective (objective first, not task). A wide range of resources should be used in order to create specific tasks: teachers should 'dip in' to WRH, Maths No Problem (electronic version), Kenny's Pouch etc. to create tailor-made worksheets for their cohorts. All worksheets need to contain Fluency, Problem Solving and Reasoning tasks EVERY DAY from Monday to Thursday. Every Friday all children should complete real-life problems, investigations, puzzles etc. Worksheets are created using a standard format, as this provides consistency of expectation across school and presents a uniform approach to the presentation of content and appearance for moderation purposes. Children should record in squared books, not on the sheet, unless there is a diagram/number line etc. to complete. An example is provided below.

Success Criteria labels are used as an indicator of the progress that has occurred throughout the course of the lesson. The labels need to be specific to the year group objective and the Success Criteria specific to each year group. The success criteria SHOULD NOT be generic but SHOULD BE differentiated and matched to the children's ability. The modelling resources used by teachers to facilitate learning indicate progress clearly and use models and images vital to the children achieving mastery (by the end of the academic year). Marking and assessment within lessons (verbal feedback), at the point of learning, is an integral part of practice. Teachers use 'Next Steps' to further challenge the Higher Standard children or to consolidate learning.
It is an expectation that, when the children's work merits it, a 'next step' will be given. Crucially, teachers check and acknowledge the 'next step' feedback by marking and initialling it.

Examples of aligning curriculum content:

Year 2/3 example:

Y2 Place value – Comparing and Ordering Numbers:
- Place Value: Objective 2 – Can recognise the place value of each digit in a two-digit number (tens, ones).

Year 3 Place value – Comparing, ordering and rounding numbers:
- Place Value: Objective 2 – Can recognise the place value of each digit in a three-digit number (hundreds, tens, ones).

Year 3/4 example:

Year 3 Place value – Comparing, ordering and rounding numbers:
- Place Value: Objective 3 – Can compare and order numbers up to 1000.

Year 4 Place value – Comparing, ordering and rounding numbers:
- Place Value: Objective 5 – Can order and compare numbers beyond 1000.

Year 5/6 example:

Year 5 Fractions, decimals and percentages:
- Fractions: Objective 22 – Can compare and order fractions whose denominators are all multiples of the same number.

Year 6 Fractions, decimals and percentages:
- Fractions: Objective 15 – Can compare and order fractions, including fractions less than 1.

Once the objectives have been carefully selected from the MTP, teachers can deliver the content. In the Year 2/3 example, all children would receive the main input to meet the Y3 objective (pre-signalling content for Y2). The Y3 children could then be sent off to begin their task whilst the Y2 children remain with the teacher for the extended input to deliver the Y2 objective (this 'extended' time may only be 5 or 6 minutes, depending on the objective and how well the objectives have been aligned). Once the Y2 input has been completed, the teacher would be able to send these children off to begin their task whilst checking the Y3 children and providing appropriate feedback. The teacher must then ensure that ALL children receive appropriate feedback throughout the course of the lesson.

It should be noted that not every objective on the MTP requires a whole lesson, e.g. Year 1 Place Value – Rote counts from 0 to 30 or beyond and back from any given number up to 30. This objective may be addressed in a mental and oral starter, the DMM etc.

A crucial point to note is that whilst the place value and calculation content is being delivered in the 60-minute maths lesson, the KS2 DMM must be utilised to address any gaps in knowledge from any area of mathematics carried over from the previous year due to COVID-19, e.g. fractions, shape, statistics. It is essential that the 25-minute DMM is utilised for the maximum benefit of the children; the content taught will vary from cohort to cohort and should be bespoke to that cohort's needs and assessment for learning. Also, cross-curricular links are made to alleviate some of the time pressures for delivering mathematics, e.g. the 'statistics' requirement of the mathematics National Curriculum could be taught exclusively in science, but the teacher would need to ensure that the maths being delivered (through science) is age-appropriate and can be used to satisfy the expectations of the maths curriculum. In all year groups, teachers will use their professional judgement to deliver the mathematics curriculum for the maximum benefit of all children within their cohort. In a mixed-age class, the children from each year group should have evidence of learning from their own year group rather than it being the same, e.g.
in Y2/3, the Y2 children should have evidence of Y2 National Curriculum objectives, whereas the Y3 children should have evidence of Y3 National Curriculum objectives. This also applies to the Year 5/6 mixed-age class. This is a non-negotiable and will not be deviated from for any reason.

There will be NO requirement for teachers to produce short term planning, unless they choose to. For moderation purposes, the children's books and teachers' lesson preparation materials (ActivInspire files, PowerPoints, Notebook files etc.) will be collected and analysed.

There is freedom within the teaching sequence to have specific problem-solving and specific reasoning lessons, where the children are taught the discrete problem-solving and reasoning skills necessary to solve problems and reason within various areas of mathematics. Teachers endeavour to make maths fun and use a range of experiences and practical maths activities to engage the children.

Elements of maths: Fluency (learning the skills), Problem Solving (applying the learnt skill in different contexts) and Reasoning (thinking more widely about the skill and in different ways). Children should have access to fluency, problem solving and reasoning EVERY DAY. The children may NOT always address all three elements, but they need to have access to all three to cater for their needs (Higher Standard children may not need, or may choose not, to complete all of the fluency questions, whereas lower-ability children may only complete the fluency).

Higher Standard Children

Children who are assessed as being potentially Higher Standard by the end of the academic year should also have access to more complicated and sophisticated questions (NCETM Mastery booklets, 'Mastery with Greater Depth' questions, Higher Ability challenge booklets etc.) when teachers, using their professional judgement, feel it is appropriate to challenge them. There should be evidence in potential Greater Depth children's books of these challenges, and also of the use of scaffolding and layered support designed to challenge understanding to a greater degree. This will be a specific element of internal monitoring by SLT during the academic year 2020-2021, specifically for the moderation of assessment judgements at the end of each term.

The monitoring of mathematics within Easington C of E is of vital importance to ensure that children are being provided with the 'roots' of knowledge that will allow them to flourish and grow (fly) with confidence within the subject. The monitoring of mathematics is rigorous and specific. Mathematics books are monitored on a half-termly basis, with a specific focus for each book collection (e.g. challenge for Higher Standard children). Formal observations of mathematics take place across the academic year to ensure that Quality First Teaching is occurring and that any areas for improvement are rapidly identified and addressed, as well as to celebrate and share good practice.

At Easington C of E, the mastery approach is followed. This is where each child is taught explicit fluency, reasoning and problem-solving skills. Only by being able to complete all three disciplines are children able to 'master' the concept. Resources from the White Rose Hub are utilised, and the teaching of blocks of concepts (place value, geometry etc.) occurs across year groups. Each block taught is then assessed throughout the year, and an overall judgement is made based on evidence in books, observations and the results of formal assessments. By assessing each block, gaps in knowledge can be identified and addressed.
This approach also ensures that children have assessments and judgements made that are specific to each concept of learning, i.e. a child who is extremely strong in number and place value may not be as strong in geometry. This approach to assessment allows each child's mathematical curriculum to be individualised to their own specific needs. Children who are assessed as being potentially Higher Standard at the end of the academic year are given specific challenges and tasks, within each block, to allow them to demonstrate their level of mathematical knowledge and understanding.
Inclusive education from the Czech Republic to Georgia to the USA

If you were to tell people anywhere else in the world that inclusive education was not required by law in the Czech Republic until 2016, they would be shocked. If you were to tell them that the main reason this change had to be forced onto the system by the state is the absurd number of Romani children attending the so-called "practical schools", they would have a hard time believing you. For that reason, I have decided to take a look at education systems abroad and how they approach the children who, in the Czech Republic, would not be considered "normal". By pure coincidence I have found information comparing two absolutely different countries, Georgia and the USA, about which I can provide both my personal experience and the knowledge of local experts.

A brief history of inclusive education in the USA

Racial integration in the schools first began to be discussed in the USA during the first half of the 1950s, when Oliver Brown, an African-American, decided that under the Constitution of the United States of America his daughter had the right to the same education as that provided to any white child. The struggle through the courts lasted approximately seven years, and it was not until 1958 that the first African-American student was able to successfully complete high school studies together with white children, and most importantly, to do so in peace.

During the 1980s, when the idea of inclusive education (i.e., the education of all children together, including those living, for example, with physical disabilities) began to develop in the USA, including children regardless of their skin color was no longer controversial. As in the case of the inclusion of African-Americans into the common school system there, it was parents who were behind the idea of inclusive education for children living with disabilities. These parents began to realize that while their children might be limited in some ways, that did not necessarily mean they had to be excluded from school collectives. Their children might not be able to participate in all of the activities, or it might sometimes take them longer to comprehend something, but in terms of their academic achievement they were able to perform at a similar level to their schoolmates.

Several advantages to educating all children together were immediately ascertained: students collaborated more, learning to aid not just those who needed assistance but everybody else; people who had previously been fated to exclusion from working life were able to find ways to apply themselves when they grew up; and entire classes even became more enthusiastic about studying, because children living with disabilities generally complain less about their curriculum, school environment and teachers.

Just like anywhere else where something new is put into practice, it took a long time (in the USA, an entire generation) to test various practical teaching methods, introduce technical improvements to school buildings, and mainly to win the trust of parents and school administrators in the benefits of inclusion. The change to the education system required money to train existing teachers and new assistants. Today in the USA nobody is alarmed that children with various special needs are involved in regular classroom instruction, and it is generally known that the investments made into the program, especially those made at the beginning, are paying off.
Georgia knows education deserves investment

On the other side of the globe, in Georgia, a law on inclusive education was adopted 10 years ago. Most Czechs probably imagine Georgia as an eastern country whose greatest wealth lies in its mountains, and as a country that was recently at war with Russia. Until the year 2000 there was no potable water even in the capital, Tbilisi, and locals were able to use electricity just two hours a day; to this day the GDP of Georgia is approximately 13 times smaller than that of the Czech Republic. Despite such material obstacles, though, the education of all children together is commonplace there.

The driving force behind the change in Georgia was the women teaching in the small town of Zestafoni, who had no pupils living with disabilities in their classes but knew that several lived in the community. It took them approximately two years to convince the children's parents that they had nothing to fear should they enroll their children into school with their "normal" schoolmates.

At the time, naysayers claimed that what lay behind the accelerated adoption of the law on inclusive education in Georgia was purely cost-cutting. It costs more to establish separate "practical" or special schools: frequently there are not enough children with special needs living in one location to create a full class, and their families, who are on the brink of financial collapse anyway, do not have the money to regularly transport their children to schools in more distant municipalities. The state would have had to cover the costs of the children's transportation and pay more teachers to open more classes, and the children would have been entirely unable to apply themselves as adults unless they achieved at least a basic education.

Jeremy Gaskill, a graduate of Columbia University in the USA and the director of the McLain Association for Children, an organization that works in Georgia aiding the education of children living with disabilities or social disadvantage, says: "It was important for this process to be launched. To this day it is not perfect, neither in Georgia nor in the USA, but things are improving and I see enormous positive changes happening. During the 10 years that inclusive education in Georgia has been a topic, the country has made big progress, despite the fact that, unlike Western countries, they face many other problems, such as high unemployment."

Inclusion ultimately makes more room for all

In Georgia and the USA, the children enrolled into mainstream education as a matter of course are those who are at least able, with the aid of their assistants, their class and specially trained teachers, to more or less keep up with the instruction. Those considered unable to be educated together with other children are actually very few, unlike in the Czech Republic, where, as of last year, as many as 23 % of Romani children were still attending separate "practical" schools. When so many children living with lesser forms of disability are included in mainstream education, there is financing and room for quality instruction for children living with profound disability, even in a poor society like Georgia, and such children have a chance to be educated in well-equipped schools in classes of five to 10. When I visited School No.
200 in Tbilisi, which is attended only by children living with more profound disabilities, my jaw dropped at the quality of the classrooms, creative workshops, dining facilities, and infirmary - and mainly at the individual approach taken toward each pupil. "The school has managed to secure adequate financing for quality equipment mainly thanks to the school administration, who visit the Education Ministry almost daily to explain what the acute funding needs are and what will happen if the money is not provided. Also, naturally, they are raising money everywhere they can," says Rimma Gelenava, director of the Disarmament and Nonviolence organization, which has long collaborated with the school.

Another difference between the inclusive education systems of the Czech Republic, Georgia and the USA has to do with teacher salaries. While none of these countries remunerates teachers according to their merit, or with a view to the fact that they bear the burden of educating future citizens, the average salary of a teacher in Georgia is just about CZK 4 500 (EUR 168) monthly, even though the cost of consumer goods and food in Tbilisi is basically the same as elsewhere in Europe.

How much money and time will it cost the Czech Republic?

Inclusive education in the Czech Republic will certainly be a long fight waged on many fronts, but as they can tell you anywhere else in the world where they have years of experience with it, it pays off in the long run. The achievements will be advantageous both financially and in human terms. If the disproportionate number of special education institutions in the Czech Republic is done away with, if society is no longer automatically divided off from the "handicapped" or "less educable" children, and if that leads to their better inclusion into society as adults (e.g., children living with some types of disabilities can brilliantly perform routine labor as adults, because it does not bother them the way it does others), then this will also be a positive change for the schoolmates of these children, who will be much better prepared for adult life, mainly in the area of collaborating with others.

None of this will just happen on its own, though - it is necessary to work with children (especially the kinds of children who have usually been recommended for enrollment into the special schools), starting from preschool age at the very latest, and to motivate their parents to be actively involved in this process. Additionally, it is necessary to train not just enough assistants, but mainly the pedagogues themselves in how to work with different kinds of children. "From Georgia and the USA I know that when there are children in a class who need special care, for any reason, it's brilliant when at least one assistant can be present. If, however, that is not possible, for whatever reason, a well-trained teacher is capable of managing an entire class with several 'special' pupils. Study in an inclusive class is beneficial for all - from the pupil living with a disability, to those without disabilities, and including the teacher herself or himself," Gaskill confirms.
<urn:uuid:83f96c26-ef1e-42b7-9c42-d7f89d099af5>
CC-MAIN-2021-43
http://www.romea.cz/en/news/world/inclusive-education-from-the-czech-republic-to-georgia-to-the-usa
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00391.warc.gz
en
0.977211
2,359
2.890625
3
Although research suggests that children's eating habits are formed even before they enter the classroom - children as young as two may already have dietary preferences based on their parents' food choices - health education can play a vital role in helping establish lifelong healthy patterns early. Research shows that health education has a positive impact on health behaviors as well as academic achievement, and that the most effective means of improving health literacy is ensuring that health education is included in curriculum at all levels of education.

U.S. schools educate 54 million students daily, and can provide not only an outlet to promote healthy behaviors for children and adolescents, but a place for them to engage in these behaviors, including eating healthy and participating in physical activity.

The U.S. is in great need of an improvement in health literacy. In a 2007 UNICEF study, our country ranked last out of 21 industrialized countries in overall child health and safety. Approximately one in five of our high school students are smokers, 80 percent of students do not eat the recommended five servings of vegetables and fruits per day, and more than 830,000 adolescents become pregnant each year. Approximately two thirds of the American population is estimated to be overweight or obese. Furthermore, our understandings of health and health-related behaviors are often highly influenced by the media and media images - which can lead to inaccurate assumptions and negative health behaviors and attitudes.

The importance of media literacy as it applies to health education

Self-esteem patterns also develop in early childhood, although they fluctuate as kids gain new experiences and perceptions. Because media messages can influence unhealthy behaviors, especially in adolescents, a comprehensive health education program must include not only health knowledge, but media literacy as it relates to psychological and physical health behaviors as well. "To a large degree, our images of how to be come from the media. They are [a] crucial shaper of the young lives we are striving to direct," writes resource teacher Neil Andersen, editor of Mediacy, the Association for Media Literacy newsletter.
Media awareness, Andersen explains, can help teach students techniques to counter marketing programs that prey on their insecurities to promote negative behavior, can explode stereotypes and misconceptions, can facilitate positive attitudes and can help students learn how to absorb and question media-conveyed information.

Because our perceptions of ourselves and others develop early, and because we live in such a media-inundated world, it is important that we address the conflicts inherent in media values versus our own values with our children and adolescents first, in a factual, positive, and coherent way. A comprehensive (age-appropriate) health program would therefore teach about these various issues at different stages of development. Pre-adolescence and adolescence are especially pertinent stages in an individual's growth for discovering themselves and their place in the world, and it is during this vital time that media literacy is absolutely key to an influential and positive health program. Issues must be addressed that affect positive health behavior and attitudes, especially in teen girls, including:

• Digital manipulation of the body in advertisement - Almost all of what we see in media has been altered or digitally manipulated to some extent.

• Objectification of the body in media - Since the 1960s, sexualized images of men in the media have increased 55 percent, while sexualized images of women increased 89 percent, according to a University at Buffalo study. There are also 10 times more hypersexualized images of women than men and 11 times more non-sexualized images of men than of women.

• Average women versus models - Models today are 23 percent skinnier than the average woman, versus 9 percent skinnier in the 80s.

We live in a pop culture that not only promotes a hyper-skinny-is-best attitude, but also discourages average or healthy body ideals and discourages individuals from feeling good about simply pursuing healthy dietary choices - they feel they must resort instead to drastic (and quick) weight-loss measures that put unhealthy stress on the body. For example, a study released in 2006 by the University of Minnesota showed that 20 percent of females had used diet pills by the time they were 20 years old. The researchers also found that 62.7 percent of teenage females used "unhealthy weight control behaviors," including the use of diet pills, laxatives, vomiting or skipping meals. The rates for teenage boys were half those of girls. "These numbers are startling, and they tell us we need to do a better job of helping our daughters feel better about themselves and avoid unhealthy weight control behaviors," concluded Professor Dianne Neumark-Sztainer. Over the five-year period that the study was conducted, moreover, researchers found that high school-aged females' use of diet pills nearly doubled, from 7.5 percent to 14.2 percent.

What teaching health and media literacy can do

When a colleague asked Doctor Caren Cooper, a Research Associate at the Cornell Lab of Ornithology, what the opposite of media was, she paused only briefly before answering, "Reality, of course." "We each need logic tools to realize that all media is a representation of reality - if we don't bring this realization into our consciousness, we are apt to forget and let our own reality become distorted: fostering a culture of over-consumption, eating disorders, sexual violence, and climate change deniers," she explained.
Teaching health education comprehensively in today's rapidly changing world is important for fostering skills that students will carry with them for the rest of their lives, including:

• Developing positive body affirmations - Accepting their bodies, accepting others' bodies, and showing respect for one another. A good exercise would be to have them write down good things about each other - without the word beautiful, or descriptions of size - as well as what they love about themselves, both physical and character traits.

• Understanding the importance of eating right - And that it's not about "dieting." Perhaps the biggest misconception is that as long as a person loses weight, it doesn't matter what they eat. But it does, and being thin and being healthy are not the same thing. What you eat affects which diseases you may develop, regardless of your size, and diets that may help you lose weight (especially quickly) can be very harmful to your health over time.

• Understanding the importance of exercise - People who eat right but don't exercise, for example, may technically be at a healthy weight, but their fitness level doesn't match. This means that they may carry too much visceral (internal) fat and not enough muscle. "Given the growing concern about obesity, it is important to let young people know that dieting and disordered eating behaviors can be counterproductive to weight management," said researcher Dianne Neumark-Sztainer, a professor in the School of Public Health at the University of Minnesota. "Young people concerned about their weight should be provided support for healthful eating and physical activity behaviors that can be implemented on a long-term basis, and should be steered away from the use of unhealthy weight control practices."

We must also teach them:

• How to reduce stress by engaging in activities and other outlets.

• The importance of sleep.

• The importance of vitamins.

• The importance of not always being "plugged in" - The natural environment has great health benefits, and too much technology may even be hazardous to our health.

"We're surrounded by media images for such a large portion of our daily lives, it's almost impossible to escape from it," explained IFN representative Collete during an interview with EduCoup. "We get the majority of our information today through media, be it music, TV, the internet, advertising or magazines, so it really is incredibly important for us as a society to think about the messages we receive from the media critically."

Decoding the overload of overbearing messages, then, is pertinent to the health of our minds and bodies, and teaching these skills early will help kids to practice and maintain life-lengthening and positive behaviors for the rest of their lives.
<urn:uuid:61d0b786-447f-4bf8-9070-0430a1de1644>
CC-MAIN-2021-43
https://www.coolsculptny.com/2016/10/11/importance-of-health-and-media-literacy/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585828.15/warc/CC-MAIN-20211023224247-20211024014247-00430.warc.gz
en
0.690594
3,403
2.765625
3
what equipment needed for mining iron

- A Beginner's Guide to Cryptocoin Mining: What You Need to ... (Jan 26, 2020): Mining generates substantial heat, and cooling the hardware is critical for your success. You absolutely need a strong appetite of personal curiosity for reading and constant learning, as there are ongoing technology changes and new techniques for optimizing coin mining results.

- What Were Some of the Tools That Were Used in the Gold ... (Sep 29, 2017): Panning for gold was also known as "placer mining". Early miners sat by riverbeds, scooping wet soil into shallow metal pans. They swirled the pans, washing away the dirt to hopefully discover particles of gold. Though more complex equipment was eventually invented, pans were still a useful tool to distinguish gold from dirt.

- What Equipment Is Used In Mining Of Iron Ore - Tools And Equipment Used In Iron Ore Mining (2019-10-21): 911MPE has small gold mining equipment for sale, and more specifically mineral processing equipment. Our equipment is best used in small-scale extractive metallurgy operations operated by small miners or hobbyist prospectors and mining fanatics. 911MPE offers gold mining equipment as well as processing equipment applicable to most ...

- How Is Iron Extracted From the Earth? (Mar 26, 2020): Iron ores in the form of hematite (ferrous oxide) and magnetite are removed from the earth through mining. The use of heavy mining equipment is necessary to dig out large pits in an area with a large deposit of iron ore; however, because iron does not occur naturally, it is necessary to use a blast furnace to separate or refine iron from the other substances in the iron ore.

- what is the equipments for iron ore mine - Iron Ore Mining Equipment, Iron Ore Mining Crushing (Jul 06, 2011): Caiman ore mining equipment manufacturer supplies complete iron ore mining equipment for the iron ore mining process. Iron Ore Processing Plant: Iron ore is an important mine ore and plays an important role in our life. It is also an important raw material for iron and steel producers ...
- Iron ore: Iron ores are rocks and minerals from which metallic iron can be economically extracted. The ores are usually rich in iron oxides and vary in color from dark grey, bright yellow, or deep purple to rusty red. The iron is usually found in the form of magnetite (Fe3O4, 72.4% Fe), hematite (Fe2O3, 69.9% Fe), goethite (FeO(OH), 62.9% Fe), limonite (FeO(OH)·n(H2O), 55% Fe) or siderite (FeCO3, ...); these percentages follow from the chemical formulas, as the short sketch after this list illustrates.

- Mining Safety | Mining Safety and Protective Clothing: Many injuries can be prevented by wearing the correct protective clothing/gear. In this section we would like to discuss the correct clothing and gear required to perform many different mining activities.

- List of Mining Equipment | Career Trend (Dec 27, 2018): Each segment requires the use of specific equipment, but there are several types of mining equipment that are used throughout the industry. This equipment includes excavators, draglines, drills, roof bolters, continuous miners, longwall miners, rock dusters, shuttle cars and scoops.

- How iron is made: Evidence of what is believed to be the first example of iron mining and smelting points to the ancient Hittite culture in what is now Turkey. Because iron was a far superior material for the manufacture of weapons and tools than any other known metal, its production was a closely guarded secret.

- Glossary of Mining Terms: Siderite - iron carbonate, which when pure contains 48.2% iron; it must be roasted to drive off carbon dioxide before it can be used in a blast furnace (the roasted product is called sinter). Silica - silicon dioxide; quartz is a common example.

- list of machineries required for iron ore mining | Mobile ... (Apr 13, 2015): Iron Ore Mining Equipment List | Manganese Crusher. Search iron ore mining equipment list to find your need. Gulin Mining and Construction Machinery is a global ... what machinery that is required in mining iron ore ... Equipment And Machinery Used In Carajas Iron Ore Mine, Brazil: iron ore is the basic raw material used for the iron and steel ...

- Fast Facts: Minnesota's iron mining industry depended heavily on the men who worked in its mines, but it needed more than labor. It needed the money of people who were willing to take tremendous financial risks. The seven Merritt brothers of Duluth were among the first of those risk takers. They made and lost their family's fortune between 1890 and 1895.

- equipment needed for mining iron ore: Our company is a large-scale heavy enterprise that takes heavy mining machinery manufacturing as its main business, integrated with scientific research, production, and marketing. We are concentrating on producing and selling machines such as jaw crusher, cone crusher, hammer crusher, ball mill, sand maker ...

- Mining: Mining is a skill that allows players to obtain ores and gems from rocks. The higher a player's Mining level is, the more likely they are to successfully extract ore. With ores, a player can then either smelt bars and make equipment using the Smithing skill or sell them for profit. Mining is one of the most popular skills in RuneScape, as many players try to earn a profit from the skill.

- Water Requirements of the Iron and Steel Industry: ... the iron and steel industry with respect to geographic distribution, plant size, and processes used. Fourteen of the installations in the iron industry were operated as mine-concentration plant combinations, although in some places the distance from the mine to the concentration plant was a few miles. Only one mine and one concentration ...
- Aluminum Mining and Processing: Everything you Need to Know: GK Home > GK Blog > Aluminum Mining and Processing: Everything you Need to Know. From the mining equipment used to the advancements made in mining technology, aluminum mining has progressed from primitive methods to the use of technologically advanced equipment and processes that promote a tremendous increase in aluminum production.

- Iron Ore Mining | Techniques | Metal Extraction: MINING AND PROCESSING: Iron ore mining can be broadly divided into two categories, namely 1) manual mining, which is employed in small mines, and 2) mechanized mining, which is suitable for large iron ore mines. Manual mining is normally limited to float ores and small mines. Mining of reef ore is also being done manually on a small scale.

- Equipment In Iron Mining: As well as from energy mining, building material shops, and manufacturing plants. And whether iron mining equipment is 1 year, 1.5 years, or 2 years. There are 55,509 iron mining equipment suppliers, mainly located in Asia. The top supplying country or region is China, which supplies 100% of iron mining equipment respectively.

- Iron Ore | HowStuffWorks: Instead, the iron heats up into a spongy mass containing iron and silicates from the ore. Heating and hammering this mass (called the bloom) forces impurities out and mixes the glassy silicates into the iron metal to create wrought iron. Wrought iron is hardy and easy to work, making it perfect for creating tools.

- What Equipment Is Needed For Iron Ore Mining: The Miferma Iron Ore Mining Project will support the exploitation of iron ore deposits ... port facilities, housing, including power generation facilities as needed ... mining and transport equipment will be supplied, so as to allow ore mining, and ...

- New to Mining? Here are the Most Common Types of Mining ... (Dec 21, 2015): Underground mining is carried out when rocks or minerals are located at a fair distance beneath the ground, but then need to be brought to the surface. Underground specialized mining equipment such as trucks, loaders and diggers is used to excavate the material, which is normally hauled to the surface with skips or lifts for further ...

- Mining Technology in the Nineteenth Century | ONE: Mining technology consists of the tools, methods, and knowledge used to locate, extract, and process mineral and metal deposits in the earth. The methods used to locate ore bodies range from on-the-ground reconnaissance by prospectors to remote sensing techniques such as satellite imagery.

- What Job Skills Are Needed for Miners? | Work:
Miners form part of the backbone of the mining industry, and the job benefits are numerous. On average, coal miners earn $77,466 per year, according to the National Miners Association in 2010. That's a relatively high salary, especially considering that not all positions ...

- what equipment is needed for iron ore mining: What Machinery Is Needed For Iron Ore Mining. Iron ore mining and processing equipments and machinery (2016-02-13) ... is one of the biggest manufacturers in aggregate processing machinery for the ... shop for coal mining equipments in India ... equipments required for ... Iron Ore Mining Equipment Required: iron ores are rocks and minerals from which metallic iron ...

- Ancient Mining Tools and Techniques (Feb 22, 2020): In Northern and Northeastern Europe, bogs were the site of bog iron, the earliest form of iron used for tools. Iron compounds from plant decay precipitate out and are deposited at the bog bottom. These nuggets of iron were harvested and smelted to produce mining tools of wrought iron. Some people harvest iron in this manner to this day.

- iron ore mining equipment, iron ore mining equipment ...: A wide variety of iron ore mining equipment options are available to you. There are 488 suppliers who sell iron ore mining equipment on Alibaba, mainly located in Asia. The top country of supply is China, from which the percentage of iron ore mining equipment supplied is ... respectively.

- Mining Equipment for sale | eBay: What kinds of mining equipment are available? Mining equipment can vary depending on the work being done. Here are the more common types of equipment that you may need. Mining drills are more common with underground mining; drilling helps to bring material to the surface so that it can be further processed. Crushing equipment breaks down the ...
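One detail worth checking in the snippets above is the iron content quoted for each ore mineral. Those percentages follow directly from the chemical formulas and standard atomic weights. The short Python sketch below is purely illustrative (it is not taken from any of the quoted pages) and recomputes the figures:

```python
# Approximate standard atomic weights in g/mol
W = {"Fe": 55.845, "O": 15.999, "H": 1.008, "C": 12.011}

# Element counts per formula unit for the ore minerals quoted above
minerals = {
    "magnetite Fe3O4": {"Fe": 3, "O": 4},
    "hematite Fe2O3": {"Fe": 2, "O": 3},
    "goethite FeO(OH)": {"Fe": 1, "O": 2, "H": 1},
    "siderite FeCO3": {"Fe": 1, "C": 1, "O": 3},
}

for name, comp in minerals.items():
    molar_mass = sum(W[el] * n for el, n in comp.items())
    fe_share = W["Fe"] * comp["Fe"] / molar_mass
    print(f"{name}: {fe_share:.1%} Fe")
```

Running this prints 72.4%, 69.9%, 62.9% and 48.2% respectively, matching the figures in the ore list and the siderite entry in the mining glossary; limonite is omitted because its variable water content (the n in FeO(OH)·n(H2O)) makes its iron share a range around 55% rather than a fixed number.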
<urn:uuid:8f1b6165-93c0-4f4b-b2c7-ce044c153496>
CC-MAIN-2021-43
https://www.aci-schoonmaak.nl/mobile/7634/what-equipment-needed-for-mining-iron.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588246.79/warc/CC-MAIN-20211028003812-20211028033812-00110.warc.gz
en
0.944082
2,067
2.546875
3
“I should much wish, like the Indian Vishna, to float along an infinite ocean cradled in the flower of the Lotus, and wake once in a million years for a few minutes – just to know that I was going to sleep a million years more.” – Samuel Taylor Coleridge

While the medicinal properties of opium have been known since prehistoric times, it was the 16th-century Swiss alchemist Paracelsus who first developed laudanum. He discovered that when mixed with alcohol as opposed to water, opium's pain-killing properties were heightened. He mixed it with crushed pearls, musk, saffron, and ambergris* and called it laudanum, from the Latin word laudare: to praise.

Now thought of as primarily a Victorian drug, laudanum first reached England in the 1660s, when the physician Thomas Sydenham developed his own recipe. While Sydenham left out the ambergris, the fundamentals remained the same: alcohol and opium made a potent cure-all, and in his Medical Observations Concerning the History and Cure of Acute Diseases (1676) he gave it the praise Paracelsus had predicted a century before.

Laudanum took off during the eighteenth century, and by the nineteenth it could be found in almost every home in Britain. Although the recipe was flexible, it remained at heart an uncomplicated but potent combination of alcohol and opium. It was an over-the-counter drug cheap enough to be used across the social spectrum and simple enough to be brewed at home. Laudanum was used for an endless list of ailments including but not limited to teething, insomnia, anxiety, nerves, hysteria, menstrual cramps, pregnancy pains, mood swings, depression, stomach upset, diarrhea, consumption, cough, heart disease, and cholera. It was certainly an effective cough suppressant; related opioids such as morphine and codeine are still prescribed for cough today. It was a potent painkiller, induced deep sleep and vivid dreams, produced feelings of euphoria, and was as addictive as it was cheap.

Not to be limited to medicinal purposes, laudanum was taken recreationally or mixed with other alcohol such as absinthe to stimulate creativity among artists. Some notable fans of the substance include Dickens, Bram Stoker, Samuel Taylor Coleridge, George Eliot, Dante Gabriel Rossetti, and Rossetti's wife, the model Elizabeth Siddal, who tragically died of a laudanum overdose.

Women tended to be medicated more than men, and many opium-derived medications were known euphemistically as "Woman's Friend." Likewise, Godfrey's Cordial, a mixture of water, treacle, and opium specifically for infants, was known as "Mother's Friend." Charles Kingsley describes opium addiction in Alton Locke (1850) as 'elevation', a particular problem of women:

"Oh! ho! ho! — yow goo into druggist's shop o' market-day, into Cambridge, and you'll see the little boxes, doozens and doozens, a' ready on the counter; and never a ven-man's wife goo by, but what calls in for her pennord o' elevation, to last her out the week. Oh! ho! ho! Well, it keeps women-folk quiet, it do; and it's mortal good agin ago pains."

"But what is it?"

"Opium, bor' alive, opium!"

There were several different laudanum varieties available, and they could be made at home. It was dreadfully bitter, so sweeteners like honey and spice were added to improve the flavor.
Sydenham's recipe from 1660 was still in use by the 1890s, when it was published in William Dick's Encyclopedia of Practical Receipts and Processes:

"Sydenham's Laudanum: This is prepared as follows: opium, 2 ounces; saffron, 1 ounce; bruised cinnamon and bruised cloves, each 1 drachm; sherry wine, 1 pint. Mix and macerate for 15 days and filter. Twenty drops are equal to one grain of opium."

Dick's Encyclopedia contains dozens of recipes for homemade laudanum, and even more for other remedies containing opium. As relatively appealing as cinnamon and cloves sound, by the 19th century laudanum could also be mixed with mercury, ether, chloroform, hashish, or belladonna; if it didn't kill you, it would make you see some very interesting things.

Whether or not the malady justified the use of such a powerful drug, laudanum and other opium derivatives were used frequently and without a great deal of hesitation. It was a good cough suppressant, kept children quiet, and induced restful sleep. Rhapsodic descriptions of its effects make it sound like magic. In The Picture of Dorian Gray, Oscar Wilde conveys the horrors and pleasures of an East End opium den in a single paragraph (it isn't exactly laudanum, but it's the same active ingredient):

"As Dorian hurried up its three rickety steps, the heavy odour of opium met him. He heaved a deep breath, and his nostrils quivered with pleasure. When he entered, a young man with smooth yellow hair, who was bending over a lamp lighting a long thin pipe, looked up at him and nodded in a hesitating manner. […] Dorian winced and looked round at the grotesque things that lay in such fantastic postures on the ragged mattresses. The twisted limbs, the gaping mouths, the staring lustreless eyes, fascinated him. He knew in what strange heavens they were suffering, and what dull hells were teaching them the secret of some new joy."

Strange heavens aside, laudanum was not a friendly substance. In 1889, The Journal of Mental Sciences published what was purported to be an anonymous letter with the wonderful title of Confessions of a Young Lady Laudanum-Drinker, which describes at length her experience of addiction:

"It got me into such a state of indifference that I no longer took the least interest in anything, and did nothing all day but loll on the sofa reading novels, falling asleep every now and then, and drinking tea. Occasionally I would take a walk or drive, but not often. Even my music I no longer took much interest in, and would play only when the mood seized me, but felt it too much of a bother to practice. I would get up about ten in the morning, and make a pretence of sewing; a pretty pretence, it took me four months to knit a stocking!

"Worse than all, I got so deceitful, that no one could tell when I was speaking the truth. It was only this last year it was discovered; those living in the house with you are not so apt to notice things, and it was my married sisters who first began to wonder what had come over me. By that time it was a matter of supreme indifference to me what they thought, and even when it was found out, I had become so callous that I didn't feel the least shame. (…) My memory was getting dreadful; often, in talking to people I knew intimately, I would forget their names and make other absurd mistakes of a similar kind. As my elder sister was away from home, I took a turn at being housekeeper. Mother thinks every girl should know how to manage a house, and she lets each of us do it in our own way, without interfering.
Her patience was sorely tried with my way of doing it, as you may imagine; I was constantly losing the keys, or forgetting where I had left them. I forgot to put sugar in puddings, left things to burn, and a hundred other things of the same kind."

While this anonymous writer did recover, laudanum addiction was difficult to beat. Users became tolerant to it quickly, and recovery was more likely to be achieved by tapering doses. Although laudanum was a common cough suppressant, it could work all too well, causing shortness of breath and respiratory depression, or stopping the user's breathing altogether. It could also inhibit digestion and cause constipation, depression, and itching. It was so potent that it was easy for an adult to overdose accidentally, and many infants and children died from it as well. Tragically, it was also a common method of suicide.

We might not understand the appeal of such a debilitating and ultimately lethal substance, but for most people in the nineteenth century, laudanum must have felt like a godsend. Disease, poverty, and hunger were widespread, and those lucky enough to be employed suffered through long hours in terrible conditions to earn their pittance. Even for the wealthy and well-to-do, Britain was cold, wet, and overrun with discomforts that might necessitate its use. Menstrual cramps, insomnia, anxiety, nerves, cough, stomach upset, cholera, tuberculosis — if one drug could treat them all, and that drug happened to be miraculously affordable and so common there was little to no stigma attached to it, there was no reason not to rely on it from time to time.

Laudanum is still in production today, but it is no longer available over the counter. Now referred to almost exclusively as Tincture of Opium, it is listed as a Schedule II substance due to its highly addictive nature and is used for stomach ailments, pain, and to treat infants born to mothers with opioid addiction.

Anonymous. "Confessions of a Young Lady Laudanum-Drinker." The Journal of Mental Sciences, January 1889.

Berridge, Victoria. "Victorian Opium Eating: Responses to Opiate Use in Nineteenth-Century England." Victorian Studies 21(4), 1978.

Dick, William B. Encyclopedia of Practical Receipts and Processes. New York: Dick & Fitzgerald, Publishers, 1890.

Diniejko, Andrzej. "Victorian Drug Use." The Victorian Web. http://www.victorianweb.org/victorian/science/addiction/addiction2.html

Kingsley, Charles. Alton Locke (1850).

O'Reilly, Edward. "Laudanum: A Dose of the Nineteenth Century."

Sydenham, Thomas. Medical Observations Concerning the History and Cure of Acute Diseases (1676).

Wilde, Oscar. The Picture of Dorian Gray (1890).

*Presumably crushed diamonds would have been too extravagant.
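To put rough numbers on Sydenham's recipe quoted earlier: the recipe itself states that twenty drops equal one grain of opium, and an apothecaries' grain is about 64.8 mg. The small Python sketch below is purely illustrative (a unit-conversion exercise, not dosing guidance), and the example doses are arbitrary:

```python
GRAIN_MG = 64.8        # one apothecaries' grain, in milligrams
DROPS_PER_GRAIN = 20   # per the recipe: "Twenty drops are equal to one grain of opium"

def opium_mg(drops: float) -> float:
    """Approximate raw-opium content of a laudanum dose measured in drops."""
    return drops / DROPS_PER_GRAIN * GRAIN_MG

for dose in (20, 60, 100):  # arbitrary example doses
    print(f"{dose} drops ≈ {opium_mg(dose):.0f} mg of raw opium")
```

Since raw opium is roughly a tenth morphine by weight, even the smallest of these example doses carried several milligrams of morphine, which helps explain both laudanum's potency and how easily its users overdosed.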
<urn:uuid:2350e4b7-e6af-4e92-9d35-43becc2b4f99>
CC-MAIN-2021-43
https://dirtysexyhistory.com/2016/09/22/suffering-in-some-strange-heaven-an-introduction-to-laudanum/?replytocom=389
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588398.42/warc/CC-MAIN-20211028162638-20211028192638-00509.warc.gz
en
0.96932
2,260
3
3
Euphronios, also spelled Euphronius (flourished c. 520–470 BCE), was one of the most celebrated Greek painters and potters of his time. He experimented with new ideas, forms, and designs within the context of the Archaic tradition, especially the adoption and exploration of the new red-figure technique. His signature has been identified on a number of vessels, 8 signed by him as painter and at least 12 as potter. Generally, Euphronios's earlier works were signed as painter and his later works as potter.

Among the vases signed by Euphronios as painter is one of Heracles (Herakles, Hercules) wrestling Antaeus (Antaios), dated about 510–500 BCE and now in the Louvre, Paris. It has been praised for its excellent drawing. A kylix (a shallow earthenware cup with stem and handles), now in the State Collection of Antiquities (Staatliche Antikensammlungen) in Munich, is another example of Euphronios's work as painter (c. 510–500 BCE). A young horseman is painted on the inside of the kylix. Heracles in combat with the triple-bodied Geryon—a monster who kept large herds of cattle, the theft of which was one of Heracles' labours—is painted on the outside.

As a potter, Euphronios worked with some of the finest vase painters of his time. The paintings of several, among them Douris, Makron, Hyakynthos, and Onesimos, have been identified on vases signed by Euphronios. Most, however, were painted by the Panaitios Painter. The Pistoxenus Painter was another of the painters of Euphronios's pots. A white-ground cup, now in the Berlin Antiquities Collection (Antikensammlung), signed by Euphronios as potter and Pistoxenus as painter, is the last known signed work by Euphronios. In terms of its style, it could not have been made earlier than 470 BCE.

Kleitias, also spelled Cleitias (flourished c. 580–c. 550 BCE), was an Athenian vase painter and potter, one of the most outstanding masters of the Archaic period and the artist of the decorations on the François Vase. This vase, a volute krater painted in the black-figure style, is among the greatest treasures of Greek art. Dating from c. 570 BCE, it was discovered in 1844 in an Etruscan tomb near Chiusi and named after its discoverer; it is now in the Museo Archeologico in Florence. More than 200 figures are found among the six friezes (painted on superimposed zones) that decorate the vase's surface. In content alone, the François Vase is an encyclopaedia of the epic themes popular during the Archaic period. The vase is signed "Ergotimos epoiēsen; Kleitias egraphsen" ("Ergotimos made [me]; Kleitias painted [me]"). Kleitias's signature has been found on five vases. Four of these, like the François Vase, are signed by Kleitias as painter and Ergotimos as potter. Also from the hands of the two masters in collaboration are two cups and some cup fragments, from which most of the signatures have been lost. Other vases and fragments of other vases have been attributed to Kleitias on the basis of style.

Born on 26 February 1906 in Athens, Nikos Hadjikyriakos-Ghikas was a prolific painter, sculptor, engraver and writer.
Still a teenager, he went to Paris in 1923 to study French literature and esthetics at the Sorbonne. It was there that he participated in an exhibition at the Salon des Indépendants. He later furthered his education at the Académie Ranson, studying painting, and held his first exhibition at the Galerie Percier in 1927, where he was even noticed by the great Picasso himself. Back in Greece, he was part of the Generation of the Thirties, a group of Greek writers and painters who desired to enrich the country's present by modernizing its ancient glories. A co-founder of the "Armos" art group, he represented Greece at the 1950 Venice Biennale. The artist gained international fame and exhibited across the globe, and became a member of the Academy of Athens as well as of the Royal Academy in London and the Tiberiana Academy in Rome. Considered a leading Greek painter and known for his Greek landscapes, Ghikas is commemorated in his former home, which has been transformed into a museum run by the Benaki Museum.

How Do You Identify Artist Signatures on Paintings?

To identify artist signatures on paintings, locate the signature or the monogram on the painting, and note the painting type. Use John Castagno's signature directories, available from Scarecrow Press or as an online database on the Artists' Signatures website, to verify signatures or identify symbols, monograms and illegible signatures. If the artwork is of local origin, contact a local art gallery owner, museum curator or historian.

To locate the signature or monogram of the artist, check the painting's margins or backside. Sometimes the name of the artist, the title and the year are printed on the painting's reverse side. In the case of framed artworks, remove the backing to access this information.

John Castagno's 12 signature directories include a list of monograms, indiscernible signatures and signatures of illustrators, abstract artists and artists from Europe, America and Latin America active from the 1800s to the present. To purchase these directories, access the Scarecrow Press website and type Castagno in the search box in the top right corner.

The Artists' Signatures website is a database containing 55,000 signature examples that correspond to 50,000 artists. To use this site, type in the artist's name. Filter the search using the options under Featured Categories. Click on the name of the artist from the list, and log in to your account to view the full profile of the artist. To identify symbols, illegible signatures and monograms on this database, click on Reverse Lookup, and choose the appropriate option from the drop-down menu. View the database examples, arranged alphabetically, and match with the one being researched (a small illustrative sketch of this reverse-lookup idea follows the book list at the end of this piece). On the Artists' Signatures website, preliminary access is free. A nominal payment is required to access particular signature examples and artists' names.

Motifs in Ancient Greece

Many of the motifs involved Greek gods, or plants and animals, such as the set shown below. The elaborate designs on the set include Dionysus (the god of wine) and his wife, Ariadne. Likewise, the motif on the earrings is of a muse playing a lyre, sitting above the crescent shape of the set.

Set of Jewelry, Hellenistic, ca. 330-300 BC, Metropolitan Museum of Art (Heilbrunn Timeline of Art History): New York City, 2019

Animal motifs were as common as those of the gods.
Below, two sets of golden rams adorn these bracelets: the heads of the rams extend out of ornately designed collars, while the base is made of polished rock crystal which has been shaped to appear as if it is twisting.

Ganymede Jewelry (bracelets), Hellenistic, ca. 330-300 BC, Metropolitan Museum of Art (Heilbrunn Timeline of Art History): New York City, 2019

Like the rams' heads above, this necklace, located at the Walters Art Museum in Baltimore, Maryland, utilizes bull heads in its design. The necklace also uses a garnet gemstone, tying together much of what makes Hellenistic-period pieces identifiable and what has been discussed in this article thus far, from Persian influences to Ancient Greek motifs.

Necklace with Clasp of Two Bull Heads, Classical-Hellenistic Greek, ca. 4th-3rd century BC, Walters Art Museum: Baltimore, 2019

Some common forms of painting in Ancient Greece were panel and wall paintings. Panel paintings were done on wood boards (panels) in encaustic (wax) or tempera. As with the art above, a great deal of these paintings were figurative, though little to none survived to the modern era. Wall paintings were mostly frescoes, paintings done in fresh, wet plaster.

One of the Pitsa tablets.

Descriptions of panel paintings and their creators are noted in literature of the time. One set of panels, the Pitsa tablets, did survive, showing the artistic skills of the Archaic period. The panels are wooden boards painted over in stucco, with figures painted in mineral pigments. They show religious scenes centered around nymphs. According to historians, these tablets were votive offerings.

Wall fresco at the Tomb of the Diver.

Wall paintings were used on buildings and as grave decorations. As discussed above, since many buildings didn't survive over time, neither have many wall paintings. Those that do survive have mostly been found on tombs, such as the Tomb of the Diver.

Ephesus Under Roman Rule

In 129 B.C., King Attalos of Pergamon left Ephesus to the Roman Empire in his will, and the city became the seat of the regional Roman governor. The reforms of Caesar Augustus brought Ephesus to its most prosperous time, which lasted until the third century A.D. Most of the Ephesian ruins seen today, such as the enormous amphitheater, the Library of Celsus, the public space (agora) and the aqueducts, were built or rebuilt during Augustus's reign. During the reign of Tiberius, Ephesus flourished as a port city. A business district was opened around 43 B.C. to service the massive amounts of goods arriving or departing from the man-made harbor and from caravans traveling the ancient Royal Road. According to some sources, Ephesus was at the time second only to Rome as a cosmopolitan center of culture and commerce.

Facts about Ancient Greek Art 5: the famous works of the Hellenistic Period

The famous works during the Hellenistic period included the Dying Gaul, the Venus de Milo and the Winged Victory of Samothrace.

Facts about Ancient Greek Art 6: perfection

Perfection is the main characteristic of Greek sculpture. The art of the Greeks is very different from the art of the Romans. The Roman people did not mind showing imperfections on their statues. But the Greeks would never do it.

The Greco-Persian Wars - Persian Wars Under Xerxes and Darius

The Persian Wars are usually dated 492-449/448 B.C. However, a conflict started between the Greek poleis in Ionia and the Persian Empire before 499 B.C.
There were two mainland invasions of Greece, in 490 (under King Darius) and 480-479 B.C. (under King Xerxes). The Persian Wars ended with the Peace of Callias of 449, but by this time, and as a result of actions taken in Persian War battles, Athens had developed her own empire. Conflict mounted between the Athenians and the allies of Sparta. This conflict would lead to the Peloponnesian War. Greeks were also involved in conflict with the Persians when they hired on as mercenaries of King Cyrus (401-399), and Persians aided the Spartans during the Peloponnesian War. The Peloponnesian League was an alliance of mostly the city-states of the Peloponnese led by Sparta. Formed in the 6th century, it became one of the two sides fighting during the Peloponnesian War (431-404).

The Internet is not the best source for signature information. Signature research can be done by checking the following books to match a signature with a name, initial, or symbol. The volumes may be arranged by last name, alphabetically by first initial, or by shape of a symbol. All of the following sources are available at the Research Library at the Getty Research Institute.

Bénézit, E. Dictionnaire Critique et Documentaire des Peintres, Sculpteurs, Dessinateurs et Graveurs de Tous les Temps et de Tous les Pays. 14 vols. Paris: Gründ, 1999.

Castagno, John. American Artists: Signatures and Monograms, 1800. Metuchen, NJ: Scarecrow Press, 1990.

Castagno, John. Artists as Illustrators: An International Directory with Signatures and Monograms, 1800–Present. Metuchen, NJ: Scarecrow Press, 1989.

Castagno, John. Artists' Monograms and Indiscernible Signatures: An International Directory, 1800. Metuchen, NJ: Scarecrow Press, 1991.

Castagno, John. European Artists: Signatures and Monograms, 1800, Including Selected Artists from Other Parts of the World. Metuchen, NJ: Scarecrow Press, 1990.

Castagno, John. Latin American Artists' Signatures and Monograms: Colonial Era to 1996. Lanham, MD: Scarecrow Press, 1997.

Castagno, John. Old Masters: Signatures and Monograms, 1400–Born 1800. Lanham, MD: Scarecrow Press, 1996.

Caplan, H. H., and Bob Creps. Encyclopedia of Artists' Signatures, Symbols & Monograms: Old Masters to Modern, North American & European, plus More: 25,000 Examples. Land O'Lakes, FL: Dealer's Choice Books, 1999.

Falk, Peter Hastings. Dictionary of Signatures & Monograms of American Artists: From the Colonial Period to the Mid 20th Century. Madison, CT: Sound View Press, 1988.

Goldstein, Franz. Monogrammlexikon 1: Internationales Verzeichnis der Monogramme bildender Künstler seit 1850 = Dictionary of Monograms 1: International List of Monograms in the Visual Arts since 1850. 2nd ed. Berlin: Walter de Gruyter, 1999.

Pfisterer, Paul, ed. Monogrammlexikon 2: Internationales Verzeichnis der Monogramme bildender Künstler des 19. und 20. Jahrhunderts = Dictionary of Monograms 2: International List of Monograms in the Visual Arts of the 19th and 20th Centuries. Berlin: Walter de Gruyter, 1995.

Pfisterer, Paul. Signaturenlexikon = Dictionary of Signatures. Berlin: Walter de Gruyter, 1999.
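The Reverse Lookup workflow described above is, at bottom, an inverted index from signature forms to candidate artists. As a purely hypothetical sketch of that idea in Python (the entries below are invented for illustration; real data would come from directories such as Castagno's or the Artists' Signatures database):

```python
from collections import defaultdict

# Invented example entries: artist -> known signature forms.
directory = {
    "Artist A": ["A.B.", "interlocked-AB monogram"],
    "Artist B": ["interlocked-AB monogram", "A. Bee"],
    "Artist C": ["star-in-circle symbol"],
}

# Build the reverse index: signature form -> all candidate artists.
reverse_index = defaultdict(list)
for artist, forms in directory.items():
    for form in forms:
        reverse_index[form].append(artist)

# A shared monogram returns every candidate, which is why the printed
# directories list all matches and leave final attribution to the researcher.
print(reverse_index["interlocked-AB monogram"])  # ['Artist A', 'Artist B']
```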
<urn:uuid:bab8948f-09d4-40f9-b9cd-b2e831f47012>
CC-MAIN-2021-43
https://bz.ciwanekurd.net/8691-greek-artists-signature.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585209.43/warc/CC-MAIN-20211018190451-20211018220451-00511.warc.gz
en
0.936051
3,312
3.15625
3
Chonburi is a province (changwat) of Thailand located in the east of the country. Its capital is also named Chonburi. Neighbouring provinces are (clockwise from north) Chachoengsao and Rayong, while the Gulf of Thailand is to the west. Pattaya, a major tourism destination in Thailand, is located in Chonburi, along with Laem Chabang, the country's primary seaport. The population of the province has grown rapidly and now totals 1.7 million residents, although a large portion of the population is floating or unregistered. The registered population as of 31 December 2018 was 1.535 million.

The word chon originates from the Sanskrit word जल (jala), meaning 'water', and the word buri from the Sanskrit पुरि (puri), meaning 'town' or 'city'; hence the name of the province means 'city of water'.

Chonburi has been recognized since the Dvaravati Period and during the reigns of the Khmer Empire and the Sukhothai Kingdom. Chonburi was initially only a small agricultural town and fishing community, but during the Ayutthaya Kingdom (1350-1767) it was classified as a commodore-class city. On the Triphum map it appeared along with more major towns such as Bangsai (บางทราย; now a sub-district of Chonburi), Bangplasoi (บางปลาสร้อย; now a downtown area of Chonburi), Bangphrarua (บางพระเรือ; now a sub-district of Si Racha), and Banglamung (บางละมุง; now a district of Chonburi). Although it was a small town, it was rich in natural resources both on land and at sea. Moreover, the people of Chonburi had contact with the Chinese sailors who came to trade with Siam.

Chonburi has been settled since the prehistoric period. An important Neolithic town, Khok Phanom Di (บริเวณที่ลุ่มริมฝั่งแม่น้ำพานทอง; now Phanthong and Phanat Nikhom), located in the Phanthong river lowlands, was found by archaeologists in 1979. Beads, bracelets, patterned pottery and polished stone axes used for harvesting and hunting were all found. It is supposed that the Chonburi area was the site of prosperous ancient towns such as Phra Rot, Sri Phalo and Phaya Rae.

In the reign of King Nangklao (Rama III), Phra Intha-asa, the first Governor of Phanat Nikhom (a princely member of the Nakhon Phanom royal family), brought many immigrants (Nakhon Phanom Laotians, named Lao Asa Pak Nam) from Samut Prakan, together with new Nakhon Phanom Laotians, to Phanat Nikhom. The Siamese King at the time allowed them to establish a settlement between Chonburi and Chachoengsao (named Phanat Nikhom in the present day).

The provincial seal shows the hill Khao Sam Muk, on which there is a sala with a statue of the goddess Chao Mae Sahm Muk, who, it is believed, protects seafarers and the local population. The provincial tree and flower is the New Guinea rosewood (Pterocarpus indicus, called Mai Pradu in Thai). The provincial motto is "Beautiful beaches, delicious khao lam, sweet sugar cane, delicate basketry products, and buffalo racing."

The province is on the Bay of Bangkok, the northern end of the Gulf of Thailand. The Khao Khiao mountain range stretches from the northwest to the southeast of the province. The plains of the north were long used for farming. Laem Chabang, between Chonburi and Pattaya, is one of the few deep-water harbours of Thailand. The provincial permanent legal population rose at nearly four per cent annually, from 1,040,865 in 2000 to 1,554,365 in 2010.
There is a large floating population of long-term non-Thai residents without permanent status, living on perpetual tourist visas, and/or migrant workers (legal or not), as well as heavy short-term tourist influxes. According to a 2015 survey (Religion in Chonburi, http://cbi.onab.go.th/index.php?option=com_content&view=article&id=327&Itemid=206), around 97.87% of the population of Chonburi practices Buddhism, followed by Islam with 1.56% and Christianity.

Chonburi Province consists of 11 districts (amphoe). These are further subdivided into 92 subdistricts (tambon) and 710 villages (muban). The local governments are overseen by the Pattaya City special local government in Pattaya and by the Chonburi Provincial Administrative Organisation (CPOA) throughout Chonburi. The 47 municipalities are split into two city municipalities (thesaban nakhon), 10 town municipalities (thesaban mueang) and 35 subdistrict municipalities (thesaban tambon). Local communities are also overseen by 50 subdistrict administrative organisations (SAO, ongkan borihan suan tambon).

The Bangkok-Chonburi-Pattaya Motorway (Hwy 7) is linked with Bangkok's Outer Ring Road (Hwy 9), with another intersection at Si Nakharin and Rama IX Junction. The Bang Na-Trat Highway (Hwy 34) runs from Bang Na through Bang Phli and crosses the Bang Pakong River into Chonburi. There is a Chonburi bypass that meets Sukhumvit Road (Hwy 3), passing Bang Saen Beach, Bang Phra, Pattaya and Sattahip.

By road, Chonburi is accessed from Suvarnabhumi Airport (BKK), the country's largest international airport, via Sukhumvit Road and Motorway 7 from Bangkok. Chonburi is also served by scheduled flights via U-Tapao International Airport (UTP), a 45-minute drive south of the city.

The main road through Chonburi is Thailand Route 3, also known as Sukhumvit Road. To the northeast it connects to Bangkok, and to the south it connects to Rayong, Chanthaburi and Trat provinces. Route 344 leads east to Klaeng (which is also on Route 3). Route 7 runs parallel to Route 3 but bypasses the densely populated coastal area, connecting to the beach resort city of Pattaya. The State Railway of Thailand, the national passenger rail system, provides service in the province, with the main station being Chon Buri Railway Station.

Many hospitals exist in Chonburi, both public and private. Chonburi has one university hospital, Burapha University Hospital. Its main hospital operated by the Ministry of Public Health is Chonburi Hospital. Hospitals operated by other organisations, such as the Thai Red Cross Society's Queen Savang Vadhana Memorial Hospital and the Queen Sirikit Naval Hospital run by the Royal Thai Navy, are also found in the province.

Universities and colleges in the province include:
- Kasetsart University Si Racha Campus
- Rajamangala University of Technology Tawan-ok (RMUTTO)
- Sripatum University Chonburi Campus
- Thailand National Sports University (TNSU)
- Thammasat University Pattaya Campus
- Graduate School of Public Administration, National Institute of Development Administration
- Interior College (IC)
- Panyapiwat Institute of Management (PIM)

Human Achievement Index 2017

Since 2003, the United Nations Development Programme (UNDP) in Thailand has tracked progress on human development at a sub-national level using the Human Achievement Index (HAI), a composite index covering all eight key areas of human development. The National Economic and Social Development Board (NESDB) has taken over this task since 2017.
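The HAI mentioned above combines a province's scores on eight areas of human development into one composite number. The official weighting scheme is not given here, so the sketch below simply assumes an equal-weight average of already-normalized sub-indices, purely to illustrate how such a composite is formed; the domain names and values are illustrative placeholders, not published Chonburi figures:

```python
# Hypothetical normalized sub-indices (0 to 1); values are invented.
sub_indices = {
    "health": 0.62, "education": 0.55, "employment": 0.71, "income": 0.58,
    "housing_environment": 0.66, "family_community": 0.60,
    "transport_communication": 0.73, "participation": 0.49,
}

# Equal-weight average -- an assumption, not the official HAI method.
hai = sum(sub_indices.values()) / len(sub_indices)
print(f"Composite index (equal weights): {hai:.3f}")
```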
Some nine million visitors to the province were recorded in 2012, of which 6.1 million were from abroad, 2.2 million of these being Russian.

One major tourist attraction is the Chonburi Buffalo Race (งานประเพณีวิ่งควาย), which takes place in the districts of Ban Bueng and Nong Yai. The animals are dressed outrageously or creatively by their owners. Assembled in the courtyard in front of the town hall, the buffaloes take part in races as well as physical fitness and fashion contests. The Chonburi Buffalo Race festival started over 100 years ago. Usually the races are complemented by booths selling locally made items, stage performances, games, and beauty contests. The annual Buffalo Race is held around the 11th lunar month, normally in October. It lasts seven days and takes place on the field in front of the city and provincial government offices. The highlight of the festival is the buffalo race itself, which takes place on the last two days. The prize for the first nose past the finish line is a trophy and some money.

Wan Lai day in Bangsaen (Ko Phra Sai Wan Lai Bangsaen) is a tradition that has been held continuously for over ten years at Bang Saen Beach and Laem Thaen. The event takes place on April 16-17 of each year. Its highlight is a contest in which the contestants build sand Buddhas at Bangsaen Beach; each sand Buddha arch is decorated. The combination of the sea atmosphere and Thai decorations has helped this become one of the most popular Songkran festivals in Thailand. Other activities also take place, such as offering alms to monks, bathing Buddha images, pouring water on the elders, traditional sporting events, sea boxing competitions, and oyster competitions. Seafood and local food are often sold, along with other local products, as part of One Tambon One Product (OTOP). Well-known artists have also given concerts at the event.

Sports clubs based in the province include:
- Chonburi F.C.
- Sriracha F.C.
- Pattaya United F.C.
- Supreme Chonburi VC
…character and mode of warfare were wholly new, was to be kept at bay even to enable the Pilgrims to barely preserve their lives, and to secure the exercise of that inestimable right they had periled all to maintain,—the right to worship God with an untrammeled conscience. It is not surprising, then, that the minds of the early colonists were so largely occupied with material cares, and that they were content simply to live and worship.

As the colonists increased in numbers, and in their earlier settlements began to enjoy something of security and social ease, their minds, naturally alert and active, now that the excitement of war and hunting had subsided, began to demand entertainment of a more natural sort, and, being thoroughly imbued with the original Puritan piety, realized such entertainment in Theology. In this way, sermons and doctrinal treatises came to constitute the first development of American literature. Foremost in importance among the pioneers of this movement was Jonathan Edwards, author of the celebrated Treatise on the Will. “This remarkable man, the metaphysician of America, was formed among the Calvinists of New England when their stern doctrine retained its vigorous authority. His power of subtle argument, perhaps unmatched, certainly unsurpassed, among men, was joined, as in some of the ancient mystics, with a character which raised his piety to fervor.”* Among others of this epoch may be named Roger Williams, the Mathers,† Cooper, Dwight, and Eliot; but of most of their writings it may be said they have become obsolete.

A New Era.—The event of the Revolution brought about a new era in the history of our literature. Indeed, American Literature, strictly speaking, may be said to have been born at the same time with American Independence. The independence of nature, which, in the Puritans, contented itself with maintaining freedom of the conscience and religious utterance, in the people of the United Colonies demanded freedom of political conscience and conduct. And as the struggle for the former eventuated in the development of a theological literature, so the struggle for the latter evolved a political literature,—one original—and national too—in its principles, eloquence, and patriotic sentiment.

Early Oratory.—Among the number of those who, in their advocacy of Independence, distinguished themselves for the boldness of their sentiments, the purity of their principles, and the fervor of their oratory, may be named Alexander Hamilton, Joseph Warren, John Adams, James Otis, Patrick Henry, Gouverneur Morris, Pinckney, Jay, and Rutledge. Others, like Franklin, Paine, Jefferson, Quincy, and Samuel Adams, through the public press, wrought in the national cause quite as effectively and zealously, if not as eloquently. With such an origin, and nurtured ever since by great historic events, oratory of a national type has continued to flourish in America, affording not a few most eminent examples.

Early History.—Not only were noble men busy projecting and shaping great national movements; some also assumed the duty of recording these events, thus originating the department of History.

* Sir James Mackintosh.
† Cotton Mather (1663–1728), a man remarkable for profound learning, indefatigable industry, and great zeal in the advancement of the public interests, both religious and secular. Three hundred and eighty-two of his publications have been enumerated, but this does not complete the list.
In the names of Belknap, Sullivan, Morton, Trumbull, Smith, Watson, Williams, Stephens, Minot, Stith, Gayerre, and Young we recognize the annalists of the original colonies; in Moultrie, Winthrop, Thatcher, Cheever, Frothingham, and Upham, the chroniclers of colonial and revolutionary warfare; and in Weems, Marshall, Tudor, Wirt, Wheaton, and others, the biographers of the prominent political actors of the times. One of the earliest and most laborious of the workers in this field was Dr. David Ramsay, a native of Pennsylvania. His works were: Historical View of the World, from the earliest Record to the Nineteenth Century, with a particular Reference to the State of Society, Literature, Religion, and Form of Government of the United States of America; History of the Revolution in South Carolina; History of the American Revolution; Life of Washington; History of South Carolina; History of the United States. Most of the writings of these early historians were mere accumulations of facts and dry recitals of events, and though some of them were marked with accuracy and scholarly ability, yet all have either passed into literary oblivion or are referred to by the antiquary only.

Early Poetry.—Still another sort of literary product, arising out of the stirring events of our early struggle, was Poetry. Our fathers were not satisfied merely with giving eloquent utterance to political truths in their legislative halls and before the assembled people, nor yet with having the noble deeds inspired thereby coldly jotted down as memoranda; there were found among them some who sought to incite, cheer, and reward patriotic ardor and endeavor by the heart-thrill of song and by poetic visions of a future national glory.

“The first metrical compositions in this country, recognized by popular sympathy, were the effusions of Philip Freneau, a political writer befriended by Jefferson. He wrote many songs and ballads in a patriotic and historical vein, which attracted and somewhat reflected the feelings of his contemporaries, and were not destitute of merit. Their success was owing, in part, to the immediate interest of the subjects, and in part to musical versification and pathetic sentiment.”*

* H. T. Tuckerman.

The most memorable constellation of the times was what has been styled the “Pleiades of Connecticut.” The stars of this cluster were John Trumbull, Timothy Dwight, David Humphreys, Joel Barlow, Lemuel Hopkins, Theodore Dwight, and Richard Alsop. Timothy Dwight's great work was The Conquest of Canaan; Trumbull's, McFingal; and Barlow's, The Vision of Columbus, or, The Columbiad. Although these writers were men of sound understanding and liberal scholarship, and though their pretentious poems attained a temporary and local notoriety, yet posterity has long since refused to recognize the inspiration of the Muse in either. Their peculiarities have been summed up by a recent critic in the following language:

“There was not a spark of genuine poetic fire in the seven. They sang without an ear for music; they strewed their pages with faded artificial flowers, which they mistook for Nature, and endeavored to overcome sterility of imagination and want of passion by veneering with magniloquent epithets. They padded their ill-favored Muse, belaced and beruffled her, and covered her with garments stiffened with tawdry embroidery to hide her leanness; they over-powdered and over-rouged to give her the beauty Providence had refused.
I say their Muse, but they had no Muse of their own; they imported an inferior one from England, and tried her in every style—Pope's and Dryden's, Goldsmith's and Gray's—and never rose above a poor imitation, producing something which looked like a model, but lacked its flavor—wooden poetry, in short.”*

* Atlantic Monthly, vol. xv., p. 197.

With the setting of the “Pleiades” closed the first quarter of the present century, and their setting, far from diminishing the light of American literature, only ushered in the dawn of a fairer day.

Later Theology.—Theological controversy, which had raged through two centuries, now culminated in the recognition of two leading parties—the Orthodox and the Liberal. Among both these have arisen divines no less renowned for their general culture and literary tastes than for their theological acumen and lore. With these, sermons, from being cold, formal, argumentative, and dogmatic, put on a new livery of beautiful and apt figure and sensuous diction, and discoursed more of the practical duties of life and of the æsthetic and moral teachings of Nature. There was less of terror in them and more of love, less of condemnation and more of sympathy, less of argument and more of eloquence, less of imposing logic and more of winning rhetoric. These divines, moreover, have labored, to some extent, in the field of pure literature, as lecturers on moral, social, political, and æsthetic questions. Of such of the Orthodox we may name Payson, Abbott, Bedell, Todd, Sprague, Barnes, Tyng, Bushnell, George B. Cheever, and the Beechers. Of the Liberal party, Dewey, Whitman, the Channings, Frothingham, Furness, Clarke, Parker, Wasson, Thos. Starr King, and Chapin. (See Supplement A.)

Later Oratory.—Oratory in America did not expire with the Revolutionary fires which kindled it, but in the questions of tariff, domestic industries, territorial acquisition and government, national finance, slavery, and other momentous issues involved in the administration of a great republican government, has found combustible and ample fuel. And not only have extraordinary occasions for oratory occurred, but also extraordinary opportunities for it; for in our country there has always existed, as there has in no other, perfect freedom of speech. No despotic ruler or law exists to awe, compel, or subsidize to its purposes the opinion of the citizen, but, himself a partner in the national firm, his utterance may be as free as his thought. Under influences so favorable it could scarcely be otherwise than that America should be prolific in her race of orators. Prominent among these may be enumerated the Adamses, Fisher Ames, William Wirt, Chief Justice Story, Chancellor Kent, Daniel Webster, Rufus Choate, Edward Everett, Clay, Randolph, Crittenden, Preston, Hayne, Calhoun, Benton, Cass, Cushing, Johnson, Prentiss, Sprague, Sumner, Phillips,
The role of improved housing and living environments in malaria control and elimination
(Open Access. Malaria Journal, volume 19, article number 385, 2020.)

Malaria risk and endemicity are often associated with the nature of human habitation and living environment. The disappearance of malaria from regions where it had been endemic for centuries, such as coastal areas of southern England, has been attributed, at least in part, to improvement in the quality of housing. Moreover, indigenous malaria transmission ceased throughout England without the necessity to eliminate the vector mosquitoes. The principles of malaria transmission, as formulated following the thinking of the pioneers of malaria epidemiology, Ronald Ross and George Macdonald, show how this may happen. Malaria ceases to be sustainable where its reproduction number R0, the number of new cases generated on average for each existing case of malaria, falls below 1. In the terms of a Ross/Macdonald analysis, the reduced contact between humans and blood-feeding mosquitoes that is achieved through housing that is secure against mosquito entry can have a powerful effect in reducing the malaria R0.

The island of Sri Lanka, where malaria had probably been endemic for centuries, has reported no indigenous cases of malaria since 2012. The disappearance of malaria from Sri Lanka followed an effective attack upon malaria transmission by the Sri Lanka Anti Malaria Campaign. The targeted and enhanced efforts of this campaign, launched in 1999, drove the malaria R0 below 1 for most of the period up to 2012, leading to a nearly continuous decline in malaria cases until their extinction. The decades leading up to the launch of these efforts were ones of general improvement in the living environment, and notably in the quality of the housing stock. Studies in the late 1980s had shown that the quality of housing in a highly malarious district of Sri Lanka was a strong determinant of malaria risk. Through its effects on the malaria R0, improved housing is likely to have facilitated malaria control and the cessation of indigenous malaria transmission in Sri Lanka, and it will help reduce the risk of the re-introduction of malaria to the island.

For the period of written history, and probably long before it, the nature of human habitation and the man-made environment has influenced the presence or absence of malaria transmission [1, 2]. In recent decades there has been renewed interest in this association [3,4,5,6], driven by awareness that better general standards of living and of housing tend to mitigate against malaria transmission. Here, the relationship between housing and living environment and the disappearance of malaria is discussed for a historical example in England and a recent example in Sri Lanka.

The disappearance of malaria from England

English wetlands, much of them southern coastal salt marsh, which had been highly malarious since at least late medieval times, became effectively malaria-free in the first decades of the 20th century. Between 1917 and 1926, as malaria-infected soldiers returned from the First World War, malarial infections re-appeared among local inhabitants across these same areas of England. However, no further transmission took place. Even though the malaria vector mosquitoes (e.g., Anopheles atroparvus and Anopheles plumbeus) were clearly still present and competent to generate new cases from introduced ones, the previously malarious regions of England had apparently become incapable of sustained malaria transmission.
How could this be? The answer, James argued, lay to a large extent in two transformations. One was that by the early 20th century the anti-malarial drug quinine had become widely affordable and available in England. The other lay in the quality of the human living environment, and above all of human dwellings. James describes it thus. In contrast to dwellings of “straw or stones or mud bricks, without windows or means of introducing light and ventilation…invariably infested with anopheles mosquitoes…In England…“civilising” social influences…particularly during the last seventy years (i.e. since about 1860)…(have resulted in) houses (that) are better lighted and ventilated; they have windows and are less damp; they have floors and are provided with ceilings shutting off the bedrooms from the rafters of the roof; they are more open and less crowded and are more frequently painted and whitewashed on the inside than they used to be. These changes, as well as more cleanly conditions in the home generally, have made the houses much less liable to harbour anopheles mosquitoes and have broken, to a considerable extent, the close association between those mosquitoes and man which existed when living conditions were primitive. Undoubtedly this disassociation has contributed materially towards the reduction of malaria.”

The idea that James espoused as a major component of the disappearance of malaria from England was not the total elimination of the vector mosquitoes (important as their numerical reduction would have been through drainage of wetland) but a sufficient reduction in contact between these mosquitoes and their human hosts through decent housing. Reduction in contact between humans and Anopheles had also occurred through an increase in the cattle population, which served as diversionary hosts for the mosquitoes.

Principles of malaria transmission and the human living environment

James' ideas are well supported by the theoretical principles of malaria transmission. Pioneered by Ronald Ross, they were formulated by George Macdonald [12,13,14] in terms that, although subject to ongoing analysis and modification, are still broadly accepted. A central concept presented by Macdonald is the “basic reproduction number for malaria”—the number of new cases resulting from each existing case of malaria—now designated R0, which is given in what is widely known as a Ross/Macdonald equation. In such an equation (e.g., Box 1), R0 is, among other factors, a function of M, the number of adult female malaria vector mosquitoes in a defined locality, and of a, their daily biting rate upon humans. Reducing either or both of M and a reduces R0. Because R0 is proportional to a² (Box 1), anything that reduces a, the daily rate at which vector mosquitoes take a human blood meal, is particularly powerful in reducing the value of R0. Improved house-type construction that is secure against mosquito entry reduces a. (A sketch of one standard form of the equation is given below.)

It is likely that there are other transmission-reducing effects that result from those types of housing that resist entry by mosquitoes. These include their impact upon mosquito egg-laying rates, due to the lower frequency of blood meals. Recent analysis indicates that such effects on M (Box 1) could also significantly reduce R0. Improvements in housing are, therefore, as James proposed, likely to have contributed greatly to the reduction leading to the disappearance of indigenous malaria transmission in England.
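For orientation, one widely used form of a Ross/Macdonald equation is sketched here. This is the classic Macdonald formulation as it appears in the later literature (e.g., Smith et al.); the article's own Box 1 is not reproduced in this text, so its exact parameterization may differ:

\[
R_0 \;=\; \frac{m\,a^{2}\,b\,c\,p^{n}}{r\,\left(-\ln p\right)}
\]

Here m is the number of vector mosquitoes per human (closely related to M above), a is the daily human-biting rate, b and c are the mosquito-to-human and human-to-mosquito transmission efficiencies, p is the daily survival probability of the mosquito, n is the number of days the parasite needs to develop inside the mosquito, and r is the rate at which infected humans recover. The quadratic dependence on a is visible directly: halving the biting rate, for instance through mosquito-proof housing, divides R0 by four, whereas halving m only divides it by two. And once R0 is held below 1, the expected number of cases shrinks geometrically, by a factor of roughly R0 per generation of transmission.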
The termination of autochthonous malaria transmission in Sri Lanka

When the malaria R0 is reduced to a stable value below 1, irrespective of the cause, malaria incidence can be expected to decline continuously and exponentially. These expectations are well met by the recorded cases of malaria in Sri Lanka for most of the period from 2001 to 2012 (Figs. 1 and 2) [16, 17]. In 2013 no indigenously acquired case of malaria was recorded in Sri Lanka. There have been none since [17,18,19], except for a recent case acquired by infection from a foreign migrant.

Abbreviations (Box 1)

M: number of adult female malaria vector mosquitoes in a defined locality
a: daily mosquito biting rate upon humans

References

- Boyd MF. Malariology. Philadelphia: WB Saunders; 1949.
- Carter R, Mendis KN. Evolutionary and historical aspects of the burden of malaria. Clin Microbiol Rev. 2002;15:564–94.
- Tusting LS, Willey B, Lines J. Building malaria out: improving health in the home. Malar J. 2016;15:320.
- Tatem AJ, Gething PW, Smith DI, Hay SI. Urbanization and the global malaria recession. Malar J. 2013;12:133.
- Wang S-Q, Li Y-C, Zhang Z-M, Wang G-Z, Hu X-M, Qualls WA, et al. Prevention measures and socioeconomic development result in a decrease in malaria in Hainan, China. Malar J. 2014;13:362.
- Rek JC, Alegana V, Arinaitwe E, Cameron E, Kamya MR, Katureebe A, et al. Rapid improvements to rural Ugandan housing and their association with malaria from intense to reduced: a cohort study. Lancet Planet Health. 2018;2:e83–94.
- Dobson MJ. Malaria in England: a geographical and historical perspective. Parassitologia. 1994;36:35–60.
- James SP. The disappearance of malaria from England. Proc R Soc Med. 1929;23:71–87.
- Shute PG. Indigenous P. vivax malaria in London believed to have been transmitted by Anopheles plumbeus. Mon Bull Minist Health Public Health Lab Serv. 1954;13:48–51.
- Kuhn KG, Campbell-Lendrum DH, Armstrong B, David CR. Malaria in Britain: past, present, and future. Proc Natl Acad Sci USA. 2003;100:9997–10001.
- Ross R. The prevention of malaria. London: John Murray; 1911.
- Macdonald G. The analysis of equilibrium in malaria. Trop Dis Bull. 1952;49:813–1129.
- Macdonald G. The epidemiology and control of malaria. London: Oxford University Press; 1957.
- Smith DL, Battle KE, Hay SI, Barker CM, Scott TW, McKenzie FE. Ross, Macdonald, and a theory for the dynamics and control of mosquito-transmitted pathogens. PLoS Pathog. 2012;8:e1002588.
- Brady OJ, Godfray HCJ, Tatem AJ, Gething PW, Cohen JM, McKenzie FE, et al. Adult vector control, mosquito ecology and malaria transmission. Int Health. 2015;7:121–9.
- Karunaweera ND, Galappaththy GNL, Wirth DF. On the road to eliminate malaria in Sri Lanka: lessons from history, challenges, gaps in knowledge and research needs. Malar J. 2014;13:59.
- Wijesundere DA, Ramasamy R. Analysis of historical trends and recent elimination of malaria from Sri Lanka and its applicability to malaria control in other countries. Front Public Health. 2017;5:212.
- Sri Lanka free of malaria. Case study. New Delhi: World Health Organization, Regional Office for South-East Asia; 2017.
- Premaratne R, Wickremasinghe R, Ranaweera D, Kumudu WM, Gunasekera AW, Hevawitharana M, et al. Technical and operational underpinnings of malaria elimination from Sri Lanka. Malar J. 2019;18:256.
- Karunasena VM, Marasinghe M, Koo C, Amarasinghe S, Senaratne AS, Hasantha R, et al. The first introduced malaria case reported from Sri Lanka after elimination: implications for preventing re-introduction of malaria in recently eliminated countries. Malar J. 2019;18:210.
- Housing and Sustainable Urban Development in Sri Lanka. National report for the Third United Nations Conference on Human Settlements (Habitat III). 2015. http://habitat3.org/wp-content/uploads/Sri-Lanka-%EF%BC%88Final-in-English%EF%BC%89.pdf
- Gamage-Mendis AC, Carter R, Mendis C, De Zoysa APK, Herath PRJ, Mendis KN. Clustering of malaria infections within an endemic population: risk of malaria associated with the type of housing construction. Am J Trop Med Hyg. 1991;45:77–85.
- Gunawardena DM, Wickremasinghe AR, Muthuwatta L, Weerasingha S, Rajakaruna J, Senanayaka T, et al. Malaria risk factors in an endemic region of Sri Lanka, and the impact and cost implications of risk factor-based interventions. Am J Trop Med Hyg. 1998;58:533–42.
- Premaratne R, Ortega L, Jankan N, Mendis KN. Malaria elimination in Sri Lanka: what it would take to reach the goal. WHO South-East Asia J Public Health. 2014;3:85–9.

Acknowledgements
We thank Kamini Mendis, David L. Smith and Geoffrey Pasvol for their insightful comments and advice. NDK is supported by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health under Award Number U01AI136033. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors declare that they have no competing interests.

Cite this article
Carter, R., Karunaweera, N.D. The role of improved housing and living environments in malaria control and elimination. Malar J 19, 385 (2020). https://doi.org/10.1186/s12936-020-03450-y

Keywords: malaria transmission; malaria control; malaria elimination; Ross/Macdonald equations; reproduction number; Sri Lanka; living environment; socio-economic development
What are carbohydrates?

- Carbohydrates are a group of naturally occurring carbonyl compounds (aldehydes or ketones) that also contain several hydroxyl groups.
- The group may also include their derivatives, which produce such compounds on hydrolysis.
- They are the most abundant organic molecules in nature and are also referred to as “saccharides”.
- The carbohydrates which are soluble in water and sweet in taste are called “sugars”.

Structure of Carbohydrates

- Carbohydrates consist of carbon, hydrogen, and oxygen.
- The general empirical structure for carbohydrates is (CH2O)n.
- They are organic compounds organized in the form of aldehydes or ketones, with multiple hydroxyl groups coming off the carbon chain.
- The building blocks of all carbohydrates are simple sugars called monosaccharides.
- A monosaccharide can be a polyhydroxy aldehyde (aldose) or a polyhydroxy ketone (ketose).

Carbohydrates can be structurally represented in any of three forms:

- Open chain structure – the long, straight-chain form of the molecule.
- Hemi-acetal structure – here the first carbon of glucose condenses with the -OH group of the fifth carbon to form a ring structure.
- Haworth structure – the representation of the pyranose ring structure.

Properties of Carbohydrates

Physical Properties of Carbohydrates

- Stereoisomerism – compounds having the same structural formula but differing in spatial configuration. Example: glucose has two isomers with respect to the penultimate carbon atom, D-glucose and L-glucose.
- Optical activity – the rotation of plane-polarized light, giving (+) glucose and (−) glucose.
- Diastereoisomers – configurational changes with regard to C2, C3, or C4 in glucose. Examples: mannose, galactose.
- Anomerism – the spatial configuration with respect to the first carbon atom in aldoses and the second carbon atom in ketoses.

Chemical Properties of Carbohydrates

- Osazone formation: osazones are carbohydrate derivatives formed when sugars are reacted with an excess of phenylhydrazine, e.g., glucosazone.
- Benedict's test: reducing sugars heated in the presence of an alkali are converted to powerful reducing species known as enediols. When Benedict's reagent solution and reducing sugars are heated together, the solution changes colour to orange-red/brick red.
- Oxidation: monosaccharides are reducing sugars if their carbonyl groups oxidize to give carboxylic acids. In Benedict's test, D-glucose is oxidized to D-gluconic acid; thus, glucose is considered a reducing sugar.
- Reduction to alcohols: the C=O groups in open-chain forms of carbohydrates can be reduced to alcohols by sodium borohydride (NaBH4) or by catalytic hydrogenation (H2, Ni, EtOH/H2O). The products are known as “alditols”.

Properties of Monosaccharides

- Most monosaccharides have a sweet taste (fructose is sweetest: 73% sweeter than sucrose).
- They are solids at room temperature.
- They are extremely soluble in water: despite their high molecular weights, the presence of large numbers of OH groups makes monosaccharides much more water-soluble than most molecules of similar molecular weight.
- Glucose can dissolve in minute amounts of water to make a syrup (1 g / 1 ml H2O).

Classification of Carbohydrates (Types of Carbohydrates)

The simple carbohydrates include single sugars (monosaccharides) and polymers: oligosaccharides and polysaccharides.
Monosaccharides

- The simplest group of carbohydrates, often called simple sugars, since they cannot be hydrolyzed further.
- Colorless, crystalline solids which are soluble in water and insoluble in non-polar solvents.
- These are compounds which possess a free aldehyde or ketone group.
- The general formula is Cn(H2O)n or CnH2nOn. (A worked check of these formulas appears at the end of this article.)
- They are classified according to the number of carbon atoms they contain, and also on the basis of the functional group present.
- Monosaccharides with 3, 4, 5, 6, 7… carbons are thus called trioses, tetroses, pentoses, hexoses, heptoses, etc., and are also termed aldoses or ketoses depending upon whether they contain an aldehyde or a ketone group.
- Examples: glucose, fructose, erythrulose, ribulose.

Oligosaccharides

- Oligosaccharides are compound sugars that yield 2 to 10 molecules of the same or different monosaccharides on hydrolysis.
- The monosaccharide units are joined by glycosidic linkages.
- Based on the number of monosaccharide units, they are further classified as disaccharides, trisaccharides, tetrasaccharides, etc.
- An oligosaccharide yielding 2 molecules of monosaccharide on hydrolysis is known as a disaccharide, and ones yielding 3 or 4 monosaccharides are known as trisaccharides and tetrasaccharides, respectively, and so on.
- The general formula of disaccharides is Cn(H2O)n-1 and that of trisaccharides is Cn(H2O)n-2, and so on.
- Examples: disaccharides include sucrose, lactose, and maltose; a common trisaccharide is raffinose.

Polysaccharides

- They are also called “glycans”.
- Polysaccharides contain more than 10 monosaccharide units and can be hundreds of sugar units in length.
- They yield more than 10 molecules of monosaccharides on hydrolysis.
- Polysaccharides differ from each other in the identity of their recurring monosaccharide units, in the length of their chains, in the types of bonds linking the units, and in the degree of branching.
- They are primarily concerned with two important functions: structural functions and the storage of energy.
- They are further classified depending on the type of molecules produced as a result of hydrolysis: they may be homopolysaccharides, containing monosaccharides of the same type, or heteropolysaccharides, containing monosaccharides of different types.
- Examples of homopolysaccharides are starch, glycogen, cellulose, and pectin.
- Examples of heteropolysaccharides are hyaluronic acid and chondroitin.

Functions of Carbohydrates

Carbohydrates are widely distributed molecules in plant and animal tissues. In plants and arthropods, carbohydrates form the skeletal structures, and they also serve as food reserves in plants and animals. They are an important energy source required for various metabolic activities; the energy is derived by oxidation. Some of their major functions include:

- Living organisms use carbohydrates as accessible energy to fuel cellular reactions. They are the most abundant dietary source of energy (4 kcal/gram) for all living beings.
- Carbohydrates, along with being the chief energy source, are in many animals instant sources of energy. Glucose is broken down by glycolysis/the Krebs cycle to yield ATP.
- They serve as energy stores, fuels, and metabolic intermediates, stored as glycogen in animals and starch in plants.
- Stored carbohydrates act as an energy source instead of proteins.
- They form structural and protective components, as in the cell walls of plants and microorganisms: structural elements in the cell walls of bacteria (peptidoglycan or murein), plants (cellulose), and animals (chitin).
- Carbohydrates are intermediates in the biosynthesis of fats and proteins.
- Carbohydrates aid in the regulation of nerve tissue and are the energy source for the brain.
- Carbohydrates get associated with lipids and proteins to form surface antigens, receptor molecules, vitamins, and antibiotics.
- They participate in the formation of the structural framework of RNA and DNA (ribonucleic acid and deoxyribonucleic acid).
- They are linked to many proteins and lipids. Such linked carbohydrates are important in cell-cell communication and in interactions between cells and other elements in the cellular environment.
- In animals, they are an important constituent of connective tissues.
- Carbohydrates that are rich in fiber content help to prevent constipation.
- They also help in the modulation of the immune system.
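As a quick arithmetic check of the general formulas quoted in the classification above (a worked example added here for illustration, using two sugars named in this article):

\[
\text{Monosaccharide, } n = 6:\quad (CH_2O)_6 = C_6H_{12}O_6 \quad \text{(glucose, a hexose)}
\]
\[
\text{Disaccharide, } n = 12:\quad C_n(H_2O)_{n-1} = C_{12}(H_2O)_{11} = C_{12}H_{22}O_{11} \quad \text{(sucrose)}
\]

The “missing” water molecule in the disaccharide formula reflects the condensation reaction that forms the glycosidic bond: two molecules of C6H12O6 combine to give C12H22O11 plus one molecule of H2O.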
While Jews study a number of religious books—from the Talmud to the Shulchan Aruch—the text that provides the religion’s very foundation is the Torah. And the version of the Torah most commonly studied by Jews is known as the Masoretic text, the most authoritative Hebrew version of the Torah. But it is not the only one. A small, ancient sect known as the Samaritans relies on the Torah, and the Torah alone, as its sole religious text—and the Samaritans use a somewhat different version. Two weeks ago, the first English translation of this Hebrew text was published by Samaritan historian and scholar Binyamin Tsedaka: The Israelite Samaritan Version of the Torah. There are some 6,000 instances where this version of the Torah differs from the Masoretic text; the question for scholars is which version is more complete, or more accurate.

***

As an ancient Semitic people, the Samaritans abide by a literal version of Torah law. Eschewing Jewish practices that are rabbinic in origin, they believe only in the Five Books of Moses and observe only holidays found in the Pentateuch, such as Passover and Sukkot, as opposed to Jewish holidays like Purim or Hanukkah, whose origins are found elsewhere in Jewish scriptures. Their rituals mirror an ancient world that few religions still keep today. On Passover, for example, their high priest sacrifices a sheep in a community-wide ritual, where its blood is dabbed on foreheads and the meat is later eaten together with matzo and bitter herbs. On Shabbat, Samaritans abstain from cooking and kindling fires and pray barefoot in white, identical garments. And, echoing a routine taken straight from the text of Leviticus, Samaritan women move to their own private homes during menstruation for seven days of isolation.

Much of what the Samaritans practice has some resemblance to Jewish traditions, except their beliefs surrounding the holiness of Mount Gerizim, the mountaintop they believe they were commanded by God to conquer. Tsedaka, 68, grew up in Nablus, which is in the shadow of Mount Gerizim, but after the eruption of the first Palestinian intifada in the late 1980s, two-thirds of the Samaritan population relocated. Their community is now split between Kiryat Luza in the West Bank and the Israeli city of Holon.

Tsedaka, who lives in Kiryat Luza, has dedicated much of his life to the Samaritan community. As a historian, author, educator, and elder of his group, Tsedaka considers himself a guardian of his ancient tradition, as he is one of fewer than 800 Samaritans left. He has authored more than 75 pamphlets on Samaritan scholarship, but he calls his new translation of his Torah, which took him seven years to compile, his biggest achievement.

“Samaritans have such beautiful traditions that when you will collect and read materials about them, you will fall in love,” Tsedaka said. “For the first time ever, English Bible researchers will be able to include my people into their explorations of the Torah.”

The 6,000 differences between the two Torahs that Tsedaka highlights in bold in his book can be split into two categories: 3,000 of the differences are orthographical, meaning there are spelling differences or additional words placed in the text, while the other 3,000 are more significant in changing the Torah’s narrative. Some of the orthographical changes help make the story read more smoothly.
For example, in Genesis 4:8, when Cain talks to Abel, the Masoretic version reads, “Now Cain said to his brother Abel, while they were in the field, Cain attacked his brother Abel and killed him,” whereas the Samaritan Torah contains additional words: “Now Cain said to his brother Abel, ‘Let’s go out to the field.’”

The Samaritan Torah also offers a slightly different version of some stories. It includes parts of dialogues that are not found in the Masoretic text: for example, in Exodus chapters 7 through 11, the Samaritan Torah contains whole conversations between Moses, Aaron, and Pharaoh that the Masoretic text does not.

The other differences, those significant to the narrative, sometimes change the story and sometimes “fix” small sentences that appear incoherent. In Exodus 12:40, for example, the Masoretic text reads: “The length of the time the Israelites lived in Egypt was 430 years,” a sentence that has created massive chronological problems for Jewish historians, since there is no way to make the genealogies last that long. In the Samaritan version, however, the text reads: “The length of time the Israelites lived in Canaan and in Egypt was 430 years.”

Earlier in Exodus, in 4:25, the Samaritan Torah offers an alternative narrative to the slightly problematic story about Moses’ son not being circumcised when an angel of God “sought to kill him.” The thought that Moses did not circumcise his son, as the Masoretic text states, seems inconceivable to many Jewish commentators, Tsedaka noted. The Samaritan text, however, reads that it was Moses’ wife, Tziporah, who had to “circumcise her blocked heart” by cutting off her belief in the idol-worshiping ways of Midyan, her homeland. A mention of an “internal circumcision” is later found in Deuteronomy 10:16 in both versions, which reads, “circumcise the foreskin of your heart, and stiffen your neck no longer.”

Perhaps the most variant of texts within the two Torahs is the set of differences in the Ten Commandments. “The Commandments are all in the form of ‘do’ and ‘don’t do,’” Tsedaka asserted. “The Masoretic version includes the intro of ‘I am your God that took you out of Egypt’ as a commandment, when we see it as an introduction. Our Ten Commandments start later, and we have our last commandment to establish Mount Gerizim.”

While an “extra” commandment to establish an altar on Mount Gerizim might seem random in the Masoretic text, the part that follows the Ten Commandments in the Masoretic version talks about the forbidden action of building stairs to an altar. Some scholars believe that the Masoretic text would not be discussing steps to an altar without talking about an altar first, and so some believe there might be a part of the text that is missing in the Masoretic version.

***

Until the 1950s, Bible scholars turned to the Jewish Masoretic text as the definitive version of the Torah, virtually ignoring the Samaritan text. However, in the winter of 1947, a group of archeological specialists searching through 11 caves in Qumran happened upon the Dead Sea Scrolls.
After rigorous study of the scrolls, researchers have come to believe there were several versions of the Torah being studied throughout Jewish history, according to Eugene Ulrich, a theology professor at the University of Notre Dame. The scrolls found in Qumran matched the Samaritan text more closely than the Masoretic text, leading some researchers to believe the Samaritan text held validity in the minds of Jews during the Second Temple period and that both texts were once studied together.

“Finding the Dead Sea Scrolls proved that there were two versions, if not more, of the Torah circulating within Judaism, but they were all dealt with with equal validity and respect,” said Ulrich, who served as one of the chief editors on the Dead Sea Scrolls International Publication Project. “The Samaritan Torah and Masoretic Torah used to be studied side by side. The Masoretic text wasn’t always the authoritative version. They were both seen as important during the Second Temple time period.”

Ulrich said that after the destruction of the Second Temple, the people split into three groups, each with their own text: the rabbis took the Masoretic text for their own, the Samaritans took theirs, and the early Christians used much of a different version called the Septuagint, a Masoretic version translated into Greek in the 2nd century BCE, in what later became the Christian Bible.

While most differences between the two Torahs are only slight and may not even be apparent to an untrained eye, according to Ulrich, the Samaritan Torah provides a more coherent reading because the story flows better in its text. “There are whole passages of stories missing from the Masoretic version,” he said. “A lot of the stories in Exodus and Deuteronomy are missing parts of the conversation, leaving the reader alone to do much assumption as the story goes on. In the Samaritan Torah, however, these gaps are filled, providing a smoother encounter of what actually happened.”

James Charlesworth, a professor of New Testament Language and Literature at Princeton University’s Department of Biblical Studies, said the Samaritan Torah is his preferred version for some readings of the Bible. “As the stories and histories go, the Samaritan Pentateuch appears to be more favorable because the voice of the text reads more clear[ly],” he said. “In my judgment, the Masoretic version has some corrupt parts of it, and the Samaritan Torah is the best reading we have. There are sentences scholars are left to either reinterpret or simply ignore because they seem they don’t belong.”

Charlesworth believes Jews and Christians have not shown the Samaritan text the proper respect it deserves: thousands of years ago, Samaritans and Jews had a shared interest in both scriptures, but the Samaritan Torah later became shunned. Charlesworth said this English translation would finally provide the academic world insight into the origins of the development of scripture.

The Samaritans claim their Torah is older and more authentic: “It’s more logical that a group of people who’ve lived in one place for thousands of years have kept their Torah preserved,” Tsedaka asserted, “as compared to a people who have moved all over the world.” But some Bible critics side with the Masoretic version, citing it as older and, indeed, more authentic.
Referring to a principle of textual criticism called lectio difficilior potior, which states that a harder reading of a text is preferred to an easier reading, Yeshiva University’s Aaron Koller said some scholars believe the Samaritan Torah’s text, which presents fewer interpretive problems, proves that it had been tampered with. “Some scholars believe someone took an original version of the Torah and simplified it to the Samaritan version,” he explained. “It’s hard to believe a difficult reading of a text is original, because why would someone change a text to make it unclear? Rather, when a text is simplified, it’s easier to believe that the text was altered in order to make it simpler.”

Koller noted that the consensus view held by most Bible scholars is that the Masoretic version of the Torah is the older, original version. The structural changes of the Samaritan Torah give reason to believe it has been changed, he said, but that should not stop people from studying it. Both should be studied, he said, to understand the history of interpretations of the Torah, a book that continues to unfold with meaning as time goes on.

“Outside of the Samaritan community, most believe the Samaritan Torah was an editorial revision of the Masoretic text,” Koller said. “But they are a group that consider themselves heirs to biblical Israel, just like the Jews. It’s important just to learn the remarkable tradition they’ve preserved for 2,500 years.”

***

Chavie Lieber has written for The New York Times, New York Magazine, The Daily Beast, the Huffington Post, Business Insider, the Times of Israel, and more. Follow her on Twitter @chavielieber.
At the end of this week’s parasha, Ki Tisa, we read how Moses descended from Mt. Sinai with his face glowing brilliantly (Exodus 34:29-30). The people could not look at his face directly, so he had to wear a mask. The exact wording in the Torah is karan ‘or panav (קָרַ֖ן ע֣וֹר פָּנָ֑יו). The latter two words are clear: “the skin of his face”. But what does karan mean? The most direct translation would be “horn”, which is actually why, comically, throughout history some artists depicted Moses with horns! Another way of translating it is as “radiant” (based on this, the Modern Hebrew term for a ray or radiation is k’rinah, קרינה). Rashi comments that both are accurate; karan does indeed come from the word for “horn” because light rays shoot forth like “horns”. If we take a look at Midrash (with a little help from science), we will find that the Torah is secretly encoding something much more profound.

Jewish law forbids using the light of the Chanukah candles for mundane purposes. At first thought, this is strange, since all other holiday candles which we light may be used for mundane purposes. One can have a candle-lit dinner with the Shabbat candles on the table, yet the same cannot be done with Chanukah candles. This is actually why we light an additional (“ninth”) candle called the shamash, whose job is to “protect” the light of the Chanukah candles so that we do not inadvertently make use of them. Why is it that the Chanukah lights must not be used? To answer this question, one must go all the way back, long before the Maccabees, to the very beginnings of the universe.

The Light of Creation

When we open the Torah we read that God’s first act of Creation within an empty universe was light. God said “Let there be light”, and so it was. A few paragraphs later, we read that on the fourth day God created various luminaries to “give light”, including the sun, moon, and stars. If the things that naturally give off light were only created on the fourth day, what was the light of the first day? The Sages (Chagigah 12a) grappled with this apparent contradiction:

But was the light created on the first day? …This is [to be explained] according to Rabbi Elazar, for Rabbi Elazar said: “The light which the Holy One, blessed be He, created on the first day, one could see thereby from one end of the universe to the other; but as soon as the Holy One, blessed be He, beheld the generation of the Flood and the generation of the Dispersion, and saw that their actions were corrupt, He arose and hid it from them…” Now the Tannaim [differ on the point]: “The light which the Holy One, blessed be He, created on the first day one could see and look thereby from one end of the universe to the other,” this is the view of Rabbi Yaakov. But the Sages say: It is identical with the luminaries; for they were created on the first day, but they were not “hung up” until the fourth day.

On the simple level, the Sages agreed that the light of the First and Fourth Days were really the same thing: while God created the luminaries on the First Day, He only set them in their specific locations and orbits on the Fourth Day. This is in line with another opinion that God really created everything in one instant, on the First Day, and on the subsequent “days” He simply put everything in its place. (For an explanation of this, see Pardes Rimonim 13:5.) On a deeper level, as expounded by Rabbi Elazar and Rabbi Yaakov, the light of the First Day was an entirely different entity.
Unlike the familiar, physical light of the Fourth Day, the light of the First Day was a special, mystical light which contained the power for one to see across the universe, through all of time and space. According to the Zohar, this is the special radiance of Creation, from which all things were fashioned (see ‘The Big Bang and the Age of the Universe’ in Volume One of Garments of Light). This idea is already noted in the Midrash (Beresheet Rabbah 12:6), which adds that “the light with which God created the universe [was given] to Adam, and with it he stood and gazed from one end of the universe to the other.” Adam and Eve were given this Divine Light as a gift. However, once they consumed the Forbidden Fruit, that light disappeared. In fact, the Kabbalists explain that initially Adam and Eve saw the world entirely through this Divine Light, and themselves glowed with this light. When they looked upon each other, they saw only each other’s light, which is why they were unashamed. After consuming the Fruit, that light disappeared, and when they looked upon each other they saw frail skin, and all of its lustful trappings. This is why they were suddenly ashamed and wanted to hide.

The Kabbalists explain that this is the mystical meaning of the interplay between the words for “light”, or (אור), and “skin”, ‘or (עור), words that sound the same and are written with just one substitution: the singular, holy aleph replaced with the ‘ayin, which literally means “eye” and represents this illusory physical world. Before the Fruit, Adam and Eve saw light; afterwards they saw only skin. (See Beresheet Rabbah 20:12, Zohar I, 22b, and Pardes Rimonim 13:3.)

This is the deeper meaning behind God’s first word to Adam and Eve after their fall: “Ayeka?” The term literally means “Where are you?”, referring to the fact that Adam and Eve were hiding because they were ashamed. Of course, God knew exactly where they were. So what did He mean by Ayeka?

The Sages explain that the original Divine Light, called Or HaGanuz, the “hidden light”, only shone for 36 hours. There are two opinions as to how one reaches this number. The first (as in Beresheet Rabbah cited above) is that the Light shone for the entire 24 hours of the first Shabbat, as well as the 12 hours preceding that Shabbat, from the moment that Adam and Eve were created on the sixth day. The light disappeared at Shabbat’s conclusion, which is one reason why we perform Havdallah at the end of Shabbat, symbolizing our hope for the restoration of that Light. Alternatively, the Light shone for the first three days of Creation, before the physical luminaries were created on the fourth day. Since each day had 12 hours of light and 12 hours of dark, each of the three days had 12 hours of this Divine Light, totalling 36.

In reality, both opinions are correct. The Divine Light initially shone for those first three days—36 hours—and was then concealed by the new physical luminaries. On the sixth day, when God created Adam and Eve, He entrusted them with that Light, and they possessed it for 36 hours until the conclusion of the first Sabbath, neatly mirroring those 36 hours of the first three days. When Adam and Eve consumed the Fruit, the Light disappeared from them, and was taken back up to Heaven, stored beneath God’s Throne (Yalkut Shimoni, Isaiah 499).

This brings us back to Ayeka. The gematria of that word (איכה) happens to be 36.
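For readers unfamiliar with gematria, this value can be checked letter by letter using the standard Hebrew letter values (a brief aside, not part of the original essay): aleph (א) = 1, yod (י) = 10, kaf (כ) = 20, he (ה) = 5, so that

\[
1 + 10 + 20 + 5 = 36.
\]

The same number returns below in the count of Chanukah candles, since \(1 + 2 + \cdots + 8 = \tfrac{8 \times 9}{2} = 36\).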
When God called out to Adam and Eve and said Ayeka, what He meant was not “where are you?” but “where is the Light?”

Restoring the Light

While the Or HaGanuz has been hidden for now, it reveals itself in this world through several channels. We find that same number 36 in a number of important places. Most notably, when we look at the actual number of texts in our Holy Scriptures, we find that there are 36: Genesis, Exodus, Leviticus, Numbers, Deuteronomy, Joshua, Judges, Samuel, Kings, Isaiah, Jeremiah, Ezekiel, Hoshea, Yoel, Amos, Ovadiah, Yonah, Micah, Nahum, Habakkuk, Tzephaniah, Haggai, Zechariah, Malachi, Psalms, Proverbs, Job, Song of Songs, Ruth, Lamentations, Ecclesiastes, Esther, Daniel, Ezra, Nehemiah, and Chronicles. While we generally group these texts into 24 “books” of the Tanakh for convenience, there are exactly 36 independent works. This reminds us that the Holy Scriptures contain within them the Divine Light, and through the study of these texts we can receive a glimpse of it.

Similarly, the Bnei Issachar (Rabbi Tzvi Elimelech Shapiro, c. 1783-1841) points out that there are 36 tractates to the Talmud. Study of Torah—whether Written (Tanakh) or Oral (Talmud)—serves to restore that Divine Light little by little. Naturally, this parallels the idea of the 36 Perfect Tzaddikim. The Sages state that in every generation there are exactly 36 perfectly righteous people alive, and the world only continues to exist in their merit (Sanhedrin 97b). They contain a spark of that Divine Light within them.

And on the calendar, too, there is a month in which finding the Or HaGanuz is particularly auspicious. This is the month of Kislev, which, the Bnei Issachar points out, is a contraction of kis (כס) and lev (לו), the former meaning “hidden” and the latter having the value of 36. That brings us right back to Chanukah, which begins on the 25th of Kislev. We light one candle on the first day, two on the second, and so on. The total number of candles lit over the course of eight days just happens to be 36.

The Chanukah lights are symbolic of that special holy light of Creation. One should not for a moment think that these are just mundane, physical lights. And this is the mystical reason why Jewish law forbids using the Chanukah lights for any purpose. One should constantly meditate on the fact that the light of the Menorah represents the Or HaGanuz, the light of Creation, the holy light with which Adam and Eve “saw from one end of the universe to the other.”

Jewish law also requires one to place their Menorah in a widely visible spot. We must “publicise the miracle” as much as possible, hence the many public Menorah-lighting ceremonies that take place around the world, and the many electronic chanukiahs found outside synagogues, in shopping malls, and on the roofs of cars. We are not just commemorating the Chanukah miracle, but the Jewish mission to bring light into the world (as Isaiah 42:6 famously states). From a mystical perspective, we light 36 candles to remind ourselves that our mission is to rectify the cosmos and reveal the primordial holy light of God. We remind ourselves that we should strive to return to being like the original Adam and Eve, who glowed with this light, and who looked past the deceptive skin to see the pure light within each other. When we learn to recognize each other’s inner glow, then we will merit a return to the luminous Garden of Eden.
Free Software in international development cooperation

This article highlights the advantages of using Free Software in international development cooperation. It is part of a series about the basics of Free Software.

International development cooperation is increasingly digitised. Free Software is thus becoming an indispensable fundamental technology that guarantees legally compliant international cooperation and reuse, a technology that enables global scaling with simultaneous local adaptability. In order to tap the full potential of digital development cooperation, the FSFE demands that all software development (co-)financed by taxpayers' money be published as Free Software.

Table of contents

- (I) Free Software as a cornerstone of international development cooperation
- About international development cooperation
- Digital resources and dependencies
- Problem: proprietary software
- Solution: Free Software as a foundation of development cooperation
- Status quo: Free Software and international development cooperation
- (II) FSFE demands: Public code for publicly financed international development cooperation
- (III) Positive development goals by using and developing Free Software
- Sustainable handover
- Independence and ownership
- Growing the local economy
- Cost control and transparency
- Localisations and translations
- Local adaption
- Local partnerships
- Global legal certainty
- Open standards
- Security and quality
- Knowledge access and transfer
- Non-discrimination and equal access

(I) Free Software as a cornerstone of international development cooperation

About international development cooperation

International development cooperation is concerned with the sustainable improvement of global economic, social, ecological and political conditions. It strives for the UN sustainable development goals and the empowerment of partners: existing dependencies of aid recipients should be reduced, and new dependencies avoided at all costs. Any resulting independence in turn requires the support of existing local structures, or the establishment of new ones, on site. Central problems should be solved, or made solvable, locally, and prosperity should be made accessible and distributed appropriately. Many development policy initiatives and actors therefore rely heavily on working together with various local partner organizations, supporting them and, if necessary, integrating them into appropriate development programs.

Digital resources and dependencies

The focus of international development cooperation is increasingly shifting to the level of digital cooperation. Whether in agriculture, industrial production, health care or public administration, the development and maintenance of modern social processes is no longer conceivable without software. In this context, the global digital divide is almost identical to the global analog divide in social and political inequality. Traditional donor and recipient constellations remain almost unchanged, but development cooperation is shifting more and more into the digital realm. As with analog development cooperation, digital development policy solutions can only have a lasting effect if existing dependencies are reduced and new dependencies are avoided at all costs.
Current dependencies and central problems of the recipient countries arise in particular from the following factors: access restrictions on digital resources, a lack of technical expertise, missing translations and modification possibilities, and the unequal distribution of digital products and hardware, with the resulting global differences in digital power and ownership. At the same time, these are precisely the problems that can only be solved sustainably on site and with the involvement of local civil society. And it is precisely in these respects that Free Software excels over proprietary software and makes the decisive difference.

Problem: proprietary software

Most of these digital dependencies and problems are a by-product of the use of proprietary software, with which all rights of reuse, further development and modification are reserved to the manufacturers. The roll-out of proprietary software thereby strengthens – unconsciously or deliberately – the dependence of users in developing countries on the currently market-dominating software industry of the industrialised countries. From a development policy and sustainability perspective, however, access to hardware, software and empowering knowledge should be as open as possible for everyone. The use of Free Software offers exactly this basis for open access to software. This article examines the basics and advantages of the use of Free Software in development cooperation in detail.

Solution: Free Software as a foundation of development cooperation

eGovernance, eHealth, digital agriculture and other digital services of international development cooperation are based on the use of software. Functional software thus becomes the basic technology of social organization as well as of modern administrative services. Free Software allows development investments, once made, to be reused around the globe without further license costs and without legal or technical restrictions. The simultaneous publication of the source code on public code repositories also enables one's own software development to profit from reuse, improvement and republishing by other actors around the globe – the so-called "upstream" [1]. In terms of international cooperation, the freely licensed source code serves as a basis for organized or self-empowered knowledge multiplication and transfer. Free Software allows the development of digital cornerstones and provides international standards without creating new monopolies and dependencies. In the same way, one's own software development can benefit from existing publications of other Free Software [2]. Instead of constantly reinventing the wheel, Free Software allows all people to "stand on the shoulders of giants" at the same time.

Status quo: Free Software and international development cooperation

In 2014, the "Principles for Digital Development" were developed by the Digital Impact Alliance, which belongs to the United Nations Foundation. One of the nine principles requires the publication of software, data and standards under free licenses (#6: "Use Open Standards, Open Data, Open Source, and Open Innovation").
The use of free licenses also has a positive effect on most of the other principles, including:
- Adaptability to local conditions (#1: "Design With the User")
- Scaling across regional borders (#3: "Design for Scale")
- Sustainable maintenance and further development (#4: "Build for Sustainability")
- Reuse and improvement of digital resources (#7: "Reuse and Improve")
- Data protection (#8: "Address Privacy & Security")

The obvious interaction between these principles and the underlying development goals, as well as the reusability of existing digital solutions, shows that Free Software is an essential part of any sustainable digital development. Consequently, the UNICEF Innovation Fund, for example, invests exclusively in technologies that are Free Software [3]. Numerous implementing organizations and donors in development cooperation, IT service providers and international organizations have since adopted the "Principles for Digital Development". Other initiatives orient themselves towards or refer to these guidelines, for example the "Principles of Donor Alignment for Digital Health". The signatories of both documents thus stand to benefit from the advantages of Free Software in international development cooperation.

However, despite all the positive developments in recent years, many national and international development cooperation organizations still rely on the development of proprietary software. They thus not only miss out on the advantages of using Free Software; as organizations financed primarily with public funds, they also contradict the demand to link public funds to Public Software [4]. This often happens out of ignorance or the reproduction of existing procurement practices. In order to change these dynamics for the better, the FSFE calls for all taxpayer-(co-)funded software development to be published as Free Software and highlights on this page the advantages of using and developing Free Software in international development cooperation.

(II) FSFE demands: Public code for publicly financed international development cooperation

Many European actors and initiatives in international development cooperation are supported or completely financed with public funds – by the European Union or its member states. In accordance with the "Public Money? Public Code!" campaign, the FSFE demands that in all international development cooperation, any software development (co-)financed with public money be published as Free Software. This includes both internal workflows and software developed by and for local partners. Only in this way can we unlock the full potential and the positive developmental impact that the use and development of Free Software offers. Especially in international development cooperation, code paid for by the people should be available to the people!

(III) Positive development goals by using and developing Free Software

Sustainable handover

The release of software under a free license [5] enables a sustainable business and development model even after a solution has been handed over to the partner organizations, and unlocks the full potential of the digital resource: since there are no license costs and no licensing restrictions or dependencies, Free Software can be reused and scaled without limitations – locally and globally.

Independence and ownership

Via its license, Free Software offers the unrestricted possibility to further develop existing code and thus to adapt software.
These adaptations can be done by users themselves or as a service undertaken by third parties. This allows, for example, local service providers to take over further development, maintenance and support of the software without restrictions. Free Software allows maximum independence for service providers, service recipients and partner organizations, and can thus serve to build local IT expertise.

Growing the local economy

Free Software offers legally safe possibilities for the further development of existing code and the modification of software. In particular, the unrestricted possibilities for further development and localization by third parties enable the creation, use and development of digital resources on site. If local players are commissioned with the further development or localisation of existing software, this directly strengthens local economic power and establishes local competence. Under certain circumstances, the expertise gained in this way can even be exported as a service [6].

Cost control and transparency

Due to the absence of usage restrictions and license fees, a successful Free Software solution can be copied or implemented without limitation. This benefits the limited budgets of developing countries [7]. In particular, there is no danger of hidden costs, as with proprietary solutions that are offered at a low price at the outset but can impose high follow-up costs or other uncontrollable price structures once they are implemented and the resulting dependencies have taken hold. The freedom to improve and reuse Free Software also enables actors on the donor side of international development cooperation to achieve maximum flexibility and scaling of any self-developed IT solution without additional license costs [8][9]: good solutions from one place can be reused at another.

Localisations and translations

In addition to software modifications, Free Software enables the independent and unlimited translation of existing software and documentation into any local language. Such localization can help to overcome part of the "digital divide" between English and non-English speakers [10].

Local adaption

Beyond translation, local modifications of the software to better fit cultural conditions enable the inclusion of local companies and ultimately increase the adoption and acceptance of an IT solution by its local users. In addition to the source code and the language, all other content can be adapted to local conditions if publication under a free license is applied consistently – for example, currency and measurement units or the visual language in use. Adapting software to cultural conditions can further help to promote understanding and local acceptance. In the same way, the software itself can be adapted for different local purposes, for example to specific business processes or different legal requirements.

Local partnerships

Local partners can be involved already during early conception stages or in designing local modifications and implementations – as well as later in the translation, training or delivery of the software [11]. These local partnerships can be very helpful in incrementally introducing technologies, promoting adoption and facilitating the learning curve [13]. Ideally, a local market for technical expertise forms and starts growing. Since Free Software may be modified for any purpose, this also applies to commercial use.
Free Software promotes local and international competition by allowing existing solutions to be reused, or services to be built around existing solutions and offered locally, without having to pay a "producer" any usage fee [13]. It also prevents the creation of monopolies.

Global legal certainty

Free Software mitigates legal issues: all locally adapted solutions, modifications and further developments of Free Software within the scope of the license take place, globally, on legally secure ground. Unlike with proprietary software, no permission is required to create and distribute copies of Free Software.

Open standards

Free Software offers the best possibilities to ensure cooperative global interoperability, through global adaptability and reusability and through the use of open standards [14]. Publicly provided open standards can be integrated by different vendors into their software and thus ensure communication between different services.

Security and quality

The openness of the source code enables a "many eyes" principle. As in science, the possibility of mutual review ensures high quality and often allows security problems to be found and eliminated quickly. Security problems can also be published and solved openly; users can thus be informed and warned immediately [15].

Knowledge access and transfer

Free Software is accessible everywhere in the world without restriction. The related documentation, training and knowledge exchange are also available globally without limitations. Local expertise can be built up through access to existing knowledge platforms.

Non-discrimination and equal access

The results of Free Software development are available to anybody worldwide. This brings us one step closer to the principle of "leaving no one behind" of the Agenda 2030 for Sustainable Development.

The article was written by Erik Albers (Free Software Foundation Europe), Nico Lück and Balthas Seibold (both Deutsche Gesellschaft für Internationale Zusammenarbeit, GIZ GmbH). The article reflects the opinion of the authors and does not represent the opinion of GIZ or of other institutions. The demand "Public Money? Public Code!" is an FSFE campaign and not of the authors.

Footnotes:
- "Upstream" means all contributions by different authors. Usually, these contributions are implemented after a peer-review process as official improvements into the respective software environment. In addition to code contributions, these can also be translations, documentation or other contributions.
- In order to unlock the full developmental potential and strategic advantages of using Free Software in international development cooperation, the possibility of migrating existing system architectures and re-licensing past software developments should be considered. However, this article focuses on the demand to publish future software development as Free Software.
- Compare https://publiccode.eu/
- Eligible are any licenses authorized as free licenses by the Free Software Foundation (https://www.gnu.org/licenses/license-list.html) or the Open Source Initiative (http://opensource.org/licenses).
- Examples of such positive economic developments are DHIS2 and OpenMRS.
- The UN study "Breaking Barriers – The Potential of Free and Open Source Software for Sustainable Human Development" (PDF) lists case studies about the use of Free Software in different parts of the world. It states that "All projects discussed in this publication state that one of the main reasons for choosing FOSS over proprietary software is that no license fees need to be paid for FOSS."
(p. 5)
- BMZ Toolkit 2.0 – Digitalisierung in der Entwicklungszusammenarbeit (PDF), 4.3.3 "Open Source – Nutzung und Entwicklung freier Software", p. 170
- Response of the German government to questions from the Greens, answer 26: "The use of Free Software in public administration can have advantages for developing countries. Depending on the type of software, area of application and number of users, the use of Free Software can above all help to save costs and make IT systems interoperable, thus reducing dependence on providers who use proprietary interfaces and formats." (own translation)
- The UN study "Breaking Barriers – The Potential of Free and Open Source Software for Sustainable Human Development" (PDF) lists multiple case studies in developing countries and within Europe whose software success and adoption was due solely to language adaptations, as these projects aim at "getting non-English speaking communities to use computers." (p. 6)
- BMZ Toolkit 2.0 – Digitalisierung in der Entwicklungszusammenarbeit (PDF), 4.3.3 "Open Source – Nutzung und Entwicklung freier Software"
- Compare "Free and Open Source Software and Technology for Sustainable Development" (Sowe et al., UNU Press, 2012), p. 317: "Partnerships are even more important: partners who together define the problems, design possible solutions, collaborate to implement them and monitor and evaluate the outcome. [...] Introducing technology too fast, without clear goals that are negotiated by all parties involved, will eventually result in its rejection. FOSS technologies for sustainable development should be more evolutionary than revolutionary."
- BMZ Toolkit 2.0 – Digitalisierung in der Entwicklungszusammenarbeit (PDF), 4.3.3 "Open Source – Nutzung und Entwicklung freier Software", p. 170
- Open standards are standards that are accessible to all market participants, and that may be used and improved. For detailed information see: FSFE – Open Standards
- BMZ Toolkit 2.0 – Digitalisierung in der Entwicklungszusammenarbeit (PDF), 4.3.3 "Open Source – Nutzung und Entwicklung freier Software"
It takes a village to do research. Scientific discoveries from the 23andMe Research Team are made possible both by our three million research participants and by our academic collaborators. Jennifer McCreight, Ph.D., a research communications scientist for 23andMe, sat down with Abraham Palmer, Ph.D., professor of psychiatry and vice chair for basic research at University of California, San Diego School of Medicine, to talk more about our recent collaboration on delay discounting that was published in Nature Neuroscience.

Jennifer McCreight: What is delay discounting, and what inspired you to study this particular trait?

Abraham Palmer: I have wanted to study delay discounting, which is the tendency to favor small immediate rewards over larger delayed rewards, since I was an undergraduate. The concept resonates with me because being able to work for a larger delayed reward is so critical to the survival of many animals, including humans. The idea that pigeons, for example, are calculating, in real time, the value of a delayed reward is amazing. We do it too. Some of it clearly is at the conscious level, but I'm convinced that people also have innate differences that are due to genetic factors. So this is a trait that impacts health, but it is also an important concept in microeconomics. From a practical perspective, I'm excited to use these and future results to identify specific genes. Part of my lab works on human genetics, and part of it works on mice and rats. If we can identify genes using human genome-wide association studies (GWAS), we can study them in rodents, where we can also measure delay discounting. That allows us to understand how those genes influence delay discounting, something that is hard or impossible to do in humans.

JM: But mice can't exactly fill out a survey like a 23andMe research participant, can they?

AP: Ha, yes, the whole field of behavioral neuroscience is based on elaborate methods that get around the fact that we can't just ask a mouse how it is feeling, or whether it wants 0.8 ml of water now, or 1.6 ml of water in 72 seconds. If we gave them a paper survey they'd sniff it for a few minutes and then shred it up and make a nest out of it!

JM: What are the benefits of using a cohort of humans versus a mouse model? Disadvantages?

AP: The problem with people is that they are too damn smart. So they might see these questions as pertaining to current interest rates, or their expectations about the return on their investments, or similar financial issues. Or they might have a cultural bias that it is always better to wait. Those factors obscure the "innate" genetic differences that we are trying to study. But a really nice thing about humans is that they are so damn smart. We can ask them these questions in a minute or two. With rodents, we have to spend a couple of weeks "teaching" them how to answer these questions. Also, rodent studies do use a real reward, typically water or food, and demand for those rewards might change from subject to subject, or over the course of a session, or between sessions. But rodents don't have bank accounts, and don't know what the prime interest rate is. They've also been raised in very uniform environments, which minimizes variability due to environmental factors. In a paper written by Jerry Richards, we estimated that the heritability of delay discounting was about 60 percent.
(A bit of trivia: Jerry is the one who first got me interested in delay discounting, when I was an undergrad and he was a postdoc in Lew Seiden's lab at The University of Chicago.)

One of my concerns is that delay discounting in humans may or may not be similar to delay discounting in rodents. There are many differences, including the abstract "reward" offered to humans versus the food or water "reward" offered to rodents. Another issue is the timescale: we'd never be able to train rodents to wait days or weeks or months for a reward, and I'll bet that they wouldn't wait that long even if they understood the question. I think that their time horizon is much shorter than ours. So that raises a question: are decisions about abstract rewards months away really equivalent to a behavioral paradigm in which a rat chooses between water now or more water in 72 seconds? Quite frankly, no one knows; there is pharmacological and neuroanatomical evidence to support the model, but I think questions remain. Those are questions I hope to tackle in the coming years, and these results will be really helpful in that effort.

JM: What do we generally know about the genetics of behavior and personality traits? How heritable are they compared to other traits or diseases?

AP: My postdoc Sandra Sanchez-Roige and I just finished a review of the genetics of personality, so this information is fresh in my mind. Twin studies using various personality traits suggest heritability in the ~40 percent range. SNP heritability is estimated at more like 5–18 percent (SNP heritability is expected to be lower).

JM: What are the benefits of using GWAS to investigate behavioral traits? Are there any limitations?

AP: I think of behavioral genetics as a broad field that includes diseases and also normal variation, which may contribute to disease in extreme cases. Personality provides a continuous (rather than case:control) measure, and can be studied in "normal" populations, yet may provide insights into diseases. The limitations that come to mind include power (sample size) and the obvious role of environmental (including but not limited to cultural) influences.

JM: Why is it important to look at an individual process (such as delay discounting), rather than a disease as a whole (such as ADHD)?

AP: I wouldn't say it is an either/or proposition, but I think delay discounting can add a lot to studies of disease phenotypes. For example, ADHD can be dissected into various components; that is the goal of the Research Domain Criteria initiative, which is a big deal at the National Institute of Mental Health. Delay discounting is one component of ADHD, but I expect that some ADHD genes will have a big impact on delay discounting while others will not. If we know which is which, we know how to study the mechanism of those ADHD genes. That means that having data about both ADHD and delay discounting will allow us to better dissect the diseases into processes, neural pathways, and molecular mechanisms. For me, that is the goal of doing GWAS in the first place.

JM: What advantages does the size and variety of 23andMe's dataset offer a researcher? Did it help speed up the work you were doing or allow you to research something you couldn't otherwise?

AP: Well, size matters, a lot. 23andMe was able to collect phenotypes from 23,217 subjects in about four months. That's fantastic. My lab has none of the infrastructure needed for that scale of data collection.
Having genotypes already available was also key, and the research participants were willing to do this work on a volunteer basis, which meant a lot to me.

JM: One can imagine that financial security plays a large part in an individual's ability to delay receiving a reward – do you control for this?

AP: We had hoped to include socioeconomic status (SES) as a covariate, but because 23andMe only had that data for a fraction of the subjects we decided not to. If we had to exclude research participants for whom we had no SES data, it would have reduced our sample size by about 25 percent. However, one advantage of doing this project with 23andMe was that we knew that research participants were reasonably affluent; for example, they could afford to pay to have themselves genotyped, which is a sign that they are not living "hand to mouth." In other cohorts/populations, this is a major concern. For example, someone who is a drug addict might urgently need the smaller amount of money that is available now because they need to buy more drugs. Moreover, Carl Hart from Columbia once pointed out to me that delay discounting may also measure how much the subject trusts the researcher to actually give them money later. If you have learned that you can't trust other people to make good on their commitments, or if you don't find the scientists trustworthy, you might assume there was a degree of risk in waiting for a delayed reward. So delay discounting may measure different things, depending on the population used. I saw 23andMe as an advantage because these factors were less likely to be in play, but also possibly a limitation because highly impulsive people might not be well represented, as they might be less likely to join 23andMe in the first place.

JM: How accurate are self-reported measures of delay discounting compared to direct measures of the behavior? Do people overestimate their ability to wait for greater reward because of how "impulsivity" is perceived?

AP: That is a great question. When we were designing this project, I wanted to include some tasks that are intended to measure impulsivity. These can be implemented as Java applets, and would have allowed us to measure behavior directly. I still hope to do that study one day. We considered trying to test behavior by telling the research participants that one of their decisions would have real consequences; that is, they would actually get $10 now or $20 in three months. Other papers have suggested that this has little effect on the answers people give, which made us think we could forgo giving any real rewards. In hindsight, I'm glad we did, because giving ten dollars (or more) to 23,217 people would have been expensive!

JM: Your most significant SNP was on the X chromosome – do you think this is why women tend to show more delay discounting compared to men?

AP: I wouldn't read too much into that. Although it was the most significant finding, that locus has a very small overall effect. I was surprised when we saw that women in this cohort put less value on future rewards. I actually panicked when I first saw that, because I had assumed (and thought I knew) that men would discount more. So Sandra and I had this panicked evening where we thought we'd made some critical error in the processing of the data, but then we realized that our results were consistent with prior studies. It is true that men are more impulsive when other measures/definitions are used, but delay discounting is different, for whatever reason.
So all that panic was for nothing!

JM: What are the next steps for your research?

AP: This is a down payment. I'm hoping that by this time next year I'll have at least 100K genotyped subjects who have answered these questions. We are working with at least half a dozen groups already to collect that data, and we are very open to more collaborations. To be honest, I probably won't be satisfied until I have a million. Another next step is to translate these results into rodent studies, as we just discussed. A third direction is to get more measures of impulsivity, hopefully including behavioral measures. And of course all of this requires endless grant writing, which I mostly enjoy, but I've been financing this project out of discretionary funds so far. Now I need to reload, and that means grants or possibly philanthropy.

JM: Where do you see the field of psychological genomics going in the future?

AP: I ask people that question all the time – what will we be talking about in five years. I don't even ask about 10 years in the future because it is laughable; no one can predict that far in the future, the field is developing so quickly. I think digital mental health is a big deal. One aspect of digital mental health is collecting "big data" from people to monitor, predict, and possibly improve aspects of mental health. That big data, especially when it is from genotyped research participants, will be a huge resource. I suspect we could reframe questions about impulsivity by measuring aspects of behavior using smart phones or watches or credit card activity or whatever. Another huge area is the use of electronic medical records in genotyped subjects. I'll bet that polygenic prediction of delay discounting will predict all sorts of good and bad health outcomes; we are already looking at that with Lea Davis and Nancy Cox and others at Vanderbilt University. Of course every year more and more people will be whole genome sequenced, which will allow us to examine rare variants and their impact on all phenotypes. Model systems, both using whole animals and cellular systems, will be extremely important in realizing the promise of molecular insights from GWAS. In terms of how this will impact patients, I hope to see new and better drugs developed, and we all hope that drug treatments will be personalized based on an individual's DNA.

JM: Who is the mastermind behind the science limericks on your lab webpage?

AP: Aren't those cute? I gave a talk years ago at University of Illinois Urbana-Champaign – I think Gene Robinson was the one who invited me – and they had a tradition that the audience would write limericks about the seminar. When I finished taking questions, one of the graduate students got up and read them. I was so impressed I got copies.
Summary: Electrostatic Discharge, or ESD, is an important issue in the electronics industry today. The integrated circuits used to create DCC decoders and other items are sensitive to static electricity and can be damaged or destroyed by a static discharge. Proper precautions during handling and soldering are very important: static discharge shortens the lifespan of electronic devices, leading to erratic operation or failure.

For vacuum tubes, electrostatic discharge was not an issue, and with early transistors it was not considered to be a problem either. MOSFETs (Metal Oxide Semiconductor Field Effect Transistors) arrived in the mid-1960s, beginning the MOSFET revolution. When their failure rates began increasing, it was discovered that the build-up of static electricity during handling damaged the metal oxide layer in the device, causing it to fail. In response, manufacturers of electronic semiconductors and equipment invested time and money to protect against the effects of static electricity. This is reflected in lower failure rates and increased long-term reliability.

ESD and Digital Command Control

Digital Command Control devices are sensitive to static, especially multifunction decoders. The wonders of miniaturization have created the technology needed for Digital Command Control to be possible. With that advance in technology came a new threat: electrostatic discharge, which can easily destroy the transistors and connections within those amazing electronic devices. Manufacturers of DCC decoders report that many failures are a direct result of damage by electrostatic discharge. Proper handling, along with an ESD-safe workbench and soldering station, is important. Cold, dry winter days are especially dangerous with respect to the generation of static electricity. Taking precautions to avoid ESD while installing and handling electronic devices goes a long way, and proper ESD-safe soldering equipment will prevent damage to decoders during installation. Even the audio section of a multifunction decoder is at risk. Awareness of ESD and how to prevent it will increase your enjoyment of DCC in the future.

What is Electrostatic Discharge?

When dissimilar materials rub together, the friction results in a positive (triboelectric) charge on one surface and a negative one on the other. The charge will remain unless it has a path through which it can flow; this trapped charge is "static electricity". When a path does exist, current will flow and the charge will be reduced or dissipated. The levels of voltage and current produced depend on a variety of factors: the size of the person, clothing and other materials, the amount of activity, the object the discharge is made to, and the humidity. The main factor affecting the voltage produced is the combination of materials being rubbed together. Walking across a carpet can create very large voltages; even walking on a vinyl floor can create 5 kV of potential, and working at your workbench during a DCC install could easily generate 500 V or more. You will only feel a discharge of 5 kV or greater, so many of these charges and discharges go unnoticed. When a discharge does occur, a large current can flow (amps!), which is why you notice it.
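To get a feel for the magnitudes involved, here is a back-of-the-envelope sketch using the common human-body-model values of roughly 100 pF body capacitance and 1.5 kΩ skin resistance; these parameter values are assumptions for illustration and are not given in this article:

```python
# Rough numbers for a human-body discharge, using assumed
# human-body-model values (not taken from this page).
C = 100e-12   # body capacitance in farads (~100 pF, assumed)
R = 1.5e3     # skin/contact resistance in ohms (~1.5 kOhm, assumed)
V = 5000.0    # potential from walking on a vinyl floor, volts

energy_mJ = 0.5 * C * V**2 * 1e3   # stored energy, ~1.25 mJ
peak_A = V / R                     # initial discharge current, ~3.3 A
tau_ns = R * C * 1e9               # discharge time constant, ~150 ns

print(f"{energy_mJ:.2f} mJ stored, {peak_A:.1f} A peak, {tau_ns:.0f} ns")
```

A millijoule sounds tiny, but delivered in a few hundred nanoseconds through a gate oxide a few nanometres thick, it is more than enough to punch through the insulation, which is why the "amps!" above matter at the component scale.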
How Does Static Transfer Happen?

There are several ways static charges can be transferred to semiconductor devices, resulting in ESD damage. The most common is when a device is touched by an item that is charged. A classic example occurs when a decoder is on a work bench and someone walks across the floor, creating a charge, then picks it up. The static charge discharges very quickly into the decoder, with the possibility of damage. Metal screwdrivers are more conductive and will discharge even faster, with higher peak currents. It is not necessary to touch components to damage them: plastic cups can carry a charge, and placing one near a device can "induce" a charge into it, which can also damage semiconductor devices. Even flowing air or liquids can create a static charge.

- Pouring liquids from one container to another can create a static charge, caused by the movement of the liquid. This is extremely dangerous; a discharge can ignite flammable vapours.
- The greatest danger to electronic devices is a discharge from the human body or charged material through a significant series resistor. The sudden release of charge into the device can produce high voltages or currents that result in irreversible transformation and destruction. You may not feel the discharge, but the damage is real.

Damage occurs as a result of ESD when the high voltage causes breakdown within a component. Smaller components and the downscaling of electronics into integrated circuits (ICs) increase the probability of ESD damage. A discharge can damage internal gates or transistors, and each time a device experiences a discharge, its time to failure decreases. Most manufacturers consider all semiconductors to be static-sensitive devices, and many treat passive components like capacitors and resistors as static-sensitive as well. Surface mount devices (SMDs), whose dimensions are smaller than those of traditional components, are far more susceptible to ESD damage.

There are three failure modes related to ESD:
- Hard failure: the device no longer works and must be replaced. These are easy to spot.
- Latent damage: the device is damaged, yet still functions. It can fail at any time in the future. Determining what failed and the root cause is very difficult with latent damage.
- Temporary malfunction: the device just doesn't work correctly, but functions normally later. A hard failure can occur in the future, which is very difficult to predict and even harder to troubleshoot.

There are modellers who will confidently claim that they have never had a problem with ESD. Do not believe them: they have been accumulating latent damage or experiencing temporary malfunctions for years; the hard failures just haven't happened yet. They have not recognized their malfunctions as being caused by static electricity.

There are a number of ways in which you can protect against ESD at a reasonable cost. Not only can ESD cause instant failures, it can introduce latent failures, where the decoder works on the test track but fails later in service. There are products available enabling you to take precautions against static and its effects on electronics: anti-static mats, wrist straps, flooring, anti-static bags and other items. By putting everything and everyone at the same potential, static can be controlled. Even with ESD protection, a strike remains a permanent danger to electronic device reliability, as it can easily find a route that bypasses any protection and be injected directly into the device.

Creating an ESD Safe Environment

- Create an area where the DCC decoders and other electronics will be worked on.
- Components and multifunction decoders stored under conditions where they will not experience static discharges.
- People who come into contact with static-sensitive devices are aware of the precautions they need to take.
- A heel strap over the heel of the shoe. (You can buy ESD footwear, but that isn't really necessary for installing a decoder.)

To avoid static in the area where DCC components and decoders are being handled, the bench surface should be able to dissipate any static build-up which occurs. An anti-static mat on the bench is all you need. A conductive bench is as bad as an insulated bench, as it introduces the risk of shorting out any boards placed on it. The mat will help keep everything at the same potential.

Anti-static wrist straps ensure that any charge built up on a person is safely dissipated. The strap is connected to ground via a lead which incorporates a large-value resistor, normally in excess of 1 MΩ. It is possible to connect your wrist strap, the anti-static mat, and any other points together in a junction box, or to connect the wrist strap to the bench surface or mat, which is properly connected to earth ground.
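As a rough illustration of why that series resistor matters (the body capacitance below is an assumption for the sketch, not a figure from this article), compare the same 5 kV charge bleeding off through a wrist strap versus through a metal tool:

```python
# Why the ~1 MOhm resistor in a wrist strap matters (illustrative values;
# the 100 pF body capacitance is an assumption, not from this article).
C = 100e-12   # assumed body capacitance, farads
V = 5000.0    # assumed static potential, volts

for label, R in [("metal tool, ~1.5 kOhm path", 1.5e3),
                 ("wrist strap, 1 MOhm resistor", 1.0e6)]:
    print(f"{label}: peak {V / R * 1e3:.1f} mA, "
          f"decay time constant {R * C * 1e6:.2f} us")
```

The resistor turns an amps-in-nanoseconds spike into a gentle few milliamps over a fraction of a millisecond, slow and weak enough to be harmless to both the person and the electronics.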
To overcome problems caused by flooring surfaces, a wide variety of conductive or static-dissipative coverings or coatings, such as wax, are available. Synthetic fabrics used in clothing and chair coverings can develop very high levels of static during normal motion, even with the use of a wrist strap. A cold, dry winter increases the production of static charges; dry climates and air conditioning also create an ideal environment for static. Maintaining the humidity of your workspace helps reduce static electricity: increasing the humidity creates a layer of moisture on items in the environment, which aids in dissipating any static build-up, as moisture is a good conductor. A relative humidity of 40% or better is very effective. Dry air flowing across a surface can also induce a static charge, which is likewise reduced by increasing the humidity.

ESD Safe Soldering Irons

A wide variety of ESD-safe soldering irons is available. The main requirement is that the tip of the soldering iron be grounded; the recommended tip-to-ground resistance is less than five ohms. Thermostatically controlled soldering stations should use a zero-voltage switching system. This prevents large spikes, caused by the switching of the thermostat, from appearing on the tip of the iron.

Verification of Soldering and Desoldering Equipment

There are instructions available on how to check whether a soldering iron is ESD safe, and how to verify that your ESD-safe iron remains acceptable. The process is simple and requires an ohmmeter and a resistor. There is a more sophisticated process, which can be done if you have the equipment. If your iron lacks a ground pin on its line cord, it is probably not ESD safe.

ESD Safe Tools

Various ESD-safe tools for soldering are available. Solder suckers are available, as is static-safe solder wick. These tools are designed to minimize the creation of static charges through their construction and materials.

Whenever electronic components or assemblies are transported or stored, they should be placed in suitable packaging. Static bags are available to keep them protected; these can be purchased, or recycled from computer card or hard drive packaging. Conductive foam is also available. A simple check with an ohmmeter will reveal whether it is conductive: normally the resistance will be relatively low, a few hundred ohms at most, though this obviously depends on the area of the test probes in contact with the foam. When buying a multifunction decoder or other DCC accessory, keep it in its package and avoid handling it until you are ready to install it.

Static Grass Applicators

Be aware that static grass applicators generate a significant voltage, which is used to induce a charge in the material being applied. This charge can damage DCC boosters and decoders. Take appropriate precautions when using devices which generate a static charge; disconnecting the DCC power bus is a good start.
Tuʻi Tonga Empire

The Tuʻi Tonga Empire, or Tongan Empire, are descriptions sometimes given to Tongan expansionism and projected hegemony in Oceania which began around 950 CE, reaching its peak during the period 1200–1500. It was centred in Tonga on the island of Tongatapu, with its capital at Muʻa. Modern researchers and cultural experts attest to widespread Tongan influence and to evidence of transoceanic trade and exchange of material and non-material cultural artefacts.

Beginning of Tongan expansionism

With the decline of Samoa's Tui Manu'a maritime empire, a new empire rose from the south. In 950 AD, the first Tu'i Tonga, 'Aho'eitu, started to expand his rule outside of Tonga. According to leading Tongan scholars, including Dr. 'Okusitino Mahina, Tongan and Samoan oral traditions indicate that the first Tu'i Tonga was the son of their god Tangaloa. As the ancestral homeland of the Tu'i Tonga dynasty and the abode of deities such as Tagaloa 'Eitumatupu'a, Tonga Fusifonua, and Tavatavaimanuka, the Manu'a islands of Samoa were considered sacred by the early Tongan kings.

By the time of the 10th Tu'i Tonga, Momo, and his successor, Tuʻitātui, the empire had grown to include much of the former domains of the Tui Fiti and Tui Manu'a, with the Manu'a group as the only exception. To better govern the large territory, the Tu'i Tonga moved their throne to the lagoon at Lapaha, Tongatapu. The influence of the Tu'i Tonga was renowned throughout the Pacific, and many of the neighbouring islands participated in the widespread trade of resources and new ideas. Under Momo and Tuʻitātui the empire was at the height of its expansion, and tribute for the Tu'i Tonga was said to be exacted from all tributary chiefdoms of the empire. This tribute, known as the 'inasi, was rendered annually at Mu'a following the harvest season, when all countries subject to the Tu'i Tonga had to bring a gift for the gods, who were recognized in the person of the Tu'i Tonga. Captain Cook witnessed an 'inasi ceremony in 1777, at which he noticed many foreigners in Tonga, especially darker-skinned people from Fiji, the Solomon Islands and Vanuatu.

The finest mats of Samoa ('ie tōga) are incorrectly translated as "Tongan mats"; the correct meaning is "treasured cloth" ("ie" = cloth, "tōga" = female goods, in opposition to "oloa" = male goods). Many fine mats came into the possession of the Tongan royal families through chiefly marriages with Samoan noblewomen, such as Tohu'ia, the mother of the first Tu'i Kanokupolu, Ngata, who came from Safata, 'Upolu, Samoa. These mats, including the Maneafaingaa and Tasiaeafe, are considered the crown jewels of the current Tupou line (which is derived matrilineally from Samoa).

The success of the empire was largely based upon the Imperial Navy. The most common vessels were long-distance double canoes fitted with triangular sails; the largest canoes of the Tongan kalia type could carry up to 100 men. The most notable of these were the Tongafuesia, the ʻĀkiheuho, the Lomipeau, and the Takaʻipōmana. The Takaʻipōmana was actually a Samoan kalia; according to Queen Sālote and the Palace Records, this was the Samoan double-hulled canoe that brought Tohu'ia Limapō from Sāmoa to wed the Tu'i Ha'atakalaua.
The large navy allowed Tonga to become wealthy, with large amounts of trade and tribute flowing into the royal treasury.

Decline of the Tuʻi Tonga and two new dynasties

The decline of the Tuʻi Tonga began with numerous wars and internal pressure. In the 13th or 14th century, the Samoans expelled the Tongans from their lands after the Tu'i Tonga Talakaifaiki was defeated in battle by the brothers Tuna, Fata and Savea, progenitors of the Malietoa family. In response, the falefā was created as a body of political advisors to the empire. The falefā officials were initially successful in maintaining some hegemony over the other subjected islands, but growing dissatisfaction led to the assassination of several rulers in succession. The most notable were Havea I (19th Tu'i Tonga), Havea II (22nd Tu'i Tonga) and Takalaua (23rd Tu'i Tonga), who were all known for their tyrannical rule. In AD 1535, Takalaua was assassinated by two foreigners while swimming in the lagoon of Mu'a. His successor, Kauʻulufonua I, pursued the killers all the way to ʻUvea, where he killed them.

Because of the many assassination attempts on the Tu'i Tonga, Kauʻulufonua established a new dynasty, the Ha'a Takalaua, in honour of his father, and gave his brother, Mo'ungamotu'a, the title of Tu'i Ha'atakalaua. This new dynasty was to deal with the everyday decisions of the empire, while the Tu'i Tonga was to be the nation's spiritual leader, though he still had the final say over the life or death of his people. In this period the Tu'i Tonga "empire" became Samoan in orientation, as the Tu'i Tonga kings themselves became ethnic Samoans who married Samoan women and resided in Samoa. Kau'ulufonua's mother was a Samoan from Manu'a; Tu'i Tonga Kau'ulufonua II and Tu'i Tonga Puipuifatu had Samoan mothers, and as they married Samoan women the succeeding Tu'i Tonga – Vakafuhu, Tapu'osi, and 'Uluakimata – were allegedly more "Samoan" than "Tongan."

In 1610, the 6th Tu'i Ha'a Takalaua, Mo'ungatonga, created the position of Tu'i Kanokupolu for his half-Samoan son, Ngata, dividing regional rule between them, though as time went on the Tu'i Kanokupolu's power became more and more dominant over Tonga. The Tu'i Kanokupolu dynasty oversaw the importation and institution of many Samoan policies and titles, and according to Tongan scholars this "Samoanized" form of government and custom continues today in the modern Kingdom of Tonga.

Things continued in this manner afterward. The first Europeans arrived in 1616, when the Dutch explorers Willem Schouten and Jacob Le Maire spotted Tongans in a canoe off the coast of Niuatoputapu; they were followed by Abel Tasman. These visits were brief, however, and did not significantly change the islands.

The dividing line between the two moieties was the old coastal road named Hala Fonua moa (dry land road). Modern chiefs who derive their authority from the Tuʻi Tonga are still named the Kau Hala ʻUta (inland road people), while those from the Tuʻi Kanokupolu are known as the Kau Hala Lalo (low road people). Concerning the Tuʻi Haʻatakalaua supporters: when this division arose, in the 15th century, they were of course the Kauhalalalo, but when the Tuʻi Kanokupolu had overtaken them they shifted their allegiance to the Kauhalaʻuta.
Modern archaeology, anthropology and linguistic studies confirm widespread Tongan cultural influence ranging through East 'Uvea, Rotuma, Futuna, Samoa and Niue, parts of Micronesia (Kiribati, Pohnpei), Vanuatu, and New Caledonia and the Loyalty Islands. While some academics prefer the term "maritime chiefdom", others argue that, although it was very different from examples elsewhere, "empire" is probably the most convenient term.
Fourier Transform Spectroscopy

Fourier transform spectroscopy is a method where one computes an optical spectrum from raw data by applying a Fourier transform algorithm. The method is applied in various techniques for spectroscopy – most often in the context of infrared spectroscopy. The term time domain spectroscopy is also common, because the interference signal is measured in the time domain, e.g. in the sense that an optical time delay is varied.

The operation principle of Fourier transform spectroscopy in its most common form is fairly simple to understand. The investigated electromagnetic radiation (most frequently, infrared light) is sent into an interferometer, normally in the form of a Michelson interferometer. One then measures the optical power at the output of the interferometer as a function of the arm length difference, using some photodetector. That arm length difference is usually varied by mechanically moving a mirror (or more conveniently a retroreflector) over some distance. If the optical input to the interferometer were monochromatic, one would obtain a sinusoidal oscillation of the detected power as a function of arm length difference, and the period of that oscillation would be the optical wavelength. If the light is polychromatic, the recorded interferogram is a superposition of contributions from the different wavelength components. Therefore, by applying a Fourier transform to those data one can retrieve the optical spectrum – more precisely, the power spectral density as a function of optical frequency or wavelength. Information on the spectral phase is not obtained. Some corrections need to be applied to the obtained spectrum, as explained below.

For a mathematically founded understanding, consider that the interferogram signal, resulting from the superposition of two optical electric fields with a relative time delay $\tau$, can be expressed as

$$I(\tau) \propto \left\langle \left| E(t) + E(t - \tau) \right|^2 \right\rangle = 2 \left\langle \left| E(t) \right|^2 \right\rangle + 2 \, \mathrm{Re} \left\langle E(t) \, E^*(t - \tau) \right\rangle$$

where $I(\tau)$ can be the intensity of the interference signal or alternatively a photocurrent. That signal can be decomposed into a constant and a $\tau$-dependent part; the latter is

$$G(\tau) = 2 \, \mathrm{Re} \left\langle E(t) \, E^*(t - \tau) \right\rangle ,$$

which is essentially just the autocorrelation of the electric field. According to the Wiener–Khinchine theorem, the Fourier transform of that autocorrelation is the power spectral density of the electric field, i.e., the optical spectrum.

The explained operation principle can easily be adapted for absorption spectroscopy. One can record an optical spectrum with and without a specimen inserted into the beam path – before or after the interferometer – and compare the computed spectral intensities to obtain the absorption of the sample in a wide range of wavelengths. More precisely, one obtains the loss of spectral intensity caused by the sample, which may be caused not only by absorption in the specimen but also by surface reflections, for example.

Note that there are also other, less common forms of Fourier transform spectroscopy. For example, terahertz waveforms can be recorded in the time domain with an optical sampling technique based on a photoconductive antenna (see the article on terahertz detectors). One can then apply a Fourier transform to obtain the optical spectrum of a terahertz pulse, in that case also obtaining the spectral phase.
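As a concrete numerical illustration of the principle just described, the following minimal sketch simulates the interferogram of a source with two spectral lines and recovers their frequencies with a fast Fourier transform. All parameter values (step size, number of samples, line positions) are assumptions chosen for the demonstration, not values from the article:

```python
import numpy as np

# Illustrative parameters only: a scan of the optical path length
# difference (OPD) in uniform steps, and a source consisting of two
# monochromatic lines placed exactly on FFT bins to avoid leakage.
c = 3.0e8                      # speed of light, m/s
n = 4096                       # number of samples, a power of 2
dx = 100e-9                    # OPD step, m
dt = dx / c                    # corresponding time-delay step, s
tau = np.arange(n) * dt        # delay axis

df = 1.0 / (n * dt)            # frequency resolution of this scan
f_lines = np.array([260, 300]) * df
interferogram = sum(1.0 + np.cos(2 * np.pi * f * tau) for f in f_lines)

ac_part = interferogram - interferogram.mean()   # remove the constant part
spectrum = np.abs(np.fft.rfft(ac_part)) ** 2     # power spectral density
f_axis = np.fft.rfftfreq(n, d=dt)                # optical frequency axis

recovered = np.sort(f_axis[np.argsort(spectrum)[-2:]])
print(recovered, f_lines)      # the two strongest bins match the input lines
```

Subtracting the mean before transforming corresponds to discarding the constant part of $I(\tau)$, so that only the autocorrelation term contributes to the computed spectrum.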
Various Practical Aspects

Required Spatial Range and Resolution

The obtained spectral resolution is limited by the maximum optical path length difference. This is easy to see by considering the properties of discrete Fourier transforms, or simply by recognizing that the range of path length differences determines the number of oscillation cycles which can be counted. Quantitatively, the resolution in terms of spectroscopic wavenumber is the inverse of the maximum optical path length difference, $\Delta\tilde{\nu} = 1 / \Delta x_{\rm max}$. Simple instruments may work with only a few centimeters of path length difference, achieving spectral resolutions somewhat better than 1 cm−1, while high-precision spectrometers work with much longer path length differences, e.g. several meters.

On the other hand, the maximum measurable wavenumber is half the inverse of the spatial resolution with which the path length difference is sampled, $\tilde{\nu}_{\rm max} = 1 / (2\,\delta x)$. Therefore, a not particularly high spatial resolution is required for instruments working only with relatively long optical wavelengths, while UV instruments are more demanding in that respect. The spatial accuracy, however, should be much higher – see below. For calculations, note that the variation of path length difference in a Michelson interferometer is twice the amount of movement of the retroreflector.

Spectrometers as used in infrared spectroscopy often use a very broadband light source for measuring optical properties of samples in a wide wavelength region. The light source should have a sufficiently high spectral flux and emit continuously, with stable optical properties, throughout the interferometer scan. For the near infrared, incandescent lamps are suitable, but their emission is limited to wavelengths below roughly 5 μm by the transmissivity of the bulb glass. For longer wavelength regions, one therefore uses emitters not requiring a glass bulb – for example, Nernst glowers based on an electrically heated rod made of zirconium/yttrium ceramics. Silicon carbide rods can even be used up to about 40 μm. There are also mercury vapor lamps.

For the interferometer to work properly, one requires a light beam with sufficiently high spatial coherence, because different spatial components of a beam can produce different contributions to the interferogram, effectively washing out the pattern. Ideally, one would have a Gaussian beam from a laser source. In practice, however, one often deals with incoherent sources, where the light has to be spatially filtered, accepting some loss of optical power. Nevertheless, the possible power throughput is still substantially better than for a grating monochromator as used in other forms of spectroscopy, where the light needs to be fed through a narrow optical slit. This is called the Jacquinot advantage, named after Pierre Jacquinot, who identified it.

The Beam Splitter

The optical components of the interferometer should of course work properly over the full spectral region of interest. The most substantial challenge arises from the beam splitter, which would ideally exhibit a 50:50 splitting ratio for all relevant wavelengths. That is not strictly required, but it should at least not lead to highly asymmetric splitting or introduce high power losses, e.g. by absorption in a substrate. In infrared spectroscopy, one often uses beam splitters with calcium fluoride (CaF2) substrates for wavelengths up to 8 μm. KBr-based beam splitters with a germanium-based coating can be used up to 25 μm wavelength, but that material is hygroscopic and must therefore be carefully protected against moisture. For the far infrared, one often uses polymer films.
Calibration of Arm Length Variations

The interferometer arm length difference is usually varied with a motorized drive, which can normally not be trusted to provide sufficiently accurate variations of the position. Therefore, one often simultaneously records a second interferogram, using light from a narrow-linewidth laser with a sufficiently stable wavelength. One can then computationally correct the data for any deviations from a perfectly uniform movement.

Note that it is not sufficient to have a positional accuracy which merely allows one to clearly resolve the oscillations of an interferogram. This is because random position errors also limit the signal-to-noise ratio of the obtained spectra. Therefore, it is essential to realize Fourier transform spectrometers with highly accurate opto-mechanics and an accurate reference interferometer. That also provides a very high wavelength accuracy – better than in dispersive instruments.

Calibration of Spectral Power Density

A simple Fourier transform applied to the raw data will generally not deliver a calibrated optical spectrum, mostly because the responsivity of the used photodetector and the reflectivity of the beam splitter are wavelength-dependent; further influences can come from other optical elements of the setup. Such influences do not matter in absorption spectroscopy, because one only compares spectra with and without an absorbing sample, and the obtained intensity ratios are not affected; one only requires sufficiently strong signals for all relevant wavelengths. When measuring optical spectra of sources, however, one needs to apply a calibration. It may be done, for example, by comparing with the recorded spectrum of a light source with a known spectral shape. In the infrared, one often uses black body radiation for that calibration. In some cases, one may even calibrate a spectrometer for obtaining absolute values of the power spectral density. This is often not easy, however, for example because of influences of the required spatial filtering of the input beam (see above).

Computation of the Spectra

Discrete Fourier transforms can quite easily and efficiently be computed using a Fast Fourier Transform (FFT) algorithm. In the simplest form, such an algorithm works with a number of data points which is a power of 2. Even on a relatively simple microprocessor, the FFT computation usually takes much less time than the acquisition of the raw data.

Historical Aspects

Interference-based methods of spectroscopy were used already in the early days of optics, for example by Hippolyte Fizeau, who resolved the doublet of the yellow sodium line in the 19th century (→ Fizeau interferometers). However, computations of optical spectra based on the Fast Fourier Transform have been implemented only from the middle of the 20th century on, when computers became available; the first commercial devices appeared in the 1960s.

Reducing the Sensitivity to Mechanical Noise

For the kind of interferometer explained above, the sensitivity to mechanical noise (vibrations and shocks, or inaccuracies of an optical delay line) is quite high. That sensitivity can be massively reduced by using a common-path interferometer based on birefringence. This can be realized, for example, with a simple optical beam path where two polarization components finally interfere at a polarizer. The optical delay between the two polarization components can be adjusted by moving a wedged birefringent crystal.
Because this approach not only greatly reduces the sensitivity to vibrations but also allows very accurate scanning of the delay range, it is particularly suitable for Fourier transform spectroscopy in relatively short wavelength regions.

Applications of Fourier Transform Spectroscopy

The method of Fourier transform spectroscopy is most frequently used in conjunction with infrared light – for the following reasons:

- Particularly in the far infrared, it is difficult to realize focal plane arrays as required for conventional spectrographs, for example. It is thus preferable to use a method where only a simple photodetector is required.
- Due to the limited sensitivity of infrared detectors (particularly at very long wavelengths), it is important to use the light efficiently. It is thus beneficial to avoid excessive power losses at the input slit of a monochromator (Jacquinot advantage, see above). Besides, one also enjoys the Fellgett advantage (named after Peter Berners Fellgett, the pioneer of the method): if the measurement noise is dominated by detector noise (e.g. thermal electronic noise) rather than by shot noise, the achievable signal-to-noise ratio is substantially better for the Fourier transform method than for scanning the spectrum with a tunable monochromator, where only a tiny part of the optical spectrum is utilized at any time. This is particularly true in cases where a high spectral resolution is required.
- The Fourier transform method is even somewhat simpler to implement in the infrared, because the required spatial resolution is lower than for visible and ultraviolet light.

The main application of the method is in devices for measuring either optical spectra of light sources or wavelength-dependent properties of materials, such as the transmissivity (e.g. reduced by absorption lines) or the reflectivity. The principle of Fourier transform spectroscopy is also applied in wavemeters, although those usually deliver only the peak wavelength rather than the full optical spectrum. There are also applications of the principle in technical fields outside photonics, for example in the context of nuclear magnetic resonance imaging and mass spectrometry.

Bibliography

- P. B. Fellgett, “Theory of infra-red sensitivities and its application to investigations of stellar radiation in the near infra-red” (PhD thesis, 1949)
- P. B. Fellgett, “On the ultimate sensitivity and practical performance of radiation detectors”, J. Opt. Soc. Am. 39 (11), 970 (1949), doi:10.1364/JOSA.39.000970
- P. Jacquinot, “New developments in interference spectroscopy”, Rep. Prog. Phys. 23 (1), 267 (1960), doi:10.1088/0034-4885/23/1/305
- L. Mertz, “Astronomical photoelectric spectrometer”, Astron. J. 71, 749 (1966)
- M. F. A’Hearn, F. J. Ahern and D. M. Zipoy, “Polarization Fourier spectrometer for astronomy”, Appl. Opt. 13 (5), 1147 (1974), doi:10.1364/AO.13.001147
- F. Adler et al., “Mid-infrared Fourier transform spectroscopy with a broadband frequency comb”, Opt. Express 18 (21), 21861 (2010), doi:10.1364/OE.18.021861
- A. Oriana et al., “Scanning Fourier transform spectrometer in the visible range based on birefringent wedges”, J. Opt. Soc. Am. A 33 (7), 1415 (2016), doi:10.1364/JOSAA.33.001415
- F. Johnston, “In search of space: Fourier spectroscopy”, Chapter 7 of T. Shinn and B. Joerges (eds.), Instrumentation: Between Science, State and Industry, Kluwer Academic (2000), available online
- S. P. Davis, M. C. Abrams and J. W. Brault, Fourier Transform Spectrometry, Academic Press, ISBN 978-0120425105 (2001)
- F. J. J. Clarke et al., “FTIR measurements – standards and accuracy”, Vib. Spectrosc. 30 (1), 25 (2002)
- J. Mandon et al., “Fourier transform spectroscopy with a frequency comb”, Nature Photon. 3, 99 (2009), doi:10.1038/nphoton.2008.293

See also: spectroscopy, Michelson interferometers, optical coherence tomography, white light interferometers and other articles in the categories light detection and characterization, optical metrology, methods
What is lymphedema?

Lymphedema is swelling in one or more extremities that results from impaired flow of the lymphatic system. The lymphatic system is a network of specialized vessels (lymph vessels) throughout the body whose purpose is to collect excess lymph fluid with proteins, lipids, and waste products from the tissues. This fluid is then carried to the lymph nodes, which filter waste products and contain infection-fighting cells called lymphocytes. The excess fluid in the lymph vessels is eventually returned to the bloodstream. When the lymph vessels are blocked or unable to carry lymph fluid away from the tissues, localized swelling (lymphedema) is the result. Lymphedema most often affects a single arm or leg, but in uncommon situations both limbs are affected.

Primary lymphedema is the result of an anatomical abnormality of the lymph vessels and is a rare, inherited condition. Secondary lymphedema results from identifiable damage to or obstruction of normally functioning lymph vessels and nodes. Worldwide, lymphedema is most commonly caused by filariasis (a parasitic infection), but in developed countries lymphedema most commonly occurs in women who have had breast cancer surgery, particularly when followed by radiation treatment.

Mild lymphedema first may be noticed as a feeling of heaviness, tingling, tightness, warmth, or shooting pains in the affected extremity. These symptoms may be present before there is obvious swelling of an arm or leg. Other signs and symptoms of early or mild lymphedema include:

- a decreased ability to see or feel the veins or tendons in the extremities,
- tightness of jewelry or clothing,
- redness of the skin,
- asymmetrical appearance of the extremities,
- tightness or reduced flexibility in the joints, and
- slight puffiness of the skin.

As lymphedema progresses to a more moderate to severe state, the swelling of the involved extremity becomes more pronounced. The other symptoms mentioned above also persist with moderate or severe lymphedema.

What causes lymphedema?

Primary lymphedema causes

Primary lymphedema is an abnormality of an individual's lymphatic system and is generally present at birth, although symptoms may not become apparent until later in life. Depending upon the age at which symptoms develop, three forms of primary lymphedema have been described. Most primary lymphedema occurs without any known family history of the condition.

Congenital lymphedema is evident at birth, is more common in females, and accounts for about 20% of all cases of primary lymphedema. A subgroup of people with congenital lymphedema has a genetic inheritance (in medical genetics termed a "familial sex-linked pattern"), which is termed Milroy disease.

Lymphedema praecox is the most common form of primary lymphedema. It is defined as lymphedema that becomes apparent after birth and before age 35 years; symptoms most often develop during puberty. Lymphedema praecox is four times more common in females than in males.

Primary lymphedema that becomes evident after 35 years of age is known as Meige disease or lymphedema tarda. It is less common than congenital lymphedema and lymphedema praecox.

Secondary lymphedema causes

Secondary lymphedema develops when a normally functioning lymphatic system is blocked or damaged. Breast cancer surgery, particularly when combined with radiation treatment, is the most common cause in developed countries. This results in one-sided (unilateral) lymphedema of the arm.
Any type of surgical procedure that requires removal of regional lymph nodes or lymph vessels can potentially cause lymphedema. Surgical procedures that have been associated with lymphedema include vein stripping, lipectomy, burn scar excision, and peripheral vascular surgery. Damage to lymph nodes and lymph vessels, leading to lymphedema, can also occur due to trauma, burns, radiation, infections, or compression or invasion of lymph nodes by tumors.

Worldwide, however, filariasis is the most common cause of lymphedema. Filariasis is the direct infestation of lymph nodes by the parasite Wuchereria bancrofti. The disease is spread among persons by mosquitoes, and affects millions of people in the tropics and subtropics of Asia, Africa, the Western Pacific, and parts of Central and South America. Infestation by the parasite damages the lymph system, leading to swelling in the arms, breasts, legs, and, for men, the genital area. The entire leg, arm, or genital area may swell to several times its normal size. Also, the swelling and the decreased function of the lymph system make it difficult for the body to fight infections. Lymphatic filariasis is a leading cause of permanent disability in the world.

What are the symptoms of lymphedema?

The swelling of lymphedema usually occurs in one or both arms or legs, depending upon the extent and localization of damage. Primary lymphedema can occur on one or both sides of the body as well. Lymphedema may be only mildly apparent or debilitating and severe, as in the case of lymphatic filariasis, in which an extremity may swell to several times its normal size. It may first be noticed by the affected individual as an asymmetry between both arms or legs or difficulty fitting into clothing or jewelry. If the swelling becomes pronounced, fatigue due to added weight may occur, along with embarrassment and restriction of daily activities.

The long-term accumulation of fluid and proteins in the tissues leads to inflammation and eventual scarring of tissues, producing a firm, taut swelling that does not retain an indentation when pressed with a fingertip (nonpitting edema). The skin in the affected area thickens and may take on a lumpy appearance described as an orange-peel (peau d'orange) effect. The overlying skin can also become scaly and cracked, and secondary bacterial or fungal infections of the skin may develop. Affected areas may feel tender and sore, and loss of mobility or flexibility can occur. Other symptoms can accompany the swelling of lymphedema, including:

- Warmth, redness, or itching
- Tingling or burning pains
- Fever and chills
- Decreased flexibility in the joints
- Aching, pain, and fullness of the involved area
- Skin rash

The immune system function is also suppressed in the scarred and swollen areas affected by lymphedema, leading to frequent infections and even a malignant tumor of lymph vessels known as lymphangiosarcoma.

How is lymphedema diagnosed?

A thorough medical history and physical examination are done to rule out other causes of limb swelling, such as edema due to congestive heart failure, kidney failure, blood clots, or other conditions. Often, the medical history of surgery or other conditions involving the lymph nodes will point to the cause and establish the diagnosis of lymphedema. If the cause of swelling is not clear, other tests may be carried out to help determine the cause of limb swelling. CT or MRI scans may be useful to help define lymph node architecture or to identify tumors or other abnormalities.
Lymphoscintigraphy is a test that involves injecting a radioactive tracer into lymph vessels and then observing the flow of fluid using imaging technologies. It can illustrate blockages in lymph flow. Doppler ultrasound scans are sound wave tests used to evaluate blood flow, and can help identify a blood clot in the veins (deep venous thrombosis) that may be a cause of limb swelling.

What are possible treatments for lymphedema?

There is no cure for lymphedema. Treatments are designed to reduce the swelling and control discomfort and other symptoms. Compression treatments can help reduce swelling and prevent scarring and other complications. Examples of compression treatments are:

- Elastic sleeves or stockings: These must fit properly and provide gradual compression from the end of the extremity toward the trunk.
- Bandages: These are wrapped more tightly around the end of the extremity and more loosely toward the trunk, to encourage lymph flow out of the extremity toward the center of the body.
- Pneumatic compression devices: These are sleeves or stockings connected to a pump that provides sequential compression from the end of the extremity toward the body. They may be used in the clinic or in the home and are useful in preventing long-term scarring, but they cannot be used in all individuals, such as those with congestive heart failure, deep venous thrombosis, or certain infections.

Massage techniques, known as manual lymph drainage, can be useful for some people with lymphedema. Exercises that lightly contract and stimulate arm or leg muscles may also be prescribed by the doctor or physical therapist to help stimulate lymph flow. Surgical treatments for lymphedema are used to remove excess fluid and tissue in severe cases, but no surgical treatment is able to cure lymphedema.

Infections of skin and tissues associated with lymphedema must be promptly and effectively treated with appropriate antibiotics to avoid spread to the bloodstream (sepsis). Patients affected by lymphedema must constantly monitor for infection of the affected area. In affected areas of the world, the drug diethylcarbamazine is used to treat filariasis.

What are complications of lymphedema?

As noted before, secondary infections of the skin and underlying tissues can complicate lymphedema. Inflammation of the skin and connective tissues, known as cellulitis, and inflammation of the lymphatic vessels (lymphangitis) are common complications of lymphedema. Deep venous thrombosis (formation of blood clots in the deeper veins) is also a known complication of lymphedema. Impairment of functioning in the affected area and cosmetic issues are further complications of lymphedema.

Those who have had chronic, long-term lymphedema for more than 10 years have a 10% chance of developing a cancer of the lymphatic vessels known as lymphangiosarcoma. The cancer begins as a reddish or purplish lump visible on the skin and spreads rapidly. This is an aggressive cancer that is treated by amputation of the affected limb. Even with treatment, the prognosis is poor, with less than 10% of patients surviving after 5 years.

Can lymphedema be prevented?

Primary lymphedema cannot be prevented, but measures can be taken to reduce the risk of developing lymphedema if one is at risk for secondary lymphedema, such as after cancer surgery or radiation treatment.
The following steps may help reduce the risk of developing lymphedema in those at risk for secondary lymphedema:

- Keep the affected arm or leg elevated above the level of the heart, when possible.
- Avoid tight or constricting garments or jewelry (also avoid the use of blood pressure cuffs on an affected arm).
- Do not apply a heating pad to the affected area or use hot tubs, steam baths, etc.
- Keep the body adequately hydrated.
- Avoid heavy lifting and forceful activity with the affected limb; normal, light activity is encouraged.
- Do not carry a heavy purse on an affected arm.
- Practice thorough and careful skin hygiene.
- Avoid insect bites and sunburns.

What is the prognosis for lymphedema?

Lymphedema cannot be cured, but compression treatments and preventive measures for those at risk for secondary lymphedema can help minimize swelling and associated symptoms. As mentioned above, chronic, long-term edema that persists for many years is associated with an increased risk of developing a rare cancer, lymphangiosarcoma. Massage, in the form of specific lymphedema drainage techniques performed by trained therapists, is a common treatment used to reduce swelling from lymphedema.
Rheumatic fever is an inflammatory disease which may develop two to three weeks after a Group A streptococcal infection (such as strep throat or scarlet fever). It is believed to be caused by antibody cross-reactivity and can involve the heart, joints, skin, and brain. Acute rheumatic fever commonly appears in children ages 5 through 15, with only 20% of first-time attacks occurring in adults.

Rheumatic fever is common worldwide and is responsible for many cases of damaged heart valves. In Western countries, it has become fairly rare since the 1960s, probably due to the widespread use of antibiotics to treat streptococcal infections. While it has been far less common in the United States since the beginning of the 20th century, there have been a few outbreaks since the 1980s. Although the disease now seldom occurs there, it is serious and has a mortality of 2–5%.

Rheumatic fever primarily affects children between ages 5 and 15 years and occurs approximately 20 days after strep throat or scarlet fever. In up to a third of cases, the underlying strep infection may not have caused any symptoms. The rate of development of rheumatic fever in individuals with untreated strep infection is estimated to be 3%. The incidence of recurrence with a subsequent untreated infection is substantially greater (about 50%). The rate of development is far lower in individuals who have received antibiotic treatment. Persons who have suffered a case of rheumatic fever have a tendency to develop flare-ups with repeated strep infections. The recurrence of rheumatic fever is relatively common in the absence of maintenance low-dose antibiotics, especially during the first three to five years after the first episode. Heart complications may be long-term and severe, particularly if valves are involved.

Diagnosis: modified Jones criteria

T. Duckett Jones, MD, first published these criteria in 1944. They have been periodically revised by the American Heart Association in collaboration with other groups. Two major criteria, or one major and two minor criteria, when there is also evidence of a previous strep infection, support the diagnosis of rheumatic fever. Exceptions are chorea and indolent carditis, each of which by itself can indicate rheumatic fever.

Major criteria

The mnemonic JONES is often used to recall the major criteria.

- Joints (migratory polyarthritis): a temporary migrating inflammation of the large joints, usually starting in the legs and migrating upwards.
- O [imagine a heart-shaped O] (carditis): inflammation of the heart muscle, which can manifest as congestive heart failure with shortness of breath, pericarditis with a rub, or a new heart murmur.
- Nodules (subcutaneous nodules – a form of Aschoff bodies): painless, firm collections of collagen fibers on the back of the wrist, the outside of the elbow, and the front of the knees. These now occur infrequently.
- Erythema marginatum: a long-lasting rash that begins on the trunk or arms as macules and spreads outward to form a snakelike ring while clearing in the middle. This rash never starts on the face and is made worse with heat.
- Sydenham's chorea (St. Vitus' dance): a characteristic series of rapid, purposeless movements of the face and arms. This can occur very late in the disease.

An additional way to remember the major criteria is the mnemonic C.A.N.C.ER:

- C: Carditis
- A: Arthritis
- N: Nodules (subcutaneous)
- C: Chorea
- ER: ERythema marginatum

Minor criteria

- Fever: temperature elevation
- Arthralgia: joint pain without swelling
- Laboratory abnormalities: increased erythrocyte sedimentation rate, increased C-reactive protein, leukocytosis
- Electrocardiogram abnormalities: a prolonged PR interval
- Evidence of Group A strep infection: elevated or rising antistreptolysin O titre or DNAase, though by the time clinical illness begins, cultures for the streptococcus bacterium will be negative
- Previous rheumatic fever or inactive heart disease

Other signs and symptoms

- Abdominal pain

Pathophysiology

Rheumatic fever is a systemic disease affecting the peri-arteriolar connective tissue and can occur after an untreated Group A beta-hemolytic streptococcal pharyngeal infection. It is believed to be caused by antibody cross-reactivity. This cross-reactivity is a Type II hypersensitivity reaction and is termed molecular mimicry. Usually, self-reactive B cells remain anergic in the periphery without T cell co-stimulation. During a streptococcal infection, activated antigen-presenting cells such as macrophages present the bacterial antigen to helper T cells. Helper T cells subsequently activate B cells and induce the production of antibodies against the cell wall of Streptococcus. However, the antibodies may also react against the myocardium and joints, producing the symptoms of rheumatic fever.

Group A Streptococcus pyogenes has a cell wall composed of branched polymers which sometimes contain "M proteins" that are highly antigenic. The antibodies which the immune system generates against the M proteins may cross-react with the cardiac myofiber protein myosin and smooth muscle cells of arteries, inducing cytokine release and tissue destruction. This inflammation occurs through direct attachment of complement and Fc receptor-mediated recruitment of neutrophils and macrophages. Characteristic Aschoff bodies, composed of swollen eosinophilic collagen surrounded by lymphocytes and macrophages, can be seen on light microscopy. The larger macrophages may become Aschoff giant cells. Acute rheumatic valvular lesions may also involve a cell-mediated immune reaction, as these lesions predominantly contain T-helper cells and macrophages.

In acute rheumatic fever, these lesions can be found in any layer of the heart, and the condition is hence called pancarditis. The inflammation may cause a serofibrinous pericardial exudate described as "bread-and-butter" pericarditis, which usually resolves without sequelae. Involvement of the endocardium typically results in fibrinoid necrosis and verrucae formation along the lines of closure of the left-sided heart valves. Warty projections arise from the deposition, while subendothelial lesions may induce irregular thickenings called MacCallum plaques.

Chronic rheumatic heart disease is characterized by repeated inflammation with fibrinous resolution. The cardinal anatomic changes of the valve include leaflet thickening, commissural fusion, and shortening and thickening of the tendinous cords.

Treatment

The management of acute rheumatic fever is geared toward the reduction of inflammation with anti-inflammatory medications such as aspirin or corticosteroids. Individuals with positive cultures for strep throat should also be treated with antibiotics.
Aspirin is the drug of choice and should be given at high doses of 100 mg/kg/day. One should watch for side effects such as gastritis and salicylate poisoning. Steroids are reserved for cases where there is evidence of involvement of the heart. The use of steroids may prevent further scarring of tissue and may prevent development of sequelae such as mitral stenosis. Monthly injections of long-acting penicillin must be given for a period of 5 years in patients having one attack of rheumatic fever. If there is evidence of carditis, the length of this therapy (Penidure) may be up to 40 years. Another important cornerstone in treating rheumatic fever is the continual use of low-dose antibiotics (such as penicillin, sulfadiazine, or erythromycin) to prevent recurrence.

Patients with positive cultures for Streptococcus pyogenes should be treated with penicillin as long as allergy is not present. This treatment will not alter the course of the acute disease. Some patients develop significant carditis, which manifests as congestive heart failure. This requires the usual treatment for heart failure: diuretics and digoxin. Unlike ordinary heart failure, rheumatic heart failure responds well to corticosteroids.

Prevention of recurrence is achieved by eradicating the acute infection and prophylaxis with antibiotics. The American Heart Association recommends that daily or monthly prophylaxis continue long-term, perhaps for life. Nurses also have a role in prevention, primarily in screening school-aged children for sore throats that may be caused by Group A streptococci (especially Group A β-hemolytic Streptococcus pyogenes).

References

- Kumar, Vinay; Abbas, Abul K.; Fausto, Nelson; & Mitchell, Richard N. (2007). Robbins Basic Pathology (8th ed.). Saunders Elsevier. pp. 403–406. ISBN 978-1-4160-2973-1
- Medline Plus Medical Encyclopedia: Rheumatic fever
- Porth, Carol (2007). Essentials of Pathophysiology: Concepts of Altered Health States. Hagerstown, MD: Lippincott Williams & Wilkins.
- Jones TD (1944). The diagnosis of rheumatic fever. JAMA 126: 481–4.
- Ferrieri P (2002). Proceedings of the Jones Criteria workshop. Circulation 106 (19): 2521–3.
- Steven J. Parrillo, DO, FACOEP, FACEP. eMedicine – Rheumatic Fever. URL accessed on 2007-07-14.
- (1992) Guidelines for the diagnosis of rheumatic fever. Jones Criteria, 1992 update. Special Writing Group of the Committee on Rheumatic Fever, Endocarditis, and Kawasaki Disease of the Council on Cardiovascular Disease in the Young of the American Heart Association. JAMA 268 (15): 2069–73.
- Saxena, Anita (2000). Diagnosis of rheumatic fever: Current status of Jones criteria and role of echocardiography. Indian Journal of Pediatrics 67 (4): 283–6.
- Abbas and Lichtman. Basic Immunology: Functions and Disorders of the Immune System. Elsevier Inc. 2004.
- Faé KC, da Silva DD, Oshiro SE, et al. (May 2006). Mimicry in recognition of cardiac myosin peptides by heart-intralesional T cell clones from rheumatic heart disease. J. Immunol. 176 (9): 5662–70.
- Cotran, Ramzi S.; Kumar, Vinay; Fausto, Nelson; Robbins, Stanley L.; Abbas, Abul K. (2005). Robbins and Cotran Pathologic Basis of Disease. St. Louis, MO: Elsevier Saunders.
- Rheumatic Heart Disease/Rheumatic Fever. American Heart Association. URL accessed on 2008-02-17.
- Rheumatic fever information from Seattle Children's Hospital Heart Center
Wind in Chinese Medicine

by Tom Bisio

Characteristics of Wind

1. Wind Prevails in the Spring

However, Wind can occur in any season.

2. Wind Can Be Understood as Any Sudden Climatic Change

Wind often accompanies climatic changes.

3. Wind Is Light and Yang

Wind tends to rise, to disperse, and to move up and out.

4. Wind Has the Ability to Penetrate the Skin

Wind can penetrate through the pores and lodge in the Cou Li – the interstices between the skin and flesh. Sweating opens the pores and makes it easier for the wind (with cold and damp) to penetrate.

5. Wind Affects the Superficial Layers (Exterior) and Upper Parts of the Body

Wind commonly penetrates into the superficial layers of the body (muscles and skin) and the upper parts of the body (head and shoulders). This can create symptoms such as:

- Head: headache
- Neck: stiffness; aching
- Shoulders & Upper Back: aching, stiffness
- Skin & Face: Bell's palsy, numbness, itching
- Lungs: congestion; cough
- Nose: runny nose; nasal obstruction
- Muscles: stiffness; aching

6. Wind Can Penetrate at Specific Acupuncture Points

These points are, for the most part, located in the upper back and neck area. These points are often used to expel wind pathogens. Some examples are:

- GB 20 Feng Chi ("Wind Pond")
- DU 16 Feng Fu ("Wind Mansion")
- BL 12 Feng Men ("Wind Gate")
- GB 31 Feng Shi ("Wind Market")

7. Wind Gusts and Is Characterized by Rapid Change

Wind diseases often have symptoms that migrate or come and go – for example, skin rashes that appear and disappear, or migratory arthralgia (pain that moves from joint to joint). Wind moves rapidly and suddenly, so Wind diseases are characterized by conditions that come on with great force and rapidity. Examples are dizziness, strokes (often called "wind-stroke"), and seizures.

8. Wind Is Characterized by Constant Movement

This manifests in the body with symptoms such as:

- Itching
- Spasms

Wind may also block the normal movement of Qi and cause abnormal rigidity of the limbs, trunk, or neck. It may also cause numbness or paresthesia. Conditions like Bell's palsy are often caused by exposure to Wind, which penetrates the surface and inflames the facial nerve, causing one side of the face to droop and flatten.

9. Wind Is the Most Common Pathogen and Easily Combines with Other Pathogens

It is said that "The Hundred Diseases Develop From Wind." Wind's kinetic energy allows it to penetrate the surface layers of the body, bringing other pathogens in with it. Thus disease patterns are characterized as Wind-Heat, Wind-Cold, Wind-Damp, etc., illustrating that Wind easily combines with the other External Pathogens. External Pathogens are traditionally known as the Six Qi or "Six Energies" (also called the Six Devils, Six Evils, or Six Pernicious Influences):

- Wind
- Cold
- Summer Heat
- Dampness
- Dryness
- Fire

10. The Liver Loathes the Wind

The liver is responsible for the smooth, orderly movement of Qi in the body. This makes it particularly vulnerable to the erratic, gusting movement of Wind. When Wind affects the viscera directly (usually the liver), it is referred to as "Internal Wind."

11. Internal Wind Is Often Associated with a Liver Qi Imbalance

Since the liver stores blood, a deficiency of blood affects the liver. Blood fails to contain Liver Yang, which then rises upward, creating Heat and Wind. The sudden, irregular movements of Wind tend to upset the smooth flow of Qi promoted by the Liver. This can occur due to a variety of factors.
One example is extreme Heat (high fever) "stirring up Wind," in the same way that large fires create drafts of wind. This is a type of Internal Wind, often referred to as "Liver Wind Stirring." In severe cases it is characterized by high fever, convulsions, rigidity, and opisthotonus (arching back of the head and/or neck) with delirium or coma.

Reflections on the Interrelationship of Wind, Heat & Cold

The interrelationship of natural phenomena such as Wind, Cold, and Heat has been observed for centuries. James Joule conducted many important experiments in the mid-nineteenth century that showed the interconversion of kinetic energy, thermal energy, and gravitational energy. He understood that these natural forces that we perceive outside the body are also at work within it.

The motion of air that we call wind arises chiefly from the intense heat of the torrid zone compared with the temperature of the temperate and frigid zones. Here we have an instance of heat being converted into the living force of currents of air. These currents of air, in their progress across the sea, lift up its waves and propel the ships; whilst in passing across the land they shake the trees and disturb every blade of grass. The waves by their violent motion, the ships by their passage through the resisting medium, and the trees by their rubbing of their branches together and the friction of their leaves against themselves and the air, each and all of them generate heat equivalent to the diminution of the living force of the air which they occasion. The heat thus restored may again contribute to raise fresh currents of air; and thus the phenomena may be repeated in endless succession and variety.

When we consider our own animal frames, 'fearfully and wonderfully made,' we observe in the motion of our limbs a continual conversion of heat into living force which may either be converted back again into heat or employed in producing attraction through space as when a man ascends a mountain. Indeed the phenomena of nature, whether mechanical, chemical, or vital, consist almost entirely in a continual conversion of attraction through space [gravitational force], living force, and heat into one another.

What Joule understood was that solar radiation is absorbed by the Earth's surface. Areas that receive greater amounts of radiation (the equator) warm the air, causing molecules of warm air to expand (i.e., to move more rapidly and pack less densely) and rise. Surrounding cooler air rushes in to take its place. Warm air eventually cools and sinks, because molecules of cool air move more slowly and pack more densely together. These convection currents create pressure differences on the Earth's surface, giving rise to winds.

When viewing the body as a landscape that is a microcosm of the natural world around us, it is easy to apply Joule's observations to the traditional Chinese ideas of Wind, Cold, and Heat in the body. They help us to see how relatively warmer and colder areas inside the body might create pressure differentials that generate "Wind" inside the body. It is also easy to see that these pressures and winds could interact with the pressures, winds, and temperature differentials of the outside world.

The Six Qi as both External and Internal Disease Manifestations

Wind diseases are generally thought to be caused by the meteorological phenomenon of wind. However, the upward stirring of Liver Yang Qi can also transform into Wind and give rise to wind-like symptoms. Extreme internal heat can also "stir up" Wind.
These conditions are described as 'internally generated Wind'. Similarly, Cold can be externally contracted through exposure to extreme cold or sudden frost, but the presence of Cold may also be a result of weak Yang Qi. This dynamic is circular in nature: damage to the body's Yang Qi makes one susceptible to the penetration of external Cold, and the penetration of external Cold blocks the circulation of Yang Qi. Similarly, internal Dampness occurs when Spleen Yang fails to move and transform fluids. This makes the body susceptible to the penetration of external Dampness, which can in turn overload the Spleen Yang, creating even more Dampness internally. Dryness also may be internally generated by depletion of Yin or externally contracted through exposure to a dry climate.

It is important to keep in mind that the Six Qi are paradigms of pathologies that are the product of multiple factors, including the mind and the emotions. Wind, Cold, Damp and Dryness can all transform into Fire. Internal Fire can also be caused by the "Five Minds" – the emotions or mindsets – which can create Yin-Yang imbalances. Many complex etiologies can be understood through the paradigm of the Six Qi. It is possible for pathogens that have invaded the body to hide in a latent state, and then manifest when the right combination of circumstances (emotional upset, change of season, dietary indiscretion, etc.) occurs. This idea is advanced in the Huang Di Nei Jing Ling Shu:

The Yellow Emperor asks: sometimes people become sick all of a sudden without being aware of an attack by external vicious energies nor of any emotional disturbances; why is that? Is it because of any mysterious causes such as ghosts or gods? Chi Po replies: This is caused by a vicious energy that is already residing in the body without making disturbances and also by the patient's own emotional disturbances due to unfulfilled desires, both of which work together to give rise to an internal disorder of energy and blood resulting in a struggle between two forces. The hidden causes are so delicate that they are invisible and cannot be heard, and people are inclined to think that some mysterious factors such as ghosts or gods are at play. (Ling Shu, Chapter 24)

This idea of a latent disease that incubates over a period of time may help to explain the development of certain immune-deficiency diseases and the ability of pathogens to change when lodged in the body. Seasonal change can also play a role in bringing on latent energies which penetrated the body in an earlier season. For example, a winter Cold pathogen that remains in the body may manifest in the spring as a Heat pathogen.

Bi Syndrome

Bi means "block" or "obstruction." Most of what is termed arthritis in Western medicine falls under Bi Syndrome in Chinese medicine. Bi Syndromes are musculo-skeletal problems caused by the invasion of the Six Qi, which can combine with internal phlegm and stagnant blood to create blockage and stagnation in the joints, ligaments, tendons and muscles, blocking the circulation of blood and Qi in those areas. This results in an arthralgia-like pain, manifesting as stiffness and spasm, numbness, or heaviness.

There are three main factors that can contribute to the development or progression of Bi Syndromes.

- A weak constitution with insufficient Qi and Blood and weakness of the Wei Qi, which allows pathogens to penetrate into the outer layers of the body and lodge there.
The patient is too weak to expel the pathogen, and gradually it penetrates deeper, settling into the Sinews and Bones and causing stagnation of the Blood. This gives rise to Painful Obstruction.

- Even in individuals with a strong constitution, if the pathogenic factor is strong enough or there are repeated exposures, a Bi Syndrome may develop. Living in a cold area with inadequate protection against cold, sleeping exposed to snow, wind, or fog, exposure to cold or dampness after physical exertion or while sweating when the pores are open, living in damp areas, and getting caught in the wind and rain with inadequate protection are some examples of this. Often repeated exposure may be work related, such as working in a freezer or icehouse, working in cold water, or working in a boiler room. One unusual example is a patient who was a woodworker: he developed Hot Bi in his arms from using high heat and steam to bend wood.

- Phlegm and Stagnant Blood are considered Secondary Pathogens. This is because they are usually a result of dysfunctional processes set in motion by a primary disease agent. For example, traumatic injury often leads to Stagnant Blood and Body Fluids. If Fluids remain stagnant long enough, they can congeal into phlegm and lodge in the Jing Luo (Channels & Collaterals) or the joints. Phlegm and Stagnant Blood can block the Jing Luo and penetrate into the bones, causing swelling and joint deformation.

Zhong Feng (Wind Strike)

Zhong Feng refers to a sudden strike of external pathogenic wind combined with internal wind. Wind Strike can correspond to Western conditions like brain hemorrhage, cerebral thrombosis and vessel spasm. In Chinese medicine, the pathogenesis of these kinds of conditions can stem from Bi Syndrome. External wind, damp heat and cold can combine with internal wind due to internal imbalances. It is a common problem of old age, because deficiencies of Yin naturally develop with aging, giving rise to Liver Yang agitation, which can produce Interior Wind. Facial paralysis after a stroke is due to Internal Wind, while Bell's palsy is attributable to External Wind.

Wind Strike can be summarized in four words: Wind-Phlegm-Fire-Stasis. All of these may or may not be present, but at least three of them must be present to produce Wind Strike. They can also be present to different degrees of intensity, which can give rise to many different kinds of Wind Strike.

Wind Strike is a good example of multiple factors combining to create illness. Overwork and unalleviated stress deplete Kidney Yin and Jing, which in turn leads to an elevation of Liver Yang, giving rise to heat and internal wind, which in turn may combine with external wind and heat due to lowered resistance to external pathogens. External pathogens can lead to fever and internal heat. Liver Yang excess and Yin deficiency generating internal wind is often the result of anger and emotional frustration. This can combine with loss of blood or general blood deficiency. Since the liver stores blood, a deficiency of blood affects the liver: blood fails to contain Liver Yang, which then rises upward, creating heat and wind. When wind enters the conduits and the network vessels, internal wind and the wind intruding from the outside excite each other until suddenly phlegm-fire emerges and causes obstructions.
Irregular eating habits and a diet composed of fatty foods, fried foods and sugars with little nutritional value can compound the problem – such a diet weakens the spleen and gives rise to phlegm production, which can in turn block circulation in the channels and collaterals. Drinking alcohol also creates internal heat, and ancient Chinese physicians noticed that if a person sat or lay facing the wind in order to cool the body after excessive drinking of alcohol and eating, wind could penetrate the body, causing hemiplegia or facial paralysis. This is because excessive eating and drinking opens the pores and creates a temporary condition of internal heat and obstruction that easily allows wind to penetrate and to combine with the heat and obstruction.

Another complicating factor in Wind Strike is the use of Western medications like painkillers, antacids and antidepressants, which can deplete the body, affect digestion and interfere with the Qi Dynamic. Additional factors are excessive sexual activity in men combined with inadequate rest, and excessive physical activity or work which over-strains the body, further exacerbating Qi and blood deficiency. These factors can weaken the kidney and marrow essence, leading to deficiency.

Notes

- J. P. Joule, "On Matter, Living Force and Heat", in Joule's Only General Exposition of the Principle of Conservation of Energy, C. Watson, Pasadena, CA: California Institute of Technology. http://blog.spu.edu/energyproject/files/2011/07/Watson-1947-Joules-original-energy-paper.pdf
- Nigel Wiseman and Andrew Ellis, Fundamentals of Chinese Medicine (Brookline, MA: Paradigm Publications, 1996), p. 81.
- Mehrab Dashtdar et al., "The Concept of Wind in Traditional Chinese Medicine", Journal of Pharmacopuncture 19 (4), December 2016. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5234349/
Dr. Frances Houghton

'So many people accept without question the dogmatic opinion and fantastic claims of the patent medicine vendor, ignorant quacks, or even film stars, in preference to the guarded statements of scientists and the experimental evidence which shows that [these] claims and opinions have no foundation in fact…'

In 1943, the exasperated Medical Director General (MDG) of Britain's Royal Navy (RN) noted that rumour and disinformation were causing considerable damage to the health of the Service. Securing collective health in the Navy by preventing the spread of infectious disease was vital to the successful prosecution of Britain's war effort, yet the RN repeatedly found itself confronting the challenges of those who undermined its medical experts' best endeavours. This post examines the Navy's efforts to counter 'anti-vaxxers' during the Second World War and considers the significance of this little-known history.

Reflecting on the Ebola outbreak that tore through West Africa between 2014-16, Rob Boddice argues that 'historians' contributions to contemporary vaccine debates are… strikingly relevant, especially when focusing on the nature of anti-vaccinism.' Historians have traced cycles of fear and resistance to vaccination right back to 1796, when Edward Jenner developed the first vaccine for smallpox. Successive compulsory vaccination legislation throughout the nineteenth century mobilised and crystallised anti-vaccination movements, whilst resistance to British vaccination policy also underpins histories of colonial medicine and imperial subjugation. Some of the perpetual anxieties that surrounded vaccine confidence sprang from clear abuses of domestic and imperial state and medical power, poor medical hygiene, concerns about lost labour productivity and earnings, and social stigmatisation of vaccination. Amid this longer and broader history, the Second World War helps us to understand how wartime medical experts came under 'friendly' fire for attempting to control epidemics in the armed forces. The following discussion of this takes its cue from medical historian Paula Larsson, reserving the term 'anti-vaxxers' for the figureheads of anti-vaccination movements, who mobilised others against vaccination and whose activities included the distribution of disinformation. As we see, the challenges that these Second World War 'anti-vaxxers' posed to the Navy's medical expertise offer a useful window upon the tenacious cycles of fear, doubt, ignorance, and suspicion surrounding modern vaccination.

In February 1941, a row blew up in the House of Commons between William Leach, a Labour MP who was concerned about inoculation practices in the Navy, and AV Alexander, First Lord of the Admiralty. Having studied pre-war statistics on the health of the RN in 1932, 1934, and 1935, Leach insisted that these reports established that inoculated seamen in the early 1940s were subject to significantly greater risks of contracting and dying from typhoid fever (also known as enteric fever) than uninoculated men. Of the 13 cases of typhoid fever reported across the three years mentioned, 11 men had been inoculated. Of the 11 who had been vaccinated against typhoid, 2 were fatal cases.
Leach claimed this proved that in the Navy during the early war years, ‘the great majority of the cases of typhoid fever are in the inoculated class and also the bulk of the deaths.’ Although Leach’s figures can, at best, be described as decidedly scanty, for the Navy, problems in challenging his views arose from incomplete record-keeping. The First Lord was forced to fudge his response with some rather clumsy guesswork based on Army records of vaccinated servicemen. The Naval MDG, however, gave the entire matter short shrift, noting that Leach’s claims were grounded in ‘false reasoning’ and woefully limited clinical evidence. Aside from demonstrating the eternal wisdom of keeping an institution’s statistical and record-keeping ducks in a row, there was a much deeper significance to this spat. Fundamentally, Leach’s concerns were connected to a widespread rumour that the RN routinely withheld shore leave from unvaccinated men. He sought to use what he viewed as the Navy’s ‘entire misconception’ of the above facts to prevent so-called ‘unfair deprivation’ of shore leave on foreign stations. Actually, vaccination was not compulsory in the wartime RN (nor in its sister Services). Men were entitled to refuse vaccination or re-vaccination on conscientious grounds without being subject to punishment or penalty for their decision. Nevertheless, official regulations did caution that men who refused vaccination were not to be allowed to land in ports where there was a risk of exposure to diseases such as smallpox. From the Navy’s perspective, this was simply a sensible precaution to safeguard the entire shipboard community. In the eyes of the National Anti-Vaccination League (NAVL), however, the wartime Navy were engaging in sinister efforts to deprive people who were fearful of vaccines of their rightful liberty. The NAVL originated as the Anti-Compulsory Vaccination League in 1866, and its Secretary, Lily Loat, was a prominent anti-vaccination activist of long-standing. Picking up the threads of Leach’s challenge, Loat forwarded to the Admiralty a resolution passed at the League’s annual Conference in May 1941 which formally censured the First Lord for listening to the advice of his medical experts. The NAVL also warned that they deplored the Navy’s practice of rejecting unvaccinated recruits for the Fleet Air Arm (FAA). This charge left the Navy a little puzzled; the Medical Department were only aware of one case in which a direct entry officer cadet undergoing pilot training for the FAA had refused vaccination, and he was given the option of selecting another branch of naval service instead. Overall, argued the RN, this was an unusual case, and the decision to stop the man’s pilot training was very much in his own interests since overseas flying personnel might be forced to land in endemic areas that posed high risk of contracting infectious disease. These debates spilled over into wider accusations from the League that the RN and the other Services were infringing ‘the right of the people to safeguard their health’. Given that ‘liberty’ was ‘in eclipse over so much of the world’, warned Loat, it was more needful than ever to stand up against a ‘medical dictatorship’. 
The NAVL did not restrict its anti-vaccination activism to the armed forces, objecting also to the proposed establishment of a State Medical Service on the basis that it would ‘limit freedom of ideas’ and be ‘grossly unfair’ to people who objected to compulsory taxation to pay for medical doctrines to which they were opposed (ie., vaccination). Throughout the rest of the war, the NAVL broadcast these ideas through their newsletter, The Vaccination Inquirer and Health Review. This provided a public platform from which to distribute challenges to military and civilian vaccination policies, in addition to publicising cases where it believed the Services had wronged conscientious objectors to vaccination. So what can we take away from all this? Not least, the wartime fight to protect the collective health of the RN underscored the imperative of keeping meticulous medical records; a couple of years into the war, the Navy began to overhaul and modernise its systems of medical record-keeping. The Navy’s problems also demonstrate the importance of being able to counter ‘dogmatic opinion and fantastic claims’ with clear, well-explained facts and anecdotal reassurance about misleading stories in a febrile climate of anxieties about medical intervention. This history also highlights that the voices of anti-vaccinationists who challenged leading medical authorities remained surprisingly prominent in wartime British public and political spheres. As our world marks the grim first anniversary of a year in which pandemic has ravaged the globe, in Britain the ghosts of the Second World War seem to touch our lives more than ever. Partly, of course, the 75th commemorative anniversaries of VE Day and VJ Day provided moments of much-needed distraction from the troubles of the present, but seemingly endless attempts to interpret the present crisis through a lens of the mystical ‘spirit’ of the wartime nation have also managed to frame the Battle of Covid-19 as an offshoot of that great ‘People’s War’ of eight decades ago. To me, as a historian, it is surely significant that even in the midst of the Second World War, we can identify the same fear-drenched language, the same anxious concerns about bodily and personal freedom from state intervention, the same heightened emotional responses to vaccination, and the same vitriol directed against top medical experts as we are currently witnessing in our own generations’ efforts to combat Covid-19. Perhaps, then, one of the more useful legacies of the Second World War in Britain might be to help broaden popular awareness of how ‘anti-vaxx’ discourses and methods threatened even the nation’s ‘finest’ hours – and to encourage reflection about how this might help to overcome the challenges that lie ahead in Britain’s public health endeavours in 2021. The National Archives (TNA), ADM 261/4, ‘Malaria Prevention: A Problem of Discipline’ Rob Boddice, ‘Vaccination, Fear and Historical Relevance’, History Compass, 14:2 (2016), 71-78 (71). 
Nadja Durbach, Bodily Matters: The Anti-Vaccination Movement in England, 1853-1907 (Durham, NC: Duke University Press, 2005); Deborah Brunton, The Politics of Vaccination: Practice and Policy in England, Wales, Ireland, and Scotland, 1800-1874 (Rochester, NY: University of Rochester Press, 2008); David Arnold, Colonizing the Body: State Medicine and Epidemic Disease in Nineteenth Century India (Berkeley: University of California Press, 1993); Sanjoy Bhattacharya, Mark Harrison, and Michael Worboys, Fractured States: Smallpox, Public Health and Vaccination Policy in British India, 1800-1947 (London: Sangam Books Limited, 2005); Niels Brimnes, 'Variolation, Vaccination and Popular Resistance in Early Colonial South India', Medical History, 48:2 (2004), 199-228.
TNA, ADM 1/15661, 'Vaccination and Inoculation of R.N. Personnel'.
TNA, ADM 1/15661, 'Resolutions passed at Annual Conference of the National Anti-Vaccination League', 22 May 1941.
Contributions to Variation in Fly Ball Distances

by Alan Nathan
July 6, 2020

Back in early 2013, I wrote a guest article for Baseball Prospectus entitled "How Far Did That Fly Ball Travel?" In that article, I posed a seemingly simple question: Can we predict the landing point of a fly ball just after it leaves the bat? A more precise way to ask the question is as follows: Suppose the velocity vector of a fly ball just after leaving the bat is known, so that the exit velocity, launch angle, and spray angle are all known. How well does that information determine the landing point? I then proceeded to investigate the question, at least for home runs, with the aid of HITf/x data for the initial velocity vector and the ESPN Home Run Tracker for the landing point and hang time. Using a technique described in the article, that information was combined with a trajectory model to reconstruct the full trajectory, which was then extrapolated to ground level to determine the fly ball distance. The answer to the question was immediately obvious: the initial velocity vector poorly determines the fly ball distance.

This conclusion led naturally to the next question: Why? One obvious reason is variation in atmospheric conditions, especially wind. However, the data revealed that the variation in home run distance for a given initial velocity was as large in Tropicana Field, where the atmospheric conditions are expected to be constant, as in the rest of the league. So that was eliminated, at least as the primary culprit. The article then went on to consider variation in two other parameters that play a role in fly ball distance: backspin ωb and drag coefficient CD. Neither of these parameters was directly measured. Rather, they were inferred, along with the sidespin ωs, in the procedure used to recreate the full trajectory. The analysis showed the following:

- For a given value of CD, distance increases as ωb increases. This makes sense, since larger backspin results in greater lift, keeping the ball in the air longer so that it travels farther.
- For a given value of ωb, distance decreases as CD increases. Again this makes sense, since greater drag is expected to reduce the carry of a fly ball. Interestingly, this was the first appearance in print of a suggestion of a significant ball-to-ball variation in the drag properties of baseballs.
- There was a moderately strong positive correlation between CD and ωb, suggesting that the drag on a baseball increases with increasing spin, all other things being equal. Although this effect is well known for golf balls and had been speculated for baseballs in R. K. Adair's excellent The Physics of Baseball, to my knowledge this is the first real evidence showing the effect for baseballs.
- Given that both lift and drag increase with increasing ωb and that they have opposite effects on distance, it was tentatively concluded that at a high enough spin rate there would be no further increase (and perhaps even a decrease) in distance with a further increase in spin.

Given the above conclusions, which were based on indirect determinations of spin and drag, I decided that it was necessary to do a dedicated experiment under controlled conditions. That led to the January 2014 Minute Maid Park experiment done by my Washington State collaborators, my student, and me under controlled atmospheric conditions, leading to another article.
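For readers who want to see the mechanics, trajectory models of this kind integrate the equations of motion with quadratic drag plus a Magnus (lift) force from spin. The article gives no code, so the sketch below is only a schematic illustration: the function name, the constants, and the saturating lift parametrization are all assumptions, not the author's actual model.

```r
## Sketch only: a 2-D drag-plus-Magnus trajectory integrator. All constants
## and the lift parametrization are illustrative assumptions.
fly_ball_distance <- function(v0_mph, theta_deg, omega_rpm, cd = 0.33) {
  m   <- 0.145                       # ball mass, kg
  R   <- 0.0366                      # ball radius, m
  rho <- 1.19                        # air density, kg/m^3 (assumed)
  g   <- 9.81                        # gravitational acceleration, m/s^2
  k   <- 0.5 * rho * pi * R^2 / m    # acceleration = k * C * speed^2, so k in 1/m

  speed0 <- v0_mph * 0.44704         # mph -> m/s
  th     <- theta_deg * pi / 180
  omega  <- omega_rpm * 2 * pi / 60  # rpm -> rad/s
  pos <- c(0, 1)                     # (x, y) in m; contact about 1 m off the ground
  vel <- speed0 * c(cos(th), sin(th))

  dt <- 0.001                        # time step, s
  while (pos[2] > 0) {
    speed <- sqrt(sum(vel^2))
    S  <- R * omega / speed                           # spin parameter
    cl <- S / (0.4 + 2.32 * S)                        # assumed saturating lift fit
    a_drag   <- -k * cd * speed * vel                 # drag opposes motion
    a_magnus <-  k * cl * speed * c(-vel[2], vel[1])  # lift perpendicular to motion
    vel <- vel + (a_drag + a_magnus + c(0, -g)) * dt
    pos <- pos + vel * dt
  }
  pos[1] / 0.3048                    # landing distance, in feet
}

## e.g. fly_ball_distance(100, 27.5, 2500) lands in the broad neighborhood
## of a typical ~400 ft fly ball under these assumptions.
```

With a model like this in hand, the inverse problem described above — inferring spin and drag from an observed landing point and hang time — amounts to adjusting ωb and CD until the computed trajectory matches the observations.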
In the experiment, we had complete control over the initial exit velocity, launch angle, and backspin using a specially designed ball launcher. Further, we utilized the stadium Trackman to obtain the full trajectory of each fly ball, from which we could unambiguously determine the drag coefficient. The experiment directly confirmed all of the conclusions reached indirectly from the previous home run analysis, in particular:

- A significant ball-to-ball variation in CD for fixed ωb.
- An increase in CD with ωb.
- Remarkably little variation in fly ball distance for backspins in the range 2000-3200 rpm.

Interestingly, neither of these studies considered the effect of the spray angle or of sidespin ωs on the carry of a fly ball. Both issues were addressed in my more recent unpublished article, "Why Does a Fly Ball Carry Better to Centerfield?". From an analysis of Statcast data, including exit velocity, launch angle, spray angle, backspin rate, sidespin rate, and distance, the following was found:

- Fly ball distance depends on ωs, which in turn depends on the spray angle. Therefore, fly ball distance depends on the spray angle.
- Fly ball distance is largest when ωs=0 and is roughly symmetric about ωs=0.
- ωs=0 (and therefore fly ball distance is maximum) at a spray angle slightly to the pull side of straightaway center field.
- As a consequence of the previous point, balls hit at a given spray angle to the pull side (where the magnitude of ωs is smaller) will carry farther than balls hit at the same spray angle on the opposite side (where the magnitude of ωs is greater).

These observations made it clear that my earlier work needed to be expanded to include the effects of spray angle and sidespin, leading to the present analysis. The earlier observation was that, for fixed exit velocity and launch angle, there was a variation in fly ball distance. The list of possible contributing factors now includes the following: φ∗, ωb, ωs, CD, and measurement noise. The goal is to quantitatively determine the contribution of each of these to the variation in fly ball distance.

Statcast Data

The data used in this study are Statcast batted ball trajectories from the 2016-2019 seasons. As in the earlier study, the data include exit velocity v0, launch angle θ, adjusted spray angle φ∗, and landing location. However, unlike the earlier study, the data consist of actual trajectories rather than a model for the trajectories, thereby considerably improving the ability to determine the drag coefficient CD. The data also include the spin rate ω and spin axis, from which ω can be separated into backspin ωb and sidespin ωs components. (It should be noted that while the Trackman radar system, an integral part of Statcast, measures the total spin ω directly, it only infers the spin axis, and therefore ωb and ωs, from the trajectory, using a proprietary algorithm.) Once again, this is an improvement over the previous study, in which the spin components were inferred but not measured. Only fly balls that were tracked to at least 80% of their eventual distance were chosen, assuring good precision in determining the full distance by extrapolating the trajectory to ground level. Moreover, only data from Tropicana Field were utilized, thereby assuring stable atmospheric conditions. The data were further restricted to exit velocities in the range 94-110 mph and launch angles in the range 25-30 degrees, resulting in a total of 719 fly balls with distances in the range 320-460 feet.
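For concreteness, the selection just described amounts to a handful of simple cuts. The sketch below assumes a data frame of Statcast events with hypothetical column names (the real field names, and how the 80%-tracked criterion is encoded, will differ):

```r
## Hypothetical column names throughout; shown only to make the cuts concrete.
flyballs <- subset(
  trajectories,
  venue == "Tropicana Field" &    # stable atmospheric conditions
    v0 >= 94 & v0 <= 110 &        # exit velocity, mph
    theta >= 25 & theta <= 30 &   # launch angle, degrees
    tracked_fraction >= 0.80      # tracked to at least 80% of full distance
)

## Spin components (schematic, ignoring any gyrospin component): if psi is
## the tilt of the inferred spin axis away from pure backspin, then
## flyballs$wb <- flyballs$spin_rate * cos(flyballs$psi)
## flyballs$ws <- flyballs$spin_rate * sin(flyballs$psi)
```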
Analysis

The analysis reported here is a bit tricky because some of the parameters being investigated are coupled to each other. For example, ωs (and to a lesser extent ωb) depends on φ∗. Moreover, CD depends on the total spin ω. Given these dependencies, the analysis proceeds as follows. The first step is to determine CD from the rising segment of the trajectory, using techniques described in Appendix C of the report of the MLB Home Run Committee. The next step is to determine the linear dependence of CD on ω, which is shown in Figure 1 and has a slope α = (2.45 ± 0.09) × 10−5 rpm−1. Then, for each batted ball event, the spin-independent drag coefficient CD0 is found:

CD0 = CD − αω

This quantity is approximately normally distributed with standard deviation 0.018 (see Fig. 3), which is interpreted as the ball-to-ball variation in the drag coefficient with the effect of spin removed. It is one of the factors contributing to the variation in fly ball distance.

FIG. 1: Contour plot of drag coefficient CD versus the total spin ω. The red line is a linear fit to the data, with a slope of (2.45 ± 0.09) × 10−5 rpm−1 and an rms deviation from the data of 0.0181.

The next step is to perform a sequence of fits to the distance data using a non-parametric generalized additive model, with each step in the sequence controlling for an additional parameter. In that manner, the effect of the added parameter on the fit can be determined for fixed values of all the preceding parameters. In each fit, the launch angle is essentially a fixed parameter, given its narrow range at the flat part of the distance-vs-θ distribution. The results of this procedure are given in Table I.

Model 1 is the result of examining the fly ball distance while controlling only for exit velocity (and, as noted above, with an essentially constant launch angle). The resulting distribution of distances has a standard deviation of 16.8 feet, very similar to what was found in the earlier analysis of home run distances. Model 2 additionally controls for the adjusted spray angle and reduces the standard deviation to 11.2 feet. Note that there is no physical reason for fly ball distance to depend on spray angle except for the dependence of ωb and ωs on spray angle. Indeed, both Statcast data and physics-based models of the ball-bat collision tell us that ωs is strongly dependent, and ωb more weakly dependent, on spray angle. Therefore Model 2 should be interpreted as implicitly controlling for the mean values of ωb and ωs for a given φ∗. Models 3 and 4 additionally control for the variation of ωb and ωs, respectively, about their mean values. Together they reduce the standard deviation to 8.3 feet. Finally, Model 5 additionally controls for CD0 and reduces the standard deviation to 5.4 feet. Since there are no further physical parameters that might contribute to distance, the residual 5.4-foot standard deviation is attributed to measurement noise, one source of which might be the extrapolation of the trajectory to ground level.

Table I: Model fits to fly ball distance for fixed launch angle. Here rms is the root-mean-square deviation of the fit from the data.

Model  Parameters                  R²     rms (ft)
1      v0                          0.557  16.8
2      v0 + φ∗                     0.805  11.2
3      v0 + φ∗ + ωb                0.850  9.9
4      v0 + φ∗ + ωb + ωs           0.892  8.3
5      v0 + φ∗ + ωb + ωs + CD0     0.955  5.4
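The article credits R for the analysis but does not show the model code, so the following mgcv sketch of the Table I sequence is only an assumption about the form of the fits (independent additive smooths; column names are the hypothetical ones from the earlier snippet, with dist the measured fly ball distance):

```r
library(mgcv)

## One formula per model in Table I. Launch angle enters only through the
## narrow 25-30 degree cut, so it does not appear as a term.
formulas <- list(
  m1 = dist ~ s(v0),
  m2 = dist ~ s(v0) + s(phi),
  m3 = dist ~ s(v0) + s(phi) + s(wb),
  m4 = dist ~ s(v0) + s(phi) + s(wb) + s(ws),
  m5 = dist ~ s(v0) + s(phi) + s(wb) + s(ws) + s(cd0)
)
fits <- lapply(formulas, gam, data = flyballs)

## R^2 and rms residual for each model, as tabulated in Table I
data.frame(
  R2  = sapply(fits, function(f) summary(f)$r.sq),
  rms = sapply(fits, function(f) sqrt(mean(residuals(f)^2)))
)
```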
Figure 2 plots the fitted versus the actual distance for the successive models and shows the improvement of the fit with each added parameter. For Model 4, which controls for everything except the spin-independent drag, the colors indicate CD0 and clearly show the anti-correlation of distance with drag. Figure 3 shows the dependence of distance on drag more explicitly.

FIG. 2: Plot of fitted vs. actual distance for four of the models, as indicated in each graph, with the dashed green line representing equality and the R-squared value indicated for each model. For Model 4, in which all variables other than CD0 are included, the colors indicate CD0 (blue largest, white midrange, red smallest), clearly showing the anti-correlation of distance with drag.

Given the information in Table I, it is straightforward to unravel the contribution of each parameter to the total standard deviation of 16.8 feet, as shown in Table II: since independent contributions add in quadrature, the rms contribution of each added parameter is essentially the quadrature difference between the rms values of successive models. The largest single contribution to the spread of distances is the adjusted spray angle, which takes into account the dependence of distance on the mean values of ωb and ωs as a function of spray angle. The contributions from the variation of ωb and ωs about their mean values, from the ball-to-ball variation in CD0, and from measurement noise are each in the range 5-6 feet and make up the remainder. Note particularly that the ball-to-ball variation in drag alone contributes about 6 feet to the variation of fly ball distances.

Table II: Contributions to the variance in fly ball distance for fixed exit velocity and launch angle.

Parameter  rms (ft)  Fraction
φ∗         12.7      58%
ωb         5.5       11%
ωs         5.1       9%
CD0        6.1       13%
Noise      5.4       9%
Total      16.8      100%

FIG. 3: Top: Histogram of spin-independent drag coefficients, with mean 0.300 and standard deviation 0.018. Bottom: Distance vs. spin-independent drag coefficient for Model 5, with launch parameters v0=100 mph, θ=27.5 degrees, φ∗=0, ωb=2500 rpm, ωs=0. The graph shows that fly ball distance decreases by approximately 4 feet for every 0.01 increase in the drag coefficient.

Summary

So the question posed back in 2013 has finally been answered quantitatively. For a given exit velocity and launch angle, the fly ball distance is determined only up to a standard deviation of 16.8 feet. With the benefit of trajectory and spin information from Statcast, it is now possible to examine the parameters that lead to that variation. Once the spray angle is taken into account, the remaining variation of about 11 feet comes from four different sources, in roughly equal contributions:

- Variation of backspin about its mean value
- Variation of sidespin about its mean value
- Ball-to-ball variation in drag
- Measurement noise

Given all the directly measured batted ball launch parameters (exit velocity, launch angle, backspin, and sidespin; it is not necessary to list spray angle, which plays no role once ωb and ωs are taken into account), the fly ball distance is determined up to a standard deviation of about 8 feet (the result of Model 4), with variation in drag and measurement noise being the remaining contributions.

I have now satisfied myself that I understand quantitatively the factors contributing to the variation in fly ball distance for given initial conditions. It is now time to put this baby to rest.

Acknowledgments

I thank Professor Anette (Peko) Hosoi, who did the analysis to obtain the drag coefficients from the trajectory data, and Charles Young for teaching me the wonders of the R analysis software.